These are the top NoSQL solutions in the market today that are open
source, readily available, backed by strong and active communities, and
making steady forward progress in development and innovation. I’ve
listed them here, in no particular order, with basic descriptions, links
to their main websites, and short lists of some of each database’s top
users. Toward the end I’ve provided a short summary of the history of
the NoSQL movement and the direction it’s heading today.
Cassandra is a distributed database that offers high availability
and scalability. It supports replicating data across multiple
datacenters, horizontal scaling for massive, near-linear growth, and
fault tolerance, with a focus, like many NoSQL solutions, on commodity
hardware.
Cassandra is a hybrid key-value and row-based database built on a
configuration-focused architecture. It is fairly easy to set up on a
single machine or a cluster, but it is intended to run on a cluster of
machines. To ensure the availability of features around fault
tolerance, scaling, and so on, you will need to set up a minimal
cluster; I’d suggest at least five nodes (five being my personal minimum
for a clustered database setup, which always seems to be a solid and
safe floor).
Cassandra also has a query language called CQL, the Cassandra Query
Language. Cassandra additionally supports the Apache projects Hive and
Pig, with Hadoop integration for MapReduce.
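Here’s a quick taste of CQL: a minimal sketch assuming the DataStax Node.js driver (cassandra-driver), with the contact point, keyspace, table and column names all being illustrative:

var cassandra = require('cassandra-driver');

// connect to a local node; in production you'd list several contact points
var client = new cassandra.Client({
  contactPoints: ['127.0.0.1'],
  localDataCenter: 'datacenter1',
  keyspace: 'demo'
});

var userId = 'f47ac10b-58cc-4372-a567-0e02b2c3d479'; // an example UUID

// CQL is deliberately SQL-like
client.execute(
  'SELECT name, email FROM users WHERE user_id = ?',
  [userId],
  { prepare: true },
  function (err, result) {
    if (!err) console.log(result.rows);
  }
);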
In the book Seven Databases in Seven Weeks,
the Apache HBase project is described as a nail gun: you would not use
HBase to catalog your sales list, just as you wouldn’t use a nail gun
to build a dollhouse. This is an apt description of HBase.
HBase is a column-oriented database that is very good at scaling out. The origins of HBase lie in Google’s BigTable;
that proprietary database is described in the 2006 white paper,
“Bigtable: A Distributed Storage System for Structured Data.”
HBase stores data in buckets called tables, and tables contain cells
that sit at the intersection of rows and columns. Because of this,
HBase shares many surface characteristics with a relational database,
but the similarities are in name only.
HBase also has several features that aren’t available in many other
databases, such as versioning, compression, garbage collection and
in-memory tables. It also offers a feature usually found only in
relational databases: strong consistency guarantees.
The place where HBase really shines, however, is in queries against enormous datasets.
HBase is designed architecturally to be fault tolerant, which it
achieves through write-ahead logging and distributed configuration. At
the core of its architecture, HBase is built on Hadoop, a sturdy,
scalable computing platform that provides a distributed file system and
MapReduce capabilities.
Who is using it?
Facebook uses HBase for its messaging infrastructure.
StumbleUpon uses it for real-time data storage and analytics.
Twitter uses HBase for data generation around people search and for storing logging and monitoring data.
Meetup uses it for site data.
There are many others including Yahoo!, eBay, etc.
MongoDB is built and maintained by a company called 10gen. Released in
2009, MongoDB has been rising in popularity quickly and steadily ever
since. The name, despite appearances, comes not from the word mongo but
from humongous. The key goals behind MongoDB are performance and easy
data access.
The architecture of MongoDB is built around document database
principles. Data can be queried in an ad-hoc way and persisted in a
nested form. Like most NoSQL databases, MongoDB enforces no schema, yet
specific document fields can still be queried against.
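Here’s a minimal sketch in the mongo shell; the collection and field names are purely illustrative:

// documents of different shapes can live side by side
db.articles.insert({ title: 'NoSQL roundup', meta: { views: 100 } });
db.articles.insert({ title: 'Another post' }); // no schema to conform to

// an ad-hoc query against a nested field
db.articles.find({ 'meta.views': { $gt: 50 } });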
Who is using it?
Foursquare
bit.ly
CERN, for collecting data from the Large Hadron Collider
Redis stands for Remote Dictionary Server. The capability Redis is best
known for is blindingly fast speed, which it gains by trading away some
durability. At a base level Redis is a key-value store, though
classifying it isn’t always straightforward.
Redis is often referred to as a data structure server: its values can
be strings, hashes, lists, sets and sorted sets. Redis is also stepping
beyond being only a key-value store, into the realm of a
publish-subscribe and queue stack. This makes Redis one very flexible
tool in the tool chest.
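Here’s a small sketch of that flexibility, assuming the node_redis client; the key and channel names are illustrative:

var redis = require('redis');
var client = redis.createClient();

client.set('user:1:name', 'Alice');  // plain key-value
client.lpush('jobs', 'send-email');  // list used as a simple queue
client.zadd('scores', 100, 'alice'); // sorted set, e.g. a leaderboard

// publish-subscribe runs over a second connection for the subscriber
var sub = redis.createClient();
sub.subscribe('alerts');
sub.on('message', function (channel, message) {
  console.log(channel, message);
});
client.publish('alerts', 'deploy finished');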
Who is using it?
Blizzard (You know, that World of Warcraft game maker) ;)
Another Apache project, CouchDB is the idealized JSON and REST
document database. It works as a document database full of key-value
pairs, where values may be any of a fixed set of types, including other
nested key-value objects.
The primary mode of querying CouchDB is to use incremental MapReduce to produce indexed views.
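The map half of such a view is itself just a JavaScript function that CouchDB runs over every document, re-indexing incrementally as documents change. A minimal sketch, with illustrative document fields:

function (doc) {
  // emit one row per blog post, keyed by author
  if (doc.type === 'post') {
    emit(doc.author, 1);
  }
}

Paired with the built-in _count reduce function, this view yields a per-author post count that stays current without rebuilding the whole index.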
One other interesting characteristic of CouchDB is that it’s built
with a multitude of deployment scenarios in mind. CouchDB might be
deployed to some big servers, or it may be a mere service running on
your Android phone or Mac OS X desktop.
Like many NoSQL options CouchDB is RESTful in operation and uses JSON to send data to and from clients.
The Node.js community also has an affinity for Couch: NPM is built on
it, and many of Couch’s capabilities, from the server-side JavaScript
to the use of JSON throughout, feel native to JavaScript developers.
Who uses it?
NPM – the Node Package Manager site uses CouchDB for storing and providing the packages for Node.js.
Couchbase (UPDATED January 18th)
OK, I realized I’d neglected to add Couchbase (thus
the January 18th update), an interesting open-source solution built
from Membase and Couch. Membase isn’t exactly a distributed database,
or even a database on its own, but the joining of it and Couch to form
Couchbase has produced a distributed database much like Couch, with
some specific feature-set differences.
A lot of the core architecture features of Couch are available, but
the combination now adds auto-sharding clusters, live/hot-swappable
upgrades and changes, memcached APIs, and built-in data caching.
Neo4j steps away from many of the existing NoSQL databases with its
use of a graph database model. It stores data as a graph,
mathematically speaking, in which each piece of data relates to others
in the database. This database, of all the databases in the NoSQL and
SQL worlds, is very whiteboard friendly.
Neo4j also has a varied deployment model, able to run on small devices
and large systems alike, and it can store tens of billions of nodes
and edges.
Who is using it?
Accenture
Adobe
Lufthansa
Mozilla
…others…
Riak
Riak is a key-value, distributed, fault-tolerant, resilient database
written in Erlang. It uses the Riak Core project as the codebase for
the distributed core of the system. I’ve explained Riak further (yes, I
work for Basho, the makers of Riak) in a separate blog entry, “Riak is… A Big List of Things”, so for a description of Riak’s features check that out.
One of the things you’ll notice with a lot of these databases, and the
NoSQL movement in general, is that it originated with companies needing
to go “web scale”, where RDBMSs couldn’t handle, or didn’t meet, the
specific requirements these companies had for their data. NoSQL is in
no way a replacement for relational or SQL databases, except in those
specific cases where the need falls outside the capability or scope of
SQL and relational databases.
Almost every NoSQL database has origins that go pretty far back, but
the real impetus and push forward for the technology came from key
efforts at Google and Amazon Web Services: at Google it was the Bigtable paper, and at Amazon Web Services it was the Dynamo paper.
As time moved forward, the open-source community took over as the
main innovator and development model around big data and the NoSQL
database movement. Today the Apache Project has many of these projects
under its guidance, along with other companies like Basho and 10gen.
In the last few years, many of the larger mainstays of the existing
database industry have leapt onto the bandwagon. Companies like Microsoft, Dell, HP and Oracle
have made many strategic and tactical moves to stay relevant amid this
move toward big data and NoSQL database solutions. However, the
leadership still sits outside of these stalwarts, in the hands of the
open-source community. The companies and organizations focused on that
community, such as 10gen, Basho and the Apache Organization, still hold
much of the future of this technology in the strategic and tactical
actions they take, since they’re born from, and are significant parts
of, the community itself.
For an even larger list of almost every known NoSQL database in existence, check out NoSQL-Database.org.
Researchers have created software that predicts when and where disease outbreaks might occur based on two decades of New York Times articles and other online data. The research comes from Microsoft and the Technion-Israel Institute of Technology.
The system could someday help aid organizations and others be more
proactive in tackling disease outbreaks or other problems, says Eric Horvitz,
distinguished scientist and codirector at Microsoft Research. “I truly
view this as a foreshadowing of what’s to come,” he says. “Eventually
this kind of work will start to have an influence on how things go for
people.” Horvitz did the research in collaboration with Kira Radinsky, a PhD researcher at the Technion-Israel Institute.
The system provides striking results when tested on historical data.
For example, reports of droughts in Angola in 2006 triggered a warning
about possible cholera outbreaks in the country, because previous events
had taught the system that cholera outbreaks were more likely in years
following droughts. A second warning about cholera in Angola was
triggered by news reports of large storms in Africa in early 2007; less
than a week later, reports appeared that cholera had become established.
In similar tests involving forecasts of disease, violence, and
significant numbers of deaths, the system’s warnings were correct
between 70 and 90 percent of the time.
Horvitz says the performance is good enough to suggest that a more
refined version could be used in real settings, to assist experts at,
for example, government aid agencies involved in planning humanitarian
response and readiness. “We’ve done some reaching out and plan to do
some follow-up work with such people,” says Horvitz.
The system was built using 22 years of New York Times archives, from 1986 to 2007, but it also draws on data from the Web to learn about what leads up to major news events.
“One source we found useful was DBpedia,
which is a structured form of the information inside Wikipedia
constructed using crowdsourcing,” says Radinsky. “We can understand, or
see, the location of the places in the news articles, how much money
people earn there, and even information about politics.” Other sources
included WordNet, which helps software understand the meaning of words, and OpenCyc, a database of common knowledge.
All this information provides valuable context that’s not available
in news articles, and which is necessary to figure out general rules
for what events precede others. For example, the system could infer
connections between events in Rwandan and Angolan cities based on the
fact that both are in Africa, have similar GDPs, and share other factors.
That approach led the software to conclude that, in predicting cholera
outbreaks, it should consider a country or city’s location, proportion
of land covered by water, population density, GDP, and whether there had
been a drought the year before.
Horvitz and Radinsky are not the first to consider using online news
and other data to forecast future events, but they say they make use of
more data sources—over 90 in total—which allows their system to be more
general-purpose.
There’s already a small market for predictive tools. For example, a startup called Recorded Future
makes predictions about future events harvested from forward-looking
statements online and other sources, and it includes government
intelligence agencies among its customers (see “See the Future With a Search”).
Christopher Ahlberg, the company’s CEO and cofounder, says that the new
research is “good work” that shows how predictions can be made using
hard data, but also notes that turning the prototype system into a
product would require further development.
Microsoft doesn’t have plans to commercialize Horvitz and Radinsky’s
research as yet, but the project will continue, says Horvitz, who wants
to mine more newspaper archives as well as digitized books.
Many things about the world have changed in recent decades, but human
nature and many aspects of the environment have stayed the same,
Horvitz says, so software may be able to learn patterns from even very
old data that can suggest what’s ahead. “I’m personally interested in
getting data further back in time,” he says.
Miss your Amiga? Now you can play Prince of Persia, Pinball Dreams and other Amiga hits right in your web browser thanks to the Scripted Amiga Emulator, an Amiga emulator written entirely in JavaScript and HTML5.
To view the emulator, which was written by developer Rupert
Hausberger, you’ll need a browser with support for WebGL and WebAudio,
as well as a few other HTML5 APIs. I tested the emulator in the latest
version of both Chrome and Firefox and it worked just fine.
If you’d like to see the code behind the Scripted Amiga Emulator, head on over to GitHub.
At Velocity 2011, Nicole Sullivan and I introduced CSS Lint,
the first code-quality tool for CSS. We had spent the previous two
weeks coding like crazy, trying to create an application that was both
useful for end users and easy to modify. Neither of us had any
experience launching an open-source project like this, and we learned a
lot through the process.
After some initial missteps, the project finally hit a groove, and it
now regularly gets compliments from people using and contributing to CSS
Lint. It’s actually not that hard to create a successful open-source
project when you stop to think about your goals.
What Are Your Goals?
These days, it seems that anybody who writes a piece of code ends up
pushing it to GitHub with an open-source license and says, “I’ve
open-sourced it.” Creating an open-source project isn’t just about
making your code freely available to others. So, before announcing to
the world that you have open-sourced something that hasn’t been used by
anyone other than you in your spare time, stop to ask yourself what your
goals are for the project.
The first goal is always to create something useful.
For CSS Lint, our goal was to create an extensible tool for CSS code
quality that could easily fit into a developer’s workflow, whether the
workflow is automated or not. Make sure that what you’re offering is
useful by looking for others who are doing similar projects and figuring
out how large of a user base you’re looking at.
After that, decide why you are open-sourcing the project in the first
place. Is it because you just want to share what you’ve done? Do you
intend to continue developing the code, or is this just
a snapshot that you’re putting out into the world? If you have no
intention of continuing to develop the code, then the rest of this
article doesn’t apply to you. Make sure that the readme file in your
repository states this clearly so that anyone who finds your project
isn’t confused.
If you are going to continue developing the code, do you want to accept contributions
from others? If not, then, once again, this article doesn’t apply to
you. If yes, then you have some work to do. Creating an open-source
project that’s conducive to outside contributions is more work than it
seems. You have to create environments in which those who are unfamiliar
with the project can get up to speed and be productive reasonably
quickly, and that takes some planning.
This article is about starting an open-source project with these goals:
Create something useful for the world.
Continue to develop the code for the foreseeable future.
Accept outside contributions.
Choosing A License
Before you share your code, the most important decision to make is
what license to apply. The open-source license that you choose could
greatly help or hurt your chances of attracting contributors. All
licenses allow you to retain the copyright of the code you produce.
While the concept of licensing is quite complex, there are a few common
licenses and a few basic things you should know about each. (If you are
open-sourcing a project on behalf of a company, please speak to your
legal counsel before choosing a license.)
GPL
The GNU Public License was created for the GNU project and has been
credited with the rise of Linux as a viable operating system. The GPL
license requires that any project using a GPL-licensed component must
also be made available under the GPL. To put it simply, any project
using a GPL-licensed component in any way must also be open-sourced
under the GPL. There is no restriction on the use of GPL-licensed
applications; the restriction only has to do with the modification and
distribution of derived works.
LGPL
The Lesser GPL is a slightly more permissive version of GPL. An
LGPL-licensed component may be linked to from an application without the
application itself needing to be open-sourced under GPL or LGPL. In all
other ways, LGPL is the same as GPL, so any derived works must also be
open-sourced under GPL or LGPL.
MIT
Also called X11, this license is permissive, allowing for the use and
redistribution of code so long as the license and copyright are included
along with it. MIT-licensed code may be included in proprietary code
without any additional restrictions. Additionally, MIT-licensed code is
GPL-compatible and can be combined with such code to create new
GPL-licensed applications.
BSD3
This is also a permissive license that allows for the use and
redistribution of code as long as the license and copyright are included
with it. In addition, any redistribution of the code in binary form
must include a license and copyright in its available documentation. The
clause that sets BSD3 apart from MIT is the prohibition of using the
copyright holder’s name to promote a product that uses the code. For
example, if I wrote some code and licensed it under BSD3, then an
application that uses that code couldn’t use my name to promote the
product without my permission. BSD3-licensed code is also compatible
with GPL.
There are many other open-source licenses, but these tend to be the most commonly discussed and used.
One thing to keep in mind is that Creative Commons licenses are not
designed to be used with software. All of the Creative Commons licenses
are intended for “creative work,” including audio, images, video and
text. The Creative Commons organization itself recommends not using
Creative Commons licenses for software, and instead using licenses that
have been specifically formulated to cover software, as is the case
with the four options discussed above.
So, which license should you choose? It largely
depends on how you intend your code to be used. Because LGPL, MIT and
BSD3 are all compatible with GPL, that’s not a major concern. If you
want any modified versions of your code to be used only in open-source
software, then GPL is the way to go. If your code is designed to be a
standalone component that may be included in other applications without
modification, then you might want to consider LGPL. Otherwise, the MIT
or BSD3 licenses are popular choices. Individuals tend to favor the MIT
license, while businesses tend to favor BSD3 to ensure that their
company name can’t be used without permission.
To help you decide, look at how some popular open-source projects are licensed:
Another option is to release your code into the public domain.
Public-domain code has no copyright owner and may be used in absolutely
any way. If you have no intention of maintaining control over your code
or you just want to share your code with the world and not continue to
develop it, then consider releasing it into the public domain.
To learn more about licenses, their associated legal issues and how licensing actually works, please read David Bushell’s “Understanding Copyright and Licenses.”
Code Organization
After deciding how to license your open-source project, it’s almost
time to push your code out into the world. But before doing that, look
at how the code is organized. Not all code invites contributions.
If a potential contributor can’t figure out how to read through the
code, then it’s highly unlikely any contribution will emerge. The way
you lay out the code, in terms of file and directory structure as well
as coding style, is a key aspect to consider before sharing it publicly.
Don’t just throw out whatever code you have been writing into the wild;
spend some time figuring out how others will view your code and what
questions they might have.
For CSS Lint, we decided on a basic top-level directory structure of src for source code, lib for external dependencies, and tests for all test code. The src directory is further subdivided into directories that group together related files. All CSS Lint rules are in the rules subdirectory; all output formatters are in the formatters directory; etc. The tests directory is split up into the same subdirectories as src,
thereby indicating the relationship between the test code and the
source code. Over time, we’ve added top-level directories as needed, but
the same basic structure has been in place since the beginning.
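Sketched out, that layout looks roughly like this (the files within each directory naturally vary over time):

src/
  rules/        one file per CSS Lint rule
  formatters/   one file per output formatter
lib/            external dependencies
tests/
  rules/        tests mirroring src/rules
  formatters/   tests mirroring src/formatters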
Documentation
One of the biggest complaints about open-source projects is the lack
of documentation. Documentation isn’t as fun or exciting as writing
executable code, but it is critical to the success of an open-source
project. The best way to discourage use of and contributions to your
software is to have no documentation. This was an early mistake we made
with CSS Lint. When the project launched, we had no documentation, and
everyone was confused about how to use it. Don’t make the same mistake:
get some documentation ready before pushing the project live.
The documentation should be easy to update and shouldn’t require a
code push, because it will need to be changed very quickly in response
to user feedback. This means that the best place for documentation isn’t
in the same repository as the code. If your code is hosted on GitHub,
then make use of the built-in wiki for each project. All of the CSS Lint documentation
is in a GitHub wiki. If your code is hosted elsewhere, consider setting
up your own wiki or a similar system that enables you to update the
documentation in real time. A good documentation system must be easy to
update, or else you will never update it.
End-User Documentation
Whether you’re creating a command-line program, an application
framework, a utility library or anything else, keep the end user in
mind. The end user is not the person who will be modifying the code;
it’s the one who will be using the code. People were initially confused
about CSS Lint’s purpose and how to use it effectively because of the
lack of documentation. Your project will never gain contributors without
first gaining end users. Satisfied end users are the ones who will end
up becoming contributors because they see the value in what you’ve
created.
Developer Guide
Even if you’ve laid out the code in a logical manner and have decent
end-user documentation, contributions are not guaranteed to start
flowing. You’ll need a developer guide to help get contributors up and
running as quickly as possible. A good developer guide covers the
following:
How to get the source code
Yes, you would hope that contributors would be familiar with how to
check out and clone a repository, but that’s not always the case. A
gentle introduction to getting the source code is always appreciated.
How the code is laid out
Even though the code and directory structures should be fairly
self-explanatory, writing down a description for posterity always helps.
How to set up the build system
If you are using a build system, then you’ll need to include
instructions on how to set it up. These instructions should include
where to get build-time dependencies that aren’t already included in the
repository.
How to run a build
These are the steps necessary to run a development build and to execute unit tests.
How to contribute
Spell out the criteria for contributing to the project. If you require
unit tests, then state that. If you require documentation, mention that
as well. Give people a checklist of things to go over before submitting a
contribution.
I spent a lot of time refining the “CSS Lint Developer Guide”
based on conversations I had had with contributors and questions that
others would ask. As with all documentation, the developer guide should
be a living document that continues to grow as the project grows.
Use A Mailing List
All good open-source projects give people a place to go to ask
questions, and the easiest way to achieve that is by setting up a
mailing list. When we first launched CSS Lint, Nicole and I were
inundated with questions. The problem is that those questions were
coming in through many different channels. People were asking questions
on Twitter as well as emailing each of us directly. That’s exactly the
situation you don’t want.
Setting up a mailing list with Yahoo Groups or Google Groups
is easy and free. Make sure to do that before announcing the project’s
availability, and actively encourage people to use the mailing list to
ask questions. Link to the mailing list prominently on the website (if
you have one) or in the documentation.
The other important part of running a mailing list is to actively
monitor it. Nothing is more frustrating for end users or contributors
than being ignored. If you’ve set up a mailing list, take the time to
monitor it and respond to people who ask questions.
This is the best way to foster a community of developers around the
project. Getting a decent amount of traffic onto the mailing list can
take some time, but it’s worth the effort. Offer advice to people who
want to contribute; suggest to people to file tickets when appropriate
(don’t let the mailing list turn into a bug tracker!); and use the
feedback you get from the mailing list to improve documentation.
Use Version Numbers
One common mistake made with open-source projects is neglecting to
use version numbers. Version numbers are incredibly important for the
long-term stability and maintenance of your project. CSS Lint didn’t use
version numbers when it was first released, and I quickly realized the
mistake. When bugs came in, I had no idea whether people were using the
most recent version because there was no way for them to tell when the
code was released. Bugs were being reported that had already been fixed,
but there was no way for the end user to figure that out.
Stamping each release with an official version number puts a stake in
the ground. When somebody files a bug, you can ask what version they
are using and then check whether that bug has already been fixed. This
greatly reduced the amount of time I spent with bug reports because I
was able to quickly determine whether someone was using the most recent
version.
Unless your project has been previously used and vetted, start the
version number at 0.1.0 and go up incrementally with each release. With
CSS Lint, we increased the second number for planned releases; so, 0.2.0
was the second planned release, 0.3.0 was the third and so on. If we
needed to release a version in between planned releases in order to fix
bugs, then we increased the third number. So, the second unplanned
release after 0.2.0 was 0.2.2. Don’t get me wrong: there are no set
rules on how to increase version numbers in a project, though there are a
couple of resources worth looking at: Apache APR Versioning and Semantic Versioning. Just pick something and stick with it.
In addition to helping with tracking, version numbers do a number of other great things for your project.
Tag Versions in Source Control
When you decide on a new release, use a source-control tag to mark
the state of the code for that release. I started doing this for CSS
Lint as soon as we started using version numbers. I didn’t think much of
it until the first time I forgot to tag a release and a bug was filed
by someone looking for that tag. It turns out that developers like to
check out particular versions of code.
Tie the tag obviously to the version number by including the version
number directly in the tag’s name. With CSS Lint, our tags are in the
format of “v0.9.9.” This will make it very easy for anyone looking
through tags to figure out what those tags mean — including you, because
you’ll be able to better keep track of what changes have been made in
each release.
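With Git, which GitHub-hosted projects such as CSS Lint use, creating and sharing such a tag takes only a couple of commands (the version number here is just an example):

git tag -a v0.9.9 -m "CSS Lint v0.9.9"
git push origin v0.9.9

Anyone can then check out exactly that release with git checkout v0.9.9.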
Change Logs
Another benefit of versioning is in producing change logs. Change
logs are important for communicating version differences to both end
users and contributors. The added benefit of tagging versions in source
control is that you can automatically generate change logs based on
those tags. CSS Lint’s build system automatically creates a change log
for each release that includes not just the commit message but also the
contributor. In this way, the change log becomes a record not only of
code changes, but also of contributions from the community.
Availability Announcements
Whenever a new version of the project is available, announce its
availability somewhere. Whether you do this on a blog or on the mailing
list (or in both places), formally announcing that a new version is
available is very important. The announcement should include any major
changes made in the code, as well as who has contributed those changes.
Contributors tend to contribute more if they get some recognition for
their work, so reward the people who have taken the time to contribute
to your project by acknowledging their contribution.
Managing Contributions
Once you have everything set up, the next step is to figure out how
you will accept contributions. Your contribution model can be as
informal or formal as you’d like, depending on your goals. For a
personal project, you might not have any formal contribution process.
The developer guide would lay out what is necessary in order for the
code to be merged into the repository and would state that as long as a
submission follows that format, then the contribution will be accepted.
For a larger project, you might want to have a more formal policy.
The first thing to look into is whether you will require a
contributor license agreement (CLA). CLAs are used in many large
open-source projects to protect the legal rights of the project. Every
person who submits code to the project would need to fill out a CLA
certifying that any contributed code is original work and that the
copyright for that work is being turned over to the project. CLAs also
give the project owner license to use the contribution as part of the
project, and it warrants that the contributor isn’t knowingly including
code to which someone else has a copyright, patent or other right. jQuery, YUI and Dojo
all require CLAs for code submissions. If you are considering requiring
a CLA from contributors, then getting legal advice regarding it and
your licensing structure would be worthwhile.
Next, you may want to establish a hierarchy of people working on the
project. Open-source projects typically have three primary designations:
Contributor
Anyone who has had source code merged into the repository is considered a
contributor. The contributor cannot access the repository directly but
has submitted patches that have been accepted.
Committer
People who have direct access to the repository are committers. These
people work on features and fix bugs regularly, and they submit code
directly to the repository.
Reviewer
The highest level of contributor, reviewers are commanders who also have
directional impact on the project. Reviewers fulfill their title by
reviewing submissions from contributors and committers, approving or
denying patches, promoting or demoting committers, and generally running
the project.
If you’re going to have a formal hierarchy such as this, you’ll need
to draft a document that describes the role of each type of contributor
and how one may be promoted through the ranks. YUI has just created a
formal “Contributor Model,” along with excellent documentation on roles and responsibilities.
At the moment, CSS Lint doesn’t require a CLA and doesn’t have a
formal contribution model in place, but everyone should consider it as
their open-source project grows.
The Proof
It probably took us about six months from its initial release to get
CSS Lint to the point of being a fully functioning open-source project.
Since then, over a dozen contributors have submitted code that is now
included in the project. While that number might seem small by the
standard of a large open-source project, we take great pride in it.
Getting one external contribution is easy; getting contributions over an
extended period of time is difficult.
And we know that we’ve been doing something right because of the
feedback we receive. Jonathan Klein recently took to the mailing list to
ask some questions and ended up submitting a pull request that was
accepted into the project. He then emailed me this feedback:
I just wanted to say that I think CSS Lint is a model open-source
project — great documentation, easy to extend, clean code, fast replies
on the mailing list and to pull requests, and easily customizable to fit
any environment. Starting to do development on CSS Lint was as easy as
reading a couple of wiki pages, and the fact that you explicitly called
out the typical workflow of a change made the barrier to entry extremely
low. I wish more open-source projects would follow suit and make it
easy to contribute.
Getting emails like this has become commonplace for CSS Lint, and it
can become the norm for your project, too, if you take the time to set
up a sustainable eco-system. We all want our projects to succeed, and we
want people to want to contribute to them. As Jonathan said, make the
barrier to entry as low as possible and developers will find ways to
help out.
The Museum of Modern Art in New York (MoMA) last week announced that
it is bolstering its collection of work with 14 videogames, and plans to
acquire a further 26 over the next few years. And that’s just for
starters. The games will join the likes of Hector Guimard’s Paris Metro
entrances, the Rubik’s Cube, M&Ms and Apple’s first iPod in the
museum’s Architecture & Design department.
The move recognises the design achievements behind each creation, of
course, but despite MoMA’s savvy curatorial decision, the institution
risks becoming a catalyst for yet another wave of awkward ‘are games
art?’ blog posts. And it doesn’t exactly go out of its way to avoid that
particular quagmire in the official announcement.
“Are video games art? They sure are,” it begins, worryingly, before
switching to a more considered tack, “but they are also design, and a
design approach is what we chose for this new foray into this universe.
The games are selected as outstanding examples of interaction design — a
field that MoMA has already explored and collected extensively, and one
of the most important and oft-discussed expressions of contemporary
design creativity.”
Jason Rohrer
MoMA worked with scholars, digital conservation and legal experts,
historians and critics to come up with its criteria and final list of
games, and among the yardsticks the museum looked at for inclusion are
the visual quality and aesthetic experience of each game, the ways in
which the game manipulates or stimulates player behaviour, and even the
elegance of its code.
That initial list of 14 games makes for convincing reading, too:
Pac-Man, Tetris, Another World, Myst, SimCity 2000, Vib-Ribbon, The
Sims, Katamari Damacy, Eve Online, Dwarf Fortress, Portal, flOw, Passage
and Canabalt.
But the wishlist also extends to Spacewar!, a selection of Magnavox
Odyssey games, Pong, Snake, Space Invaders, Asteroids, Zork, Tempest,
Donkey Kong, Yars’ Revenge, M.U.L.E, Core War, Marble Madness, Super
Mario Bros, The Legend Of Zelda, NetHack, Street Fighter II, Chrono
Trigger, Super Mario 64, Grim Fandango, Animal Crossing, and, of course,
Minecraft.
Art, design or otherwise, MoMA’s focused collection is an uncommonly
informed and well-considered list. And their inclusion within MoMA’s
hallowed walls, and the recognition of their cultural and historical
relevance that is implied, is certainly a boon for videogames on the
whole. But reactions to the move have been mixed. The Guardian’s
Jonathan Jones posted a blog
last week titled Sorry MoMA, Videogames Are Not Art, in which he
suggests that exhibiting Pac-Man and Tetris alongside work by Picasso
and Van Gogh will mean “game over for any real understanding of art”.
Canabalt
“The worlds created by electronic games are more like playgrounds
where experience is created by the interaction between a player and a
programme,” he writes. “The player cannot claim to impose a personal
vision of life on the game, while the creator of the game has ceded that
responsibility. No one ‘owns’ the game, so there is no artist, and
therefore no work of art.”
While he clearly misunderstands the capacity of a game to manifest
the personal – and singular – vision of its creator, he nonetheless
raises valid fears that the creative motivations behind many videogames
– predominantly commercially driven entertainment – are incompatible
with those of serious art, and that their inclusion in established
museums risks muddying art’s definition. But while many commentators
have fallen into the same trap of invoking comparisons with cubist and
impressionist painters, MoMA has drawn no such parallels.
“We have to keep in mind it’s the design collection that is
snapping up video games,” Passage creator Jason Rohrer tells us when we
put the question to him. “This is the same collection that houses Lego,
teapots, and barstools. I’m happy with that, because I primarily think
of myself as a designer. But sadly, even the mightiest games in this
acquisition look silly when stood up next to serious works of art. I
mean, what’s the artistic payload of Passage? ‘You’re gonna die someday.’ You can’t find a sentiment that’s more artistically worn out than that.”
Adam Saltsman
But while he doesn’t see these games’ inclusion as a significant
landmark – in fact, he even raises concerns over bandwagon-hopping –
he’s still elated to have been included.
“I’m shocked to see my little game standing there next to landmarks
like Pac-Man, Tetris, Another World, and… all of them really, all the
way up to Canabalt,” he says. “The most pleasing aspect of it, for me,
is that something I have made will be preserved and maintained into the
future, after I croak. The ephemeral nature of digital-download video
games has always worried me. Heck, the Mac version of Passage has
already been broken by Apple’s updates, and it’s only been five years!”
Talking of Canabalt, creator Adam Saltsman echoes Rohrer’s sentiment:
“Obviously it is a pretty huge honour, but I think it’s also important
to note that these selections are part of the design wing of the museum,
so Tetris won’t exactly be right next to Dali or Picasso! That doesn’t
really diminish the excitement for me though. The MoMA is an incredible
institution, and to have my work selected for archival alongside obvious
masterpieces like Tetris is pretty overwhelming.”
MoMA’s not the only art institution with an interest in videogames,
of course. The Smithsonian American Art Museum ran an exhibition titled
The Art of Video Games earlier this year, while the Barbican has put its
weight behind all manner of events, including 2002’s The History,
Culture and Future of Computer Games, Ear Candy: Video Game Music, and
the touring Game On exhibition.
Eve Online
Chris Melissinos, who was one of the guest curators who put the
Smithsonian exhibition together and subsequently acted as an adviser to
MoMA as it selected its list, doesn’t think such interest is damaging to
art, or indeed a sign of out-of-step institutions jumping on the
bandwagon. It’s simply, he believes, a reaction to today’s culture.
“This decision indicates that videogames have become an important cultural, artistic form of expression in society,” he told the Independent.
“It could become one of the most important forms of artistic
expression. People who apply themselves to the craft view themselves as
[artists], because they absolutely are. This is an amalgam of many
traditional forms of art.”
Of the initial selection, Eve is arguably the most ambitious, and
potentially divisive, selection, but perhaps also the best placed to
challenge Jones’ predispositions on experiential ownership and creative
limitation. It is, after all, renowned for its vociferous,
self-governing player community.
“Eve’s been around for close to a decade, is still growing, and
through its lifetime has won several awards and achievements, but being
acquired into the permanent collection of a world leading contemporary
art and design museum is a tremendous honour for us,” Eve Online
creative director Torfi Frans Ólafsson tells us. “Eve is born out of a
strong ideology of player empowerment and sandbox openness, which
especially in our earlier days was often at the cost of accessibility
and mainstream appeal.
Torfi Frans Ólafsson
“Sitting up there along with industrial design like the original
iPod, and fancy, unergonomic lemon presses tells us that we were right
to stand by our convictions, so in that sense, it’s somewhat of a
vindication of our efforts.”
But how do you present an entire universe to an audience that is
likely to spend a few short minutes looking at each exhibit? Developer
CCP is turning to its many players for help.
“We’ve decided to capture a single day of Eve: Sunday the 9th of
December,” explains Ólafsson. “Through a variety of player made videos,
CCP videos, massive data analysis and info graphics.”
In presenting Eve in this way, CCP and the game’s players are
collaborating on a strong, coherent vision of the alternative reality
they’ve collectively helped to build and, more importantly, reinforcing
and redefining the notion of authorship. It doesn’t matter whether
you’re an apologist for videogames’ entitlement to the status of art, or
someone who appreciates the aesthetics of their design, the important
thing here is that their cultural importance is recognised. Sure, the
notion of a game exhibit that doesn’t include gameplay might stick in
the craw of some, but MoMA’s interest is clearly broader. Ólafsson isn’t
too worried, either.
“Even if we don’t fully succeed in making the 3.5 million people that
visit the MoMA every year visually grok the entire universe in those
few minutes they might spend checking Eve out, I can promise you it sure
will look pretty there on the wall.”
Personal Comments:
Passage, a game developed during Gamma256, is still available here.
Canabalt is available here, while mobile versions are available for a few bucks (Android, iOS).
Internet-connected devices are clearly the future of controlling everything from your home to your car, but actually getting "the Internet of things" rolling has been slow going. Now a new project looks to brighten those prospects, quite literally, with a smart light socket.
Created by Zach Supalla (who was inspired by his father, who is deaf and uses lights for notifications), the Spark Socket
lets you connect the light sockets in your home to the Internet,
allowing them to be controlled via PC, smartphone and tablet (iOS and Android
are both supported) over a Wi-Fi connection. What makes this device
so compelling is its simplicity. By simply screwing a normal light bulb
into the Spark Socket, which connects to a standard light fixture, you
can quickly begin controlling and programming the lights in your home.
Some of the uses for the Spark Socket include allowing you to have
your house lights flash when you receive a text or email, programming
lights to turn on with certain alarms, and having lights dim during
certain times of the day. A very cool demonstration of how the device
works can be tested by simply visiting this live Ustream page and tweeting #hellospark. We tested it, and the light flashed on as soon as we tweeted the hashtag.
The device is currently on Kickstarter, inching closer toward
its $250,000 goal, and if successful will retail for $60 per unit. You
can watch Supalla offer a more detailed description of the product and
how it came to be in the video below.
I believe the day-to-day practice of writing JavaScript is going to
change dramatically for the better when ECMAScript.next arrives. The
coming year is going to be an exciting time for developers as features
proposed or finalised for the next versions of the language start to
become more widely available.
In this post, I will review some of the features I'm personally looking forward to landing and being used in 2013 and beyond.
In Canary, remember that to enable all of the latest JavaScript experiments you should navigate to chrome://flags and turn on the ‘Enable Experimental JavaScript’ option.
Alternatively, many ES.next features can be experimented with using Google's Traceur transpiler (useful unit tests with examples here) and there are shims available for other features via projects such as ES6-Shim and Harmony Collections.
Finally, in Node.js (V8), the --harmony flag activates a number of experimental ES.next features including block scoping, WeakMaps and more.
Modules
We're used to separating our code into manageable blocks of
functionality. In ES.next, a module is a unit of code contained within
a module declaration, and it can either be defined inline or within an
externally loaded module file. A skeleton inline module for a Car could
be written as follows, using the draft syntax of the time (the details
later changed for the final ES6 spec):
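module Car {
  // imports, internal variables and exported
  // bindings all live inside the module body
  var licensePlate = '556-343'; // internal to the module
};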
A module instance
is a module which has been evaluated, is linked to other modules or has
lexically encapsulated data. An example of a module instance is:
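// the draft 'at' syntax of the era bound a module instance
// from an externally loaded file
module myCar at 'car.js';
// myCar is now a module instance whose exports can be read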
An export
declaration declares that a local function or variable binding is
visible externally to other modules. If you're familiar with the module
pattern, think of this concept as parallel to the idea of exposing
functionality publicly:
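module Car {
  // internal to the module, invisible to importers
  var milesDriven = 0;

  // exported bindings, readable by other modules
  export var miles = 0;
  export function drive(speed, direction) {
    console.log('driving at', speed, 'mph heading', direction);
    miles += speed;
    milesDriven += speed;
  }
};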
Modules import what they wish to use from other modules. Other modules may read the module exports (e.g. drive(), miles, etc. above) but they cannot modify them. Exports can also be renamed so their names differ from the local names.
Revisiting the export example above, we can now selectively choose what we wish to import when in another module:
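// draft syntax of the era: importing named exports from the Car module
import { drive, miles } from Car;
drive(80, 'north');
console.log(miles); // 80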
Earlier, we mentioned the concept of a Module Loader API. The module
loader allows us to dynamically load in scripts for consumption.
Similar to import, we are able to consume anything defined as an export
from such modules. A load call accepts three arguments:
moduleURL: The string representing a module URL (e.g "car.js")
callback: A callback function which receives the output result of attempting to load, compile and then execute the module
errorCallback: A callback triggered if an error occurs during loading or compilation
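Put together, a call might look something like this sketch, with car.js and its miles export carried over from the earlier example:

Loader.load('car.js', function (car) {
  // success: the module was loaded, compiled and executed
  console.log(car.miles);
}, function (err) {
  console.log('Error:', err);
});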
Whilst
the above example seems fairly trivial to use, the Loader API is there
to provide a way to load modules in controlled contexts and actually
supports a number of different configuration options. Loader itself is a system provided instance of the API, but it's possible to create custom loaders using the Loader constructor.
What about classes?
I'm not going to be covering ES.next classes in this post in more, but for those wondering how they relate to modules, Alex Russell has previously shared a pretty readable example of how the two fit in – it's not at all about turning JavaScript into Java.
Classes
in ES.next are there to provide a declarative surface for the semantics
we're used to (e.g. functions, prototypes) so that developer intent is
expressed instead of the underlying imperative mechanics. A sketch of
the declarative form:
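class Car {
  constructor(make) {
    this.make = make;
    this.currentSpeed = 25;
  }
  printCurrentSpeed() {
    console.log(this.make + ' is driving at ' + this.currentSpeed + ' mph');
  }
}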
Followed by today's de-sugared approach, which ignores the semantic
improvements brought by ES.next modules over the module pattern and
instead emphasises our reliance on function variants:
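function Car(make) {
  this.make = make;
  this.currentSpeed = 25;
}

Car.prototype.printCurrentSpeed = function () {
  console.log(this.make + ' is driving at ' + this.currentSpeed + ' mph');
};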
All the ES.next version does is make the code easier to read. What class means here is function –
or at least, one of the things we currently do with functions. If you
enjoy JavaScript and like using functions and prototypes, such sugar is
nothing to fear in the next version of JavaScript.
Where do these modules fit in with AMD?
If
anything, the landscape for modularization and loading of code on the
front-end has seen a wealth of hacks, abuse and experimentation, but
we've been able to get by so far.
Are ES.next modules a step in
the right direction? Perhaps. My own take on them is that reading their
specs is one thing and actually using them is another. Playing with the
newer module syntax in Harmonizr, Require HM and Traceur,
you actually get used to the syntax and semantics very quickly – it
feels like using a cleaner module pattern, but with access to a native
loader API for any dynamic module loading required at runtime. That
said, the syntax might feel a little too much like Python for some
people's tastes (e.g. the import statements).
I'm part
of the camp that believes if there's functionality developers are using
broadly enough (e.g. better modules), the platform (i.e. the browser)
should be trying to offer some of it natively, and I'm not alone in
feeling this way. James Burke, who was instrumental in bringing us AMD
and RequireJS, has previously said:
I want AMD and
RequireJS to go away. They solve a real problem, but ideally the
language and runtime should have similar capabilities built in. Native
support should be able to cover the 80% case of RequireJS usage, to the
point that no userland "module loader" library should be needed for
those use cases, at least in the browser.
James has
however questioned whether ES.next modules are a sufficient solution. He
covered some more of his thoughts on ES.next modules back in June in ES6 Modules: Suggestions for improvement and later in Why not AMD? for anyone interested in reading more about how these modules fit in with RequireJS and AMD.
Isaac Schlueter has also previously written up thoughts on where ES6 modules fall short that are worth noting. Try them out yourself using some of the options below and see what you think.
The idea behind Object.observe
is that we gain the ability to observe and notify applications of
changes made to specific JavaScript objects. Such changes include
properties being added, updated, removed or reconfigured.
Property
observing is behaviour we commonly find in JavaScript MVC frameworks at
the moment, and it is an important component of data-binding, found in
solutions like AngularJS and Ember.
This is a fundamentally
important addition to JS as it could both offer performance improvements
over a framework's custom implementations and allow easier observation
of plain native objects.
// a sketch of the proposed API: observing a plain todoModel object
var todoModel = {
  label: 'Buy some milk',
  completed: false
};

Object.observe(todoModel, function (changes) {
  changes.forEach(function (change) {
    /*
      What property changed? change.name
      How was it changed? change.type
      What's the current value? change.object[change.name]
    */
    console.log(change.name + ' changed');
    console.log('It was changed by being ' + change.type);
    console.log('Its current value is', change.object[change.name]);
  });
});
// Examples
todoModel.label = 'Buy some more milk';
/*
label changed
It was changed by being updated
Its current value is 'Buy some more milk'
*/
todoModel.completeBy = '01/01/2013';
/*
completeBy changed
It was changed by being new
Its current value is '01/01/2013'
*/
delete todoModel.completed;
/*
completed changed
It was changed by being deleted
Its current value is undefined
*/
Availability:
Object.observe will be available in Chrome Canary behind the “Enable
Experimental JS APIs” flag. If you don’t feel like getting that set up,
you can also check out this video by Rafael Weinstein discussing the proposal.
Default
parameter values allow us to initialize parameters if they are not
explicitly supplied. This means that we no longer have to write options = options || {};.
The syntax is modified by allowing an (optional) initialiser after the parameter names:
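// a minimal sketch: each parameter is initialised only when omitted
function createTodo(caption = 'Untitled', options = {}) {
  console.log(caption, options);
}

createTodo();           // 'Untitled', {}
createTodo('Buy milk'); // 'Buy milk', {}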
Block scoping introduces new declaration forms for defining variables scoped to a single block. This includes:
let: which syntactically is quite similar to var, but defines a variable scoped to the current block, allowing function declarations in nested blocks
const: like let, but is for read-only constant declarations
Using let in place of var
makes it easier to define block-local variables without worrying about
them clashing with variables elsewhere in the same function body. A
variable declared with var inside a let block is scoped as if it had been declared outside that block; such variables still have function scoping.
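A small sketch of both forms:

const MAX_ITEMS = 10; // read-only binding; reassigning it is an error

function processItems(items) {
  for (let i = 0; i < items.length; i++) {
    let item = items[i]; // scoped to this loop block
    console.log(item);
  }
  // unlike with var, i and item are not visible here
}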
const availability: FF18, Chrome 24+, SF6, WebKit, Opera 12
Maps and sets
Maps
Many
of you will already be familiar with the concept of maps, as we've been
using plain JavaScript objects as makeshift maps for quite some time.
Maps allow us to map a value to a unique key so that we can retrieve
the value using the key, without the pains of prototype-based inheritance.
With a Map's set() method, new name-value pairs are stored in the map, and using get(), the values can be retrieved. Maps also have the following three methods:
has(key) : a boolean check to test if a key exists
delete(key) : deletes the key specified from the map
size() : returns the number of stored name-value pairs
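For example (note that size was specified as a method in the drafts of the time, though it has since become a property in the final spec):

var owners = new Map();
owners.set('car', 'Alice'); // store a name-value pair
owners.get('car');          // 'Alice'
owners.has('car');          // true
owners.delete('car');       // true; the pair is removed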
As
Nicholas has pointed out before, sets won't be new to developers coming
from Ruby or Python, but it's a feature that's been missing from
JavaScript. Data of any type can be stored in a set, although each value
can be stored only once. Sets are an effective means of creating ordered
lists of values that cannot contain duplicates.
add(value) – adds the value to the set.
delete(value) – removes the value from the set.
has(value) – returns a boolean asserting whether the value has been added to the set
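Combining add() and has() gives a single-pass way to strip duplicates out of an array:

var tags = new Set();
tags.add('nosql');
tags.add('nosql'); // ignored: a value can be stored only once
tags.has('nosql'); // true

function unique(array) {
  var seen = new Set();
  var results = [];
  array.forEach(function (item) {
    if (!seen.has(item)) {
      seen.add(item);
      results.push(item);
    }
  });
  return results;
}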
This
results in O(n) for filtering uniques in an array, whereas almost all
methods of filtering array uniques with plain objects are O(n^2)
(credit goes to Brandon Benvie for this suggestion).
The
Proxy API will allow us to create objects whose properties may be
computed at run-time dynamically. It will also support hooking into
other objects for tasks such as logging or auditing.
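Here is a minimal sketch using the form that eventually landed in ES6 (earlier drafts exposed a different Proxy.create API):

var audited = new Proxy({}, {
  get: function (target, name) {
    console.log('read:', name); // hook for logging or auditing
    return target[name];
  },
  set: function (target, name, value) {
    console.log('write:', name, '=', value);
    target[name] = value;
    return true;
  }
});

audited.speed = 55; // logs the write
audited.speed;      // logs the read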
WeakMaps
help developers avoid memory leaks by holding references to their
properties weakly, meaning that if a WeakMap is the only object with a
reference to another object, the garbage collector may collect the
referenced object. This behavior differs from all variable references in
ES5.
A key property of Weak Maps is the inability to enumerate their keys.
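For example:

var metadata = new WeakMap();
var node = document.createElement('div');

metadata.set(node, { clicks: 0 });
metadata.get(node); // { clicks: 0 }

// once no other references to the node remain, the garbage collector
// is free to reclaim both the node and its entry in the map
node = null;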
Introduces a function for comparison called Object.is. The main difference between === and Object.is is the way NaNs and (negative) zeroes are treated: with Object.is, a NaN is equal to another NaN, and a negative zero is not equal to a positive zero.
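For example:

Object.is(NaN, NaN); // true  (NaN === NaN is false)
Object.is(0, -0);    // false (0 === -0 is true)
Object.is('a', 'a'); // true, just like ===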
Array.from:
converts a single argument that is an array-like object or list (e.g.
arguments, NodeList, DOMTokenList (used by classList), or NamedNodeMap
(used by the attributes property)) into a new Array and returns it.
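For example:

// convert an array-like NodeList into a true array
var anchors = Array.from(document.querySelectorAll('a'));
anchors.forEach(function (a) {
  console.log(a.href);
});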
ES.next
is shaping up to include solutions for much of what many of us
consider missing from JavaScript at the moment. Whilst ES6 is
targeting a 2013 spec release, browsers are already implementing
individual features, and it's only a matter of time before their
availability is widespread.
In the meantime, we can use (some)
modern browsers, transpilers, shims and in some cases custom builds to
experiment with features before ES6 is fully here.
For more examples and up-to-date information, feel free to check out the TC39 Codex Wiki
(which was a great reference when putting together this post),
maintained by Dave Herman and others. It contains summaries of all
features currently being targeted for the next version of JavaScript.
Google has enjoyed a considerable head start on the mobile-mapping
front, but Apple and Microsoft haven’t been idle. Both companies have
licensed data from a number of services to flesh out their competing map
offerings in an effort to bolster their respective phone platforms and
chip away at Google’s dominance.
But there’s more to a map than getting users to and from work: We
rely on maps to figure out where we are, to find new places, and to plan
trips far beyond our local haunts. Here's a look at which mapping
service offers the best features and functionality.
A tale of three map apps
Google Maps
Google Maps’ greatest strength lies in its robust search
capabilities: Throughout my testing I found that I could type in a
location and (generally) find the business or landmark I was seeking,
whereas Apple and Windows Phone often required me to add a city to my
search query. Google Maps also offers a killer feature in the form of
Street View. If you’ve ever used Google Maps in a browser, you’re likely
familiar with the little yellow Pegman avatar
that gives you a first-person view of the location you’re searching
for. It’s incredibly useful, providing a clear idea of where you’re
heading before you ever arrive.
The robust direction options are another standout. All three
mapping services offer directions by car and on foot, but only Google
includes public transportation and biking directions. Public transit
results can be hit or miss, however—many users have reported that bus
schedules and the like don’t necessarily line up with reality, though
I’ve had pretty good luck while using the service in San Francisco.
Google Maps’ Places
functionality serves as a sort of neighborhood-savvy guide: You just tap
the pin icon on the map for a list of places nearby, and filtering
options let you limit searches to locations that are currently open, fit
into a particular price range, or have a minimum review score. Finally,
Google Maps displays reviews that its users have posted for most every
establishment you could search for, from restaurants to police stations.
Apple Maps
Apple’s flyover view is a novel and admittedly attractive attempt at
emulating Google Street View, but ultimately it falls flat. The
vector-based maps certainly are eye-catching; but unless you’re actually
planning on flying a small plane over your destination, the view won’t
offer much in the way of utility. And let’s not forget the often comical
rendering issues that are the subject of at least one Tumblr blog,
where bridges appear to melt into the landscape and some landmarks
disappear entirely. Apple is working on correcting many of these issues,
but they do mar Apple Maps’ presentation.
Apple has tapped into Yelp’s massive user community to find locations
and power its reviews, and that’s a powerful asset in places with an
active Yelp community. Unfortunately, Yelp’s business listings are
mostly limited to larger cities in the United States, so you’re out of
luck if you’re traveling through smaller towns or internationally.
Tapping a business name in the Apple Maps app kicks you out of Maps and into the Yelp app—if
you don’t have it installed, you’ll be prompted to get it. Swapping
between apps can make casual browsing a bit annoying, but Yelp user
reviews are decidedly more numerous and robust than Google’s similar
offerings.
Windows Phone 8
Windows Phone 8 uses Nokia’s mapping engine, but the native Maps application isn’t nearly as robust as the Nokia Drive app
currently offered exclusively to Nokia Lumia owners. The Maps app on
Windows Phone 8 is ultimately the most limited of the bunch: Although it
pulls reviews from sites such as Citysearch and TripAdvisor to fuel its
Buzz section, the app lacks photos or any sort of Street View analogue.
The Buzz section also has far fewer reviews than Yelp or even Google,
which can limit its utility at times.
Windows Phone Local Scout is a bit like Google Places, but
Microsoft's offering takes top honors. It scans the area around you (or
the location you’ve searched for), and lists establishments that are
nearby. Results are divided into four categories. 'Eat + Drink' covers
bars and restaurants, while the Shop section covers, well, shops. The
sections serve up business hours, contact information, and the average
ratings assigned by Citysearch
and TripAdvisor users. The filtering options are fairly extensive; you
can, for example, limit results to restaurants that are open and serving
a particular cuisine, or hardware stores that are currently offering
deals. The 'See + Do' section lists nearby upcoming events and places of
interest—though the unfiltered list is a bit impractical if you’re
exploring casually, with museums and art galleries listed alongside New
Year’s Eve parties and high school reunions. Finally, there’s the For
You section, which couples data from Bing and Facebook to guess what
sorts of venues you might be interested in; my suggestions were largely
limited to bars, which makes sense based on my admittedly sparse
Facebook check-in history.
Offline Maps
A final useful feature that all three services provide is offline
maps. Apple’s implementation is rudimentary: Once you've visited a
location on the map, it and the surrounding areas are cached
automatically to your device. You won’t be able to search without Wi-Fi
or cellular service, but the streets, businesses, and landmarks are
preserved in all their vector-mapped glory.
If you’re planning in advance, the latest version of Google Maps will
let you make sections of a map available offline. Just tap the menu
button, choose 'Make available offline', and select a section of the map you'd like to preserve. Alternatively, you can select 'My Places' from the Maps menu, choose 'New offline map',
and search for a city to download a snapshot. The service will tell you
exactly how much space the map will take up (in my tests the San
Francisco Bay Area claimed approximately 35MB of storage space), and
then it will download the section you’ve selected. Unfortunately, you
can't search the map without a data connection.
Windows Phone’s brute-force approach is actually my favorite
implementation: You can download entire maps from a number of regions
around the world. Although they take up considerably more space
(California weighs in at just shy of 210MB) and you lose out on most of
their satellite imagery, you’ll have full search and navigation
functionality—even in areas with a spotty data connection.
Search shootout
How do the three services stack up when it comes to finding places
you’d like to visit? I did some testing to find out. My testing method
was rather simple: I typed in the name of a business or landmark, and
examined the results. I’ll start with businesses in San Francisco, home
to TechHive headquarters.
House of Shields
House of Shields is a fairly popular watering hole in the middle of
downtown San Francisco, and (as expected) all three phones had no
trouble finding it and serving up all of the information I could want.
Windows Phone 8’s Buzz section really excelled here, offering a concise
breakdown of user reviews. It didn’t have very many of them,
unfortunately, but if you’re walking about with friends and trying to
get a general idea of a bar’s ambiance, the snippets it serves up are
arguably more useful than an average user rating from Yelp or Google.
Tomales Bay Oyster Company
Tomales Bay Oyster Company is a small but lively oyster farm and
picnic area located north of San Francisco. It’s a great place to go if
you’re craving fresh oysters, looking for a beautiful view, or testing a
phone’s mapping app. Google and Apple found the business just fine,
pointing their maps to the same isolated turnoff that hosts this
delectable little dining spot. Both services offered the restaurant’s
phone number, but Google Maps went a bit further, serving up
user-submitted photographs of the location, the restaurant’s website and
business hours, and reviews from Google Maps users. Windows Phone
couldn’t find the business at all, even when I punched in the address
and searched for items of interest in the area. I could spot the picnic
area by zooming in on the map’s aerial photography, but that kind of
information won’t be of much help to most people.
State Bird Provisions
Both iOS and Google Maps found this relatively new restaurant with
ease, supplying reviews, contact information, and business hours. On
Windows Phone 8 I had to add "San Francisco" to my search before I found
the location, and the results included only a phone number and a link
to the website.
San Francisco’s results are nice, but I also branched farther out in my testing.
Totonno’s Pizzeria Napolitano (New York)
Totonno’s Pizzeria Napolitano is a well-regarded pizza joint in New
York, and Google Maps found it effortlessly. Finding the restaurant on
iOS required adding "New York" to my search query, but Apple's map
turned up all of the necessary information with plenty of photos from
Yelp (Google Maps offered only two). Curiously, unless I was looking
directly at a map of New York, the Maps app on Windows Phone 8 couldn’t
track the restaurant down at all. Once it found the establishment, it
gave the necessary contact information and store hours, but served up
decidedly fewer user reviews (and no photos).
Citizen Coffee (Seattle)
What about a place that's a little less renowned? Citizen Coffee, a
cozy coffee shop and eatery in Seattle, is a spot I’ve wandered into a
few times while traveling. Google Maps’ search functionality shone on
this test, narrowing the location down with ease. On Apple Maps, I
needed only to add "Seattle" to my search query to find the place, and
the Yelp support produced a lot of photos that gave a nice idea of the
variety of food, as well as the ambiance of the establishment. (I still
loathe the fact that you need to jump out of the Maps app entirely to
check them out, however.) The location was just as easy to find on
Windows Phone (once I’d added "Seattle" to my search query), but Windows
Phone’s Buzz category once again offered just a few token reviews, and
lacked images.
Sukiyabashi Jiro (Tokyo)
Branching out farther still, I headed to Japan to track down
Sukiyabashi Jiro. The restaurant is the subject of the excellent
documentary film Jiro Dreams of Sushi, and I assumed that it
would be rather easy to track down. Alas, Yelp’s services don’t extend
to Japan, so Apple Maps’ offerings for that country are limited to
addresses and phone numbers—I couldn’t find Sukiyabashi Jiro at all.
Windows Phone 8’s map of Tokyo (and wide swaths of Asia, actually) is
barren, lacking even basic information or street names. Unsurprisingly
enough, Google Maps delivered in my test, offering the correct address,
contact information, and some user reviews.
Tracking down business listings in distant cities and foreign
countries can prove tricky for iOS and Windows Phone, which rely on
licensed services from third parties that don’t have as exhaustive a
reach as Google does. I had no such trouble with famous landmarks,
though Google Maps’ general location-savvy again made it the most useful
of the bunch—most of the time.
Taipei 101 (Taipei, Taiwan)
In my quest to find famous landmarks, I started with Taipei 101,
the world’s second-tallest building. The search took a bit of extra
effort on Windows Phone: Oddly, the only query that worked was “Taipei
101, Taipei.” That said, all three services ultimately found the
landmark, though only Google Maps provided listings for many of the
businesses in the area.
Sydney Opera House (Sydney, Australia)
I had better luck tracking down the Sydney Opera House, though
Windows Phone 8’s map directed me a few miles southwest of the actual
landmark. It’s easy enough to pan over to the site (which is labeled
correctly), but Google and Apple Maps both sent me to the right spot on
the first try.
Flatiron Building (New York)
Searching for the iconic Flatiron Building was simple on both Apple
Maps and Windows Phone; on iOS’s standard map view, all of New York’s
landmarks are helpfully labeled and granted large, distinct icons, which
makes casual browsing a breeze. Google initially tried to direct me to
The Flatiron Group, a business situated a few blocks south of the
landmark, but I was able to locate the building eventually by selecting
it from a list of search suggestions.
Fenway Park (Boston)
All three mapping services had no trouble finding Fenway Park, home
of the Boston Red Sox. Once you arrive at the park (virtually), Windows
Phone’s Local Scout offers the easiest way to find nearby
establishments; although you can do a generic search on Apple and Google
Maps, I appreciated being able to scan a list of interesting locales
near the ballpark.
Turn-by-turn navigation
Competent turn-by-turn navigation is a must-have feature for anyone
who hopes to rely on a phone to get around. Unfortunately, Windows Phone
8’s native Maps app currently lacks support for it. If you own a Nokia Lumia phone,
you have access to Nokia’s free Drive app, and the Windows Phone store
offers free and paid alternatives for other Windows Phone devices.
Apple Maps' navigation mode.
That leaves Apple Maps and Google Maps, two excellent offerings with
slightly different implementations. In my tests both services gave
accurate directions: The suggestions and even the alternative routes
they served up were generally similar (in San Francisco, at least). Miss
a turn, and both apps’ robotic narrators will rapidly update their
instructions to get you back on the right track. Both will keep you
abreast of traffic conditions, and will suggest new routes if the
situation looks especially bleak.
The Maps app on iOS provides turn-by-turn navigation if you’re running the latest version of iOS and using an iPhone 4S or iPhone 5 (or an iPad 2
or later). The accuracy of the driving directions is on a par with that
of Google Maps, but the focus on hands-free simplicity can be a
double-edged sword. Setting up a route is easy: Search for a location,
select the car icon, and tap the route button, and Siri will begin to
relay driving instructions.
If you’re focused on getting from point A to point B, this
arrangement can be handy; the phone essentially becomes locked to the
current step on the list of directions to your destination, ignoring all
inputs on the touchscreen unless you leave the app, and even showing
directions on the phone’s lock screen. You need to tap the overview
button to interact with the map, pausing the route in progress; it’s a
small issue, but being able to pan about the map without interrupting
directions can be useful if you’d like to gauge traffic congestion in
the area or keep an eye out for gas stations and the like on the fly.
Google Maps' navigation mode.
Google Maps shines in navigation. When you’re in navigation mode, the
map continues to function normally, so you (or ideally, someone who
isn’t driving) can scan for alternative routes or use the layers menu to
plot landmarks such as ATMs or gas stations on the map. Google Maps
also allows you to create routes that avoid highways and tolls, a simple
but useful feature that Apple and Windows Phone would do well to
emulate.
Truth be told, my only real qualm with Google Maps’ navigation is the
awkward overhead angle the app chooses to relay directions. The angle
can make it a bit difficult to quickly parse the names of upcoming cross
streets and side streets without panning over them on the map, and
futzing with your phone isn’t advisable when you’re driving.
Which one is the winner?
The clear "loser" here is Windows Phone 8, but the maps are largely a
victim of the operating system's own infancy. The services can only
improve with time, as users add reviews and report errors. The Maps app
is constantly evolving, and features such as turn-by-turn navigation are
reportedly on the way. I do love Windows Phone’s minimalistic
presentation and free map downloads. Local Scout is also arguably the
best way to explore a new area, but Microsoft's aerial photography and
satellite images are lackluster in comparison with the competition, and
the overall feature set is limited.
That leaves Google and Apple. iOS’s Maps offering has improved
considerably since leaving Google’s mapping data behind, but the
reliance on Yelp integration leaves much to be desired for users around
the world—to say nothing of the need to switch apps to see most of the
information you’re looking for. Apple’s stylish new vector maps are
admittedly gorgeous, but offer no real utility; I also found it a bit
too easy to slide into the skewed 3D perspective when I was trying to
zoom in on a map, which can be a bit disorienting.
Unsurprisingly, Google Maps takes the crown. It offers the best
search functionality, decidedly better business listings, and robust
navigation options. Features such as Street View and Google user reviews
allow you to get all of the information you need directly from the app.
It isn’t quite as attractive as Apple’s Maps, and Windows Phone’s Local
Scout is clearly more useful than Google Places for exploring your
surroundings, but Google’s near-decade head start keeps it firmly in the
lead.
Choosing sides: Google’s new augmented-reality game,
Ingress, makes users pick a faction—Enlightened or Resistance—and run
around town attacking virtual portals in hopes of attaining world
domination
I’m not usually very political, but I recently joined the Resistance,
fighting to protect the world against the encroachment of a strange,
newly discovered form of energy. Just this week, in fact, I spent hours
protecting Resistance territory and attacking the enemy.
Don’t worry, this is just the gloomy sci-fi world depicted in a new smartphone game called Ingress
created by Google. Ingress is far from your normal gaming app,
though—it takes place, to some degree, in the real world; aspects of the
game are revealed only as you reach different real-world locations.
Ingress’s world is one in which the discovery of so-called
“exotic matter” has split the population into two groups: the
Enlightened, who want to learn how to harness the power of this energy,
and the Resistance, who, well, resist this change. Players pick a side,
and then walk around their city, collecting exotic matter to keep
scanners charged and taking control of exotic-matter-exuding portals in
order to capture more land for their team.
I found the game, which
is currently available only to Android smartphone users who have
received an invitation to play, surprisingly addictive—especially
considering my usual apathy for gaming.
What’s most interesting
about Ingress, though, is what it suggests about Google’s future plans,
which seem to revolve around finding new ways to extend its reach from
the browser on your laptop to the devices you carry with you at all
times. The goal makes plenty of sense when you consider that traditional
online advertising—Google’s bread and butter—could eventually be
eclipsed by mobile, location-based advertising.
Ingress was
created by a group within Google called Niantic Labs—the same team
behind another location-based app released recently (see “Should You Go on Google’s Field Trip?”).
Google
is surely gathering a treasure trove of information about where we’re
going and what we’re doing while we play Ingress. It must also see the
game as a way to explore possible applications for Project Glass, the
augmented-reality glasses-based computer that the company will start
sending out to developers next year. Ingress doesn’t require a
head-mounted display; it uses your smartphone’s display to show a map
view rather than a realistic view of your surroundings. Still, it is
addictive, and is likely to get many more folks interested in
location-based augmented reality, or at least in augmented-reality
games.
Despite its futuristic focus, Ingress sports a sort of
pseudo-retro look, with a darkly hued map that dominates the screen and a
simple pulsing blue triangle that indicates your position. I could only
see several blocks in any direction, which meant I had to walk around
and explore in order to advance in the game.
For a while, I didn’t
know what I was doing, and it didn’t help that Ingress doesn’t include
any street names. New users complete a series of training exercises,
learning the basics of the game, which include capturing a portal,
hacking a portal to snag items like resonators (which control said
portals), creating links of exotic matter between portals to build a
triangular control field that enhances the safety of team members in the
area, and firing an XMP (a “non-polarized energy field weapon,”
according to the glossary) at an enemy-controlled portal.
Confused much? I sure was.
I forged ahead, though, hoping that if I kept playing it would make
more sense. I started wandering around looking for portals. Portals are
more sense. I started wandering around looking for portals. Portals are
found in public places—in San Francisco, where I was playing, this
includes city landmarks such as museums, statues, and murals. Resistance
portals are blue, Enlightened ones are green, and there are also some
gray ones out there that remain unclaimed.
I found a link to a larger map
of the Ingress world that I could access through my smartphone browser
and made a list of the best-looking nearby targets. Perhaps this much
planning goes against the exploratory spirit of the game, but it made
Ingress a lot less confusing for me (there’s also a website that doles out clues about the game and its mythology).
Once
I had a plan, I set out toward the portals on my list, all of which
were in the Soma and Downtown neighborhoods of San Francisco. I managed
to capture two new portals at Yerba Buena Gardens—one at a statue of
Martin Luther King, Jr. and another at the top of a waterfall—and link
them together.
Across the street, in front of the Contemporary
Jewish Museum, I hacked an Enlightened portal and fired an XMP at it,
weakening its resonators. I was then promptly attacked. I fled, figuring
I wouldn’t be able to take down the portal by myself.
A few hours later, much of my progress was undone by a member of the
Enlightened (Ingress helpfully sends e-mail notifications about such
things). I was
surprised by how much this pissed me off—I wanted to get those portals
back for the Resistance, but pouring rain and the late hour stopped me.
Playing
Ingress was a lot more fun than I expected, and from the excited
chatter in the game’s built-in chat room, it was clear I wasn’t the only
one getting into it.
On my way back from a meeting, I couldn’t
help but keep an eye out for portals, ducking into an alley to attack
one near my office. Later, I found myself poring over the larger map on
my office computer, looking at the spread of portals and control fields
around the Bay Area.
As it turns out, my parents live in an area
dominated by the Enlightened. So I guess I’ll be busy attacking enemy
portals in my hometown this weekend.
If you’re a software developer—or if you follow the work of software developers—you’ve probably heard of TouchDevelop,
a Microsoft Research app that enables you to write scripts for your
phone directly on your phone. Its ability to bring the excitement of
programming to Windows Phone 7 has drawn lots of enthusiasm from the
development community over the past year or so.
Now, the team behind TouchDevelop has taken things a step further, with a web app that can work on any Windows 8
device with a touchscreen. You can write Windows Store apps simply by
tapping on the screen of your device. The web app also works with a
keyboard and mouse, but the touchscreen capability means that the
keyboard is not required. To learn more, watch this video.
This reimplementation of TouchDevelop went live just in time for Build,
Microsoft’s annual conference that helps developers learn how to take
advantage of Windows 8. The conference is being held Oct. 30-Nov. 2 in
Redmond, Wash.
The TouchDevelop web app, which requires Internet Explorer 10,
enables developers to publish their scripts so they can be shared with
others using TouchDevelop. As with the Windows Phone version, a
touchdevelop.com cloud service enables scripts to be published and
queried, and when you log in with the same credentials, all of your
scripts are synchronized between all your platforms and devices.
Within the TouchDevelop web app, users can navigate to the properties
of any script they’ve already created and installed. Videos describing
the operation of the TouchDevelop web app’s editor are available on the
project’s webpage.
TouchDevelop shipped as a Windows Phone app about a year and a half ago and has seen strong downloads and reviews in the Windows Phone Store.
“Our
TouchDevelop app for Windows Phone has been downloaded more than
200,000 times,” Tillmann says, “and more than 20,000 users have logged
in with a Windows Live ID or via Facebook.”
Since
the app became available, Tillmann and his RiSE colleagues have been
astounded by the creativity the user base has demonstrated. Further
Windows 8 developer excitement will be on display during Build, which is being streamed to audiences worldwide.