To translate one language into another, find the linear
transformation that maps one to the other. Simple, say a team of Google
engineers.
Computer
science is changing the nature of the translation of words and
sentences from one language to another. Anybody who has tried BabelFish or Google Translate will know that they provide useful translation services but ones that are far from perfect.
The
basic idea is to compare a corpus of words in one language with the
same corpus of words translated into another. Words and phrases that
share similar statistical properties are considered equivalent.
The
problem, of course, is that the initial translations rely on
dictionaries that have to be compiled by human experts and this takes
significant time and effort.
Now Tomas Mikolov and a couple of
pals at Google in Mountain View have developed a technique that
automatically generates dictionaries and phrase tables that convert one
language into another.
The new technique does not rely on versions
of the same document in different languages. Instead, it uses data
mining techniques to model the structure of a single language and then
compares this to the structure of another language.
“This method
makes little assumption about the languages, so it can be used to extend
and refine dictionaries and translation tables for any language pairs,”
they say.
The new approach is relatively straightforward. It
relies on the notion that every language must describe a similar set of
ideas, so the words that do this must also be similar. For example, most
languages will have words for common animals such as cat, dog, cow and
so on. And these words are probably used in the same way in sentences
such as “a cat is an animal that is smaller than a dog.”
The same
is true of numbers. The image above shows the vector representations of
the numbers one to five in English and Spanish and demonstrates how
similar they are.
This is an important clue. The new trick is to represent an entire language using the relationships between its words. The
set of all the relationships, the so-called “language space”, can be
thought of as a set of vectors that each point from one word to another.
And in recent years, linguists have discovered that it is possible to
handle these vectors mathematically. For example, the operation ‘king’ –
‘man’ + ‘woman’ results in a vector that is similar to ‘queen’.
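To make that arithmetic concrete, here is a minimal Python sketch with invented three-dimensional vectors; real embeddings like Mikolov's are learned from huge corpora and have hundreds of dimensions, so both the word list and the numbers below are purely illustrative.

```python
import numpy as np

# Toy 3-dimensional "word vectors" with invented values -- real embeddings
# are learned from a large corpus and have hundreds of dimensions.
vectors = {
    "king":  np.array([0.80, 0.60, 0.10]),
    "man":   np.array([0.70, 0.10, 0.05]),
    "woman": np.array([0.68, 0.12, 0.55]),
    "queen": np.array([0.78, 0.62, 0.60]),
    "dog":   np.array([0.10, 0.90, 0.20]),
}

def cosine(a, b):
    """Cosine similarity: close to 1.0 when two vectors point the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# 'king' - 'man' + 'woman' should land closest to 'queen'.
target = vectors["king"] - vectors["man"] + vectors["woman"]
ranked = sorted(vectors, key=lambda w: cosine(vectors[w], target), reverse=True)
print(ranked[0])  # 'queen', with these made-up numbers
```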
It
turns out that different languages share many similarities in this
vector space. That means the process of converting one language into
another is equivalent to finding the transformation that converts one
vector space into the other.
This turns the problem of translation
from one of linguistics into one of mathematics. So the problem for the
Google team is to find a way of accurately mapping one vector space
onto the other. For this they use a small bilingual dictionary compiled
by human experts; comparing the same corpus of words in two different languages gives them a ready-made linear transformation that does the trick.
Having identified this mapping, it is then a simple matter
to apply it to the bigger language spaces. Mikolov and co say it works
remarkably well. “Despite its simplicity, our method is surprisingly
effective: we can achieve almost 90% precision@5 for translation of
words between English and Spanish,” they say.
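(Precision@5 simply means that the correct translation appears among the system's top five candidates.) Here is a minimal sketch of the mapping step itself, assuming word vectors for both languages and a tiny seed dictionary are already in hand; the words, vectors, and dimensionality below are invented for illustration, whereas the paper fits its translation matrix on real embeddings and a seed dictionary of a few thousand word pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 5  # real embeddings use hundreds of dimensions

words = ["one", "two", "three", "cat", "dog", "cow"]
seed = words[:5]  # the small bilingual dictionary; "cow" is held out

# Invented source-language vectors (stand-ins for, say, English embeddings).
src = {w: rng.normal(size=dim) for w in words}

# Pretend the target language's space is roughly a linear image of the
# source space -- the structural similarity the method relies on.
true_map = rng.normal(size=(dim, dim))
tgt = {w: true_map @ src[w] + 0.01 * rng.normal(size=dim) for w in words}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Fit the translation matrix W by least squares over the seed pairs,
# i.e. minimise the total squared error of X @ W against Z.
X = np.stack([src[w] for w in seed])
Z = np.stack([tgt[w] for w in seed])
W, *_ = np.linalg.lstsq(X, Z, rcond=None)

# "Translate" the held-out word: map its vector, pick the nearest target word.
mapped = src["cow"] @ W
print(max(tgt, key=lambda w: cosine(tgt[w], mapped)))  # 'cow', with this toy setup
```

With real embeddings, the nearest-neighbour search at the end runs over the entire target vocabulary rather than a handful of toy words.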
The method can be
used to extend and refine existing dictionaries, and even to spot
mistakes in them. Indeed, the Google team does exactly that with an English-Czech dictionary, finding numerous mistakes.
Finally, the team points out that since the technique makes few assumptions about the languages themselves, it can be used on language pairs that are entirely
unrelated. So while Spanish and English have a common Indo-European
history, Mikolov and co show that the new technique also works just as
well for pairs of languages that are less closely related, such as
English and Vietnamese.
That’s a useful step forward for the
future of multilingual communication. But the team says this is just the
beginning. “Clearly, there is still much to be explored,” they
conclude.
Ref: arxiv.org/abs/1309.4168: Exploiting Similarities among Languages for Machine Translation
Last Wednesday, the Fed
announced that it would not be tapering its bond buying program. This
news was released at precisely 2 p.m. in Washington "as measured by the
national atomic clock." It takes seven milliseconds for this information
to get to Chicago. However, several huge orders that were based on the
Fed's decision were placed on Chicago exchanges two to three
milliseconds after 2 p.m. How did this happen?
CNBC has the story here,
and the answer is: We don't know. Reporters get the Fed release early,
but they get it in a secure room and aren't permitted to communicate
with the outside world until precisely 2 p.m. Still, maybe someone
figured out a way to game the embargo. It would certainly be worth a ton
of money. Investigations are ongoing, but Neil Irwin has this to say:
"In the meantime, there's another useful lesson out of the whole
episode. It is the reality of how much trading activity, particularly of
the ultra-high-frequency variety is really a dead weight loss for
society.
…There is a role in [capital] markets for traders whose work is more
speculative…But when taken to its logical extremes, such as computers
exploiting five millisecond advantages in the transfer of market-moving
information, it's much less clear that society gains anything…In the
high-frequency trading business, billions of dollars are spent on
high-speed lines, programming talent, and advanced computers by funds
looking to capitalize on the smallest and most fleeting of mispricings.
Those are computing resources and insanely intelligent people who could
instead be put to work making the Internet run faster for everyone, or
figuring out how to distribute electricity more efficiently, or really
anything other than trying to figure out how to trade gold futures on
the latest Fed announcement faster than the speed of light."
Yep. I'm not sure what to do about it, though. A tiny transaction tax
still seems like a workable solution, although there are several
real-world issues with it. Worth a look, though.
In a related vein, let's talk a bit more about this seven millisecond
figure. That might very well be how long it takes a signal to travel
from Washington, DC, to Chicago via a fiber-optic cable, but in fact the
two cities are only 960 kilometers apart. At the speed of light, that's
3.2 milliseconds. A straight line path would be a bit less, perhaps 3
milliseconds. So maybe someone has managed to set up a neutrino
communications network that transmits directly through the earth. It
couldn't transfer very much information, but if all you needed was a few
dozen bits (taper/no taper, interest rates up/down, etc.) it might work
a treat. Did anyone happen to notice an extra neutrino flux in the
upper Midwest corridor at 2 p.m. last Wednesday? Perhaps Wall Street has
now co-opted not just the math geek community, and not just the physics
geek community, but the experimental physics geek community. Wouldn't that be great?
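For what it's worth, the fiber-versus-straight-line arithmetic holds up in a quick back-of-the-envelope check; the refractive index and routing factor below are typical assumed values, not measurements of any actual cable.

```python
# Back-of-the-envelope latency for Washington, DC -> Chicago.
C_KM_PER_MS = 299_792.458 / 1000  # speed of light in vacuum, km per millisecond

distance_km = 960    # the article's round figure for the separation
fiber_index = 1.47   # assumed: a typical refractive index for optical fiber
route_factor = 1.3   # assumed: real cables wander rather than run straight

vacuum_ms = distance_km / C_KM_PER_MS
fiber_ms = distance_km * route_factor / (C_KM_PER_MS / fiber_index)

print(f"straight line at c:    {vacuum_ms:.1f} ms")  # about 3.2 ms
print(f"realistic fiber route: {fiber_ms:.1f} ms")   # about 6 ms, near the quoted 7
```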
An example of a chemical program. Here, A, B and C are different chemical species.
Similar to using Python or Java to write code for a computer,
chemists could soon be able to use a structured set of instructions to
“program” how DNA molecules interact in a test tube or cell.
A team led by the University of Washington has developed a
programming language for chemistry that it hopes will streamline efforts
to design a network that can guide the behavior of chemical-reaction
mixtures in the same way that embedded electronic controllers guide
cars, robots and other devices. In medicine, such networks could serve
as “smart” drug deliverers or disease detectors at the cellular level.
Chemists and educators teach and use chemical reaction networks, a
century-old language of equations that describes how mixtures of
chemicals behave. The UW engineers take this language a step further and
use it to write programs that direct the movement of tailor-made
molecules.
“We start from an abstract, mathematical description of a chemical
system, and then use DNA to build the molecules that realize the desired
dynamics,” said corresponding author Georg Seelig,
a UW assistant professor of electrical engineering and of computer
science and engineering. “The vision is that eventually, you can use
this technology to build general-purpose tools.”
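To give a flavor of what an abstract, mathematical description of a chemical system can look like, here is a small hypothetical sketch, not the UW group's actual language or syntax: a reaction network over species A, B, and C (as in the caption above) written out as mass-action rate equations and integrated numerically.

```python
# A toy chemical reaction network over species A, B and C (hypothetical
# example, not the UW group's language): A + B -> C at rate k1, C -> A at k2.
species = ["A", "B", "C"]
reactions = [
    # (reactant stoichiometry, net change per firing, rate constant)
    ({"A": 1, "B": 1}, {"A": -1, "B": -1, "C": +1}, 1.0),  # A + B -> C
    ({"C": 1},         {"C": -1, "A": +1},          0.3),  # C -> A
]

def derivatives(conc):
    """Mass-action kinetics: each reaction proceeds at k times the product
    of its reactant concentrations."""
    d = {s: 0.0 for s in species}
    for reactants, change, k in reactions:
        rate = k
        for s, order in reactants.items():
            rate *= conc[s] ** order
        for s, delta in change.items():
            d[s] += delta * rate
    return d

# Integrate the resulting rate equations with a simple Euler scheme.
conc = {"A": 1.0, "B": 0.8, "C": 0.0}
dt, steps = 0.01, 1000
for _ in range(steps):
    d = derivatives(conc)
    conc = {s: conc[s] + dt * d[s] for s in species}

print({s: round(v, 3) for s, v in conc.items()})
```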
Currently, when a biologist or chemist makes a certain type of
molecular network, the engineering process is complex, cumbersome and
hard to repurpose for building other systems. The UW engineers wanted to
create a framework that gives scientists more flexibility. Seelig
likens this new approach to programming languages that tell a computer
what to do.
“I think this is appealing because it allows you to solve more than
one problem,” Seelig said. “If you want a computer to do something else,
you just reprogram it. This project is very similar in that we can tell
chemistry what to do.”
Humans and other organisms already have complex networks of
nano-sized molecules that help to regulate cells and keep the body in
check. Scientists now are finding ways to design synthetic systems that
behave like biological ones with the hope that synthetic molecules could
support the body’s natural functions. To that end, a system is needed
to create synthetic DNA molecules that vary according to their specific
functions.
The new approach isn’t ready to be applied in the medical field, but
future uses could include using this framework to make molecules that
self-assemble within cells and serve as “smart” sensors. These could be
embedded in a cell, then programmed to detect abnormalities and respond
as needed, perhaps by delivering drugs directly to those cells.
Seelig and colleague Eric Klavins, a UW associate professor of electrical engineering, recently received $2 million
from the National Science Foundation as part of a national initiative
to boost research in molecular programming. The new language will be
used to support that larger initiative, Seelig said.
Co-authors of the paper are Yuan-Jyue Chen, a UW doctoral student in
electrical engineering; David Soloveichik of the University of
California, San Francisco; Niranjan Srinivas at the California Institute
of Technology; and Neil Dalchau, Andrew Phillips and Luca Cardelli of
Microsoft Research.
The research was funded by the National Science Foundation, the
Burroughs Wellcome Fund and the National Centers for Systems Biology.
Google faces financial sanctions in France after failing to comply
with an order to alter how it stores and shares user data to conform to
the nation's privacy laws.
The enforcement follows an analysis led by European data protection
authorities of a new privacy policy that Google enacted in 2012,
France's privacy watchdog, the Commission Nationale de L'Informatique et des Libertes, said Friday on its website.
Google was ordered in June by the CNIL to comply with French data
protection laws within three months. But Google had not changed its policies by Friday's deadline; according to the CNIL, the company argued that France's data protection laws did not apply to users of certain Google services in France.
The company "has not implemented the requested changes," the CNIL said.
As a result, "the chair of the CNIL will now designate a rapporteur
for the purpose of initiating a formal procedure for imposing sanctions,
according to the provisions laid down in the French data protection
law," the watchdog said. Google could be fined a maximum of €150,000
($202,562), or €300,000 for a second offense, and could in some
circumstances be ordered to refrain from processing personal data in
certain ways for three months.
What bothers France
The CNIL took issue with several areas of Google's data policies, in
particular how the company stores and uses people's data. It also cited as areas of concern how Google informs users about the data it processes and how it obtains consent from users before storing tracking cookies.
In a statement, Google said that its privacy policy respects European
law. "We have engaged fully with the CNIL throughout this process, and
we'll continue to do so going forward," a spokeswoman said.
Google is also embroiled with European authorities in an antitrust
case for allegedly breaking competition rules. The company recently
submitted proposals to avoid fines in that case.
Google is turning 15 tomorrow and, fittingly, it’s celebrating the occasion
by announcing a couple of new features for Google Search. The mobile
search interface, for example, is about to get a bit of a redesign with
results that are clustered on cards “so you can focus on the answers
you’re looking for.”
Those
answers, Google today announced, are also getting better. Thanks to its
Knowledge Graph, the company continues to push to give users answers
instead of just links, and with today’s update, it’s now featuring the
ability to use the Knowledge Graph to compare things. If you want to
compare the nutritional value of olive oil to butter, for example,
Google Search will now give you a comparison chart with lots of details.
The same holds true for other things, including dog breeds and celestial
objects. Google says it plans to expand this feature to more things
over time.
Also new in this update is the ability to use Knowledge Graph to
filter results. Say you ask Google: “Tell me about Impressionist
artists.” Now, you’ll see who these artists are, and a new bar on top of
the results will allow you to dive in to learn more about them and to
switch to learn more about abstract art, for example.
On mobile, Google is now making it easier to use your voice to set
reminders and have those synced between devices. So you can say “Ok
Google, Remind me to buy butter at Safeway” on your Nexus tablet and
when you walk into the store with your iPhone, you’ll get that reminder.
To enable this, Google will roll out a new version of its Search app
for iPhone and iPad in the next few weeks.
With regard to notifications, it’s also worth noting that Google is now adding Google Now push notifications to its iPhone app, which will finally make Google Now useful on Apple’s platform.
IBM on Thursday announced a new computer programming framework that
draws inspiration from the way the human brain receives data, processes
it, and instructs the body to act upon it while requiring relatively
tiny amounts of energy to do so.
"Dramatically different from traditional software, IBM's new programming
model breaks the mold of sequential operation underlying today's von
Neumann architectures and computers. It is instead tailored for a new
class of distributed, highly interconnected, asynchronous, parallel,
large-scale cognitive computing architectures," IBM said in a statement
introducing recent advances made by its Systems of Neuromorphic Adaptive
Plastic Scalable Electronics (SyNAPSE) project.
IBM and research partners Cornell University and iniLabs have completed
the second phase of the approximately $53 million project. With $12
million in new funding from the Defense Advanced Research Projects
Agency (DARPA), IBM said work is set to commence on Phase 3, which will
involve an ambitious plan to develop intelligent sensor networks built
on a "brain-inspired chip architecture" using a "scalable,
interconnected, configurable network of 'neurosynaptic cores'."
"Architectures and programs are closely intertwined and a new
architecture necessitates a new programming paradigm," Dr. Dharmendra
Modha, principal investigator and senior manager, IBM Research, said in a statement.
"We are working to create a FORTRAN for synaptic computing chips. While
complementing today's computers, this will bring forth a fundamentally
new technological capability in terms of programming and applying
emerging learning systems."
Going forward, work on the project will focus on honing a programming
language for the SyNAPSE chip architecture first shown by IBM in 2011,
with an agenda of using the new framework to deal with "big data"
problems more efficiently.
IBM listed the following tools and systems it has developed with its partners towards this end:
Simulator: A multi-threaded, massively parallel and highly
scalable functional software simulator of a cognitive computing
architecture comprising a network of neurosynaptic cores.
Neuron Model: A simple, digital, highly parameterized spiking
neuron model that forms a fundamental information processing unit of
brain-like computation and supports a wide range of deterministic and
stochastic neural computations, codes, and behaviors. A network of such
neurons can sense, remember, and act upon a variety of spatio-temporal,
multi-modal environmental stimuli.
Programming Model: A high-level description of a "program"
that is based on composable, reusable building blocks called "corelets."
Each corelet represents a complete blueprint of a network of
neurosynaptic cores that specifies a base-level function. Inner
workings of a corelet are hidden so that only its external inputs and
outputs are exposed to other programmers, who can concentrate on what
the corelet does rather than how it does it. Corelets can be combined to
produce new corelets that are larger, more complex, or have added
functionality (a rough, illustrative sketch of this composition idea follows the list).
Library: A cognitive system store containing designs and
implementations of consistent, parameterized, large-scale algorithms and
applications that link massively parallel, multi-modal, spatio-temporal
sensors and actuators together in real-time. In less than a year, the
IBM researchers have designed and stored over 150 corelets in the
program library.
Laboratory: A novel teaching curriculum that spans the
architecture, neuron specification, chip simulator, programming
language, application library and prototype design models. It also
includes an end-to-end software environment that can be used to create
corelets, access the library, experiment with a variety of programs on
the simulator, connect the simulator inputs/outputs to
sensors/actuators, build systems, and visualize/debug the results.
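Purely as an illustration of the ideas in the Neuron Model and Programming Model items above, and not IBM's actual corelet syntax (which this announcement does not show), here is a hypothetical sketch: a parameterized leaky integrate-and-fire neuron wrapped in a corelet-like object that hides its internals and exposes only inputs and outputs.

```python
from dataclasses import dataclass, field

@dataclass
class LIFNeuron:
    """A simple, parameterized leaky integrate-and-fire neuron (illustrative only)."""
    threshold: float = 1.0  # membrane potential at which the neuron spikes
    leak: float = 0.9       # fraction of potential retained each time step
    potential: float = field(default=0.0, repr=False)

    def tick(self, input_current: float) -> bool:
        """Advance one time step; return True if the neuron spikes."""
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after a spike
            return True
        return False

class Corelet:
    """Corelet-like wrapper (hypothetical): internals hidden, only I/O exposed."""
    def __init__(self, neurons):
        self._neurons = neurons  # hidden from whoever composes this corelet

    def step(self, inputs):
        """Feed one input per neuron; return the list of output spikes."""
        return [n.tick(x) for n, x in zip(self._neurons, inputs)]

# Compose corelets into a larger one: the first stage's spikes drive the second.
front = Corelet([LIFNeuron(threshold=1.0), LIFNeuron(threshold=0.5)])
back = Corelet([LIFNeuron(threshold=1.0, leak=0.8)])

for t in range(5):
    spikes = front.step([0.4, 0.4])
    print(t, spikes, back.step([sum(spikes) * 0.6]))
```

The point of the wrapper is the same one IBM describes: whoever composes corelets sees only what each one does, not how it does it.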
It’s been quite a year of surprises from Google. Before the company’s
annual developer conference in May, we anticipated at least an
incremental version of Android to hit the scene. Instead, we
encountered a different game plan—Google
not only started offering stock features like its keyboard as separate,
downloadable apps for other Android handset users, but it’s also
offering stock Android versions of non-Nexus-branded hardware like Samsung's Galaxy S4 and the HTC One
in the Google Play store. So if you’d rather not deal with OEM overlays
and carrier restrictions, you can plop down some cash and purchase
unlocked, untainted Android hardware.
But the OEM-tied handsets aren't all bad. Sometimes the
manufacturer’s Android offerings tack on a little extra something to the
device that stock or Nexus Android hardware might not. These perks
include things like software improvements and hardware
enhancements—sometimes even thoughtful little extra touches. We’ll take a
look at four of the major manufacturer overlays available right now to
compare how they stack up to stock Android. Sometimes the differences
are obvious, especially when it comes to the interface and user
experience. You may be wondering what benefit there is to sticking with a manufacturer's skin. The reasons for doing so can be very compelling.
A brief history of OEM interfaces
Why do OEM overlays happen in the first place? iOS and Windows Phone 8
don’t have to deal with this nonsense, so what's the deal with Android?
Well, Android was unveiled in 2007 alongside the Open Handset Alliance
(a consortium of hardware makers, software companies, and carriers formed to advance open standards for mobile devices). The mission was to keep the
operating system open and accessible to all so users could mostly do
whatever they wanted to do with it. As Samsung VP of Product Planning
and Marketing Nick DiCarlo told Gizmodo,
“Google has induced a system where some of the world's largest
companies—the biggest handset manufacturer and a bunch of other really
big ones—are also investing huge money behind their ecosystem. It's a
really powerful and honestly pretty brilliant business model.”
The main issue with all of these different companies using the same
software for their hardware is one of differentiation—how does Samsung
or HTC or LG or Sony make an Android phone that doesn't have the same
look, feel, and features as the competition's similarly specced phones?
Putting their own skins, software, and services on top of Android gives
them access to the good parts of Google's ecosystem (in most cases, the
Google Play store and the surrounding software ecosystem) while
theoretically helping them stand out from other phones on the shelf.
These skins haven't exactly been received with open arms. For many
enthusiasts, skinned Android sometimes means that outstanding hardware
is bogged down by all these extra offerings that manufacturers think
will make their handset more appealing. But the overlays—or skins, as
they’re often referred to—usually change the way the interface looks and
acts. Sometimes they introduce new features that don't already exist on
Android.
Samsung's TouchWiz overlay in its beginning stages.
For Samsung, its Android interface domination began with the Samsung
Behold II, which ran the first incarnation of Samsung's TouchWiz
Android UI. The name previously referred to Samsung's own proprietary
operating system for its phones. Reviewers weren't too excited about
Samsung's iteration of the Android interface, with sites like CNET writing
that the “TouchWiz interface doesn't really add much to the user
experience and in fact, at times, hinders it." Sometimes it still feels
that way.
Samsung is notorious for packing in a breadth of features, but its
aesthetics are often lacking. But again, that’s subjective, and it all
comes down to the user. I’ve been using the last version of TouchWiz on a
Galaxy S III. While I’ve had some instances where it was frustrating,
I’ve come to appreciate some of the extra perks that shine through.
HTC's Sense UI back in the day
HTC's Android skin is called the Sense UI, and it was introduced in
2009. I’ve had some experience with it in my earlier Android days when I
had an HTC Incredible, but it’s come a long way since then. According
to Gizmodo,
the interface's roots go back to 2007, when HTC had to alter its
software for Windows Mobile to make it touch-sensitive so that it
wouldn't just be limited to stylus input. It was eventually ported over
to the HTC Hero, which was also the first Android device to feature a
manufacturer’s interface overlay.
As for LG and Sony, their interfaces have less storied histories (and
less prominent branding). Each borrows moves from what the two major
players do and then implements the ideas a little better or a little
worse. It’s an interesting dynamic, but the big theme here is choice:
there is so much to choose from when you’re an Android user that it can
be overwhelming if you’re not entirely sure of where to go next. Brand
loyalty and past experiences can only go so far, as the constant stream
of updates and releases means manufacturers seek new directions nonstop.
Each manufacturer puts some flair on its version of Android.
Samsung's TouchWiz Nature UX 2.0, for instance, features a bubble blue
interface with bright, vibrant colors and drop shadows to accompany
every icon. LG’s Optimus UI uses... a similar aesthetic. But both
interfaces allow you to customize the font style and size from within
the Settings menu, even if the end product could ultimately end up as a
garish-looking interface.
Among the selection of manufacturers, Sony and HTC have been the most
successful in designing an Android interface that complements the
chassis on their respective flagship devices. HTC's Sense UI has always
been one of my favorites for its overall sleekness and simplicity.
Though it's not as barebones as the stock Android interface, Sense 5.0
now sports a thin, narrow font with modern-looking iconography that
pairs well with its latest handset, the HTC One.
Sony's interface doesn't have an official name, but it can be referred to as the Xperia interface.
Sony's interface is not only pleasing to use, it also matches the
general design philosophy that the company has maintained across its product line. Though it has no official alias, the Xperia
interface showcases clean lines and extra offerings that don’t
completely sour the overall user experience.
In this comparison, we're taking a look at recent phones from all of
the manufacturers: a Nexus 4 and a Samsung Galaxy S 4 equipped with
Android 4.2.2. The HTC One, LG Optimus G Pro, and Sony Xperia Z are all
still on Android 4.1.2. We were unable to actually get any hands-on time
with the latest Android 4.2 update to the HTC One.
We didn't include Motorola in the gang because the company is
undergoing a massive makeover right now. Plus, its last flagship handset
was the Droid Razr Maxx HD, which debuted back in October and has
relatively dated specifications. We have some high hopes for what might
be in store for Motorola’s future, especially with Google’s immediate
backing, but we’re waiting to see what’s to come of that acquisition
later this year.
Lock screens, home screens, and settings, too
Home screens and lock screens are perhaps the most important elements of a user interface because they're what the user will deal with the
most. Think about it: every time you turn on your phone, you see the
lock screen. We need to consider how many swipes it takes to get to the
thing you want to do from the time you unlock your phone to the next
executable action. (And hopefully there are some shortcuts that we can
implement along the way to save time.)
The lock screen on the stock version of Android 4.2.2 Jelly Bean.
On stock Android 4.2.2 Jelly Bean, Google included a
plethora of options for users to interact with their phone right from
the lock screen. The Jelly Bean update that hit late last year added
lock screen widgets and quick, swipe-over access to the camera
application. The lock screen can also be disengaged entirely if you
would rather have instant access to your applications by hitting the
power button.
The home screen on the Galaxy S 4.
The lock screen on Samsung's TouchWiz.
In TouchWiz, Samsung kept the ability to add multiple widgets to the
lock screen, including one that provides one-touch access to favorite
applications and the ability to swipe over to the camera app. You can
also add a clock or a personal message to the lock screen. TouchWiz even
takes it a step further through the use of wake-up commands, which let
you check for missed calls and messages by simply uttering a phrase at
the lock screen. To actually unlock the device, you can choose to swipe
your finger across the lock screen or take advantage of common Android
features like Face unlock.
TouchWiz Easy mode is easy to set up...
...and still offers access to all apps.
As for the home screen, Samsung continues to offer the standard Android
experience here, except that it tacks on a few extra perks. Hitting the
Menu button on the home screen will bring up extra options, like the
ability to create a folder or easily edit and remove apps from a
particular page. There's also an "Easy" home screen mode, which dials
down the interface to a scant few options for those users with limited
smartphone experience—like your technophobic parents, for instance. Easy
mode will display bigger buttons and limit the interface to three home
screens, though basic app functionality remains.
Sense 5's home screen.
Sense 5's lock screen.
The HTC One just recently received an update for Android 4.2.2,
though we didn't receive it in time to update our unit for this article.
Regardless, HTC's Sense 5 is a far cry from its interface of yore—but
that’s not a bad thing. It features a flatter, more condensed design
with a new font that makes it appear more futuristic than other
interfaces.
And don't forget to choose your lock screen.
Choose your home screen—any home screen.
Rather than implement a lock screen widget feature, HTC lets users
choose between five different lock screen modes, including a Photo album
mode, music mode, and productivity mode, which lets you glance at your
notifications.
HTC Sense's BlinkFeed aggregates information for at-a-glance viewing.
Sense’s home screen can be a bit of a wreck if you’re looking for
something simplistic. Its BlinkFeed feature takes up an entire page.
Though it’s meant for aggregating news sites and social networks that
you set up yourself, you can’t link up your favorite RSS feeds. By
default, it’s your home page when you unlock the phone. You can change
the home page in Sense 5 so that you don’t have to actually use the
feature, but it can’t be entirely removed.
The home screen.
The lock screen for the Optimus UI.
LG's lock screen is just as busy and cluttered as Samsung's, with a
Settings menu that's almost as disjointed. Like Samsung's, it offers
different unlock screen effects, varying clocks and shortcuts, and the
ability to display owner information in case you lose your phone. As for
the home screen, Optimus UI offers a home backup and restore option, as
well as the ability to display the home screen in constant Portrait
view.
Sony's home screen.
Sony's Xperia Z handset is currently limited to Android 4.1.2. Its
settings are laid out like stock Android, but there is no ability to add
widgets or different clocks to the lock screen. We'd expect this sort
of thing to become available when the phone is updated to Android 4.2.
Notifications
Samsung's TouchWiz notifications panel.
In the latest version of Jelly Bean, Google introduced Quick
Settings, meant to provide quick access to frequently used settings.
This is one area where the OEMs were ahead of Google—many of the
interfaces have integrated at least a few different quick settings
options for a while now.
LG's crowded notifications panel.
Samsung's TouchWiz has been the most successful in implementing these
features. You can scroll through a carousel of options or display them
as a grid by pressing a button. LG’s Optimus UI borrows this same idea
but overcrowds the Notifications panel with extra features like Qslide
(more on this later). Even on the Optimus G Pro’s 5.5-inch display, the
Quick Settings panel feels too congested to quickly find what you want
without glancing over everything else first.
Sense UI's Notifications panel.
Xperia Z's Notification panel.
Sony and HTC kept it simple with their Notifications shade for the
most part. Our HTC One and Xperia Z are running Android 4.1.2, so
there's no built-in Quick Settings panel to take advantage of. Neither
OEM implements its own version of the feature. The One's Android 4.2.2
update reportedly adds quick settings, and we assume that the same will
be true of the Xperia Z when it gets its update.
App drawers
Android 4.2.2 stock app drawer.
Sony's Xperia Z app drawer.
The only OEM overlay that keeps things as simple and straightforward as stock Android is Sony's. You can categorize how you want to display
apps and in what order, but beyond that there's not much else between
you and your applications. You can uninstall applications by
long-pressing them and dragging them to the "remove" icon that appears
rather than dragging them to one of the home screens.
LG's app drawer.
Samsung's app drawer.
That's not to say what Samsung and LG provide isn't user friendly.
Both manufacturers offer additional options once you head into the
applications drawer. LG enables you to sort alphabetically or by
download date, and you can increase icon size and change the wallpaper
just for the application menu. You can also choose which applications to
hide in case you would rather not be reminded of all the bloatware that
sometimes comes with handsets.
As for the Galaxy S 4, Samsung offers a quick hit button for the Play
Store. You can view your apps by category or by type, and you can share
applications, which essentially advertises the Play Store link to
various social networks.
Sense 5's app drawer.
HTC's interface falls short when it comes to the application launcher. As Phil Nickinson from Android Central put it,
the HTC Sense 5 application drawer makes simple look complicated. The
grid size, for instance, appears narrower and takes up precious space.
Rather than make good use of the screen resolution, HTC displays the
apps in a 3-by-4 icon grid by default, with the weather and clock widget
taking up a huge chunk at the top. You also can't long-press on an
application to place it on a home screen. Instead, you have to drag it
up to the top left corner and then select the shortcut button to finally
place it on the home screen—apparently, this feature is fixed in
the HTC One’s Android 4.2.2 update (but we've yet to try it ourselves).
It’s a bit of an ordeal to make the app drawer feel "normal" as defined
by the rest of the Android OEMs. In general, we feel like Sense takes
the most work to achieve some level of comfort.
Dialers
LG's Optimus UI includes a setting that lets you switch the side the
dialer is on so that it's easier to use with just one thumb.
The Optimus UI offers an information-dense Dialer app. You can sift
through logs, mark certain contacts as favorites, and browse
your contacts right from within the app. LG offers an option to make the
dialer easy to use one-handed.
HTC's dialer application.
Although the Sense UI’s dialer layout feels a bit cramped, you can
still bring up a contact from your address book by simply typing in a
few letters of their name. You can cycle through your favorites, your
contacts, and different groups. However, it would be more convenient if
the Contacts screen worked like a carousel rather than a back-and-forth
type of menu screen. Extra dialer settings also take up quite a bit of
space at the top of the screen rather than being nested in menus as they
are in other phones. Each screen in the Dialer has a different set of
settings, which can be a bit confusing. Sense UI’s dialer app—and its
overall interface—has a bit of a learning curve to it, but at least the
aesthetic is nice.
The TouchWiz dialer on a Galaxy S 4 running Android 4.2.2.
The TouchWiz dialer on a Galaxy S III running Android 4.1.2.
Samsung’s Dialer interface also has a favorites list and a separate
tab for the keypad, but otherwise It’s much more simplistic looking than
the rest of TouchWiz. Sony kept the Dialer relatively untouched too,
adding just an extra tab for favorites.
The stock Android dialer app.
In the end, Jelly Bean has the best, cleanest-looking dialer
application. While the extra categories are a good idea for users with a
hefty number of contacts and groups of people to compartmentalize,
sometimes minimizing the number of options is better—especially in the
case of an operating system that is wholly barebones to begin with.
Apps, apps, apps
Google apps as far as the eye can see.
Google packs up a suite of applications with the stock version of
Android to perfectly complement the company's offerings. On the Nexus 4
and the Google Play editions of the Samsung Galaxy S 4 and HTC One,
you'll encounter applications like Google Calendar, Mail, Currents,
Chrome, and Earth. Not all of the manufacturer’s handsets come with this
whole set of apps out of the box. The regular edition of the Galaxy S
4, for instance, doesn't include Google Earth or Currents, but it does
have the Gmail and Maps application. Some carriers will package up
handsets with their own suite of apps, and carriers might also include
things like a backup application or an app that lets you check on your
minutes and data usage.
For the most part, however, these handsets do include their own
versions of calendar and mail apps (though the Google Calendar app is
available in the Play store, the non-Gmail email client isn't). They
have camera apps with software tweaks that are compatible with the
hardware contained on the device. Some handsets also have a bunch of
extra features just because; Samsung is especially fond of this.
Camera applications
The stock camera app interface on the Google Play edition of the HTC One.
Andrew Cunningham
Perhaps the biggest differentiator between interfaces is the camera
features and controls. We'll start with the HTC One, with its Ultrapixel
camera and myriad features. You can use things like Zoe to stitch
together several still images and create a Thrasher-like action
photo or just combine two slightly mediocre photos to make one worthy of
sharing on social networks. The Ultrapixel camera also automatically
adjusts exposure and the like to produce a fine-looking photo, as we
found out when we tested it out in our comparative review of the Google Play edition of the HTC One and the standard version.
The stock camera interface for the Nexus 4.
Andrew Cunningham
The stock Android camera application isn't totally devoid of
features, however. Android 4.2.2's new camera UI has scroll-up controls
that make it easy to quickly switch between things like Exposure and
Scene settings. And while it could certainly use a little oomph
like HTC introduced with its camera application, there is such a thing
as too much good stuff—especially when you look at what LG and Samsung
have crammed into their camera interfaces.
So many different camera modes to choose from on the Galaxy S 4.
Samsung's TouchWiz-provided camera application is a bit of a mess. On
the Galaxy S 4, you'll have to deal with buttons and features
splattered all over the place. There's a Mode button that lets you cycle
between the 12 different camera modes—things like Panorama, HDR, Beauty
face (which enhances the facial features of your subject), Sound &
shot (which shoots a photo and then some audio to accompany it), and
Drama (which works a little like HTC's Zoe feature). Those camera
options are all fun to use from time to time, and you can change the
menu screen from carousel to grid view so that you're not too
overwhelmed by the breadth of options. But there are still so many
buttons lining the sides of the viewfinder on the display.
There's also a general Settings button in the bottom corner that will
expand out with more icons, taking up even more room on the screen.
Below that are two buttons for switching between the front- and
rear-facing cameras, as well as the ability to use the dual-camera
functionality while snapping a photo. TouchWiz is great at offering a
bunch of choices, but it can get a bit exhausting. If there were a more
concise way of making these available, it would make the Camera
application less intimidating to use.
I'm not entirely sure what all of the symbols in LG's camera application stand for.
LG's Optimus UI camera application isn't any better. Rather than
allowing the entire screen to be used as a viewfinder with icons that
lie on top of it, as with TouchWiz, the menu options take up the top
and bottom third of the screen. It’s nice that LG offers a little symbol
in the preview window to let you know how your battery life is doing
and which mode you’re shooting in, but figuring out what symbol does
what takes a bit of time. At least with Samsung’s large catalog of
offerings, there’s an explicit description about what each camera
function does.
Sony's camera interface is easier to use, but the preview window is still surrounded by buttons and things.
Sony’s camera app is a bit easier to navigate. It has found the right
medium between simplicity and feature-filled functionality.
Unfortunately for its hardware, that doesn't translate over to how well
the camera actually performs. But the interface is something that other
OEMs should strive for: a straightforward, easy-to-use preview window
that’s just what Goldilocks was looking for.
Where the camera application really matters is with handsets like the
HTC One. That extra bit of software that HTC packs up with its One is
essential for its camera functionality to operate at its prime. As our
own Andrew Cunningham put it in his review of the Google Play edition of the HTC One:
...there are the HTC-specific features that the
"ultrapixel" camera on the [GPe] One lacks, namely Zoe (which can stitch
together several still images to convey action), the ability to stitch
together "highlight movies" from short videos on your phone, and pretty
much any feature that lets you combine two unsatisfactory photos to get
one satisfactory one (like Always Smile or Object Removal). These have
been replaced by a slightly tweaked version of the stock Android camera,
which we assume will make it to the Nexus phones and tablets in the
next release of Android.
Additionally, the Google Play edition One had some image quality
issues when it shot in automatic shooting mode. The standard One—which
is fueled by HTC's Sense 5—can adjust its exposure based on its
surroundings. The Google Play edition—the stock Android 4.2.2
version—doesn't have those same software tweaks in its camera
application.
In some ways, this is actually the best argument for why you would
consider an OEM-tied Android handset over an unlocked, stock one: the
software has been tweaked to work best with that specific handset's
internals.
Calendars
In May, Google Calendar got a makeover, along with color-coding to differentiate the days and events of each calendar. The
new interface was particularly focused on streamlining the design
aesthetic across all Google applications, and while the update didn't
introduce too many new features, it did make Google's Calendar app a
little more palatable.
Samsung's TouchWiz calendar application.
Samsung's calendar application is not the prettiest thing to look at,
but it's certainly feature-filled. Users can switch between six
different calendar view modes and four different view styles, including
the ability to view the calendar in a list or pop-up form. There are
also a number of minor settings that you can individually adjust,
including the ability to select what day your week should start on.
Sense 5's calendar app.
HTC's Sense uses its own proprietary calendar application, too. You
can sync your accounts, choose the first day of the week, set the
default time zone (and another if you travel frequently), and even
display the weather within the calendar app. At the bottom of the
screen, Sense will show upcoming events at a glance, and you can tap to
add more throughout the day. You can also sort your calendar by meeting
invitations.
LG's calendar app.
Sony's calendar app.
LG's Optimus UI employs a similar interface to Sense UI’s with the
calendar view at the top and tasks for meeting invitations available at
the bottom. Still, it doesn't feel as informative as what stock Android
and Samsung are putting forward. Sony provides a Calendar app that uses a
similar icon to Google's stock app, and while the interface looks the
same, it doesn't have the color-coding abilities of Google Cal.
Mail
The Nexus 4 and Google Play edition handsets come with their own
suite of Google-branded applications, including two e-mail apps: Gmail
and a nondescript Email app. While Gmail is a little more
feature-filled, with the option to use things like Priority inbox, the
Email app's interface appears a little barren. You can add Exchange,
Yahoo!, Hotmail, and other POP3/IMAP accounts to it or add your Gmail
account to keep them synced up in one app. However, with the app's
straightforward nature, there's not much else to it. Mail applications
offered by other manufacturers don't veer too far from what's here,
though. They essentially offer the same basic functionality and settings
across the board.
...the server settings are annoyingly featured at the end.
From one interface to the other...
Annoyingly, most of the Mail apps, like Samsung's and LG's, bury
server settings at the very end of the settings panel. But all of them
take a page out of stock Android's book by providing a combined inbox
view, which is especially helpful if you're juggling a myriad of
different accounts.
Samsung features a combined inbox view, just like stock Android's.
Stock Android's combined inbox view.
HTC's e-mail application has the same design as its other OEM-offered
proprietary applications, but it's much easier to navigate than other
apps. The Settings button resides at the top of the page with a bunch of
options, including the ability to add an account or set an "out of
office" message. From there, you can even access additional settings.
HTC also provides a couple of different sync settings, for instance, to
help preserve battery life.
Sony's Mail app is so simple.
Sony's Mail application is nice and easy as well. You can hit the
settings button in the bottom right corner to mark a message as unread,
star it, move it to another folder, or access the general settings
panel.
Other apps
With each OEM overlay comes a whole set of applications that could
prove to be useful in the end. Except Samsung's, that is—the
manufacturer has bundled a huge set of features and capabilities that
feel redundant and might even suck the life out of your battery if
you're not careful.
Samsung includes its own app store with TouchWiz.
The Galaxy S 4 comes preloaded with a bunch of applications,
including Samsung's own app store, entertainment hub, and an app that
enables you to access content on your phone from a desktop computer. The
only perk of signing up for a Samsung account and going that route is
being able to track where your phone is in case you lose or misplace it.
Samsung can back up phone data too.
But there's more where this came from. Samsung includes a remote
control application for your television called Samsung WatchON, but it's
only compatible if you have an active cable or satellite television
subscription. There's also S Health, which helps you manage your
lifestyle and well-being. And if you're a hands-free type of user, you
can take advantage of gesture-based functionality like Air gestures or
enable the screen to keep track of your eye movement.
It will pop up in a desktop-like window on your screen.
Select the "small app" you want to use from the running apps screen.
Samsung isn't the only offender when it comes to stuffing
applications and features onto its flagship handsets, however. Sony
packs in its own music and movie stores, and it has a Walkman
music-playing application alongside Google Music. When you hold down the
Menu button, there's a row of "small apps" that gives you quick access
to things like the browser, a calculator, and a timer. We covered these
briefly in the Xperia Tablet Z review, but they work the same on the
Xperia Z handset: once you launch the "small" app, the app will appear
in a pop out screen on top of the interface. It's kind of like
multitasking.
The Qslide multitasking app lets you do things like take notes while you're doing other stuff.
LG, on the other hand, lets the carrier pack it with apps. LG then
includes a multitasking feature called Qslide. You can choose from
several Qslide-compatible applications that pop up over an open
application in order to do things like leave a note and use the
calculator. It also has a setting that turns off the screen when you're
not looking.
Across all of these phones, there are a great number of features and apps to adapt to. While I don't find things like Samsung's Air gestures
and Smart scroll gimmicky, it can get frustrating to venture into the
apps drawer only to find it crowded with icons of applications that you
will never use. Some of these features and apps are things you'd never
find packed up with the stock version of Android—because they're not
always necessary.
The future
We're still waiting to hear about Android 4.3 and what it will bring
to the mobile platform. Every time Google launches an update, you can
bet that the manufacturers will follow suit with their interfaces (you
know, eventually). That's what causes the biggest conundrum for
Android users. I had things like Quick Settings already available on my
Galaxy S III before Google natively implemented them into Android 4.2
Jelly Bean, but for the OEMs that didn't build their own version of this
feature, I'm constantly at the mercy of my carrier and the
manufacturer. They dictate when I'll receive my update for the latest
version of Jelly Bean.
The biggest gripe about OEM overlays is that each company is selling its own brand of Android rather than Google's. Remember the
Android update alliance? That didn't work out too well in the end.
Carriers and hardware makers aren't keeping their promises, and as that
trickles down to the consumer, it eventually confuses the public. As
Google attempts to implement interface and performance standards,
manufacturers will go ahead and hire a team to make Android look
virtually unrecognizable. Samsung's app and media store sort of feels
like an insult at times, but it knows that it doesn't have all the clout
to make strides with its own mobile operating system.
There is a silver lining to all this, at least for purists: why
should you even bother with OEM interfaces when you can now purchase two
of the most popular handsets with stock Android on them? Google has
already said that it will be working with the providers of Google Play
edition phones to provide timely updates, so you certainly don't have to
worry about fragmentation or waiting around to get the latest version
of Android.
The OEMs have provided several different experiences for both
hardcore and novice Android users alike, which has only contributed to
the proliferation of the platform. Here at Ars, we prefer the stock
version of Android on a Google-backed handset like the Nexus 4 and
Google Play editions. Even if they're not chock full of perks and
applications, they'll receive the most timely updates from Android
headquarters, and their interfaces are mostly free of the cruft you get
from the OEMs. For many consumers, it might not matter when Google
chooses to update the phone, but for us, we like to know that Google is
pushing through the software updates without any setbacks.
In the end, choosing an OEM-branded version of Android means that
you're a prisoner of that manufacturer's timeline—an especially
unfortunate situation when that manufacturer decides to stop supporting
software updates altogether. We've said it time and time again—in the
end, it's really your experience that will determine which interface
suits you best. So as far as the future of Android goes, it's not just
in Google's hands.
Fed up with the NSA’s infringement of privacy, an internet user by the name of Sang Mun has developed a font which cannot be read by computers.
Called ‘ZXX’, after the code used by the Library of Congress to indicate that a document has “no linguistic content”, the font is distorted in such a way that computers using Optical Character Recognition (OCR) will not be able to read it.
Available in four “disguises”, this font uses camouflage
techniques to trick the computers of governments and corporations into
thinking that no useful information can be collected from people, while
remaining readable to the human eye.
The font developer urges users to fight against this infringement of privacy, and has made this font free for all users on his website.
Steve Swanson was a typical 21-year-old computer nerd with a
very atypical job. It was the summer of 1989, and he’d just earned a
math degree from the College of Charleston. He tended toward T-shirts
and flip-flops and liked Star Trek: The Next Generation. He
also spent most of his time in the garage of his college statistics
professor, Jim Hawkes, programming algorithms for what would become the
world’s first high-frequency trading firm, Automated Trading Desk.
Hawkes had hit on an idea to make money on the stock market using
predictive formulas designed by his friend David Whitcomb, who taught
finance at Rutgers University. It was Swanson’s job to turn Whitcomb’s
formulas into computer code. By tapping market data beamed in through a
satellite dish bolted to the roof of Hawkes’s garage, the system could
predict stock prices 30 to 60 seconds into the future and automatically
jump in and out of trades. They named it BORG, which stood for Brokered
Order Routing Gateway. It was also a reference to the evil alien race in
Star Trek that absorbed entire species into its cybernetic hive mind.
Among
the BORG’s first prey were the market makers on the floors of the
exchanges who manually posted offers to buy and sell stocks with
handwritten tickets. Not only did ATD have a better idea of where prices
were headed, it executed trades within one second—a snail’s pace by
today’s standards, but far faster than what anyone else was doing then.
Whenever a stock’s price changed, ATD’s computers would trade on the
offers humans had entered in the exchange’s order book before they could
adjust them, and then moments later either buy or sell the shares back
to them at the correct price. Bernie Madoff’s firm was then Nasdaq’s (NDAQ)
largest market maker. “Madoff hated us,” says Whitcomb. “We ate his
lunch in those days.” On average, ATD made less than a penny on every
share it traded, but it was trading hundreds of millions of shares a
day. Eventually the firm moved out of Hawkes’s garage and into a
$36 million modernist campus on the swampy outskirts of Charleston,
S.C., some 650 miles from Wall Street.
By 2006 the firm traded
between 700 million and 800 million shares a day, accounting for upwards
of 9 percent of all stock market volume in the U.S. And it wasn’t alone
anymore. A handful of other big electronic trading firms such as Getco,
Knight Capital Group, and Citadel were on the scene, having grown out
of the trading floors of the mercantile and futures exchanges in Chicago
and the stock exchanges in New York. High-frequency trading was
becoming more pervasive.
The definition of HFT varies, depending on whom you ask. Essentially,
it’s the use of automated strategies to churn through large volumes of
orders in fractions of seconds. Some firms can trade in microseconds.
(Usually, these shops are trading for themselves rather than clients.)
And HFT isn’t just for stocks: Speed traders have made inroads in
futures, fixed income, and foreign currencies. Options, not so much.
Back in 2007, traditional trading firms were rushing to automate. That year, Citigroup (C)
bought ATD for $680 million. Swanson, then 40, was named head of Citi’s
entire electronic stock trading operation and charged with integrating
ATD’s systems into the bank globally.
By 2010, HFT accounted for
more than 60 percent of all U.S. equity volume and seemed positioned to
swallow the rest. Swanson, tired of Citi’s bureaucracy, left, and in
mid-2011 opened his own HFT firm. The private equity firm Technology
Crossover Ventures gave him tens of millions to open a trading shop,
which he called Eladian Partners. If things went well, TCV would kick in
another multimillion-dollar round in 2012. But things didn’t go well.
For
the first time since its inception, high-frequency trading, the bogey
machine of the markets, is in retreat. According to estimates from
Rosenblatt Securities, as much as two-thirds of all stock trades in the
U.S. from 2008 to 2011 were executed by high-frequency firms; today it’s
about half. In 2009, high-frequency traders moved about 3.25 billion
shares a day. In 2012, it was 1.6 billion a day. Speed traders aren’t
just trading fewer shares, they’re making less money on each trade.
Average profits have fallen from about a tenth of a penny per share to a
twentieth of a penny.
According to Rosenblatt, in 2009 the entire HFT industry
made around $5 billion trading stocks. Last year it made closer to
$1 billion. By comparison, JPMorgan Chase (JPM)
earned more than six times that in the first quarter of this year. The
“profits have collapsed,” says Mark Gorton, the founder of Tower
Research Capital, one of the largest and fastest high-frequency trading
firms. “The easy money’s gone. We’re doing more things better than ever
before and making less money doing it.”
“The margins on trades
have gotten to the point where it’s not even paying the bills for a lot
of firms,” says Raj Fernando, chief executive officer and founder of
Chopper Trading, a large firm in Chicago that uses high-frequency
strategies. “No one’s laughing while running to the bank now, that’s for
sure.” A number of high-frequency shops have shut down in the past
year. According to Fernando, many asked Chopper to buy them before going
out of business. He declined in every instance.
One of HFT’s objectives has always been to make the market more
efficient. Speed traders have done such an excellent job of wringing
waste out of buying and selling stocks that they’re having a hard time
making money themselves. HFT also lacks the two things it needs the
most: trading volume and price volatility. Compared with the deep,
choppy waters of 2009 and 2010, the stock market is now a shallow,
placid pool. Trading volumes in U.S. equities are around 6 billion
shares a day, roughly where they were in 2006. Volatility, a measure of
the extent to which a share’s price jumps around, is about half what it
was a few years ago. By seeking out price disparities across assets and
exchanges, speed traders ensure that when things do get out of whack,
they’re quickly brought back into harmony. As a result, they tamp down
volatility, suffocating their two most common strategies: market making
and statistical arbitrage.
Market-making firms facilitate trading by quoting both a price to buy (the bid) and a price to sell (the ask). They profit off the spread between the two, which these days is rarely more than a penny per share, so they rely on volume to make
money. Arbitrage firms take advantage of small price differences between
related assets. If shares of Apple (AAPL)
are trading for slightly different prices across any of the 13 U.S.
stock exchanges, HFT firms will buy the cheaper shares or sell the more
expensive ones. The more prices change, the more chances there are for
disparities to ripple through the market. As things have calmed,
arbitrage trading has become less profitable.
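To see the mechanics concretely, here is a toy sketch in Python of the cross-exchange arbitrage described above. It is purely illustrative, not any firm’s actual strategy: the venues, prices, and function names are invented. Given a snapshot of quotes for one stock on several exchanges, it flags any pair of venues where the best bid on one exceeds the best ask on another.

from dataclasses import dataclass

@dataclass
class Quote:
    exchange: str
    bid: float   # highest price a buyer on this venue is showing
    ask: float   # lowest price a seller on this venue is showing

def find_arbitrage(quotes):
    """Return (buy_venue, sell_venue, edge_per_share) tuples where a share
    could be bought on one exchange and sold on another for more."""
    opportunities = []
    for buy in quotes:
        for sell in quotes:
            if buy.exchange == sell.exchange:
                continue
            edge = sell.bid - buy.ask   # buy at the lower ask, sell at the higher bid
            if edge > 0:
                opportunities.append((buy.exchange, sell.exchange, round(edge, 4)))
    return opportunities

# Hypothetical snapshot of one stock quoted on three venues (prices invented).
snapshot = [
    Quote("NYSE",   bid=449.98, ask=450.00),
    Quote("Nasdaq", bid=450.01, ask=450.03),
    Quote("BATS",   bid=449.97, ask=449.99),
]

for buy_on, sell_on, edge in find_arbitrage(snapshot):
    print(f"buy on {buy_on}, sell on {sell_on}: {edge * 100:.2f} cents per share")

In practice such gaps are rarely more than a penny wide and close within milliseconds, which is why the strategy lives or dies on speed, volume, and volatility.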
To some extent, the
drop in volume may be the result of high-frequency trading scaring
investors away from stocks, particularly after the so-called Flash Crash
of May 6, 2010, when a big futures sell order filled by computers
unleashed a massive selloff. The Dow Jones industrial average dropped
600 points in about five minutes. As volatility spiked, most
high-frequency traders that stayed in the market that day made a
fortune. Those that turned their machines off were blamed for
accelerating the selloff by drying up liquidity, since there were fewer
speed traders willing to buy all those cascading sell orders triggered
by falling prices.
For two years, the Flash Crash was HFT’s biggest black eye. Then last
August, Knight Capital crippled itself. Traders have taken to calling
the implosion “the Knightmare.” Until about 9:30 a.m. on Aug. 1, 2012, Knight was arguably one of the kings of HFT and the largest trader of U.S. stocks. Among securities firms, it accounted for 17 percent of all trading volume in New York Stock Exchange (NYX)-listed stocks and about 16 percent in Nasdaq listings.
When the market opened on Aug. 1, a new piece of trading software that
Knight had just installed went haywire and started aggressively buying
shares in 140 NYSE-listed stocks. Over about 45 minutes that morning,
Knight accidentally bought and sold $7 billion worth of shares—about
$2.6 million a second. Each time it bought, Knight’s algorithm would
raise the price it was offering into the market. Other firms were happy
to sell to it at those prices. By the end of Aug. 2, Knight had spent
$440 million unwinding its trades, or about 40 percent of the company’s
value before the glitch.
Knight is being
acquired by Chicago-based Getco, one of the leading high-frequency
market-making firms, and for years considered among the fastest. The match, however, unites two ailing titans. On April 15, Getco revealed
that its profits had plunged 90 percent last year. With 409 employees,
it made just $16 million in 2012, compared with $163 million in 2011 and
$430 million in 2008. Getco and Knight declined to comment for this
story.
Getco’s woes say a lot about another wound to
high-frequency trading: Speed doesn’t pay like it used to. Firms have
spent millions to maintain millisecond advantages by constantly updating
their computers and paying steep fees to have their servers placed next
to those of the exchanges in big data centers. Once exchanges saw how
valuable those thousandths of a second were, they raised fees to locate
next to them. They’ve also hiked the prices of their data feeds. As
firms spend millions trying to shave milliseconds off execution times,
the market has sped up but the racers have stayed even. The result:
smaller profits. “Speed has been commoditized,” says Bernie Dan, CEO of
Chicago-based Sun Trading, one of the largest high-frequency
market-making trading firms.
No one knows that better than Steve
Swanson. By the time he left Citi in 2010, HFT had become a crowded
space. As more firms flooded the market with their high-speed
algorithms, all of them hunting out inefficiencies, it became harder to
make money—especially since trading volumes were steadily declining as
investors pulled out of stocks and poured their money into bonds.
Swanson was competing for shrinking profits against hundreds of other
speed traders who were just as fast and just as smart. In
September 2012, TCV decided not to invest in the final round. A month
later, Swanson pulled the plug.
Even as the money has
dried up and HFT’s presence has declined, the regulators are arriving in
force. In January, Gregg Berman, a Princeton-trained physicist who’s
worked at the Securities and Exchange Commission since 2009, was
promoted to lead the SEC’s newly created Office of Analytics and
Research. His primary task is to give the SEC its first view into what
high-frequency traders are actually doing. Until now the agency relied
on the industry, and sometimes even the financial blogosphere, to learn
how speed traders operated. In the months after the Flash Crash, Berman
met with dozens of trading firms, including HFT firms. He was amazed at
how much trading data they had, and how much better their view of the
market was than his. He realized that he needed better systems and
technologies—and that the best place to get them was from the speed
traders themselves.
Last fall the SEC said it would pay Tradeworx, a high-frequency
trading firm, $2.5 million to use its data collection system as the
basic platform for a new surveillance operation. Code-named Midas
(Market Information Data Analytics System), it scours the market for
data from all 13 public exchanges.
Midas went live in February.
The SEC can now detect anomalous situations in the market, such as a
trader spamming an exchange with thousands of fake orders, before they
show up on blogs like Nanex and ZeroHedge. If Midas sees something odd,
Berman’s team can look at trading data on a deeper level, millisecond by
millisecond. About 100 people across the SEC use Midas, including a
core group of quants, developers, programmers, and Berman himself.
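As a rough illustration of the kind of check a surveillance system like Midas could run (a sketch under assumed thresholds and field names, not the SEC’s actual code), the snippet below flags any trader who, within a tiny window, submits a large burst of orders and cancels nearly all of them, the “fake orders” pattern described above.

from collections import defaultdict

def flag_order_spam(events, window_ms=50, min_orders=1000, cancel_ratio=0.95):
    """events: dicts with 'trader', 'type' ('new' or 'cancel'), and 'ts_ms'
    (millisecond timestamp). Flags traders who, inside any window_ms span,
    place at least min_orders new orders and cancel nearly all of them.
    All thresholds are hypothetical."""
    by_trader = defaultdict(list)
    for e in events:
        by_trader[e["trader"]].append(e)

    flagged = []
    for trader, evs in by_trader.items():
        evs.sort(key=lambda e: e["ts_ms"])
        j = 0
        for i in range(len(evs)):
            # slide a window_ms-wide window ending at event i
            while evs[i]["ts_ms"] - evs[j]["ts_ms"] > window_ms:
                j += 1
            burst = evs[j:i + 1]
            new = sum(1 for e in burst if e["type"] == "new")
            cancels = sum(1 for e in burst if e["type"] == "cancel")
            if new >= min_orders and cancels >= cancel_ratio * new:
                flagged.append(trader)
                break
    return flagged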
“Around the office, Gregg’s group is known as the League of
Extraordinary Gentlemen,” said Brian Bussey, associate director for
derivatives policy and trading practices at the SEC, during a panel in
February. “And it is one group that is not made up of lawyers, but
instead actual market and research experts.” It’s early, but there’s
evidence that Midas has detected some nefarious stuff. In March the Financial Times
reported that the SEC is sharing information with the FBI to probe
manipulative trading practices by some HFT firms. The SEC declined to
comment.
On March 12, the day the Futures Industry Association
annual meeting kicked off at the Boca Raton Resort & Club,
regulators from the U.S. Commodity Futures Trading Commission, and also
from Europe, Canada, and Asia, gathered in a closed-door meeting. At the
top of the agenda was “High-Frequency Trading—Controlling the Risks.”
Europeans are already
clamping down on speed traders. France and Italy have both implemented
some version of a trading tax. The European Commission is debating a
euro zone-wide transaction fee.
In the U.S., Bart Chilton, a
commissioner of the CFTC, has discussed adding yet more pressure. At the Boca conference, on the evening after that meeting, sitting at a table on a pink veranda, he explained his concerns. According to
Chilton, the CFTC has uncovered some “curious activity” in the markets
that is “deeply disturbing and may be against the law.” Chilton, who
calls the high-frequency traders “cheetahs,” said the CFTC needs to
rethink how it determines whether a firm is manipulating markets.
Under
the CFTC’s manipulation standard, a firm has to have a large share of a
particular market to be deemed big enough to engage in manipulative
behavior. For example, a firm that owns 20 percent of a company’s stock
might be able to manipulate it. Since they rarely hold a position longer
than several seconds, speed traders might have at most 1 percent or
2 percent of a market, but due to the outsize influence of their speed,
they can often affect prices just as much as those with bigger
footprints—particularly when they engage in what Chilton refers to as
“feeding frenzies,” when prices are volatile. “We may need to lower the
bar in regard to cheetahs,” says Chilton. “The question is whether
revising that standard might be a way for us to catch cheetahs
manipulating the market.”
Recently the CFTC has deployed its own
high-tech surveillance system, capable of viewing market activity in
hundredths of a second, and also tracing trades back to the firms that
execute them. This has led the CFTC to look into potential manipulation
in the natural gas markets and review something called “wash trading,”
where firms illegally trade with themselves to create the impression of
activity that doesn’t really exist.
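The wash trading the CFTC is reviewing lends itself to a very simple first-pass screen. The sketch below is purely illustrative (not the CFTC’s system); it just flags executions in which the buying and selling accounts belong to the same firm. The firm names and record fields are invented.

def flag_wash_trades(executions):
    """executions: dicts with 'buyer_firm' and 'seller_firm'.
    Returns the executions where a firm traded with itself."""
    return [t for t in executions if t["buyer_firm"] == t["seller_firm"]]

trades = [
    {"id": 1, "buyer_firm": "ALPHA", "seller_firm": "BETA"},
    {"id": 2, "buyer_firm": "GAMMA", "seller_firm": "GAMMA"},  # a firm on both sides
]
print(flag_wash_trades(trades))   # flags the GAMMA self-trade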
In May, Chilton proposed a 0.06¢ fee on futures and swaps trades. The tax is meant to calm the
market and fund CFTC investigations. Democrats in Congress would go
further. Iowa Senator Tom Harkin and Oregon Representative Peter DeFazio
want a 0.03 percent tax on nearly every trade in nearly every market in
the U.S.
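The two proposals differ enormously in scale. As a back-of-the-envelope comparison, assuming for illustration that the 0.03 percent levy applies to the dollar value of each trade and taking a hypothetical $100,000 futures trade:

# Rough comparison of the two proposals; the $100,000 notional is hypothetical
# and the percentage-of-value interpretation of the 0.03 percent tax is an
# assumption made here for illustration.
notional = 100_000.00

chilton_fee = 0.0006                      # 0.06 cents = $0.0006 per trade
harkin_defazio_tax = 0.0003 * notional    # 0.03 percent of the trade's value

print(f"flat fee per trade:         ${chilton_fee:.4f}")
print(f"0.03% of a $100,000 trade:  ${harkin_defazio_tax:.2f}")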
As profits have shrunk, more HFT firms are resorting to
something called momentum trading. Using methods similar to what Swanson
helped pioneer 25 years ago, momentum traders sense the way the market
is going and bet big. It can be lucrative, and it comes with enormous
risks. Other HFTs are using sophisticated programs to analyze news wires
and headlines to get their returns. A few are even scanning Twitter
feeds, as evidenced by the sudden selloff that followed the Associated
Press’s hacked Twitter account reporting explosions at the White House
on April 23. In many ways, it was the best they could do.
Wikipedia is constantly growing, and it is written by people around the world. To illustrate this, we created a map of recent changes on Wikipedia, which displays the approximate location of unregistered users and the article that they edit.
Unregistered Wikipedia users
When an unregistered user makes a contribution to Wikipedia,
he or she is identified by his or her IP address. These IP addresses
are translated to the contributor’s approximate geographic location. A study by Fabian Kaelin in 2011 noted that unregistered users make approximately 20% of the edits on English Wikipedia [edit: likely closer to 15%, according to more recent statistics], so Wikipedia’s stream of recent changes includes many other edits that are not shown on this map.
You may see some users add non-productive or disruptive content to Wikipedia. A survey in 2007 indicated
that unregistered users are less likely to make productive edits to the
encyclopedia. Do not fear: improper edits can be removed or corrected by other users, including you!
How it works
This map listens to live feeds of Wikipedia revisions, broadcast using wikimon. We built the map using a few nice libraries and services, including d3, DataMaps, and freegeoip.net. This project was inspired by WikipediaVision’s (almost) real-time edit visualization.
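For readers curious about the plumbing, below is a minimal sketch of that pipeline in Python, not the map’s actual source code: read a revision from the live feed, check whether the editor is identified by an IP address (i.e., is unregistered), and look up an approximate location. The wikimon WebSocket address, the message field names, and the freegeoip.net endpoint (a service that has since shut down) are all assumptions used for illustration; a working version would need current equivalents.

import json
import ipaddress
import requests                            # pip install requests
from websocket import create_connection    # pip install websocket-client

WIKIMON_URL = "ws://wikimon.hatnote.com:9000"   # placeholder endpoint (assumption)

def is_unregistered(user):
    """Unregistered editors are identified by their IP address."""
    try:
        ipaddress.ip_address(user)
        return True
    except ValueError:
        return False

def locate(ip):
    """Translate an IP address to an approximate location via a
    freegeoip-style JSON API (endpoint and fields assumed)."""
    data = requests.get(f"https://freegeoip.net/json/{ip}", timeout=5).json()
    return data.get("latitude"), data.get("longitude")

ws = create_connection(WIKIMON_URL)
while True:
    change = json.loads(ws.recv())          # one revision, as broadcast by wikimon
    user = change.get("user", "")
    if change.get("is_anon") or is_unregistered(user):
        lat, lon = locate(user)
        print(f"{user} edited \"{change.get('page_title')}\" near ({lat}, {lon})")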