The world of Big Data is one of pervasive data collection and
aggressive analytics. Some see the future and cheer it on; others rebel.
Behind it all lurks a question most of us are asking — does it really
matter? I had a chance to find out recently, as I got to see what
Acxiom, a large-scale commercial data aggregator, had collected about
me.
At least in theory large-scale data collection matters quite a bit. Large data sets can be used to create social network maps
and can form the seeds for link analysis of connections between
individuals. Some see this as a good thing; others as a bad one — but
whatever your viewpoint, we live in a world which sees increasing power
and utility in Big Data’s large-scale data sets.
Of course, much of the concern is about government collection. But
it’s difficult to assess just how useful this sort of data collection by
the government is because, of course, most governmental data collection
projects are classified. The good news, however, is that we can begin
to test the utility of such programs in the private sector arena. A useful
analog in the private sector just became publicly available and it’s
both moderately amusing and instructive to use it as a lens for thinking
about Big Data.
Acxiom is one of the
largest commercial, private sector data aggregators around. It collects
and sells large data sets about consumers — sometimes even to the
government. And for years it did so quietly, behind the scenes — as one writer put it, “mapping the consumer genome.” Some saw this as rather ominous; others as just curious. But it was, for all of us, mysterious. Until now.
In September, the data giant made available to the public a portion of its data set. It created a new website — Aboutthedata.com
— where a consumer could go to see what data the company had collected
about them. Of course, in order to access the data about yourself you
had to first verify your own identity (I had to send in a photocopy of
my driver’s license), but once you had done so, it would be possible to
see, in broad terms, what the company thought it knew about you — and
how close that knowledge was to reality.
I was curious, so I thought I would go explore myself and see what it
was they knew and how accurate they were. The results were at times
interesting, illuminating and mundane. Here are a few observations:
To begin with, the fundamental purpose of the data collection is to
sell me things — that’s what potential sellers want to know about
potential buyers and what, say, Amazon might want to know about me. So I
first went and looked at a category called “Household Purchase Data” —
in other words what I had recently bought.
It turns out that I buy … well … everything. I buy food, beverages,
art, computing equipment, magazines, men’s clothing, stationery, health
products, electronic products, sports and leisure products, and so
forth. In other words, my purchasing habits were, to Acxiom, just an
undifferentiated mass. Save for the notation that I had bought an
antique in the past and that I have purchased “High Ticket Merchandise,”
it seems that almost everything I bought was something that most any
moderately well-to-do consumer would buy.
I do suppose that the wide variety of purchases I made is, itself,
the point — by purchasing so widely I self-identify as a “good”
consumer. But if that’s the point then the data set seems to miss the
mark on “how good” I really am. Under the category of “total dollars
spent,” for example, it said that I had spent just $1,898 in the past
two years. Without disclosing too much about my spending habits in this
public forum, I think it is fair to say that this is a significant
underestimate of my purchasing activity.
The next data category of “Household Interests” was equally
unilluminating. Acxiom correctly said I was interested in computers,
arts, cooking, reading and the like. It noted that I was interested in
children’s items (for my grandkids) and beauty items and gardening (both
my wife’s interests, probably confused with mine). Here, as well, there was little differentiation, and I assume the breadth of my interests is what matters rather than the details. So, as a consumer, examining what
was collected about me seemed to disclose only a fairly anodyne level of
detail.
[Though I must object to the suggestion that I am an Apple user.
Anyone who knows me knows I prefer the Windows OS. I assume this was
also the result of confusion within the household and a reflection of my
wife’s Apple use. As an aside, I was invited throughout to correct any
data that was in error. This I chose not to do, as I did not want to
validate data for Acxiom – that’s their job not mine—and I had no real
interest in enhancing their ability to sell me to other marketers. On
the other hand I also did not take the opportunity they offered to
completely opt out of their data system, on the theory that a moderate
amount of data in the world about me may actually lead to being offered
some things I want to purchase.]
Things became a bit more intrusive (and interesting) when I started
to look at my “Characteristic Data” — that is data about who I am. Some
of the mistakes were a bit laughable — they pegged me as of German
ethnicity (because of my last name, naturally) when, with all due
respect to my many German friends, that isn’t something I’d ever say
about myself. And they got my birthday wrong — lord knows why.
But some of their insights were at least moderately invasive of my
privacy, and highly accurate. Acxiom “inferred” for example, that I’m
married. They identified me accurately as a Republican (but notably not
necessarily based on voter registration — instead it was the party I was
“associated with by voter registration or as a supporter”). They knew
there were no children in my household (all grown up) and that I run a
small business and frequently work from home. And they knew which sorts
of charities we supported (from surveys, online registrations and
purchasing activity). Pretty accurate, I’d say.
Finally, it was completely unsurprising that the most accurate data
about me was closely related to the most easily measurable and widely
reported aspect of my life (at least in the digital world) — namely, my
willingness to dive into the digital financial marketplace.
Acxiom knew that I had several credit cards and used them regularly. It had a broadly accurate understanding of my household total income range [I’m not saying!].
They also knew all about my house — which makes sense since real
estate and liens are all matters of public record. They knew I was a
home owner and what the assessed value was. The data showed, accurately,
that I had a single family dwelling and that I’d lived there longer
than 14 years. It disclosed how old my house was (though with the rather
imprecise range of having been built between 1900 and 1940). And, of
course, they knew what my mortgage was, and thus had a good estimate of
the equity I had in my home.
So what did I learn from this exercise?
In some ways, very little. Nothing in the database surprised me, and
the level of detail was only somewhat discomfiting. Indeed, I was more
struck by how uninformative the database was than how detailed it was —
what, after all, does anyone learn by knowing that I like to read?
Perhaps Amazon will push me book ads, but they already know I like to
read because I buy directly from them. If they had asserted that I like
science fiction novels or romantic comedy movies, that level of detail
might have demonstrated a deeper grasp of who I am — but that I read at
all seems pretty trivial information about me.
I do, of course, understand that Acxiom has not completely lifted the
curtains on its data holdings. All we see at About The Data is summary
information. You don’t get to look at the underlying data elements. But
even so, if that’s the best they can do ….
In fact, what struck me most forcefully was (to borrow a phrase from
Hannah Arendt) the banality of it all. Some, like me, see great promise
in big data analytics as a way of identifying terrorists or tracking
disease. Others, with greater privacy concerns, look at big data and see
Big Brother. But when I dove into one big data set (albeit only
partially), held by one of the largest data aggregators in the world,
all I really became was a bit bored.
Maybe that’s what they wanted as a way of reassuring me. If so, Acxiom succeeded, in spades.
Qualcomm
is readying a new kind of artificial brain chip, dubbed the neural processing unit (NPU), that models human cognition and opens the door to phones, computers, and robots that could be taught in the same ways that children learn. The first NPUs are likely to go into production by
2014, CTO Matt Grob confirmed at the MIT Technology Review
EmTech conference, with Qualcomm in talks with companies about using
the specialist chips for artificial vision, more efficient and
contextually-aware smartphones and tablets, and even potentially brain
implants.
According to Grob, the advantage of NPUs over traditional chips like
Qualcomm’s own Snapdragon range will be in how they can be programmed.
Instead of explicitly instructing the chips in how processing should
take place, developers would be able to teach the chips by example.
“This ‘neuromorphic’ hardware
is biologically inspired – a completely different architecture – and
can solve a very different class of problems that conventional
architecture is not good at,” Grob explained of the NPUs. “It really
uses physical structures derived from real neurons – parallel and
distributed.”
As a result, “this is a kind of machine that can learn, and be
programmed without software – be programmed the way you teach your kid,” Grob predicted.
In fact, Qualcomm already has a learning machine in its labs that uses the same sort of biologically-inspired programming system
that the NPUs will enable. A simple wheeled robot, it’s capable of
rediscovering a goal location after being told just once that it’s
reached the right point.
However, it’s not only learning robots that will benefit from the NPUs, Qualcomm says. “We want to make it easier for researchers to make a part of the brain,” Grob said, bringing abilities
like classification and prediction to a new generation of electronics.
That might mean computers
that are better able to filter large quantities of data to suit the
particular needs of the user at any one time, smartphone assistants like
Google Now with supercharged contextual intuition, and autonomous cars
that can dynamically recognize and understand potential perils in the
road ahead.
The first partnerships actually implementing NPUs in that way are
likely to come in 2014, Grob confirmed, with Qualcomm envisaging hugely
parallel arrays of the chips being put into practice to model how humans
might handle complex problems.
Massachusetts Institute of Technology researchers have developed a
device that can see through walls and pinpoint a person with incredible
accuracy. They call it the “Kinect of the future,” after Microsoft’s
Xbox 360 motion-sensing camera.
Shown publicly this week for the first time, the project from MIT’s
Computer Science and Artificial Intelligence Laboratory (CSAIL) used three radio
antennas spaced about a meter apart and pointed at a wall.
Photo: Nick Barber. MIT has developed a way to pinpoint someone’s location through a wall using just radio signals.
A desk cluttered with wires and circuits generated and interpreted the
radio waves. On the other side of the wall, a person walked around the
room and the system represented that person as a red dot on a computer
screen. The system tracked the movements with an accuracy of plus or
minus 10 centimeters, which is about the width of an adult hand.
Fadel Adib, a Ph.D. student on the project, said that gaming could be one
use for the technology, but that localization is also very important.
He said that Wi-Fi localization, or determining someone’s position based
on Wi-Fi, requires the user to hold a transmitter, like a smartphone
for example.
“What we’re doing here is localization through a wall without requiring
you to hold any transmitter or receiver [and] simply by using
reflections off a human body,” he said. “What is impressive is that our
accuracy is higher than even state of the art Wi-Fi localization.”
He said that he hopes further iterations of the project will offer a real-time silhouette rather than just a red dot.
In the room where users walked around there was white tape on the floor
in a circular design. The tape on the floor was also in the virtual
representation of the room on the computer screen. It wasn’t being used as an aid to the technology; rather, it showed onlookers just how accurate
the system was. As testers walked on the floor design their actions were
mirrored on the computer screen.
One of the drawbacks of the system is that it can only track one moving
person at a time and the area around the project needs to be completely
free of movement. That meant that when the group wanted to test the
system they would need to leave the room with the transmitters as well
as the surrounding area; only the person being tracked could be nearby.
At the CSAIL facility the researchers had the system set up between two
offices, which shared an interior wall. In order to operate it,
onlookers needed to stand about a meter or two outside of both of the
offices so as not to create interference with the system.
Photo: Nick Barber. An MIT project can track a user with an accuracy of +/- 10 centimeters.
The system can only track one person at a time, but that doesn’t mean
two people can’t be in the same room at once. As long as one person is
relatively still the system will only track the person that is moving.
The group is working on making the system even more precise. “We now
have an initial algorithm that can tell us if a person is just standing
and breathing,” Adib said. He was also able to show how raising an arm
could also be tracked using radio signals. The red dot would move just
slightly to the side where the arm was raised.
Adib also said that unlike previous versions of the project that used
Wi-Fi, the new system allows for 3D tracking and could be useful in
telling when someone has fallen at home.
The system now is quite bulky. It takes up an entire desk that is strewn
with wires and then there’s also the space used by the antennas.
“We can put a lot of work into miniaturizing the hardware,” said
researcher Zach Kabelac, a master’s student at MIT. He said that the
antennas don’t need to be as far apart as they are now.
“We can actually bring these closer together to the size of a Kinect
[sensor] or possibly smaller,” he said. That would mean that the system
would “lose a little bit of accuracy,” but that it would be minimal.
The researchers filed a patent this week and while there are no
immediate plans for commercialization the team members were speaking
with representatives from major wireless and component companies during
the CSAIL open house.
Apple may be forced to abandon its proprietary 30-pin dock charger if European politicians get their way.
Members of the European Parliament’s internal market committee on
Thursday voted unanimously for a new law mandating a universal mobile
phone charger. The MEPs want all radio equipment devices and their
accessories, such as chargers, to be interoperable to cut down on
electronic waste.
German MEP Barbara Weiler said she wanted to see an end to “cable chaos”.
This is not the first attempt to set a standard for universal phone
chargers. In 2009 the European Commission, the International
Telecommunication Union (ITU) and leading mobile phone manufacturers
drew up a voluntary agreement based on the micro USB connector.
However Apple, which sold nine million units of the iPhone 5s and 5c in
just three days last week, has not adhered to the agreement despite
signing up.
The draft law also lays down rules for other radio equipment, such as
car door openers or modems, to ensure that they do not interfere with
each other. The committee also cut some red tape, by deleting a rule
that would have required manufacturers to register certain categories of
devices before placing them on the market.
The committee is now expected to begin informal negotiations with the
European Council in order to move the legislative process along quickly.
Once upon a time there were things called jobs, and they were well understood. People went to work for companies, in offices or in factories. There were exceptions — artists, aristocrats, entrepreneurs — but they were rare.
Laws, regulations, and statistics were based on this assumption; but,
increasingly, what people do today doesn’t fit neatly into that
anachronistic 1950s rubric. I’ve had the pleasure of trying to explain
to border officials that my “job” consisted of contracting in Country A
for a client in Country B, while also writing books and selling apps. I
don’t recommend it.
This disconnect will just keep getting worse. The so-called “sharing economy,” mediated by sites and apps like Lyft, TaskRabbit, Thumbtack, Postmates, Mechanical Turk, etc., etc., replaces “consistent work for a single
employer” with “an agglomeration of short-term/one-time gigs.” That
doesn’t really map to the old-economy assumptions at all. And even
relatively high-skill professions are now being nibbled at by
shared-economy software; consider Disrupt winner YourMechanic.
I say “so-called” because, let’s face it, “sharing economy” is mostly
spin. It mostly consists of people who have excess disposable income
hiring those who do not; it’s pretty rare to vacillate across that
divide. Far more accurate to call it the “servant economy.” (Not to be confused with the “patronage economy” — Kickstarter, Indiegogo — which deserves its own post.)
It’s not surprising that relatively-wealthy techies like me have
created apps and services which make relatively-wealthy techies’ lives a
little better, instead of solving the real and hard problems faced by poor people. But it is a little surprising that these apps effectively echo what’s happening on a massive scale in the corporate world.
Did you know that “the hiring rate of temp workers is five times that of hiring overall in the past year” and “The number of temps has jumped more than 50 percent since the recession ended”? Meanwhile, in the UK, “The median hourly earnings for the self-employed are £5.58, less than half the £11.21 earned by employees.”
This “ephemeral workforce” phenomenon isn’t just
American; the UK has also set records in the contingently employed.
Something profoundly structural is going on. Even healthier economic
growth won’t make it go away.
We already know how software will eat manufacturing (robots and 3D
printing) and transportation (self-driving vehicles). This new servant economy shows us how software will eat much of the service sector: by turning many of its existing full-time jobs into a disconnected
cloud of temporary gigs.
In many ways this is inarguably a good thing. I may not think much of Uber’s CEO’s politics
but I think even less of the insane medallion system that rules taxi
industries across America for no good reason. (Anyone who believes taxi
companies’ claims that they’re safer probably also believes the TSA’s
claims that security theater keeps you safe.) I applaud the leveling of that demented regulatory wall.
What’s more, when the New Temps no longer require companies like
Manpower to connect them to their actual employers, but can pick and
choose on the fly among competing third parties, that too will be a huge
benefit for all concerned. It’s entertaining to read Manpower’s CEO’s
dismissal of this trend as “somewhat niche…I don’t think it’s going to
take over the world” in a recent Wall Street Journal piece. I suspect that quote will sound fantastically dense in ten years’ time.
And yet this trend makes me uneasy. The slow transformation of a huge
swathe of the economy from steady jobs to an ever-shifting maelstrom of
short-term contracts with few-to-no benefits, for which an ever-larger
pool of people will compete thanks to ever-lower barriers to entry, in a
sector where most jobs are already poorly paid…does this sound
to you like it will decrease inequality and increase social mobility?
Maybe, in certain specialized high-skill areas. But across the spectrum?
I doubt it.
It does sound like it will reduce prices…but, unlike
Wal-Mart, servant-economy providers are rarely servant-economy
customers. (As prices drop, their incomes drop too, keeping the
now-cheaper services still out of reach; a vicious circle.) The people
who benefit are, surprise, surprise, the techies, the professionals, the
bankers, the steadily dwindling middle class. You know. People like you and me. And, of course, the companies hiring the armies of temps.
I don’t want to sound like a pessimistic Luddite; I do believe that
this will ultimately be better than the status quo for most people. But
it seems to me that — like many of the other economic shifts triggered
by new technologies, as I’ve been arguing for some time — the vast majority of the benefits will accrue to a small and shrinking fraction of the population.
Is that inequality such a bad thing? If the techno-economic tide is
lifting all boats, does it really matter if it lifts the yachts higher
than the fishing boats, and the super-yachts into the stratosphere? It
seems to me that the answer depends in large part on whether the fishing
boats have any realistic prospect of achieving yachtdom.
Unfortunately, social mobility is actually significantly lower
in America than in other rich nations…and so far I see no reason to
believe that the combination of tomorrow’s technology and today’s
economic architecture will change that. In fact I have a nasty gut
feeling that the opposite is true, both in America and worldwide.
With its first computer based on the extremely low-power Quark
processor, Intel is tapping into the 'maker' community to figure out
ways the new chip could be best used.
The chip maker announced the Galileo computer -- which is a board
without a case -- with the Intel Quark X1000 processor on Thursday. The
board is targeted at the community of do-it-yourself enthusiasts who
make computing devices ranging from robots and health monitors to home
media centers and PCs.
The Galileo board should become widely available for under $60 by the
end of November, said Mike Bell, vice president and general manager of
the New Devices Group at Intel.
Bell hopes the maker community will use the board to build prototypes
and debug devices. The Galileo board will be open-source, and the
schematics will be released over time so it can be replicated by
individuals and companies.
Bell's New Devices Group is investigating business opportunities in
the emerging markets of wearable devices and the "Internet of things."
The chip maker launched the extremely low-power Quark processor for such
devices last month.
Intel's Quark processor.
"People want to be able to use our chips to do creative things," Bell
said. "All of the coolest devices are coming from the maker community."
But at around $60, the Galileo will be more expensive than the
popular Raspberry Pi, which is based on an ARM processor and sells for
$25. The Raspberry Pi can also render 1080p graphics, which Intel's
Galileo can't match.
Making inroads in the enthusiast community
Questions also remain on whether Intel's overtures will be accepted
by the maker community, which embraces the open-source ethos of a
community working together to tweak hardware designs. Intel has made a
lot of contributions to the Linux OS, but has kept its hardware designs
secret. Intel's efforts to reach out to the enthusiast community are
recent; the company's first open-source PC went on sale in July.
Intel is committed long-term to the enthusiast community, Bell said.
Intel also announced a partnership with Arduino, which provides a
software development environment for the Galileo motherboard. The
enthusiast community has largely relied on Arduino microcontrollers and
boards with ARM processors to create interactive computing devices.
The Galileo is equipped with a 32-bit Quark SoC X1000 CPU, which has a
clock speed of 400MHz and is based on the x86 Pentium Instruction Set
Architecture. The Galileo board supports Linux OS and the Arduino
development environment. It also supports standard data transfer and
networking interfaces such as PCI-Express, Ethernet and USB 2.0.
Intel has demonstrated its Quark chip running in eyewear and a
medical patch to check for vitals. The company has also talked about the
possibility of using the chip in personalized medicine, sensor devices
and cars.
Intel hopes creating interactive computing devices with Galileo will
be easy. Thanks to support for the Arduino development environment, writing applications for the board is as simple as writing programs for standard microcontrollers.
"Essentially it's transparent to the development," Bell said.
Intel is shipping out 50,000 Galileo boards for free to students at over 1,000 universities over the next 18 months.
The war veteran who recoils at the sound of a car backfiring and the
recovering drug addict who feels a sudden need for their drug of choice
when visiting old haunts have one thing in common: Both are victims of
their own memories. New research indicates those memories could actually
be extinguished.
A new study from the Massachusetts Institute of Technology found a
gene called Tet1 can facilitate the process of memory extinction. In the
study, mice were put in a cage that delivered an electric shock. Once
they learned to fear that cage, they were then put in the same cage but
not shocked. Mice with the normal Tet1 levels no longer feared the cage
once new memories were formed without the shock. Mice with the Tet1 gene
eliminated continued to fear the cage even when there was no shock
delivered.
“We learned from this that the animals defective in the Tet1 gene are
not capable of weakening the fear memory,” Li-Huei Tsai, director of
MIT's Picower Institute for Learning and Memory, told Discovery News.
“For more than a half century it has been documented that gene
expression and protein synthesis are essential for learning and forming
new memories. In this study we speculated that the Tet1 gene regulates
chemical modifications to DNA.”
The MIT researchers found that Tet1 changes levels of DNA
methylation, a chemical modification that helps control gene expression. When
methylation is prominent, the process of learning new memories is more
efficient. When methylation is weaker, the opposite is true.
“The results support the notion that once a fear memory is formed, to
extinguish that memory a new memory has to form,” Tsai said. “The new
memory competes with the old memory and eventually supersedes the old
memory.”
Experts in the study of memory and anxiety agree.
“This is highly significant research in that it presents a completely
new mechanism of memory regulation and behavior regulation,” said
Jelena Radulovic, a professor of bipolar disease at Northwestern
University. “The mechanism of manipulating DNA is likely to affect many
other things. Now the question will be whether there will be patterns
that emerge, whether there will be side effects on moods and emotions
and other aspects. But the findings have real relevance.”
Radulovic, who was not directly involved in the study, says the
primary significance of the findings has to do with eliminating fear.
“The results show us a very specific paradigm of learned reduction of
fear,” she said. “This could mean that interference with the Tet1 gene
and modification of DNA could be an important target to reduce fear in
people with anxiety disorders.”
For her part, Tsai is most encouraged by the ability to approach
anxiety disorders at the molecular and cellular levels inside the brain.
“We can now see the bio-chemical cascade of events in the process of
memory formation and memory extinction,” said Tsai. “Hopefully this can
lead to new drug discoveries.”
Meanwhile, research in memory extinction is progressing quickly,
largely due to new discoveries through traditional experimentation,
augmented by advances in technology, Tsai said.
Elsewhere, parallel research is focusing more on physiological
processes that cause memories, rather than epigenetics (the study of how
genes are turned on or off). At the Scripps Research Institute,
researchers are studying what causes a methamphetamine addict to relapse
when confronted with familiar triggers that a person associates with
drug use.
“Substance users who are trying to stay clean, when exposed to the environment where they used the drug, have all kinds of associations and memories in their minds that are strong enough to elicit cravings,” said Courtney Miller, an assistant professor at the Scripps Research Institute, who led the research. “The idea is to try to selectively disrupt the dangerous memories but not lose other memories.”
“We taught rodents to press a lever to get an infusion of meth, and that puts the drug delivery in the animal’s control,” Miller told Discovery News.
“They were put in an environment that was unique to them every day for
two weeks, where they could press the lever and get meth. They learned
to associate that environment with the meth, the place where they could
‘use.’”
The animals were then injected with a chemical that inhibited actin polymerization and placed back in their home environment.
“The process of actin polymerization happens when neurons contact
each other, and that is how information is passed,” Miller said. “Think
of it like a Lego project. There are little pieces that contact each
other. The receiving point on a neuron, called a dendritic spine,
enlarges when a memory is stored. It gives more surface areas so you can
have more neurotransmission.
"Actin controls that, enlarges the spine and keeps it large. In a
normal memory, pieces come off the top and circle around and add on to
the bottom very slowly. In a meth memory the piece comes off the top,
wraps around and comes back much faster. We gave a drug that takes the
pieces away and they are not added back on. The point of contact falls
apart and the memory is lost.”
The process, called depolymerization, means that memories are no longer stored.
Longtime memory researchers are highly supportive of Miller’s findings.
“The findings here are real game changers,” said Gary S. Lynch,
professor of psychiatry and human behavior at the University of California, Irvine, School of Medicine. “What this points to is a completely new strategy
for treatment of addiction. For the past 10 years there have been many
challenges to the notion that memories are cemented in. But this study
shows that memory really is still a dynamic, malleable business and that
there can be another way of dealing with dependency.”
Lynch is particularly taken with the study’s findings regarding the role of actin.
“Actin is the most prevalent protein in the body,” said Lynch, who
has studied memory issues for more than 30 years. “Now to find that it
is so critical to dependency is breathtaking in its implications.”
In the future, it's possible the process can be generalized to other addictions, such as nicotine, Miller said.
As for how distant that future may be, Tsai believes we are still
many years from applying the current research to human beings with
psychiatric disorders.
“I would like to believe that through cognitive behavior therapy or
some new medication, eventually — not five or 10 years from now, but
eventually — a lot of the mechanisms are going to be solved,” Tsai said.
“We’ll know how good memories form, how bad memories form. But the
brain is an organ that is not very accessible to manipulation, unlike
most other organs. My prediction is that progress on memory research,
including memory extinction, will speed up considerably because of the
emerging technology.”
That technology, Tsai says, includes a new 3-D, high-resolution brain
imaging technique called CLARITY, developed by a research team at Stanford
University. CLARITY essentially makes it possible to view the brain in a
transparent way, allowing researchers to see in detail its complex fine
wiring and essential features.
To translate one language into another, find the linear
transformation that maps one to the other. Simple, say a team of Google
engineers.
Computer
science is changing the nature of the translation of words and
sentences from one language to another. Anybody who has tried BabelFish or Google Translate will know that they provide useful translation services but ones that are far from perfect.
The
basic idea is to compare a corpus of words in one language with the
same corpus of words translated into another. Words and phrases that
share similar statistical properties are considered equivalent.
The
problem, of course, is that the initial translations rely on
dictionaries that have to be compiled by human experts and this takes
significant time and effort.
Now Tomas Mikolov and a couple of
pals at Google in Mountain View have developed a technique that
automatically generates dictionaries and phrase tables that convert one
language into another.
The new technique does not rely on versions
of the same document in different languages. Instead, it uses data
mining techniques to model the structure of a single language and then
compares this to the structure of another language.
“This method
makes little assumption about the languages, so it can be used to extend
and refine dictionaries and translation tables for any language pairs,”
they say.
The new approach is relatively straightforward. It
relies on the notion that every language must describe a similar set of
ideas, so the words that do this must also be similar. For example, most
languages will have words for common animals such as cat, dog, cow and
so on. And these words are probably used in the same way in sentences
such as “a cat is an animal that is smaller than a dog.”
The same
is true of numbers. The image above shows the vector representations of
the numbers one to five in English and Spanish and demonstrates how
similar they are.
This is an important clue. The new trick is to represent an entire language using the relationship between its words. The
set of all the relationships, the so-called “language space”, can be
thought of as a set of vectors that each point from one word to another.
And in recent years, linguists have discovered that it is possible to
handle these vectors mathematically. For example, the operation ‘king’ –
‘man’ + ‘woman’ results in a vector that is similar to ‘queen’.
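To make that vector arithmetic concrete, here is a minimal sketch in Python. The three-dimensional vectors below are invented purely for illustration (real embeddings such as word2vec's have hundreds of dimensions learned from large corpora), but the mechanics of the analogy lookup are the same.

```python
# Toy illustration of word-vector arithmetic. The vectors are invented
# purely for illustration; real word embeddings are learned from text.
import numpy as np

vectors = {
    "king":  np.array([0.8, 0.7, 0.1]),
    "man":   np.array([0.6, 0.1, 0.1]),
    "woman": np.array([0.6, 0.1, 0.9]),
    "queen": np.array([0.8, 0.7, 0.9]),
    "cat":   np.array([0.1, 0.9, 0.5]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means the vectors point in the same direction.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# 'king' - 'man' + 'woman' should land closest to 'queen'.
target = vectors["king"] - vectors["man"] + vectors["woman"]
candidates = [w for w in vectors if w not in ("king", "man", "woman")]
print(max(candidates, key=lambda w: cosine(vectors[w], target)))  # -> queen
```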
It
turns out that different languages share many similarities in this
vector space. That means the process of converting one language into
another is equivalent to finding the transformation that converts one
vector space into the other.
This turns the problem of translation
from one of linguistics into one of mathematics. So the problem for the
Google team is to find a way of accurately mapping one vector space
onto the other. For this they use a small bilingual dictionary compiled
by human experts – comparing the same corpus of words in two different
languages gives them a ready-made linear transformation that does the
trick.
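A short sketch shows how such a mapping can be learned and used. Everything here is fabricated for illustration: the four-dimensional vectors, the tiny vocabularies, and the seed dictionary are not from any real model, and where Mikolov and co fit the translation matrix with stochastic gradient descent on real embeddings, this sketch solves the same least-squares objective in closed form.

```python
# Sketch of the core idea: learn a linear map W from the source language's
# vector space to the target's using a small seed dictionary, then translate
# unseen words by nearest neighbour in the target space. All data below is
# fabricated for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Pretend English and Spanish embeddings. The "Spanish" space is built as a
# rotation of the "English" one plus a little noise, mimicking the observation
# that the two spaces share a similar geometry.
english = {w: rng.normal(size=4) for w in ["one", "two", "three", "four", "cat"]}
rotation = np.linalg.qr(rng.normal(size=(4, 4)))[0]
spanish = {es: english[en] @ rotation + rng.normal(scale=0.01, size=4)
           for en, es in [("one", "uno"), ("two", "dos"), ("three", "tres"),
                          ("four", "cuatro"), ("cat", "gato")]}

# Seed dictionary: a handful of known pairs used to fit the mapping.
seed = [("one", "uno"), ("two", "dos"), ("three", "tres"), ("four", "cuatro")]
X = np.vstack([english[en] for en, _ in seed])
Z = np.vstack([spanish[es] for _, es in seed])
W, *_ = np.linalg.lstsq(X, Z, rcond=None)   # minimise ||XW - Z||

# Translate a word that was NOT in the seed dictionary.
projected = english["cat"] @ W
guess = min(spanish, key=lambda es: np.linalg.norm(spanish[es] - projected))
print(guess)  # -> gato (on this toy data)
```

On this toy data the held-out word “cat” maps to “gato”; applied to real embeddings, the same recipe is what the precision figures below measure.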
Having identified this mapping, it is then a simple matter
to apply it to the bigger language spaces. Mikolov and co say it works
remarkably well. “Despite its simplicity, our method is surprisingly
effective: we can achieve almost 90% precision@5 for translation of
words between English and Spanish,” they say.
The method can be
used to extend and refine existing dictionaries, and even to spot
mistakes in them. Indeed, the Google team do exactly that with an
English-Czech dictionary, finding numerous mistakes.
Finally, the
team point out that since the technique makes few assumptions about the
languages themselves, it can be used on argots that are entirely
unrelated. So while Spanish and English have a common Indo-European
history, Mikolov and co show that the new technique also works just as
well for pairs of languages that are less closely related, such as
English and Vietnamese.
That’s a useful step forward for the
future of multilingual communication. But the team says this is just the
beginning. “Clearly, there is still much to be explored,” they
conclude.
Ref: arxiv.org/abs/1309.4168: Exploiting Similarities among Languages for Machine Translation
Projection mapping, where ordinary objects become surfaces for moving images, is an increasingly common video technique in applications like music videos, phone commercials, and architectural light shows — and now a new film shows what can happen when you add robots to the mix. In Box, a performance artist works with transforming panels hoisted by industrial machinery in
a dazzling demonstration of projection mapping's mind-bending
possibilities. Every effect is captured in-camera, and each section
eventually reveals how the robot arms were used.
It's the work of San Francisco
studio Bot & Dolly, which believes its new technology can "tear down
the fourth wall" in the theater. "Through large-scale robotics,
projection mapping and software engineering, audiences will witness the
trompe l'oeil effect pushed to new boundaries," says creative director
Tarik Abdel-Gawad. "We believe this methodology has tremendous potential
to radically transform visual art forms and define new genres of
expression." Box is an effective demonstration of the studio's
projection mapping system, but it works in its own right as an
enthralling piece of art.
In Linden Lab's vast experiment, the end has no end
Do you remember Second Life?
Set up by developer Linden Lab in 2003, it was the faithful replication
of our modern world where whoring, drinking, and fighting were
acceptable. It was the place where big brands moved in as neighbors and
hawked you their wares online. For many, it was the future — our lives
were going to be lived online, as avatars represented us in nightclubs,
bedrooms, and banks made of pixels and code.
In the mid-2000s, every self-respecting media outlet sent reporters to the Second Life world to cover the parallel-universe beat. The BBC, Businessweek (now Bloomberg Businessweek), and NBC Nightly News all devoted time and coverage to the phenomenon. Amazon, American Apparel, and Disney set up shop in Second Life,
aiming to capitalize on the momentum it was building — and to play to
the in-world consumer base, which at one point in 2006 boasted a GDP of
$64 million.
Of course, stratospheric
growth doesn’t continue forever, and when the universe’s expansion
slowed and the novelty of people living parallel lives wore off, the
media moved on. So did businesses — but not users. Linden Lab doesn’t
share historical user figures, but it says the population of Second Life has been relatively stable for a number of years.
You might not have heard a peep about it since the halcyon days of 2006, but that doesn’t mean Second Life
has gone away. Far from it: this past June it celebrated its 10th
birthday, and it is still a strong community. A million active users
still log on and inhabit the world every month, and 13,000 newbies drop
into the community every day to see what Second Life is about. I was one of them, and I found out that just because Second Life is no longer under the glare of the media’s spotlight, it doesn’t mean the culture inside the petri dish isn’t still growing.
Packing tape and pyrotechnics
One of Second Life’s
million-strong population is Fee Berry, a 55-year-old mother of three
children who lives in Middlesex, a leafy suburb of London, England. And
though her Second Life avatar, Caliandris Pendragon, is cool and calm, I’ve caught her at a bad time.
“I’m moving house,” she
explains. In the background I can hear boxes being heaved back and
forth, tape unspooling and being wrapped around packaged items. At one
point in our conversation she has to ask her son to keep the noise down.
Berry became a stay-at-home mom after the birth of her first son and started gaming in 1998, playing Riven, a more puzzle-centric sequel to Myst,
a popular adventure game first released in 1993. Both were developed by
Cyan Worlds, at the time simply called Cyan. A friend introduced Berry
to Riven when she bought a second-hand Apple Macintosh; she was
initially wary, telling the friend, “I don’t think I like those sorts
of things.” She finished the game within three weeks.
She stuck with games produced by Cyan for the next six years, graduating to Uru, their MMO adventure game. When Cyan discontinued support for Uru Live,
the online section of the game, Berry, like many others, moved on to an
alternative. As with everyone entering their Second Life, she was
dropped from the sky. Her feet first hit the turf of the new virtual
world on February 12th, 2004.
"I can shrug off my role as a mother."
“It’s like every toy you ever
had, all rolled into one,” she tells me in awed tones, recalling the
power of the game to keep her playing nearly a decade on. It’s also
liberating, she explains, allowing her to forget about the kids, the
responsibilities, and the extra few inches she’d rather not have. It
lets her cut free.
In Second Life she
doesn’t have to be a graying 55-year-old mom; she can keep the bright
eyes and warm smile, but can pinch, tuck, and pluck the other bits so
that she becomes 25-year-old Pendragon, a vampish babe with full lips,
long jet black hair, and heavy eyeliner.
“I can shrug off my role as a mother,” she explains. “I can swear or misbehave in Second Life in a way I couldn’t in real life.”
The second-ever person I meet in Second Life, in a drop-off zone, proves that point. HOUSE Chemistry’s been in Second Life
for nearly six years. The 28-year-old lives in New Orleans, and may or
may not look like his in-universe avatar: a 6-foot-7-inch-tall man
wearing all black, with thick brown dreadlocks down to his waist — he
won’t say. Regardless, HOUSE Chemistry’s warm and welcoming, and seems
to enjoy taking me under his wing, explaining the universe to me.
When I ask him what he does in Second Life,
I’m expecting him to advise me to talk to people, make friends, and
take some classes. He replies a little differently: “Anything I want.
Walk near me. I’ll set this place on fire, watch.”
And so he does, under a clock showing 11:26, on one of Second Life’s
introductory islands. Truthfully, I’m not impressed: it’s a pretty
poor-quality animation with blocky gray smoke and weirdly flesh-colored
balls I presume are meant to represent the actual flames. Still, I
politely show my admiration and ask him whether he’d want to set stuff
on fire in real life.
“No,” he says. There’s a brief pause. “Take your time. You’ll learn how to do all kinds of cool shit.”
I try and move the
conversation on, asking HOUSE Chemistry what he does in real life. “I
build things,” he replies. I’m intrigued by this person who builds
things in real life, then sets polygonal representations of them on fire
in Second Life, and say so out loud. He ignores it, moves on, shuts down the conversation.
“You got it now,” he says. “Enjoy.”
“You a wife or a men?”
The concept of an avatar in the sense we know today first emerged in the 1980s from the LucasArts game Habitat and the cyberpunk novels of the time. Philip Rosedale, who created Second Life,
describes an avatar as “the representation of your chosen embodied
appearance to other people in a virtual world” — one that often blunts
the harsh edges and tones fat into muscle.
There are people like Berry
who use their second lives as a way to play a different role, a smudged
mirror reflection of themselves — and that’s great. But there are those
who believe that identity in Second Life is too opaque.
On my first day in-universe I
meet Larki Merlin, a 40-something German Second Lifer who likes to
punctuate his conversation with written-word emoticons. “I am all time
on big smile,” are his first words to me. His next words are to the
point: “You a wife or a men?” Merlin’s asking that for a good reason; he
stepped away from Second Life two years ago “for a long time — too many crazy people, only sex and lies. 50% of the girls are in rl [real life] boys.”
This might not be far off the truth: Berry tells me that at one point Linden Lab said six of every ten women in Second Life were men behind their avatars. One of the most famous women in Second Life, Jade Lily, is a male member of the US Air Force named Keith Morris. Morris married another Second Lifer, Coreina Grace (real name Meghan Sheehy), in real life in 2009.
Despite this, Merlin’s back,
but he admits that there are slim pickings in the universe: he’s met
maybe two of a hundred friends in Second Life — and “you waste 200 hours to find them.” He’s back, but barely.
Every story has two sides. I asked Berry about her experience in Second Life: has it made her more comfortable, more confident? Has it changed her first life persona in any way?
There’s a long pause. “Err…
It’s made me realize other people are not as scary as they appear to
be.” The first person Berry ever encountered in a virtual world was in Uru. “And I ran away,” she admits softly.
“I don’t know what I was
afraid of, really. But they spoke to me and I ran away, because it was a
stranger.” As a woman, Berry says, the interaction was completely the
opposite of what she’d been taught: “You wouldn’t strike up a
conversation with an unknown male because there are dangers associated
with that.” But when she plucked up the courage to stay and chat, “it
made me realize I’d been frightened of that 5 percent instead of
realizing 95 percent are decent.”
When mainstream media outlets touched down in Second Life seven years ago they tended to focus on the strangeness of it all. People were having sex through a game
and dressing up as foxes and kittens. The reality, says Tom
Boellstorff, a professor of anthropology at the University of
California, Irvine, is more prosaic: “Humans already live many different
kinds of life: online is just one more of those kinds of lives.”
“You can do anything in Second Life,”
Boellstorff continues, his voice rising in a lilt. “You can do crazy
stuff. You can be a ball of light or you can be 500 feet tall, or you
can be a child, or a dog, or whatever.”
You can do all that. But most people?
“There’s huge areas of Second Life
that just look like suburbia and people will build a house and put a TV
in it,” he says. “They’ll watch TV with their friends online.” An
entire world of opportunities out there and people choose to be couch
potatoes. It is, eerily, just like real life.
Ghost towns and boom towns
“We thought of Second Life
as complementing your first life,” Hunter Walk, one of the original
Linden Lab team members working on the universe from its launch, tells
me. It was conceived as a space that gave you a set of choices that were
missing from reality. “In your first life you don’t necessarily get to
fly. Here you can fly. In your first life you can’t choose what you look
like. Here you can choose what you look like — and it’s malleable.”
That changeability extended
right back to the developers. “The story of the internet in general is
one of unintended consequences,” begins Boellstorff. “It’s about
repurposing and doing things the original designers did not design for.”
As the custodians of an internet-based community, Second Life’s
developers were little different. When they began sketching out the
universe early in development, Linden Lab deliberately left things
open-ended. “The early users showed us the way to where the community
was,” explains Walk.
That community is now being
overlooked, believes Berry, who began working for Linden Lab making
textures and music in June 2008, and was fired in June 2013
after a dispute over money. “After five years working quite closely
with them, I still don’t feel I really know what the culture is,” she
says. “They simply never seem to understand their own product. It’s
ludicrous that they don’t understand how people use Second Life, what they like it for, what they want it for.”
There’s no such thing as an
average Second Lifer, but some people just don’t get it, no matter how
long they spend in-world. Berry tried, years back, to convince her
mother and siblings to join the world. “I’ve had very little luck. If I
can’t get them to try it they’re obviously not going to understand it.
And it’s really hard to explain it to anybody else.”
A giant bubble floated down from on high. “Step in,” she said
For the longest time I didn’t
get it. I’d spent several weeks pottering about, teleporting from one
place to another. I stood on a dock of a bay, overlooking an azure sea
and hearing the whistle of the wind. I walked through a cold, gun-metal
gray futuristic world full of walkways that reminded me of any number of
first-person shooters. I’d chased a woman, inexplicably sprinting, arms
flailing, through the palazzos of Milan, looking at the fashion
boutiques. I’d visited London — in reality a tired collection of worn
cliches, a cardboard cut-out of the Beatles crossing the street down
from a roundabout with a red telephone box on one corner. It was kind of
cool, but it was also corny.
Then Berry invited me to
Nemesis. It’s where she lives in-universe, all rolling green hills and
gated houses. Berry — or Pendragon, as she was in this world — wanted to
show me just how magical Second Life could get.
She had in her possession Starax’s Wand. Created by a user, it was at the time the most expensive item a user could buy in Second Life.
Clever coding meant that if its possessor mentioned certain words
in-game — “money,” for example — the universe would change around it (a
briefcase full of cash would descend from the heavens and spit out
greenbacks, for example).
The wand has been largely
outmoded by updates, but some commands still work. We were standing
outside the perimeter wall of Berry’s house, green grass beneath our
feet. Her avatar hunched over and moved her hands on an invisible
keyboard: the animation shows when the real person is typing. In the
chat box appeared a word.
“Bubble.”
A giant bubble floated down
from on high. “Step in,” she said. I did. And the bubble rose, and I saw
a bird’s eye view of Nemesis. I was suspended in mid-air in a giant
bubble, and could roll over the shoreline high above the sea. I couldn’t
help but smile; finally, I’d found my niche.
People come to the Second Life
universe for different reasons: some go there to escape their reality
and to stretch the boundaries of their lives in ways forbidden by the
constraints of their bodies or the norms of society. Some go to meet
friends and family; there are some who want to create buildings,
paintings, and whole new worlds. And some — big companies and small
entrepreneurs — hope to make a living.
Even after the deluge dried up there’s a booming economy in Second Life:
Berry began taking meetings in 2006 with companies looking to extend
their reach into the universe. Her knowledge of the world was her
selling point, helping companies avoid missteps in this strange, new
place. “Reportedly Adidas spent a million dollars on their sim in Second Life,”
Berry says with a laugh. What it got them was a single store selling
sneakers. Problem was, the sneakers slowed down the universe: “Anybody
running an event would say if you’ve got Adidas trainers on, take them
off because they were lagging the sim so bad!” Ironically, Berry says,
it was when the big companies descended on Second Life that the
place felt most like a ghost town, and not a boom town: they didn’t get
the ethos, didn’t engage, and left empty offices and buildings.
Berry’s earnings from Second Life
have varied enormously: a poor year can see her earn £5,000 ($7,600)
for her consultancy work, as well as creating music and textures for
avatars and locations in-world (a few years ago she specialized in
providing Christmas trees to those looking to get into the festive
spirit). “It’s not a fortune,” she explains. “I haven’t earned a lot of
money from it.” But it pays the bills.
Second Life isn’t a
whole new world — that’s something everyone, from Berry, to Walk, to
Boellstorff, has been keen to stress. For those truly committed, who
have property, and cash, and a business, and money invested in the
universe, it’s simply an ongoing extension of their lives: “That’s why
we chose the name,” Walk says.
Settling a civilization
Second Life has
survived its first 10 years, but every society rises and — inevitably —
falls. So what of Linden Lab’s creation? Will people still be living
Second Lives in 2023?
“I wouldn’t be surprised to see Second Life
around for quite a while,” says Hunter Walk. It’s been seven years
since he left the prosaically crazy universe, but he still remains on
its periphery. For a couple of years after leaving Linden Lab he
occasionally dropped back in on the world, teleporting from place to
place and checking out the sights. “It never quite got to the point
where it was something I’d be able to integrate into my life,” he says
regretfully. Instead, he now reads about it, takes pictures, and watches
videos.
Tom Boellstorff looks to history for precedent. LambdaMOO
was the original MOO (object-oriented MUD, a multi-user dungeon game).
Set up so long ago that its creator, Pavel Curtis, can’t remember
whether it went online in 1990 or 1991, it lives on today through the
benevolence and hard work of a core group of volunteers that refuses to
let the world die.
Fee Berry’s less sure. Resident for nearly a decade, she’s seen a lot of areas of Second Life fall victim to the decay that’s part of a relentlessly forward-looking world: “They haven’t really preserved the history of Second Life, as far as I can see, and don’t really rate it as anything worth saving. I think that’s a shame.”
Her 'Second Life' relationship became a real-life romance
Fired by Linden Lab and exasperated at the direction the universe is taking, she’s spending more time in OpenSim, a financially free and less constrained version of the Second Life
architecture, working on paid projects. There’s one drawback: it
doesn’t have a strong enough community or economy — yet. If it gets
those, it wins hands down, she says.
But that doesn’t mean she’s quite done with Linden Lab. She starts extolling the virtues of OpenSim, but brings it back to Second Life.
“I hope to get a better work–life balance, and to be able to spend entertainment — leisure time — in Second Life,”
she says. I get the sense that deep down, she’s made such a strong
connection that she’s permanently a resident there. After all, her Second Life
relationship with partner Oclee Hornet became a real-life romance. “He
had a bald avatar, which is quite unusual in any world,” she says. “I
was interested to know why.” Berry spent most of May in Rotterdam, where
Hornet — real name Eelco Osseweijer — lives. The two own a two-story
red brick home together in Second Life, on which they spend
$295 a month for the freehold to the land. “There’s a possibility we
will live together [in real life] at some stage in the future,” Berry
explains.
Despite it all, I ask her, despite the changes, and the intractability, despite the disputes and the stagnancy, you’re still a Second Life fan?
“Oh yeah,” she says. There’s a
pause and her voice grows richer, the kind of alteration in voice that
only comes when speaking through a genuine, heartfelt, and involuntary
smile.