Massachusetts Institute of Technology researchers have developed a
device that can see through walls and pinpoint a person with incredible
accuracy. They call it the “Kinect of the future,” after Microsoft’s
Xbox 360 motion-sensing camera.
Shown publicly this week for the first time, the project from MIT’s
Computer Science and Artificial Intelligence Laboratory (CSAIL) used three radio
antennas spaced about a meter apart and pointed at a wall.
Photo: Nick Barber. MIT has developed a way to pinpoint the location of someone through a wall using just radio signals.
A desk cluttered with wires and circuits generated and interpreted the
radio waves. On the other side of the wall, a person walked around the
room and the system represented that person as a red dot on a computer
screen. The system tracked the movements with an accuracy of plus or
minus 10 centimeters, which is about the width of an adult hand.
Fadel Adib, a Ph.D. student on the project, said that gaming could be one
use for the technology, but that localization is also very important.
He said that Wi-Fi localization, or determining someone’s position based
on Wi-Fi, requires the user to hold a transmitter, such as a smartphone.
“What we’re doing here is localization through a wall without requiring
you to hold any transmitter or receiver [and] simply by using
reflections off a human body,” he said. “What is impressive is that our
accuracy is higher than even state of the art Wi-Fi localization.”
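The article doesn’t spell out the team’s signal processing, but the geometry behind locating a body from radio reflections is easy to sketch. The following toy model is an illustrative assumption, not CSAIL’s algorithm: one transmitter and three receivers each measure the total transmitter-to-person-to-receiver path length of a reflected pulse (time of flight multiplied by the speed of light), and a brute-force search finds the position that best explains all three measurements.

```python
# Toy reflection-based localization: an illustrative sketch, not the
# CSAIL system. Each receiver measures the total path length
# transmitter -> person -> receiver for a pulse bouncing off the body.
import numpy as np

TX = np.array([0.0, 0.0])                              # transmitter (m)
RX = np.array([[-1.0, 0.0], [0.0, 0.2], [1.0, 0.0]])   # three receivers

def path_lengths(p):
    """Predicted bounce-path lengths for a person standing at point p."""
    return np.linalg.norm(p - TX) + np.linalg.norm(p - RX, axis=1)

def locate(measured, step=0.05, extent=5.0):
    """Grid search for the point whose predicted path lengths best
    match the measured ones (least-squares mismatch)."""
    best, best_err = None, np.inf
    for x in np.arange(-extent, extent, step):
        for y in np.arange(0.5, extent, step):         # beyond the wall
            err = np.sum((path_lengths(np.array([x, y])) - measured) ** 2)
            if err < best_err:
                best, best_err = (round(x, 2), round(y, 2)), err
    return best

true_pos = np.array([1.3, 2.7])
print(locate(path_lengths(true_pos)))   # (1.3, 2.7), within one grid step
```

A real system also has to recover those path lengths from raw radio signals and suppress static reflections such as the wall itself, which is where the hard engineering lives.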
He said that he hopes further iterations of the project will offer a real-time silhouette rather than just a red dot.
In the room where users walked around there was white tape on the floor
in a circular design. The tape on the floor was also in the virtual
representation of the room on the computer screen. It wasn’t being used
as an aid to the technology; rather, it showed onlookers just how accurate
the system was. As testers walked along the floor design, their actions were
mirrored on the computer screen.
One of the drawbacks of the system is that it can only track one moving
person at a time and the area around the project needs to be completely
free of movement. That meant that when the group wanted to test the
system, they had to clear out of the room housing the transmitters, as
well as the surrounding area; only the person being tracked could be nearby.
At the CSAIL facility the researchers had the system set up between two
offices, which shared an interior wall. In order to operate it,
onlookers needed to stand about a meter or two outside of both of the
offices so as not to interfere with the system.
Photo: Nick Barber. An MIT project can track a user with an accuracy of +/- 10 centimeters.
The system can only track one person at a time, but that doesn’t mean
two people can’t be in the same room at once. As long as one person is
relatively still, the system will only track the person who is moving.
The group is working on making the system even more precise. “We now
have an initial algorithm that can tell us if a person is just standing
and breathing,” Adib said. He also showed how raising an arm could be
tracked using radio signals: the red dot would shift slightly toward
the side where the arm was raised.
Adib also said that unlike previous versions of the project that used
Wi-Fi, the new system allows for 3D tracking and could be useful in
telling when someone has fallen at home.
For now, the system is quite bulky: it takes up an entire desk strewn
with wires, plus the space occupied by the antennas.
“We can put a lot of work into miniaturizing the hardware,” said
researcher Zach Kabelac, a master’s student at MIT. He said that the
antennas don’t need to be as far apart as they are now.
“We can actually bring these closer together to the size of a Kinect
[sensor] or possibly smaller,” he said. That would mean that the system
would “lose a little bit of accuracy,” but that it would be minimal.
The researchers filed a patent this week, and while there are no
immediate plans for commercialization, the team members were speaking
with representatives from major wireless and component companies during
the CSAIL open house.
Apple may be forced to abandon its proprietary 30-pin dock charger if European politicians get their way.
Members of the European Parliament’s internal market committee on
Thursday voted unanimously for a new law mandating a universal mobile
phone charger. The MEPs want all radio equipment and its
accessories, such as chargers, to be interoperable to cut down on
electronic waste.
German MEP Barbara Weiler said she wanted to see an end to “cable chaos”.
This is not the first attempt to set a standard for universal phone
chargers. In 2009 the European Commission, the International
Telecommunication Union (ITU) and leading mobile phone manufacturers
drew up a voluntary agreement based on the micro USB connector.
However, Apple, which sold nine million units of the iPhone 5s and 5c in
just three days last week, has not adhered to the agreement despite
signing up.
The draft law also lays down rules for other radio equipment, such as
car door openers or modems, to ensure that they do not interfere with
each other. The committee also cut some red tape by deleting a rule
that would have required manufacturers to register certain categories of
devices before placing them on the market.
The committee is now expected to begin informal negotiations with the
European Council in order to move the legislative process along quickly.
Once upon a time there were things called jobs, and they were well understood. People went to work for companies, in offices or in factories. There were exceptions — artists, aristocrats, entrepreneurs — but they were rare.
Laws, regulations, and statistics were based on this assumption; but,
increasingly, what people do today doesn’t fit neatly into that
anachronistic 1950s rubric. I’ve had the pleasure of trying to explain
to border officials that my “job” consisted of contracting in Country A
for a client in Country B, while also writing books and selling apps. I
don’t recommend it.
This disconnect will just keep getting worse. The so-called “sharing economy” mediated by sites and apps like Lyft, TaskRabbit, Thumbtack, Postmates,
Mechanical Turk, etc etc etc., replaces “consistent work for a single
employer” with “an agglomeration of short-term/one-time gigs.” That
doesn’t really map to the old-economy assumptions at all. And even
relatively high-skill professions are now being nibbled at by
shared-economy software; consider Disrupt winner YourMechanic.
I say “so-called” because, let’s face it, “sharing economy” is mostly
spin. It mostly consists of people who have excess disposable income
hiring those who do not; it’s pretty rare to cross that
divide. Far more accurate to call it the “servant economy.” (Not to be confused with the “patronage economy” — Kickstarter, Indiegogo — which deserves its own post.)
It’s not surprising that relatively-wealthy techies like me have
created apps and services which make relatively-wealthy techies’ lives a
little better, instead of solving the real and hard problems faced by poor people. But it is a little surprising that these apps effectively echo what’s happening on a massive scale in the corporate world.
Did you know that “the hiring rate of temp workers is five times that of hiring overall in the past year” and “The number of temps has jumped more than 50 percent since the recession ended”? Meanwhile, in the UK, “The median hourly earnings for the self-employed are £5.58, less than half the £11.21 earned by employees.”
This “ephemeral workforce” phenomenon isn’t just
American; the UK has also set records in the contingently employed.
Something profoundly structural is going on. Even healthier economic
growth won’t make it go away.
We already know how software will eat manufacturing (robots and 3D
printing) and transportation (self-driving vehicles.) This new servant
economy shows us how software will eat much of the service sector: by
turning many of its existing full-time jobs into a disconnected
cloud of temporary gigs.
In many ways this is inarguably a good thing. I may not think much of Uber’s CEO’s politics
but I think even less of the insane medallion system that rules taxi
industries across America for no good reason. (Anyone who believes taxi
companies’ claims that they’re safer probably also believes the TSA’s
claims that security theater keeps you safe.) I applaud the leveling of that demented regulatory wall.
What’s more, when the New Temps no longer require companies like
Manpower to connect them to their actual employers, but can pick and
choose on the fly among competing third parties, that too will be a huge
benefit for all concerned. It’s entertaining to read Manpower’s CEO’s
dismissal of this trend as “somewhat niche…I don’t think it’s going to
take over the world” in a recent Wall Street Journal piece. I suspect that quote will sound fantastically dense in ten years’ time.
And yet this trend makes me uneasy. The slow transformation of a huge
swathe of the economy from steady jobs to an ever-shifting maelstrom of
short-term contracts with few-to-no benefits, for which an ever-larger
pool of people will compete thanks to ever-lower barriers to entry, in a
sector where most jobs are already poorly paid…does this sound
to you like it will decrease inequality and increase social mobility?
Maybe, in certain specialized high-skill areas. But across the spectrum?
I doubt it.
It does sound like it will reduce prices…but, unlike
Wal-Mart, servant-economy providers are rarely servant-economy
customers. (As prices drop, their incomes drop too, keeping the
now-cheaper services still out of reach; a vicious circle.) The people
who benefit are, surprise, surprise, the techies, the professionals, the
bankers, the steadily dwindling middle class. You know. People like you and me. And, of course, the companies hiring the armies of temps.
I don’t want to sound like a pessimistic Luddite; I do believe that
this will ultimately be better than the status quo for most people. But
it seems to me that — like many of the other economic shifts triggered
by new technologies, as I’ve been arguing for some time — the vast majority of the benefits will accrue to a small and shrinking fraction of the population.
Is that inequality such a bad thing? If the techno-economic tide is
lifting all boats, does it really matter if it lifts the yachts higher
than the fishing boats, and the super-yachts into the stratosphere? It
seems to me that the answer depends in large part on whether the fishing
boats have any realistic prospect of achieving yachtdom.
Unfortunately, social mobility is actually significantly lower
in America than in other rich nations…and so far I see no reason to
believe that the combination of tomorrow’s technology and today’s
economic architecture will change that. In fact I have a nasty gut
feeling that the opposite is true, both in America and worldwide.
With its first computer based on the extremely low-power Quark
processor, Intel is tapping into the 'maker' community to figure out
ways the new chip could be best used.
The chip maker announced the Galileo computer -- which is a board
without a case -- with the Intel Quark X1000 processor on Thursday. The
board is targeted at the community of do-it-yourself enthusiasts who
make computing devices ranging from robots and health monitors to home
media centers and PCs.
The Galileo board should become widely available for under $60 by the
end of November, said Mike Bell, vice president and general manager of
the New Devices Group at Intel.
Bell hopes the maker community will use the board to build prototypes
and debug devices. The Galileo board will be open-source, and the
schematics will be released over time so it can be replicated by
individuals and companies.
Bell's New Devices Group is investigating business opportunities in
the emerging markets of wearable devices and the "Internet of things."
The chip maker launched the extremely low-power Quark processor for such
devices last month.
"People want to be able to use our chips to do creative things," Bell
said. "All of the coolest devices are coming from the maker community."
But at around $60, the Galileo will be more expensive than the
popular Raspberry Pi, which is based on an ARM processor and sells for
$25. The Raspberry Pi can also render 1080p graphics, which Intel's
Galileo can't match.
Making inroads in the enthusiast community
Questions also remain on whether Intel's overtures will be accepted
by the maker community, which embraces the open-source ethos of a
community working together to tweak hardware designs. Intel has made a
lot of contributions to the Linux OS, but has kept its hardware designs
secret. Intel's efforts to reach out to the enthusiast community are
recent; the company's first open-source PC went on sale in July.
Intel is committed long-term to the enthusiast community, Bell said.
Intel also announced a partnership with Arduino, which provides a
software development environment for the Galileo motherboard. The
enthusiast community has largely relied on Arduino microcontrollers and
boards with ARM processors to create interactive computing devices.
The Galileo is equipped with a 32-bit Quark SoC X1000 CPU, which has a
clock speed of 400MHz and is based on the x86 Pentium Instruction Set
Architecture. The Galileo board supports Linux OS and the Arduino
development environment. It also supports standard data transfer and
networking interfaces such as PCI-Express, Ethernet and USB 2.0.
Intel has demonstrated its Quark chip running in eyewear and a
medical patch that checks vital signs. The company has also talked about the
possibility of using the chip in personalized medicine, sensor devices
and cars.
Intel hopes creating interactive computing devices with Galileo will
be easy: thanks to support for the Arduino development environment,
writing applications for the board is as simple as writing programs
for standard microcontrollers.
"Essentially it's transparent to the development," Bell said.
Intel is shipping out 50,000 Galileo boards for free to students at over 1,000 universities over the next 18 months.
The war veteran who recoils at the sound of a car backfiring and the
recovering drug addict who feels a sudden need for their drug of choice
when visiting old haunts have one thing in common: Both are victims of
their own memories. New research indicates those memories could actually
be extinguished.
A new study from the Massachusetts Institute of Technology found a
gene called Tet1 can facilitate the process of memory extinction. In the
study, mice were put in a cage that delivered an electric shock. Once
they learned to fear that cage, they were then put in the same cage but
not shocked. Mice with the normal Tet1 levels no longer feared the cage
once new memories were formed without the shock. Mice with the Tet1 gene
eliminated continued to fear the cage even when there was no shock
delivered.
“We learned from this that the animals defective in the Tet1 gene are
not capable of weakening the fear memory,” Li-Huei Tsai, director of
MIT's Picower Institute for Learning and Memory, told Discovery News.
“For more than a half century it has been documented that gene
expression and protein synthesis are essential for learning and forming
new memories. In this study we speculated that the Tet1 gene regulates
chemical modifications to DNA.”
The MIT researchers found that Tet1 changes levels of DNA
methylation, a chemical modification of DNA that helps regulate gene expression. When
methylation is prominent, the process of learning new memories is more
efficient. When methylation is weaker, the opposite is true.
“The results support the notion that once a fear memory is formed, to
extinguish that memory a new memory has to form,” Tsai said. “The new
memory competes with the old memory and eventually supersedes the old
memory.”
Experts in the study of memory and anxiety agree.
“This is highly significant research in that it presents a completely
new mechanism of memory regulation and behavior regulation,” said
Jelena Radulovic, a professor of bipolar disease at Northwestern
University. “The mechanism of manipulating DNA is likely to affect many
other things. Now the question will be whether there will be patterns
that emerge, whether there will be side effects on moods and emotions
and other aspects. But the findings have real relevance.”
Radulovic, who was not directly involved in the study, says the
primary significance of the findings has to do with eliminating fear.
“The results show us a very specific paradigm of learned reduction of
fear,” she said. “This could mean that interference with the Tet1 gene
and modification of DNA could be an important target to reduce fear in
people with anxiety disorders.”
For her part, Tsai is most encouraged by the ability to approach
anxiety disorders at the molecular and cellular levels inside the brain.
“We can now see the biochemical cascade of events in the process of
memory formation and memory extinction,” said Tsai. “Hopefully this can
lead to new drug discoveries.”
Meanwhile, research in memory extinction is progressing quickly,
largely due to new discoveries through traditional experimentation,
augmented by advances in technology, Tsai said.
Elsewhere, parallel research is focusing more on physiological
processes that cause memories, rather than epigenetics (the study of how
genes are turned on or off). At the Scripps Research Institute,
researchers are studying what causes a methamphetamine addict to relapse
when confronted with familiar triggers that a person associates with
drug use.
“Substance users who are trying to stay clean, when exposed to the
environment where they used the drug, have all kinds of associations and
memories in their minds that are strong enough to elicit cravings,” said
Courtney Miller, an assistant professor at the Scripps Research
Institute, who led the research. “The idea is to try to selectively disrupt the dangerous memories but not lose other memories.”
“We taught rodents to press a lever to get an infusion of meth, and that
puts the drug delivery in the animal’s control,” Miller told Discovery News.
“They were put in an environment that was unique to them every day for
two weeks, where they could press the lever and get meth. They learned
to associate that environment with the meth, the place where they could
‘use.’”
The animals were then injected with a chemical that inhibited actin polymerization and placed back in their home environment.
“The process of actin polymerization happens when neurons contact
each other, and that is how information is passed,” Miller said. “Think
of it like a Lego project. There are little pieces that contact each
other. The receiving point on a neuron, called a dendritic spine,
enlarges when a memory is stored. It gives more surface areas so you can
have more neurotransmission.
"Actin controls that, enlarges the spine and keeps it large. In a
normal memory, pieces come off the top and circle around and add on to
the bottom very slowly. In a meth memory the piece comes off the top,
wraps around and comes back much faster. We gave a drug that takes the
pieces away and they are not added back on. The point of contact falls
apart and the memory is lost.”
The process, called depolymerization, means that memories are no longer stored.
Longtime memory researchers are highly supportive of Miller’s findings.
“The findings here are real game changers,” said Gary S. Lynch,
professor of psychiatry and human behavior at the University of
California, Irvine, School of Medicine. “What this points to is a completely new strategy
for treatment of addiction. For the past 10 years there have been many
challenges to the notion that memories are cemented in. But this study
shows that memory really is still a dynamic, malleable business and that
there can be another way of dealing with dependency.”
Lynch is particularly taken with the study’s findings regarding the role of actin.
“Actin is the most prevalent protein in the body,” said Lynch, who
has studied memory issues for more than 30 years. “Now to find that it
is so critical to dependency is breathtaking in its implications.”
In the future, it's possible the process can be generalized to other addictions, such as nicotine, Miller said.
As for how distant that future may be, Tsai believes we are still
many years from applying the current research to human beings with
psychiatric disorders.
“I would like to believe that through cognitive behavior therapy or
some new medication, eventually — not five or 10 years from now, but
eventually — a lot of the mechanisms are going to be solved,” Tsai said.
“We’ll know how good memories form, how bad memories form. But the
brain is an organ that is not very accessible to manipulation, unlike
most other organs. My prediction is that progress on memory research,
including memory extinction, will speed up considerably because of the
emerging technology.”
That technology, Tsai says, includes a new 3-D, high-resolution
brain-imaging technique called CLARITY, developed by a research team at Stanford
University. CLARITY essentially makes it possible to view the brain in a
transparent way, allowing researchers to see in detail its complex fine
wiring and essential features.
To translate one language into another, find the linear
transformation that maps one to the other. Simple, say a team of Google
engineers.
Computer
science is changing the nature of the translation of words and
sentences from one language to another. Anybody who has tried BabelFish or Google Translate will know that they provide useful translation services but ones that are far from perfect.
The
basic idea is to compare a corpus of words in one language with the
same corpus of words translated into another. Words and phrases that
share similar statistical properties are considered equivalent.
The
problem, of course, is that the initial translations rely on
dictionaries that have to be compiled by human experts and this takes
significant time and effort.
Now Tomas Mikolov and a couple of
pals at Google in Mountain View have developed a technique that
automatically generates dictionaries and phrase tables that convert one
language into another.
The new technique does not rely on versions
of the same document in different languages. Instead, it uses data
mining techniques to model the structure of a single language and then
compares this to the structure of another language.
“This method
makes little assumption about the languages, so it can be used to extend
and refine dictionaries and translation tables for any language pairs,”
they say.
The new approach is relatively straightforward. It
relies on the notion that every language must describe a similar set of
ideas, so the words that do this must also be similar. For example, most
languages will have words for common animals such as cat, dog, cow and
so on. And these words are probably used in the same way in sentences
such as “a cat is an animal that is smaller than a dog.”
The same
is true of numbers. The image above shows the vector representations of
the numbers one to five in English and Spanish and demonstrates how
similar they are.
This is an important clue. The new trick is to represent an entire language using the relationship between its words. The
set of all the relationships, the so-called “language space”, can be
thought of as a set of vectors that each point from one word to another.
And in recent years, linguists have discovered that it is possible to
handle these vectors mathematically. For example, the operation ‘king’ –
‘man’ + ‘woman’ results in a vector that is similar to ‘queen’.
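To make that arithmetic concrete, here is a toy version with made-up three-dimensional vectors (an assumption for illustration only; real word vectors are learned from large text corpora and have hundreds of dimensions):

```python
# Toy word-vector arithmetic with made-up 3-D vectors; real embeddings
# are learned from text and have hundreds of dimensions.
import numpy as np

vec = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.1, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "queen": np.array([0.9, 0.8, 0.9]),
    "apple": np.array([0.1, 0.0, 0.2]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

target = vec["king"] - vec["man"] + vec["woman"]       # = [0.9, 0.8, 0.9]
nearest = max((w for w in vec if w not in {"king", "man", "woman"}),
              key=lambda w: cosine(vec[w], target))
print(nearest)                                         # -> queen
```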
It
turns out that different languages share many similarities in this
vector space. That means the process of converting one language into
another is equivalent to finding the transformation that converts one
vector space into the other.
This turns the problem of translation
from one of linguistics into one of mathematics. So the problem for the
Google team is to find a way of accurately mapping one vector space
onto the other. For this they use a small bilingual dictionary compiled
by human experts: comparing the vectors for these known word pairs gives
them the data needed to fit a linear transformation that does the
trick.
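Concretely, the seed dictionary supplies vector pairs (x_i in the source space, z_i in the target space), and the mapping W is chosen to minimize the sum of ||W x_i - z_i||^2. The paper fits W with stochastic gradient descent; the sketch below substitutes a closed-form least-squares solve and synthetic stand-in vectors to keep the idea visible:

```python
# Fitting the translation matrix W from seed pairs: minimize
# sum_i ||W @ x_i - z_i||^2. Least squares stands in for the paper's
# stochastic gradient descent; the "word vectors" here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
dim, n_seed = 50, 500

X = rng.normal(size=(n_seed, dim))                   # source seed vectors
W_true = rng.normal(size=(dim, dim)) / np.sqrt(dim)  # hidden ground truth
Z = X @ W_true.T + 0.01 * rng.normal(size=(n_seed, dim))  # target side

# Solve X @ W.T ~= Z in the least-squares sense, then transpose.
W = np.linalg.lstsq(X, Z, rcond=None)[0].T

def translate(x, target_vocab):
    """Map a source vector through W and return the index of the
    nearest target-language vector by cosine similarity."""
    z = W @ x
    sims = (target_vocab @ z) / (np.linalg.norm(target_vocab, axis=1)
                                 * np.linalg.norm(z))
    return int(np.argmax(sims))

print(translate(X[0], Z))                            # -> 0 (its own pair)
```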
Having identified this mapping, it is then a simple matter
to apply it to the bigger language spaces. Mikolov and co say it works
remarkably well. “Despite its simplicity, our method is surprisingly
effective: we can achieve almost 90% precision@5 for translation of
words between English and Spanish,” they say.
The method can be
used to extend and refine existing dictionaries, and even to spot
mistakes in them. Indeed, the Google team do exactly that with an
English-Czech dictionary, finding numerous mistakes.
Finally, the
team point out that since the technique makes few assumptions about the
languages themselves, it can be used on argots that are entirely
unrelated. So while Spanish and English have a common Indo-European
history, Mikolov and co show that the new technique also works just as
well for pairs of languages that are less closely related, such as
English and Vietnamese.
That’s a useful step forward for the
future of multilingual communication. But the team says this is just the
beginning. “Clearly, there is still much to be explored,” they
conclude.
Ref: arxiv.org/abs/1309.4168: Exploiting Similarities among Languages for Machine Translation
Projection mapping, where ordinary objects become surfaces for moving images, is an increasingly common video technique in applications like music videos, phone commercials, and architectural light shows — and now a new film shows what can happen when you add robots to the mix. In Box, a performance artist works with transforming panels hoisted by industrial machinery in
a dazzling demonstration of projection mapping's mind-bending
possibilities. Every effect is captured in-camera, and each section
eventually reveals how the robot arms were used.
It's the work of San Francisco
studio Bot & Dolly, which believes its new technology can "tear down
the fourth wall" in the theater. "Through large-scale robotics,
projection mapping and software engineering, audiences will witness the
trompe l'oeil effect pushed to new boundaries," says creative director
Tarik Abdel-Gawad. "We believe this methodology has tremendous potential
to radically transform visual art forms and define new genres of
expression." Box is an effective demonstration of the studio's
projection mapping system, but it works in its own right as an
enthralling piece of art.
In Linden Lab's vast experiment, the end has no end
Do you remember Second Life?
Set up by developer Linden Lab in 2003, it was the faithful replication
of our modern world where whoring, drinking, and fighting were
acceptable. It was the place where big brands moved in as neighbors and
hawked you their wares online. For many, it was the future — our lives
were going to be lived online, as avatars represented us in nightclubs,
bedrooms, and banks made of pixels and code.
In the mid-2000s, every self-respecting media outlet sent reporters to the Second Life world to cover the parallel-universe beat. The BBC, Businessweek (now Bloomberg Businessweek), and NBC Nightly News all devoted time and coverage to the phenomenon. Amazon, American Apparel, and Disney set up shop in Second Life,
aiming to capitalize on the momentum it was building — and to play to
the in-world consumer base, which at one point in 2006 boasted a GDP of
$64 million.
Of course, stratospheric
growth doesn’t continue forever, and when the universe’s expansion
slowed and the novelty of people living parallel lives wore off, the
media moved on. So did businesses — but not users. Linden Lab doesn’t
share historical user figures, but it says the population of Second Life has been relatively stable for a number of years.
You might not have heard a peep about it since the halcyon days of 2006, but that doesn’t mean Second Life
has gone away. Far from it: this past June it celebrated its 10th
birthday, and it is still a strong community. A million active users
still log on and inhabit the world every month, and 13,000 newbies drop
into the community every day to see what Second Life is about. I was one of them, and I found out that just because Second Life is no longer under the glare of the media’s spotlight, it doesn’t mean the culture inside the petri dish isn’t still growing.
Packing tape and pyrotechnics
One of Second Life’s
million-strong population is Fee Berry, a 55-year-old mother of three
children who lives in Middlesex, a leafy suburb of London, England. And
though her Second Life avatar, Caliandris Pendragon, is cool and calm, I’ve caught her at a bad time.
“I’m moving house,” she
explains. In the background I can hear boxes being heaved back and
forth, tape unspooling and being wrapped around packaged items. At one
point in our conversation she has to ask her son to keep the noise down.
Berry became a stay-at-home mom after the birth of her first son and started gaming in 1998, playing Riven, a more puzzle-centric sequel to Myst,
a popular adventure game first released in 1993. Both were developed by
Cyan Worlds, at the time simply called Cyan. A friend introduced Berry
to Riven when she bought a second-hand Apple Macintosh; she was
initially wary, telling the friend, “I don’t think I like those sorts
of things.” She finished the game within three weeks.
She stuck with games produced by Cyan for the next six years, graduating to Uru, their MMO adventure game. When Cyan discontinued support for Uru Live,
the online section of the game, Berry, like many others, moved on to an
alternative. As with everyone entering their Second Life, she was
dropped from the sky. Her feet first hit the turf of the new virtual
world on February 12th, 2004.
"I can shrug off my role as a mother."
“It’s like every toy you ever
had, all rolled into one,” she tells me in awed tones, recalling the
power of the game to keep her playing nearly a decade on. It’s also
liberating, she explains, allowing her to forget about the kids, the
responsibilities, and the extra few inches she’d rather not have. It
lets her cut free.
In Second Life she
doesn’t have to be a graying 55-year-old mom; she can keep the bright
eyes and warm smile, but can pinch, tuck, and pluck the other bits so
that she becomes 25-year-old Pendragon, a vampish babe with full lips,
long jet black hair, and heavy eyeliner.
"It's like every toy you ever had, all rolled into one."
“I can shrug off my role as a mother,” she explains. “I can swear or misbehave in Second Life in a way I couldn’t in real life.”
The second-ever person I meet in Second Life, in a drop-off zone, proves that point. HOUSE Chemistry’s been in Second Life
for nearly six years. The 28-year-old lives in New Orleans, and may or
may not look like his in-universe avatar: a 6-foot-7-inch-tall man
wearing all black, with thick brown dreadlocks down to his waist — he
won’t say. Regardless, HOUSE Chemistry’s warm and welcoming, and seems
to enjoy taking me under his wing, explaining the universe to me.
When I ask him what he does in Second Life,
I’m expecting him to advise me to talk to people, make friends, and
take some classes. He replies a little differently: “Anything I want.
Walk near me. I’ll set this place on fire, watch.”
And so he does, under a clock showing 11:26, on one of Second Life’s
introductory islands. Truthfully, I’m not impressed: it’s a pretty
poor-quality animation with blocky gray smoke and weirdly flesh-colored
balls I presume are meant to represent the actual flames. Still, I
politely show my admiration and ask him whether he’d want to set stuff
on fire in real life.
“No,” he says. There’s a brief pause. “Take your time. You’ll learn how to do all kinds of cool shit.”
I try and move the
conversation on, asking HOUSE Chemistry what he does in real life. “I
build things,” he replies. I’m intrigued by this person who builds
things in real life, then sets polygonal representations of them on fire
in Second Life, and say so out loud. He ignores it, moves on, shuts down the conversation.
“You got it now,” he says. “Enjoy.”
“You a wife or a men?”
The concept of an avatar in the sense we know today first emerged in the 1980s from the Lucasfilm game Habitat and the cyberpunk novels of the time. Philip Rosedale, who created Second Life,
describes an avatar as “the representation of your chosen embodied
appearance to other people in a virtual world” — one that often blunts
the harsh edges and tones fat into muscle.
There are people like Berry
who use their second lives as a way to play a different role, a smudged
mirror reflection of themselves — and that’s great. But there are those
who believe that identity in Second Life is too opaque.
On my first day in-universe I
meet Larki Merlin, a 40-something German Second Lifer who likes to
punctuate his conversation with written-word emoticons. “I am all time
on big smile,” are his first words to me. His next words are to the
point: “You a wife or a men?” Merlin’s asking that for a good reason; he
stepped away from Second Life two years ago “for a long time — too many crazy people, only sex and lies. 50% of the girls are in rl [real life] boys.”
This might not be far off the truth: Berry tells me that at one point Linden Lab said six of every ten women in Second Life were men behind their avatars. One of the most famous women in Second Life, Jade Lily, is a male member of the US Air Force named Keith Morris. Morris married another Second Lifer, Coreina Grace (real name Meghan Sheehy), in real life in 2009.
Despite this, Merlin’s back,
but he admits that there are slim pickings in the universe: he’s met
maybe two of a hundred friends in Second Life — and “you waste 200 hours to find them.” He’s back, but barely.
"There’s huge areas of Second Life that just look like suburbia and people will build a house and put a TV in it."
Every story has two sides. I asked Berry about her experience in Second Life: has it made her more comfortable, more confident? Has it changed her first life persona in any way?
There’s a long pause. “Err…
It’s made me realize other people are not as scary as they appear to
be.” The first person Berry ever encountered in a virtual world was in Uru. “And I ran away,” she admits softly.
“I don’t know what I was
afraid of, really. But they spoke to me and I ran away, because it was a
stranger.” As a woman, Berry says, the interaction was completely the
opposite of what she’d been taught: “You wouldn’t strike up a
conversation with an unknown male because there are dangers associated
with that.” But when she plucked up the courage to stay and chat, “it
made me realize I’d been frightened of that 5 percent instead of
realizing 95 percent are decent.”
When mainstream media outlets touched down in Second Life seven years ago they tended to focus on the strangeness of it all. People were having sex through a game
and dressing up as foxes and kittens. The reality, says Tom
Boellstorff, a professor of anthropology at the University of
California, Irvine, is more prosaic: “Humans already live many different
kinds of life: online is just one more of those kinds of lives.”
“You can do anything in Second Life,”
Boellstorff continues, his voice rising in a lilt. “You can do crazy
stuff. You can be a ball of light or you can be 500 feet tall, or you
can be a child, or a dog, or whatever.”
You can do all that. But most people?
“There’s huge areas of Second Life
that just look like suburbia and people will build a house and put a TV
in it,” he says. “They’ll watch TV with their friends online.” An
entire world of opportunities out there and people choose to be couch
potatoes. It is, eerily, just like real life.
Ghost towns and boom towns
“We thought of Second Life
as complementing your first life,” Hunter Walk, one of the original
Linden Lab team members working on the universe from its launch, tells
me. It was conceived as a space that gave you a set of choices that were
missing from reality. “In your first life you don’t necessarily get to
fly. Here you can fly. In your first life you can’t choose what you look
like. Here you can choose what you look like — and it’s malleable.”
That changeability extended
right back to the developers. “The story of the internet in general is
one of unintended consequences,” begins Boellstorff. “It’s about
repurposing and doing things the original designers did not design for.”
As the custodians of an internet-based community, Second Life’s
developers were little different. When they began sketching out the
universe early in development, Linden Lab deliberately left things
open-ended. “The early users showed us the way to where the community
was,” explains Walk.
That community is now being
overlooked, believes Berry, who began working for Linden Lab making
textures and music in June 2008, and was fired in June 2013
after a dispute over money. “After five years working quite closely
with them, I still don’t feel I really know what the culture is,” she
says. “They simply never seem to understand their own product. It’s
ludicrous that they don’t understand how people use Second Life, what they like it for, what they want it for.”
There’s no such thing as an
average Second Lifer, but some people just don’t get it, no matter how
long they spend in-world. Berry tried, years back, to convince her
mother and siblings to join the world. “I’ve had very little luck. If I
can’t get them to try it they’re obviously not going to understand it.
And it’s really hard to explain it to anybody else.”
For the longest time I didn’t
get it. I’d spent several weeks pottering about, teleporting from one
place to another. I stood on a dock of a bay, overlooking an azure sea
and hearing the whistle of the wind. I walked through a cold, gun-metal
gray futuristic world full of walkways that reminded me of any number of
first-person shooters. I’d chased a woman, inexplicably sprinting, arms
flailing, through the palazzos of Milan, looking at the fashion
boutiques. I’d visited London — in reality a tired collection of worn
cliches, a cardboard cut-out of the Beatles crossing the street down
from a roundabout with a red telephone box on one corner. It was kind of
cool, but it was also corny.
Then Berry invited me to
Nemesis. It’s where she lives in-universe, all rolling green hills and
gated houses. Berry — or Pendragon, as she was in this world — wanted to
show me just how magical Second Life could get.
She had in her possession Starax’s Wand. Created by a user, it was at the time the most expensive item a user could buy in Second Life.
Clever coding meant that if its possessor mentioned certain words
in-game — “money,” for example — the universe would change around it (a
briefcase full of cash would descend from the heavens and spit out
greenbacks, for example).
The wand has been largely
outmoded by updates, but some commands still work. We were standing
outside the perimeter wall of Berry’s house, green grass beneath our
feet. Her avatar hunched over and moved her hands on an invisible
keyboard: the animation that plays when the real person is typing. In the
chat box appeared a word.
“Bubble.”
A giant bubble floated down
from on high. “Step in,” she said. I did. And the bubble rose, and I saw
a bird’s eye view of Nemesis. I was suspended in mid-air in a giant
bubble, and could roll over the shoreline high above the sea. I couldn’t
help but smile; finally, I’d found my niche.
People come to the Second Life
universe for different reasons: some go there to escape their reality
and to stretch the boundaries of their lives in ways forbidden by the
constraints of their bodies or the norms of society. Some go to meet
friends and family; there are some who want to create buildings,
paintings, and whole new worlds. And some — big companies and small
entrepreneurs — hope to make a living.
Even after the deluge dried up there’s a booming economy in Second Life:
Berry began taking meetings in 2006 with companies looking to extend
their reach into the universe. Her knowledge of the world was her
selling point, helping companies avoid missteps in this strange, new
place. “Reportedly Adidas spent a million dollars on their sim in Second Life,”
Berry says with a laugh. What it got them was a single store selling
sneakers. Problem was, the sneakers slowed down the universe: “Anybody
running an event would say if you’ve got Adidas trainers on, take them
off because they were lagging the sim so bad!” Ironically, Berry says,
it was when the big companies descended on Second Life that the
place felt most like a ghost town, and not a boom town: they didn’t get
the ethos, didn’t engage, and left empty offices and buildings.
Berry’s earnings from Second Life
have varied enormously: a poor year can see her earn £5,000 ($7,600)
for her consultancy work, as well as creating music and textures for
avatars and locations in-world (a few years ago she specialized in
providing Christmas trees to those looking to get into the festive
spirit). “It’s not a fortune,” she explains. “I haven’t earned a lot of
money from it.” But it pays the bills.
Second Life isn’t a
whole new world — that’s something everyone, from Berry, to Walk, to
Boellstorff, has been keen to stress. For those truly committed, who
have property, and cash, and a business, and money invested in the
universe, it’s simply an ongoing extension of their lives: “That’s why
we chose the name,” Walk says.
Settling a civilization
Second Life has
survived its first 10 years, but every society rises and — inevitably —
falls. So what of Linden Lab’s creation? Will people still be living
Second Lives in 2023?
“I wouldn’t be surprised to see Second Life
around for quite a while,” says Hunter Walk. It’s been seven years
since he left the prosaically crazy universe, but he still remains on
its periphery. For a couple of years after leaving Linden Lab he
occasionally dropped back in on the world, teleporting from place to
place and checking out the sights. “It never quite got to the point
where it was something I’d be able to integrate into my life,” he says
regretfully. Instead, he now reads about it, takes pictures, and watches
videos.
Tom Boellstorff looks to history for precedent. LambdaMOO
was the original MOO (object-oriented MUD, a multi-user dungeon game).
Set up so long ago that its creator, Pavel Curtis, can’t remember
whether it went online in 1990 or 1991, it lives on today through the
benevolence and hard work of a core group of volunteers that refuses to
let the world die.
Fee Berry’s less sure. Resident for nearly a decade, she’s seen a lot of areas of Second Life fall victim to the decay that’s part of a relentlessly forward-looking world: “They haven’t really preserved the history of Second Life, as far as I can see, and don’t really rate it as anything worth saving. I think that’s a shame.”
Fired by Linden Lab and exasperated at the direction the universe is taking, she’s spending more time in OpenSim, a financially free and less constrained version of the Second Life
architecture, working on paid projects. There’s one drawback: it
doesn’t have a strong enough community or economy — yet. If it gets
those, it wins hands down, she says.
But that doesn’t mean she’s quite done with Linden Lab. She starts extolling the virtues of OpenSim, but brings it back to Second Life.
“I hope to get a better work–life balance, and to be able to spend entertainment — leisure time — in Second Life,”
she says. I get the sense that deep down, she’s made such a strong
connection that she’s permanently a resident there. After all, her Second Life
relationship with partner Oclee Hornet became a real-life romance. “He
had a bald avatar, which is quite unusual in any world,” she says. “I
was interested to know why.” Berry spent most of May in Rotterdam, where
Hornet — real name Eelco Osseweijer — lives. The two own a two-story
red brick home together in Second Life, on which they spend
$295 a month for the freehold to the land. “There’s a possibility we
will live together [in real life] at some stage in the future,” Berry
explains.
Despite it all, I ask her, despite the changes, and the intractability, despite the disputes and the stagnancy, you’re still a Second Life fan?
“Oh yeah,” she says. There’s a
pause and her voice grows richer, the kind of alteration in voice that
only comes when speaking through a genuine, heartfelt, and involuntary
smile.
In the future, most people will live in a total surveillance state – and some of us might even like it
A Banksy graffiti work in London. Photo by Cate Gillon/Getty Images
Suppose you’re walking home one night, alone, and you decide to take a
shortcut through a dark alley. You make it halfway through, when
suddenly you hear some drunks stumbling behind you. Some of them are
shouting curses. They look large and powerful, and there are several of
them. Nonetheless, you feel safe, because you know someone is watching.
You know this because you live in the future where surveillance is
universal, ubiquitous and unavoidable. Governments and large
corporations have spread cameras, microphones and other tracking devices
all across the globe, and they also have the capacity to store and
process oceans of surveillance data in real time. Big Brother not only
watches your sex life, he analyses it. It sounds nightmarish — but it
might be inevitable. So far, attempts to control surveillance have
generally failed. We could be headed straight for the panopticon, and if
recent news developments are any indication, it might not take that
long to get there.
Maybe we should start preparing. And not just by wringing our hands
or mounting attempts to defeat surveillance. For if there’s a chance
that the panopticon is inevitable, we ought to do some hard thinking
about its positive aspects. Cataloguing the downsides of mass
surveillance is important, essential even. But we have a whole
literature devoted to that. Instead, let’s explore its potential
benefits.
The first, and most obvious, advantage of mass surveillance is a
drastic reduction in crime. Indeed, this is the advantage most often put
forward by surveillance proponents today. The evidence as to whether
current surveillance achieves this is ambiguous; cameras, for instance,
seem to have an effect on property crime, but not on the incidence of
violence. But today’s world is very different from a panopticon full of
automatically analysed surveillance devices that leave few zones of
darkness.
If calibrated properly, total surveillance might eradicate certain
types of crime almost entirely. People respond well to inevitable
consequences, especially those that follow swiftly on the heels of their
conduct. Few would commit easily monitored crimes such as assault or
breaking and entering, if it meant being handcuffed within minutes. This
kind of ultra-efficient police capability would require not only
sensors capable of recording crimes, but also advanced computer vision
and recognition algorithms capable of detecting crimes quickly. There
has been some recent progress on such algorithms, with further
improvements expected. In theory, they would be able to alert the police
in real time, while the crime was still ongoing. Prompt police
responses would create near-perfect deterrence, and violent crime would
be reduced to a few remaining incidents of overwhelming passion or
extreme irrationality.
If surveillance recordings were stored for later analysis, other
types of crimes could be eradicated as well, because perpetrators would
fear later discovery and punishment. We could expect crimes such as
low-level corruption to vanish, because bribes would become perilous (to
demand or receive) for those who are constantly under watch. We would
likely see a similar reduction in police brutality. There might be an
initial spike in detected cases of police brutality under a total
surveillance regime, as incidents that would previously have gone
unnoticed came to light, but then, after a short while, the numbers
would tumble. Ubiquitous video recording, mobile and otherwise, has
already begun to expose such incidents.
On a smaller scale, mass surveillance would combat all kinds of
abuses that currently go unreported because the abuser has power over
the abused. You see this dynamic in a variety of scenarios, from the
dramatic (child abuse) to the more mundane (line managers insisting on
illegal, unpaid overtime). Even if the victim is too scared to report
the crime, the simple fact that the recordings existed would go a long
way towards equalising existing power differentials. There would be the
constant risk of some auditor or analyst stumbling on the recording, and
once the abused was out of the abuser’s control (grown up, in another
job) they could retaliate and complain, proof in hand. The possibility
of deferred vengeance would make abuse much less likely to occur in the
first place.
With reduced crime, we could also expect a significant reduction in
police work and, by extension, police numbers. Beyond a rapid-reaction
force tasked with responding to rare crimes of passion, there would be
no need to keep a large police force on hand. And there would also be no
need for them to enjoy the special rights they do today. Police
officers can, on mere suspicion, detain you, search your person,
interrogate you, and sometimes enter your home. They can also arrest you
on suspicion of vague ‘crimes’ such as ‘loitering with intent’. Our
present police force is given these powers because it needs to be able
to investigate. Police officers can’t be expected to know who committed
what crime, and when, so they need extra powers to be able to figure
this out, and still more special powers to protect themselves while they
do so. But in a total-surveillance world, there would be no need for
humans to have such extensive powers of investigation. For most crimes,
guilt or innocence would be obvious and easy to establish from the
recordings. The police’s role could be reduced to arresting specific
individuals who have violated specific laws.
If all goes well, there might be fewer laws for the police to
enforce. Most countries currently have an excess of laws, criminalising
all sorts of behaviour. This is only tolerated because of selective
enforcement; the laws are enforced very rarely, or only against
marginalised groups. But if everyone was suddenly subject to
enforcement, there would have to be a mass legal repeal. When spliffs on
private yachts are punished as severely as spliffs in the ghetto, you
can expect the marijuana legalisation movement to gather steam. When it
becomes glaringly obvious that most people simply can’t follow all the
rules they’re supposed to, these rules will have to be reformed. In the
end, there is a chance that mass surveillance could result in more
personal freedom, not less.
The military is another arm of state power that is ripe for a
surveillance-inspired shrinking. If cross-border surveillance becomes
ubiquitous and effective, we could see a reduction in the $1.7 trillion
that the world spends on the military each year. Previous attempts to
reduce armaments have ultimately been stymied by a lack of reliable
verification. Countries can never trust that their enemies aren’t
cheating, and that encourages them to cheat themselves. Arms races are
also made worse by a psychological phenomenon, whereby each side
interprets the actions of the other as a dangerous provocation, while
interpreting its own as purely defensive or reactive. With cross-border
mass surveillance, countries could check that others are abiding by the
rules, and that they aren’t covertly preparing for an attack. If
intelligence agencies were to use all the new data to become more
sophisticated observers, countries might develop a better understanding
of each other. Not in the hand-holding, peace-and-love sense, but in
knowing what is a genuine threat and what is bluster or posturing. Freed
from fear of surprising new weapons, and surprise attacks, countries
could safely shrink their militaries. And with reduced armies, we should
be able to expect reduced warfare, continuing the historical trend in
conflict reduction since the end of the Second World War.
Of course, these considerations pale when
compared with the potential for mass surveillance to help prevent global
catastrophic risks, and other huge disasters. Pandemics, to name just
one example, are among the deadliest dangers facing the human race. The
Black Death killed a third of Europe’s population in the 14th century
and, in the early 20th century, the Spanish Flu killed off between 50
and 100 million people. In addition, smallpox buried more people than
the two world wars combined. There is no reason to think that great
pandemics are a thing of the past, and in fact there are reasons to
think that another plague could be due soon. There is also the
possibility that a pandemic could arise from synthetic biology, the
human manipulation of microbes to perform specific tasks. Experts are
divided as to the risks involved in this new technology, but they could
be tremendous, especially if someone were to release, accidentally or
malevolently, infectious agents deliberately engineered for high
transmissibility and deadliness.
Mass surveillance could help greatly here, by catching lethal
pandemics in their earliest stages, or beforehand, if we were to see one
being created artificially. It could also expose lax safety standards
or dangerous practices in legitimate organisations. Surveillance could
allow for quicker quarantines, and more effective treatment of
pandemics. Medicines and doctors could be rushed to exactly the right
places, and micro-quarantines could be instituted. More dramatic
measures, such as airport closures, are hard to implement on a large
scale, but these quick-response tactics could be implemented narrowly
and selectively. Most importantly, those infected could be rapidly
informed of their condition, allowing them to seek prompt treatment.
With proper procedures and perfect surveillance, we could avoid
pandemics altogether. Infections would be quickly isolated and
eliminated, and eradication campaigns would be shockingly efficient.
Tracking the movements and actions of those who fell ill would make it
much easier to research the causes and pathology of diseases. You can
imagine how many lives would have been saved had AIDS been sniffed out
by epidemiologists more swiftly.
Likewise, mass surveillance could prevent the terrorist use of nukes,
dirty bombs, or other futuristic weapons. Instead of blanket bans in
dangerous research areas, we could allow research to proceed and use
surveillance to catch bad actors and bad practices. We might even see an
increase in academic freedom.
Surveillance could also be useful in smaller, more conventional
disasters. Knowing where everyone in a city was at the moment an
earthquake struck would make rescue services much more effective, and
the more cameras around when hurricanes hit, the better. Over time, all
of this footage would increase our understanding of disasters, and help
us to mitigate their effects.
Indeed, there are whole new bodies of research that could emerge from
the data provided by mass surveillance. Instead of formulating theories
and laboriously recruiting a biased and sometimes unwilling group for
testing, social scientists, economists and epidemiologists could use
surveillance data to test their ideas. And they could do it from home,
immediately, and have access to the world’s entire population. Many
theories could be rapidly confirmed or discarded, with great benefit to
society. The panopticon would be a research nirvana.
Mass surveillance could also make our lives more convenient, by
eliminating the need for passwords. The surveillance system itself could
be used for identification, provided the algorithms were sufficiently
effective. Instead of Mr John Smith typing in ‘passw0rd!!!’ to access
his computer or ‘2345’ to access his money, the system could simply
track where he was at all times, and grant him access to any computers
and money he had the right to. Long security lines at airports could
also be eliminated. If surveillance can detect prohibited items, then
searches are a waste of time. Effective crime detection and deterrence
would mean that people would have little reason to lock their cars or
their doors.
Doing business in a mass surveillance society would be smoother, too.
Outdoor festivals and concerts would no longer need high fences,
security patrols, and intimidating warnings. They could simply replace
them with clear signs along the boundary of the event, as anyone
attending would be identified and billed directly. People could dash
into a shop, grab what they needed, and run out, without having to wait
in line or check out. The camera system would have already billed them.
Drivers who crashed into parked cars would no longer need to leave a
note. They’d be tracked anyway, and insurance companies would have
already settled the matter by the time they returned home. Everyday
human interactions would be changed in far-reaching ways. Lying and
hypocrisy would become practically impossible, and one could no longer
project a false image of oneself. In the realm of personal identity,
there would be less place for imagination or reinvention, and more place
for honesty.
Today’s intricate copyright laws could be simplified, and there would
be no need for the infantilising mess of reduced functionality that is
‘Digital Rights Management’. Surveillance would render DRM completely
unnecessary, meaning that anyone who purchased a song could play it
anytime, on any machine, while copying it and reusing it to their
heart’s content. There would be no point in restricting these uses,
because the behaviour that copyright holders object to — passing the
music on to others — would be detected and tagged separately. Every time
you bought a song, a book, or even a movie, you’d do so knowing that it
would be with you wherever you went for the rest of your life.
The virtues and vices of surveillance are the imagined virtues and
vices of small villages, which tend to be safe and neighbourly, but
prejudiced and judgemental. With the whole world as the village, we can
hope that the multiplicity of cultures and lifestyles would reduce a
global surveillance culture’s built-in potential for prejudice and
judgment. With people more trusting, and less fearful, of each other, we
could become more willing to help out, more willing to take part in
common projects, more pro-social and more considerate. Yes, these
potential benefits aren’t the whole story on mass surveillance, and I
would never argue that they outweigh the potential downsides. But if
we’re headed into a future panopticon, we’d better brush up on the
possible upsides. Because governments might not bestow these benefits
willingly — we will have to make sure to demand them.
Just a year after the creation of the first carbon nanotube computer chip,
scientists have built the very first computer with a
central processor based entirely on carbon nanotubes. Which means
the future of electronics just got tinier, more efficient, and a whole
lot faster.
Built by a
group of engineers at Stanford University, the computer itself is
relatively basic compared to what we're used to today. In fact,
according to Subhasish Mitra, an electrical engineer at Stanford and
project co-leader, its capabilities are comparable to an Intel
4004—Intel's first microprocessor released in 1971. It can switch
between basic tasks (like counting and organizing numbers) and send data
back to an external memory, but that's pretty much it. Of course, the
slowness is partially due to the fact that the computer wasn't exactly
built under the best conditions, as MIT Technology Review notes.
Don't let that fool you, though—this is just the first step. Last
year, IBM proved that carbon nanotube transistors can run about three
times as fast as the traditional silicon variety, and we'd already
managed to arrange over 10,000 carbon nanotubes onto a single chip. They
just hadn't connected them in a functioning circuit. But now that we
have, the future is looking awfully bright.
Theoretically,
carbon nanotube computing would be an order of magnitude faster than
what we've seen thus far with any material. And since carbon nanotubes
naturally dissipate heat at an incredible rate, computers made out of
the stuff could hit blinding speeds without even breaking a sweat. So
the speed limits we have using silicon—which doesn't do so well with
heat—would be effectively obliterated.
The future
of breakneck computing doesn't come without its little speedbumps,
though. One of the problems with nanotubes is that they grow in a
generally haphazard fashion, and some of them even come out with
metallic properties, short-circuiting whatever transistor you decide to
shove them into. In order to overcome these challenges, the researchers at
Stanford had to use electricity to vaporize any metallic nanotubes that
cropped up and formulated design algorithms that would be able to
function regardless of any nanotube alignment problems. Now it's just a
matter of scaling their lab-grown methods to an industrial level—easier
said than done.
Still, despite the limitations at hand, this huge advancement just put another nail in silicon's ever-looming coffin.