Photo: University of Michigan and TSMC
One of several varieties of University of Michigan micromotes. This one incorporates 1 megabyte of flash memory.
Computer scientist David Blaauw
pulls a small plastic box from his bag. He carefully uses his
fingernail to pick up the tiny black speck inside and place it on the
hotel café table. At 1 cubic millimeter, this is one of a line of the
world’s smallest computers. I had to be careful not to cough or sneeze
lest it blow away and be swept into the trash.
Blaauw and his colleague Dennis Sylvester,
both IEEE Fellows and computer scientists at the University of
Michigan, were in San Francisco this week to present 10 papers related
to these “micromote” computers at the IEEE International Solid-State Circuits Conference (ISSCC). They’ve been presenting different variations on the tiny devices for a few years.
Their broader goal is to make smarter, smaller sensors for medical
devices and the Internet of Things—sensors that can do more with less
energy. Many of the microphones, cameras, and other sensors that make up
the eyes and ears of smart devices are always on alert, and frequently
beam personal data into the cloud because they can’t analyze it
themselves. Some have predicted that by 2035, there will be 1 trillion such devices.
“If you’ve got a trillion devices producing readings constantly, we’re
going to drown in data,” says Blaauw. By developing tiny,
energy-efficient computing sensors that can do analysis on board, Blaauw
and Sylvester hope to make these devices more secure, while also saving
energy.
At the conference, they described micromote designs that use only a
few nanowatts of power to perform tasks such as distinguishing the sound
of a passing car and measuring temperature and light levels. They
showed off a compact radio that can send data from the small computers
to receivers 20 meters away—a considerable boost compared to the
50-centimeter range they reported last year at ISSCC. They also described their work with TSMC
(Taiwan Semiconductor Manufacturing Company) on embedding flash memory
into the devices, and a project to bring on board dedicated, low-power
hardware for running artificial intelligence algorithms called deep
neural networks.
Blaauw and Sylvester say they take a holistic approach to adding new
features without ramping up power consumption. “There’s no one answer”
to how the group does it, says Sylvester. If anything, it’s “smart
circuit design,” Blaauw adds. (They pass ideas back and forth rapidly,
not finishing each other’s sentences but something close to it.)
The memory research is a good example of how the right trade-offs can
improve performance, says Sylvester. Previous versions of the
micromotes used 8 kilobytes of SRAM (static RAM), which makes for a
pretty low-performance computer. To record video and sound, the tiny
computers need more memory. So the group worked with TSMC to bring flash
memory on board. Now they can make tiny computers with 1 megabyte of
storage.
Flash can store more data in a smaller footprint than SRAM, but it
takes a big burst of power to write to the memory. With TSMC, the group
designed a new memory array that uses a more efficient charge pump for
the writing process. The memory arrays are somewhat less dense than TSMC's commercial products, but they still pack far more storage into a given area than SRAM. "We were able to get huge gains with small trade-offs," says Sylvester.
Another micromote they presented at the ISSCC incorporates a deep-learning
processor that can operate a neural network while using just 288
microwatts. Neural networks are artificial intelligence algorithms that
perform well at tasks such as face and voice recognition. They typically
demand both large memory banks and intense processing power, and so
they’re usually run on banks of servers often powered by advanced GPUs.
Some researchers have been trying to lessen the size and power demands
of deep-learning AI with dedicated hardware that’s specially designed to
run these algorithms. But even those processors still use over 50
milliwatts of power—far too much for a micromote. The Michigan group
brought down the power requirements by redesigning the chip
architecture, for example by situating four processing elements within
the memory (in this case, SRAM) to minimize data movement.
The idea is to bring neural networks to the Internet of Things. “A
lot of motion detection cameras take pictures of branches moving in the
wind—that’s not very helpful,” says Blaauw. Security cameras and other
connected devices are not smart enough to tell the difference between a
burglar and a tree, so they waste energy sending uninteresting footage
to the cloud for analysis. Onboard deep-learning processors could make
better decisions, but only if they don’t use too much power. The
Michigan group imagines that deep-learning processors could be integrated
into many other Internet-connected things besides security systems. For
example, an HVAC system could decide to turn the air-conditioning down
if it sees multiple people putting on their coats.
After demonstrating many variations on these micromotes in an
academic setting, the Michigan group hopes they will be ready for market
in a few years. Blaauw and Sylvester say their startup company, CubeWorks,
is currently prototyping devices and researching markets. The company
was quietly incorporated in late 2013. Last October, Intel Capital announced they had invested an undisclosed amount in the tiny computer company.
False-color micrograph of Caenorhabditis elegans
(Science Photo Library/Corbis)
If the brain is a collection of electrical signals, then, if you could catalog all of those signals digitally, you might be able to upload your brain into a computer, thus achieving digital immortality.
While the plausibility—and ethics—of this upload for humans can be debated, some people are forging ahead in the field of whole-brain emulation. There are massive efforts to map the connectome—all the connections in
the brain—and to understand how we think. Simulating brains could lead
us to better robots and artificial intelligence, but the first steps
need to be simple.
So, one group of scientists started with the roundworm Caenorhabditis elegans, a critter whose genes and simple nervous system we know intimately.
The OpenWorm project
has mapped the connections between the worm’s 302 neurons and simulated
them in software. (The project’s ultimate goal is to completely
simulate C. elegans as a virtual organism.) Recently, they put that software program in a simple Lego robot.
The worm’s body parts and neural networks now have
LegoBot equivalents: The worm’s nose neurons were replaced by a sonar
sensor on the robot. The motor neurons running down both sides of the
worm now correspond to motors on the left and right of the robot, explains Lucy Black for I Programmer. She writes:
---
It is claimed that the robot behaved in ways that are similar to observed C. elegans. Stimulation
of the nose stopped forward motion. Touching the anterior and posterior
touch sensors made the robot move forward and back accordingly.
Stimulating the food sensor made the robot move forward.
---
Timothy Busbice, a founder of the OpenWorm project, posted a video of the Lego-Worm-Bot stopping and backing up.
The simulation isn't exact—the program simplifies, for example, the thresholds needed to trigger a "neuron" firing. But the behavior is impressive
considering that no instructions were programmed into this robot. All it
has is a network of connections mimicking those in the brain of a
worm.
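To make that idea concrete, here is a minimal, purely illustrative Python sketch of connectome-driven control. The "neuron" names, weights, and threshold below are invented stand-ins for OpenWorm's 302-neuron map; the point is only that the behavior falls out of the wiring rather than from hand-written rules.

# Illustrative toy, not OpenWorm's code: a three-"neuron" stand-in for
# the worm's wiring diagram, mapped onto a sonar "nose" and two motors.
connectome = {
    "nose":    {"command": -1.0},   # nose touch inhibits forward motion
    "food":    {"command": +0.8},   # sensing food excites forward motion
    "command": {"motor_left": 1.0, "motor_right": 1.0},
}

def propagate(stimulus, hops=2):
    """Spread sensory activation through the connectome for a fixed number of hops."""
    activity = dict(stimulus)
    for _ in range(hops):
        nxt = dict(activity)
        for src, level in activity.items():
            for dst, weight in connectome.get(src, {}).items():
                nxt[dst] = nxt.get(dst, 0.0) + level * weight
        activity = nxt
    return activity

def drive_motors(activity, threshold=0.5):
    """Map motor-'neuron' activity onto the robot's left and right motors."""
    left = activity.get("motor_left", 0.0)
    right = activity.get("motor_right", 0.0)
    return "forward" if (left > threshold and right > threshold) else "stop"

print(drive_motors(propagate({"food": 1.0})))               # food alone -> "forward"
print(drive_motors(propagate({"nose": 1.0, "food": 1.0})))  # nose touch -> "stop"

Nothing in this sketch tells the robot what to do in any situation; change the weights and the behavior changes, which is the whole point of running a connectome instead of a rulebook.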
Of course, the goal of uploading our brains assumes that we aren't already living in a computer simulation.
Hear out the logic: Technologically advanced civilizations will
eventually make simulations that are indistinguishable from reality. If
that can happen, odds are it has. And if it has, there are probably
billions of simulations making their own simulations. Work out that
math, and "the odds are nearly infinity to one that we are all living in
a computer simulation," writes Ed Grabianowski for io9.
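The back-of-the-envelope version of that math, using purely illustrative numbers, goes like this:

P(you are in base reality) ≈ 1 / (1 + N), where N is the number of simulated minds for every unsimulated one.

If 1,000 advanced civilizations each run a million ancestor simulations, and each simulation holds roughly as many minds as the real world, then N ≈ 10^9 and the odds of being unsimulated are about one in a billion. Push the assumptions further and the ratio runs away toward Grabianowski's "nearly infinity to one."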
Google's moonshot group Google X is working on a pill that, when
swallowed, will seek out cancer cells in your body. It'll seek out all
sorts of diseases, in fact, pushing the envelope when it comes to
finding and destroying diseases at their earliest stages of development.
This system would face "a much higher regulatory bar than conventional
diagnostic tools," so says Chad A. Mirkin, director of the International
Institute for Nanotechnology at Northwestern University.
Word comes from the Wall Street Journal, where Andrew Conrad, head of the Life Sciences team at the Google X research lab, spoke during the paper's WSJD Live conference. The system is called "The Nano Particle Platform," and it aims to "functionalize Nano Particles, to make them do what we want."
According to Conrad, Google X is more than likely working on two separate devices. The first is the pill, which contains the smart
nanoparticles. The second is a wearable device that attracts the
particles so that they might be counted.
"Our dream" said Conrad, is that "every test you ever go to the doctor for will be done through this system."
Perhaps that's because, unlike — say — in sales or HR, where
innovation is defined by new management strategies, tech investment is
very product driven. Buying a new piece of hardware or software often
carries the potential for a 'disruptive' breakthrough in productivity or
some other essential business metric. Tech suppliers therefore have a
vested interest in promoting their products as vigorously as possible:
the level of spending on marketing and customer acquisition by some
fast-growing tech companies would turn many consumer brands green with
envy.
As a result, CIOs are tempted by an ever-changing array of tech
buzzwords (cloud, wearables and the Internet of Things [IoT] are
prominent in the recent crop) through which they must sift in order to
find the concepts that are a good fit for their organisations, and that
match their budgets, timescales and appetite for risk. Short-term
decisions are relatively straightforward, but the further you look
ahead, the harder it becomes to predict the winners.
Tech innovation in a one-to-three year timeframe
Despite all the temptations, the technologies that CIOs are looking
at deploying in the near future are relatively uncontroversial — pretty
safe bets, in fact. According to TechRepublic's own research, top CIO
investment priorities over the next three years include security,
mobile, big data and cloud. Fashionable technologies like 3D printing
and wearables find themselves at the bottom of the list.
A separate survey from Deloitte reported similar findings: many of
the technologies that CIOs are piloting and planning to implement in the
near future are ones that have been around for quite some time — business
analytics, mobile apps, social media and big data tools, for example.
Augmented reality and gamification were seen as low-priority
technologies.
This reflects the priorities of most CIOs, who tend to focus on
reliability over disruption: in TechRepublic's research,
'protecting/securing networks and data' trumps 'changing business
requirements' for understandably risk-wary tech chiefs.
Another major factor here is money: few CIOs have a big budget for
bets on blue-skies innovation projects, even if they wanted to. (And
many no doubt remember the excesses of the dotcom years, and are keen to
avoid making that mistake again.)
According to the research by Deloitte, less than 10 percent of the
tech budget is ring-fenced for technology innovation (and CIOs that do
spend more on innovation tend to be in smaller, less conservative,
companies). There's another complication: CIOs increasingly don't control the budget dedicated to innovation, as this is handed to other business units (such as marketing or digital) that are considered to have a more entrepreneurial outlook.
CIOs tend to blame their boss's conservative attitude to risk as the
biggest constraint in making riskier IT investments for innovation and
growth. Although CIOs claim to be willing to take risks with IT
investments, this attitude does not appear to match up with their
current project portfolios.
Another part of the problem is that it's very hard to measure the
return on some of these technologies. Managers have been used to
measuring the benefits of new technologies using a standard
return-on-investment measure that tracks some very obvious costs —
headcount or spending on new hardware, for example. But defining the
return on a social media project or an IoT trial is much more slippery.
Tech investment: A medium-term view
If CIO investment plans remain conservative and hobbled by a limited
budget in the short term, you have to look a little further out to see
where the next big thing in tech might come from.
One place to look is in what's probably the best-known set of predictions about the future of IT: Gartner's Hype Cycle for Emerging Technologies, which tries to assess the potential of new technologies while taking into account the expectations surrounding them.
The chart grades technologies not only by how far they are from
mainstream adoption, but also on the level of hype surrounding them, and
as such it demonstrates what the analysts argue is a fundamental truth:
that we can't help getting excited about new technology, but that we
also rapidly get turned off when we realize how hard it can be to deploy
successfully. The exotically-named Peak of Inflated Expectations is
commonly followed by the Trough of Disillusionment, before technologies
finally make it up the Slope of Enlightenment to the Plateau of
Productivity.
"It was a pattern we were seeing with pretty much all technologies
— that up-and-down of expectations, disillusionment and eventual
productivity," says Jackie Fenn, vice-president and Gartner fellow, who
has been working on the project since the first hype cycle was published
20 years ago, which she says is an example of the human reaction to any
novelty.
"It's not really about the technologies themselves, it's about how we
respond to anything new. You see it with management trends, you see it
with projects. I've had people tell me it applies to their personal
lives — that pattern of the initial wave of enthusiasm, then the
realisation that this is much harder than we thought, and then
eventually coming to terms with what it takes to make something work."
According to Gartner's 2014 list, the technologies expected to reach the Plateau of Productivity (where they become widely adopted) within the next two years include speech recognition and in-memory analytics.
Technologies that might take two to five years until mainstream
adoption include 3D scanners, NFC and cloud computing. Cloud is
currently entering Gartner's Trough of Disillusionment, where early
enthusiasm is overtaken by the grim reality of making this stuff work:
"there are many signs of fatigue, rampant cloudwashing and
disillusionment (for example, highly visible failures)," Gartner notes.
When you look at a 5-10-year horizon, the predictions include virtual reality, cryptocurrencies and wearable user interfaces.
Working out when the technologies will make the grade, and thus how
CIOs should time their investments, seems to be the biggest challenge.
Several of the technologies on Gartner's first-ever hype curve back in
1995 — including speech recognition and virtual reality — are still on
the 2014 hype curve without making it to primetime yet.
These sorts of user interface technologies have taken a long time to
mature, says Fenn. For example, voice recognition started to appear in
very structured call centre applications, while the latest incarnation
is something like Siri — "but it's still not a completely mainstream
interface," she says.
Nearly all technologies go through the same rollercoaster ride,
because our response to new concepts remains the same, says Fenn. "It's
an innate psychological reaction — we get excited when there's something
new. Partly it's the wiring of our brains that attracts us — we want to
keep going around the first part of the cycle where new technologies
are interesting and engaging; the second half tends to be the hard work,
so it's easier to get distracted."
But even if they can't escape the hype cycle, CIOs can use concepts
like this to manage their own impulses: if a company's investment
strategy means it's consistently adopting new technologies when they are
most hyped (remember a few years back when every CEO had to blog?), then it may be time to reassess, even if peer pressure among CIOs makes that difficult.
Says Fenn: "There is that pressure, that if you're not doing it you
just don't get it — and it's a very real pressure. Look at where [new
technology] adds value and if it really doesn't, then sometimes it's
fine to be a later adopter and let others learn the hard lessons if it's
something that's really not critical to you."
The trick, she says, is not to force-fit innovation, but to continually experiment and not always expect to be right.
Looking further out, the technologies labelled 'more than 10 years'
to mainstream adoption on Gartner's hype cycle are the rather
sci-fi-inflected ones: holographic displays, quantum computing and human
augmentation. As such, it's a surprisingly entertaining romp through
the relatively near future of technology, from the rather mundane to the
completely exotic. "Employers will need to weigh the value of human
augmentation against the growing capabilities of robot workers,
particularly as robots may involve fewer ethical and legal minefields
than augmentation," notes Gartner.
Where the futurists roam
Beyond the 10-year horizon, you're very much into the realm where the tech futurists roam.
Steve Brown, a futurist at chip-maker Intel, argues that three mega-trends will shape the future of computing over the next decade.
"They are really simple — it's small, big and natural," he says.
'Small' is the consequence of Moore's Law, which will continue the
trend towards small, low-power devices, making the rise of wearables and
the IoT more likely. 'Big' refers to the ongoing growth in raw
computing power, while 'natural' is the process by which everyday
objects are imbued with some level of computing power.
"Computing was a destination: you had to go somewhere to compute — a
room that had a giant whirring computer in it that you worshipped, and
you were lucky to get in there. Then you had the era where you could
carry computing with you," says Brown.
"The next era is where the computing just blends into the world
around us, and once you can do that, and instrument the world, you can
essentially make everything smart — you can turn anything into a
computer. Once you do that, profoundly interesting things happen,"
argues Brown.
With this level of computing power comes a new set of problems for
executives, says Brown. The challenge for CIOs and enterprise architects
is that once they can make everything smart, what do they want to use
it for? "In the future you have all these big philosophical questions
that you have to answer before you make a deployment," he says.
Brown envisages a world of ubiquitous processing power, where robots are able to see and understand the world around them.
"Autonomous machines are going to change everything,"
he claims. "The challenge for enterprise is how humans will work
alongside machines — whether that's a physical machine or an algorithm
— and what's the best way to take a task and split it into the innately
human piece and the bit that can be optimized in some way by being
automated."
The pace of technological development is accelerating: where we used
to have a decade to make these decisions, these things are going to hit
us faster and faster, argues Brown. All of which means we need to make
better decisions about how to use new technology — and will face harder
questions about privacy and security.
"If we use this technology, will it make us better humans? Which
means we all have to decide ahead of time what do we consider to be
better humans? At the enterprise level, what do we stand for? How do we
want to do business?"
Not just about the hardware and software
For many organizations there's a big stumbling block in the way of
this bright future — their own staff and their ways of working. Figuring
out what to invest in may be a lot easier than persuading staff, and
whole organisations, to change how they operate.
"What we really need to figure out is the relationship between humans
and technology, because right now humans get technology massively
wrong," says Dave Coplin, chief envisioning officer for Microsoft (a
firmly tongue-in-cheek job title, he assures me).
Coplin argues that most of us tend to use new technology to do things the way we've always done them, when the point of new technology is to enable us to do things fundamentally differently. The
concept of productivity is a classic example: "We've got to pick apart
what productivity means. Unfortunately most people think process is
productivity — the better I can do the processes, the more productive I
am. That leads us to focus on the wrong point, because actually
productivity is about leading to better outcomes." Three-quarters of
workers think a productive day in the office is clearing their inbox, he
notes.
Developing a better relationship with technology is necessary because
of the huge changes ahead, argues Coplin: "What happens when technology
starts to disappear into the background; what happens when every
surface has the capability to have contextual information displayed on
it based on what's happening around it, and who is looking at it? This
is the kind of world we're heading into — a world of predictive data
that will throw up all sorts of ethical issues. If we don't get the
humans ready for that change we'll never be able to make the most of
it."
Nicola Millard, a futurologist at telecoms giant BT, echoes these
ideas, arguing that CIOs have to consider not just changes to the
technology ahead of them, but also changes to the workers: a longer
working life requires workplace technologies that appeal to new recruits
as well as staff into their 70s and older. It also means rethinking the
workplace: "The open-plan office is a distraction machine," she says
— but can you be innovative in a grey cubicle? Workers using tablets might prefer 'perch points' to desks, while those using gesture control may need more space. Even the role of the manager itself may change —
becoming less about traditional command and control, and more about
being a 'party host', finding the right mix of skills to get the job
done.
In the longer term, not only will the technology change profoundly,
but the workers and managers themselves will also need to upgrade their
thinking.
Next week at the World Cup, a
paralyzed volunteer from the Association for Assistance to Disabled
Children will walk onto the field and open the tournament with a
ceremonial kick. This modern miracle is made possible by a robotic
exoskeleton that will move the user's limbs, taking commands directly
from his or her thoughts.
This demonstration is the debut of the Walk Again Project,
a consortium of more than 150 scientists and engineers from around the
globe who have come together to show off recent advances in the field of
brain machine interfaces, or BMI. The paralyzed person inside will be
wearing an electroencephalographic (EEG) headset that records brainwave
activity. A backpack computer will translate those electrical signals
into commands the exoskeleton can understand. As the robotic frame
moves, it also sends its own signals back to the body, restoring not
just the ability to walk, but the sensation as well.
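As a rough illustration of what "translate those electrical signals into commands" involves, here is a toy Python sketch of an EEG-to-command step. The sampling rate, frequency band, threshold, and function names are all assumptions chosen for explanation; the Walk Again Project's actual decoding pipeline is far more sophisticated than this.

import numpy as np

FS = 250  # assumed headset sampling rate, in samples per second

def band_power(window, lo_hz, hi_hz):
    """Average spectral power of one EEG channel between lo_hz and hi_hz."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    mask = (freqs >= lo_hz) & (freqs <= hi_hz)
    return float(spectrum[mask].mean())

def decode_intent(window, threshold=50.0):
    """Toy decoder: imagined movement typically suppresses 8-13 Hz (mu-band)
    power over the motor cortex, so a drop below this arbitrary threshold
    is read here as a 'walk' command."""
    return "WALK" if band_power(window, 8, 13) < threshold else "STAND"

# One pass of the control loop: a one-second window of samples comes off the
# headset, is decoded, and the resulting command goes to the exoskeleton.
window = np.random.randn(FS)      # stand-in for real EEG samples
print(decode_intent(window))      # e.g. "STAND"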
Just how well the wearer will walk and kick is uncertain. The project has been criticized by other neuroscientists as an exploitative spectacle that uses the disabled to promote research which may not be the best path
for restoring health to paralyzed patients. And just weeks before the
project is set to debut on television to hundreds of millions of fans,
it still hasn’t been tested outdoors and awaits some final pieces and
construction. It's not even clear which of the eight people from the
study will be the one inside the suit.
The point of the project is not
to show finished research, however, or sell a particular technology.
The Walk Again Project is meant primarily to inspire. It's a
demonstration that we’re on the threshold of achieving science fiction:
technologies that will allow humans to truly step into the cyborg era.
It’s only taken a little over two centuries to get there.
The past
Scientists have been studying
the way electricity interacts with our biology since 1780, when Luigi
Galvani made the legs of a dead frog dance by zapping them with a spark,
but the modern history behind the technology that allows our brains to
talk directly to machines goes back to the 1950s and John Lilly. He
implanted several hundred electrodes into different parts of a monkey’s
brain and used these implants to apply shocks, causing different body
parts to move. A decade later in 1963, professor Jose Delgado of Yale
tested this theory again like a true Spaniard, stepping into the ring to
face a charging bull, which he stopped in its tracks with a zap to the brain.
In 1969, professor Eberhard Fetz was able to isolate and record the
firing of a single neuron onto a microelectrode he had implanted into
the brain of a monkey. Fetz learned that primates could actually tune
their brain activity to better interact with the implanted machine. He
rewarded them with banana pellets every time they triggered the
microelectrode, and the primates quickly improved in their ability to
activate this specific section of their brain. This was a critical
observation, demonstrating the brain's unique plasticity, its ability to
create fresh pathways to fit a new language.
Today, BMI research has
advanced to not only record the neurons firing in primates’ brains, but
to understand what actions the firing of those neurons represent. "I
spend my life chasing the storms that emanate from the hundreds of
billions of cells that inhabit our brains," explained Miguel Nicolelis, PhD, one of the founders of the Center for Neuroengineering at Duke University and the driving force behind the Walk Again Project. "What we want to do is listen to these brain symphonies and try to extract from them the messages they carry."
Nicolelis and his colleagues at
Duke were able to record brain activity and match it to actions. From
there they could translate that brain activity into instructions a
computer could understand. Beginning in the year 2000, Nicolelis and
his colleagues at Duke made a series of breakthroughs. In the best known, they implanted a monkey with an array of microelectrodes that
could record the firing of clusters of neurons in different parts of the
brain. The monkey stood on a treadmill and began to walk. On the other
side of the planet, a robot in Japan received the signal emanating from
the primate’s brain and began to walk.
Primates
in the Duke lab learned to control robotic arms using only their
thoughts. And like in the early experiments done by Fetz, the primates
showed a striking ability to improve the control of these new limbs.
"The brain is a remarkable instrument," says professor Craig Henriquez,
who helped to found the Duke lab. "It has the ability to rewire itself,
to create new connections. That’s what gives the BMI paradigm its power.
You are not limited just by what you can physically engineer, because
the brain evolves to better suit the interface."
The present
After his success with
primates, Nicolelis was eager to apply the advances in BMI to people.
But there were some big challenges in the transition from lab animals to
human patients, namely that many people weren’t willing to undergo
invasive brain surgery for the purposes of clinical research. "There is
an open question of whether you need to have implants to get really fine
grained control," says Henriquez. The Walk Again Project hopes to
answer that question, at least partially. While it is based on research
in animals that required surgery, it will be using only external EEG
headsets to gather brain activity.
The fact that these patients
were paralyzed presented another challenge. Unlike the lab monkeys, who
could move their own arms and observe how the robot arm moved in
response, these participants can't move their legs and, in many cases, can't really remember the subconscious thought process that takes place when you want to travel by putting one foot in front of the other. The first step was
building up the pathways in the brain that would send mental commands
to the BMI to restore locomotion.
To train the patients in this
new way of thinking about movement, researchers turned to virtual
reality. Each subject was given an EEG headset and an Oculus Rift.
Inside the head-mounted display, the subjects saw a virtual avatar of
themselves from the waist down. When they thought about walking, the
avatar legs walked, and this helped the brain to build new connections
geared towards controlling the exoskeleton. "We also simulate the
stadium, and the roar of the crowd," says Regis Kopper, who runs Duke’s
VR lab. "To help them prepare for the stress of the big day."
Once
the VR training had established a baseline for sending commands to the
legs, there was a second hurdle. Much of walking happens at the level of
reflex, and without the peripheral nervous system that helps people
balance, coordinate, and adjust to the terrain, walking can be a very
challenging task. That’s why even the most advanced robots have trouble navigating stairs
or narrow hallways that would seem simple to humans. If the patients
were going to successfully walk or kick a ball, it wasn’t enough that
they be able to move the exoskeleton’s legs — they had to feel them as
well.
The breakthrough was a special
shirt with vibrating pads on its forearms. As the robot walked, the
contact of its heel and toe on the ground made corresponding sensations
occur along parts of the right and left arms. "The brain essentially
remapped one part of the body onto another," says Henriquez. "This
restored what we call proprioception, the spatial awareness humans need
for walking."
In recent weeks all eight of
the test subjects have successfully walked using the exoskeleton, with
one completing an astonishing 132 steps. The plan is to have the
volunteer who works best with the exoskeleton perform the opening kick.
But the success of the very public demonstration is still up in the air.
The suit hasn’t been completely finished and it has yet to be tested in
an outdoor environment. The group won't confirm who exactly will be
wearing the suit. Nicolelis, for his part, isn’t worried. Asked when he
thought the entire apparatus would be ready, he replied: "Thirty minutes
before."
The future
The Walk Again project may be
the most high-profile example of BMI, but there have been a string of
breakthrough applications in recent years. A patient at the University of Pittsburgh
achieved unprecedented levels of fine motor control with a robotic arm
controlled by brain activity. The Rehabilitation Institute of Chicago
introduced the world's first mind-controlled prosthetic leg. For now the use of advanced BMI technologies is largely confined to academic and medical research, but some projects, like DARPA's Deka arm,
have received FDA approval and are beginning to move into the real
world. As it improves in capability and comes down in cost, BMI may
open the door to a world of human enhancement that would see people
merging with machines, not to restore lost capabilities, but to augment
their own abilities with cyborg power-ups.
"From the standpoint of
defense, we have a lot of good reasons to do it," says Alan Rudolph, a
former DARPA scientist and Walk Again Project member. Rudolph, for
example, worked on the Big Dog,
and says BMI may allow human pilots to control mechanical units with
their minds, giving them the ability to navigate uncertain or dynamic
terrain in a way that has so far been impossible while keeping soldiers
out of harm's way. Our thoughts might control a robot on the surface of
Mars or a microsurgical bot navigating the inside of the human body.
There is a subculture of DIY biohackers and grinders
who are eager to begin adopting cyborg technology and who are willing,
at least in theory, to amputate functional limbs if it’s possible to
replace them with stronger, more functional, mechanical ones. "I know
what the limits of the human body are like," says Tim Sarver, a member
of the Pittsburgh biohacker collective Grindhouse Wetwares. "Once you’ve
seen the capabilities of a 5000psi hydraulic system, it’s no
comparison."
For now, this sci-fi vision
all starts with a single kick on the World Cup pitch, but our inevitable
cyborg future is indeed coming. A recent demonstration
at the University of Washington enabled one person’s thoughts to
control the movements of another person’s body — a brain-to-brain
interface — and it holds the key to BMI’s most promising potential
application. "In this futuristic scenario, voluntary electrical brain
waves, the biological alphabet that underlies human thinking, will
maneuver large and small robots, control airships from afar," wrote
Nicolelis. "And perhaps even allow for the sharing of thoughts and
sensations with one individual to another."
Qualcomm is getting high on 64-bit chips with its fastest ever
Snapdragon processor, which will render 4K video, support LTE Advanced
and could run the 64-bit Android OS.
The new Snapdragon 810 is the company’s “highest performing” mobile chip
for smartphones and tablets, Qualcomm said in a statement. Mobile
devices with the 64-bit chip will ship in the first half of next year,
and be faster and more power-efficient. Snapdragon chips are used in
handsets with Android and Windows Phone operating systems, which are not
available in 64-bit form yet.
The Snapdragon 810 is loaded with the latest communication and graphics
technologies from Qualcomm. The graphics processor can render 4K (3840 x
2160 pixel) video at 30 frames per second, and 1080p video at 120
frames per second. The chip also has an integrated modem that supports
LTE and its successor, LTE-Advanced, which is emerging.
The 810 also is among the first mobile chips to support the latest
low-power LPDDR4 memory, which will allow programs to run faster while
consuming less power. This will be beneficial, especially for tablets,
as 64-bit chips allow mobile devices to have more than 4GB of memory,
which is the limit on current 32-bit chips.
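The 4GB figure is simple address arithmetic. A 32-bit chip can directly address at most

2^32 bytes = 4,294,967,296 bytes ≈ 4GB

of memory, while 64-bit addressing raises that theoretical ceiling to 2^64 bytes, so the amount of RAM a device can use stops being limited by the processor's word size.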
The octa-core chip has a mix of high-power ARM Cortex-A57 CPU cores for
demanding tasks and low-power A53 CPU cores for mundane tasks like
taking calls, messaging and MP3 playback. The multiple cores ensure more
power-efficient use of the chip, which helps extend battery life of
mobile devices.
The company also introduced a Snapdragon 808 six-core 64-bit chip. The
chips will be among the first made using the latest 20-nanometer
manufacturing process, which is an advance from the 28-nm process used
to make Snapdragon chips today.
Qualcomm now has to wait for Google to release a 64-bit version of
Android for ARM-based mobile devices. Intel has already shown mobile
devices running 64-bit Android with its Merrifield chip, but most mobile
products today run on ARM processors. Qualcomm licenses the processor architecture and core designs used in its Snapdragon chips from ARM.
Work on 64-bit Android is already underway,
and applications like the Chrome browser are already being developed
for the OS. Google has not officially commented on when 64-bit Android
would be released, but industry observers believe it could be announced at the Google I/O conference in late June.
Qualcomm spokesman Jon Carvill declined to comment on support for 64-bit
Android. But the chips are “further evidence of our commitment to
deliver top-to-bottom mobile 64-bit leadership across product tiers for
our customers,” Carvill said in an email.
Qualcomm’s chips are used in some of the world’s top smartphones, and
will appear in Samsung’s Galaxy S5. A Qualcomm executive in October last year called
Apple’s A7, the world’s first 64-bit mobile chip, a “marketing
gimmick,” but the company has moved on and now has five 64-bit chips
coming to medium-priced and premium smartphones and tablets. But no
64-bit Android smartphones are available yet, and Apple has a head start
and remains the only company selling a 64-bit smartphone with its iPhone
5S.
The 810 supports HDMI 1.4 for 4K video output, and the Adreno 430
graphics processor is 30 percent faster on graphics performance and 20
percent more power efficient than the older Adreno 420 GPU. The graphics
processor will support 55-megapixel sensors, Qualcomm said. Other chip
features include 802.11ac Wi-Fi with built-in technology for faster
wireless data transfers, Bluetooth 4.1 and a processing core for
location services.
The six-core Snapdragon 808 is a notch down on performance compared to
the 810, and also has fewer features. The 808 supports LTE-Advanced, but its display support tops out at 2560 x 1600 pixels. It will support
LPDDR3 memory. The chip has two Cortex-A57 CPUs and four Cortex-A53
cores.
The chips will ship out to device makers for testing in the second half of this year.
This week Google's experimental Advanced Technology and Projects group, known as ATAP (the company's skunkworks), announced Project Tango. They've suggested Project Tango will appear first as a phone with 3D sensors. These 3D sensors will be able to scan and build a map of the room they're in, opening up a whole world of possibilities.
The first Project Tango device will be just about as limited-edition as they come. Issued in an edition of 200, it will be sent to developers only. This developer group will be hand-picked by Google's ATAP, and sign-ups start today. (We'll be publishing the sign-up link once active.)
Speaking on this skunkworks project this morning was Google's Johnny Lee. Lee is ATAP's technical program lead, and he'll be heading this project as the public will see it. This is the same group that brought you Motorola's digital tattoos, if you'll remember.
If your car was
powered by thorium, you would never need to refuel it. The vehicle would
burn out long before the chemical did. The thorium would last so long,
in fact, it would probably outlive you.
That's why a company called Laser Power Systems has created a concept for a thorium-powered car engine. The element is radioactive, and the team uses bits of it to generate a laser beam that heats water, produces steam, and drives an energy-producing turbine.
Thorium is one of the most energy-dense materials on the planet. A small sample of it packs 20 million times more energy than a similarly sized sample of coal, making it an ideal energy source.
The thing is, Dr. Charles Stevens, the CEO of Laser Power Systems, told Mashable that thorium engines won't be in cars anytime soon.
"Cars are not our primary interest," Stevens said. "The automakers don't want to buy them."
He said too much of the automobile industry is focused on making
money off of gas engines, and it will take at least a couple decades for
thorium technology to be used enough in other industries that vehicle
manufacturers will begin to consider revamping the way they think about
engines.
"We're building this to power the rest of the world," Stevens said.
He believes a thorium turbine about the size of an air conditioning unit could provide cheap power for whole restaurants, hotels, office buildings, even small towns in areas of the world without electricity.
At some point, thorium could power individual homes.
Stevens understands that people may be wary of thorium because it is radioactive — but he says any such worry would be unfounded.
"The radiation that we develop off of one of these things can be
shielded by a single sheet off of aluminum foil," Stevens said." "You
will get more radiation from one of those dental X-rays than this."
Equinix’s data center in
Secaucus is highly coveted space for financial traders, given its
proximity to the servers that move trades for Wall Street.
The trophy high-rises on Madison, Park and Fifth Avenues in Manhattan
have long commanded the top prices in the country for commercial real
estate, with yearly leases approaching $150 a square foot. So it is
quite a Gotham-size comedown that businesses are now paying rents four
times that in low, bland buildings across the Hudson River in New
Jersey.
Why pay $600 or more a square foot at unglamorous addresses like
Weehawken, Secaucus and Mahwah? The answer is still location, location,
location — but of a very different sort.
Companies are paying top dollar to lease space there in buildings called
data centers, the anonymous warrens where more and more of the world’s
commerce is transacted, all of which has added up to a tremendous boon
for the business of data centers themselves.
The centers provide huge banks of remote computer storage, and the
enormous amounts of electrical power and ultrafast fiber optic links
that they demand.
Prices are particularly steep in northern New Jersey because it is also
where data centers house the digital guts of the New York Stock Exchange
and other markets. Bankers and high-frequency traders are vying to have
their computers, or servers, as close as possible to those markets.
Shorter distances make for quicker trades, and microseconds can mean
millions of dollars made or lost.
When the centers opened in the 1990s as quaintly termed “Internet
hotels,” the tenants paid for space to plug in their servers with a
proviso that electricity would be available. As computing power has
soared, so has the need for power, turning that relationship on its
head: electrical capacity is often the central element of lease
agreements, and space is secondary.
A result, an examination shows, is that the industry has evolved from a
purveyor of space to an energy broker — making tremendous profits by
reselling access to electrical power, and in some cases raising
questions of whether the industry has become a kind of wildcat power
utility.
Even though a single data center can deliver enough electricity to power
a medium-size town, regulators have granted the industry some of the
financial benefits accorded the real estate business and imposed none of
the restrictions placed on the profits of power companies.
Some of the biggest data center companies have won or are seeking
Internal Revenue Service approval to organize themselves as real estate
investment trusts, allowing them to eliminate most corporate taxes. At
the same time, the companies have not drawn the scrutiny of utility
regulators, who normally set prices for delivery of the power to
residences and businesses.
While companies have widely different lease structures, with prices
ranging from under $200 to more than $1,000 a square foot, the
industry’s performance on Wall Street has been remarkable. Digital Realty Trust,
the first major data center company to organize as a real estate trust,
has delivered a return of more than 700 percent since its initial
public offering in 2004, according to an analysis by Green Street
Advisors.
The stock price of another leading company, Equinix,
which owns one of the prime northern New Jersey complexes and is
seeking to become a real estate trust, more than doubled last year to
over $200.
“Their business has grown incredibly rapidly,” said John Stewart, a
senior analyst at Green Street. “They arrived at the scene right as
demand for data storage and growth of the Internet were exploding.”
Push for Leasing
While many businesses own their own data centers — from stacks of
servers jammed into a back office to major stand-alone facilities — the
growing sophistication, cost and power needs of the systems are driving
companies into leased spaces at a breakneck pace.
The New York metro market now has the most rentable square footage in
the nation, at 3.2 million square feet, according to a recent report by
451 Research, an industry consulting firm. It is followed by the
Washington and Northern Virginia area, and then by San Francisco and
Silicon Valley.
A major orthopedics practice in Atlanta illustrates how crucial these data centers have become.
With 21 clinics scattered around Atlanta, Resurgens Orthopaedics
has some 900 employees, including 170 surgeons, therapists and other
caregivers who treat everything from fractured spines to plantar
fasciitis. But its technological engine sits in a roughly
250-square-foot cage within a gigantic building that was once a Sears
distribution warehouse and is now a data center operated by Quality
Technology Services.
Eight or nine racks of servers process and store every digital medical
image, physician’s schedule and patient billing record at Resurgens,
said Bradley Dick, chief information officer at the company. Traffic on
the clinics’ 1,600 telephones is routed through the same servers, Mr.
Dick said.
“That is our business,” Mr. Dick said. “If those systems are down, it’s going to be a bad day.”
The center steadily burns 25 million to 32 million watts, said Brian
Johnston, the chief technology officer for Quality Technology. That is
roughly the amount needed to power 15,000 homes, according to the
Electric Power Research Institute.
Mr. Dick said that 75 percent of Resurgens’s lease was directly related
to power — essentially for access to about 30 power sockets. He declined
to cite a specific dollar amount, but two brokers familiar with the
operation said that Resurgens was probably paying a rate of about $600
per square foot a year, which would mean it is paying over $100,000 a
year simply to plug its servers into those jacks.
While lease arrangements are often written in the language of real estate, "these are power deals, essentially," said Scott Stein, senior
vice president of the data center solutions group at Cassidy Turley, a
commercial real estate firm. “These are about getting power for your
servers.”
One key to the profit reaped by some data centers is how they sell
access to power. Troy Tazbaz, a data center design engineer at Oracle
who previously worked at Equinix and elsewhere in the industry, said
that behind the flat monthly rate for a socket was a lucrative
calculation. Tenants contract for access to more electricity than they
actually wind up needing. But many data centers charge tenants as if
they were using all of that capacity — in other words, full price for
power that is available but not consumed.
Since tenants on average tend to contract for around twice the power
they need, Mr. Tazbaz said, those data centers can effectively charge
double what they are paying for that power. Generally, the sale or
resale of power is subject to a welter of regulations and price
controls. For regulated utilities, the average “return on equity” — a
rough parallel to profit margins — was 9.25 percent to 9.7 percent for
2010 through 2012, said Lillian Federico, president of Regulatory
Research Associates, a division of SNL Energy.
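A hypothetical example, with invented numbers, makes the point. Suppose a tenant contracts for 10 kilowatts of capacity but draws an average of 5 kilowatts, and the local utility charges 10 cents per kilowatt-hour:

what the center pays the utility: 5 kW x 8,760 hours/year x $0.10/kWh ≈ $4,380 a year
what a capacity-priced lease bills: 10 kW x 8,760 hours/year x $0.10/kWh ≈ $8,760 a year

The gap between the two is the margin Mr. Tazbaz describes.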
Regulators Unaware
But the capacity pricing by data centers, which emerged in interviews
with engineers and others in the industry as well as an examination of
corporate documents, appears not to have registered with utility
regulators.
Interviews with regulators in several states revealed widespread lack of
understanding about the amount of electricity used by data centers or
how they profit by selling access to power.
Bernie Neenan, a former utility official now at the Electric Power
Research Institute, said that an industry operating outside the reach of
utility regulators and making profits by reselling access to
electricity would be a troubling precedent. Utility regulations “are
trying to avoid a landslide” of other businesses doing the same.
Some data center companies, including Digital Realty Trust and DuPont
Fabros Technology, charge tenants for the actual amount of electricity
consumed and then add a fee calculated on capacity or square footage.
Those deals, often for larger tenants, usually wind up with lower
effective prices per square foot.
Regardless of the pricing model, Chris Crosby, chief executive of the
Dallas-based Compass Datacenters, said that since data centers also
provided protection from surges and power failures with backup
generators, they could not be viewed as utilities. That backup equipment
“is why people pay for our business,” Mr. Crosby said.
Melissa Neumann, a spokeswoman for Equinix, said that in the company’s
leases, “power, cooling and space are very interrelated.” She added,
“It’s simply not accurate to look at power in isolation.”
Ms. Neumann and officials at the other companies said their practices
could not be construed as reselling electrical power at a profit and
that data centers strictly respected all utility codes. Alex Veytsel,
chief strategy officer at RampRate, which advises companies on data
center, network and support services, said tenants were beginning to
resist flat-rate pricing for access to sockets.
“I think market awareness is getting better,” Mr. Veytsel said. “And
certainly there are a lot of people who know they are in a bad
situation.”
The Equinix Story
The soaring business of data centers is exemplified by Equinix.
Founded in the late 1990s, it survived what Jason Starr, director of
investor relations, called a “near death experience” when the Internet
bubble burst. Then it began its stunning rise.
Equinix’s giant data center in Secaucus is mostly dark except for lights
flashing on servers stacked on black racks enclosed in cages. For all
its eerie solitude, it is some of the most coveted space on the planet
for financial traders. A few miles north, in an unmarked building on a
street corner in Mahwah, sit the servers that move trades on the New
York Stock Exchange; an almost equal distance to the south, in Carteret,
are Nasdaq’s servers.
The data center’s attraction for tenants is a matter of physics: data,
which is transmitted as light pulses through fiber optic cables, can
travel no faster than about a foot every billionth of a second. So being
close to so many markets lets traders operate with little time lag.
As Mr. Starr said: “We’re beachfront property.”
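The arithmetic behind that advantage is straightforward, taking the foot-per-nanosecond figure as the ceiling (signals in real fiber travel at roughly two-thirds of that speed):

10 miles ≈ 52,800 feet → about 53 microseconds one way, more than 100 microseconds round trip
a few hundred feet inside the same data center → well under 1 microsecond each way

For a high-frequency trader, that gap of tens of microseconds is the whole reason to pay beachfront prices.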
Standing before a bank of servers, Mr. Starr explained that they
belonged to one of the lesser-known exchanges located in the Secaucus
data center. Multicolored fiber-optic cables drop from an overhead track
into the cage, which allows servers of traders and other financial
players elsewhere on the floor to monitor and react nearly
instantaneously to the exchange. It all creates a dense and unthinkably
fast ecosystem of postmodern finance.
Quoting some lyrics by Soul Asylum, Mr. Starr said, “Nothing attracts a
crowd like a crowd.” By any measure, Equinix has attracted quite a
crowd. With more than 90 facilities, it is the top data center leasing
company in the world, according to 451 Research. Last year, it reported
revenue of $1.9 billion and $145 million in profits.
But the ability to expand, according to the company’s financial filings,
is partly dependent on fulfilling the growing demands for electricity.
The company’s most recent annual report said that “customers are
consuming an increasing amount of power per cabinet,” its term for data
center space. It also noted that given the increase in electrical use
and the age of some of its centers, “the current demand for power may
exceed the designed electrical capacity in these centers.”
To enhance its business, Equinix has announced plans to restructure
itself as a real estate investment trust, or REIT, which, after
substantial transition costs, would eventually save the company more
than $100 million in taxes annually, according to Colby Synesael, an
analyst at Cowen & Company, an investment banking firm.
Congress created REITs in the early 1960s, modeling them on mutual
funds, to open real estate investments to ordinary investors, said
Timothy M. Toy, a New York lawyer who has written about the history of
the trusts. Real estate companies organized as investment trusts avoid
corporate taxes by paying out most of their income as dividends to
investors.
Equinix is seeking a so-called private letter ruling from the I.R.S. to
restructure itself, a move that has drawn criticism from tax watchdogs.
“This is an incredible example of how tax avoidance has become a major business strategy,” said Ryan Alexander, president of Taxpayers for Common Sense,
a nonpartisan budget watchdog. The I.R.S., she said, “is letting people
broaden these definitions in a way that they kind of create the image
of a loophole.”
Equinix, some analysts say, is further from the definition of a real
estate trust than other data center companies operating as trusts, like
Digital Realty Trust. As many as 80 of its 97 data centers are in
buildings it leases, Equinix said. The company then, in effect, sublets
the buildings to numerous tenants.
Even so, Mr. Synesael said the I.R.S. has been inclined to view
recurring revenue like lease payments as “good REIT income.”
Ms. Neumann, the Equinix spokeswoman, said, “The REIT framework is
designed to apply to real estate broadly, whether owned or leased.” She
added that converting to a real estate trust “offers tax efficiencies
and disciplined returns to shareholders while also allowing us to
preserve growth characteristics of Equinix and create significant
shareholder value.”
We've been hearing a lot about Google's self-driving car lately, and we're all probably wanting to know how exactly the search giant is able to construct such a thing and have it drive itself without hitting anything or anyone. A new photo has surfaced that demonstrates what Google's self-driving vehicles see while they're out on the town, and it looks rather frightening.
The image was tweeted
by Idealab founder Bill Gross, along with a claim that the self-driving
car collects almost 1GB of data every second (yes, every second). This
data includes imagery of the car's surroundings in order to effectively and safely navigate roads. The image shows that the car sees its surroundings through an infrared-like camera sensor, and it can even pick out people walking on the sidewalk.
Of course, 1GB of data every second isn’t too surprising when you
consider that the car has to get a 360-degree image of its surroundings
at all times. The image we see above even distinguishes different
objects by color and shape. For instance, pedestrians are in bright
green, cars are shaped like boxes, and the road is in dark blue.
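Taking the tweeted figure at face value, the volume adds up quickly:

1GB/second x 3,600 seconds ≈ 3.6TB for every hour of driving

That is far more than any cellular link could stream in real time, which is why the heavy lifting has to happen on the car itself.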
However, we’re not sure where this photo came from, so it could
simply be a rendering of someone’s idea of what Google’s self-driving
car sees. Either way, Google says that we could see self-driving cars
make their way to public roads in the next five years or so, which actually isn’t that far off, and Tesla Motors CEO Elon Musk is even interested in developing self-driving cars as well. However, they certainly don’t come without their problems, and we’re guessing that the first batch of self-driving cars probably won’t be in 100% tip-top shape.