We’ve already heard plenty about the Oculus Rift virtual reality headset,
and while we youngsters are pretty amazed by the technology, nobody has
their mind blown more than the elderly, who could only dream about such
technology back in their younger days. Recently, a 90-year-old
grandmother ended up trying out the Oculus Rift for herself, and she was
quite amazed.
Imagimind Studio developer Paul Rivot picked up an Oculus
Rift in order to play around with it and develop some games, but he took
a break from that and decided to give his grandmother a little treat,
strapping the Oculus Rift to her head so she could experience a bit of
virtual reality herself.
The video is quite entertaining to watch, and
we can’t imagine what’s going on inside of her head, knowing that she
never grew up with such technology as the Oculus Rift, let alone 3D
video games. She even reaches the point where she thinks the images
being displayed are actual footage shot on location, when in fact it's
all 3D-rendered on a computer.
Currently, the Oculus Rift is available to developers only,
and there's no announced release date for the device,
although the company has noted that it should reach the general
public before the 2014 holiday season. In the meantime, it's videos like
this that only excite us even more.
Remembering the passwords for all your sites can get frustrating.
There are only so many punctuation marks, number substitutions and uppercase
variations you can recall, and writing them down for all to find is
hardly an option.
Thanks to researchers at the UC Berkeley School of Information, you
may not need to type those pesky passwords in the future. Instead,
you'll only need to think them.
By measuring brainwaves with biosensor technology, researchers are
able to replace passwords with "passthoughts" for computer
authentication. A $100 headset wirelessly connects to a computer via
Bluetooth, and the device's sensor rests against the user’s forehead,
providing an electroencephalogram (EEG) signal from the brain.
Other biometric authentication systems use fingerprint or retina
scans for security, but they're often expensive and require extensive
equipment. The NeuroSky Mindset looks just like any other Bluetooth set and is more user-friendly, researchers say.
Brainwaves are also unique to each individual, so even if someone knew
your passthought, their emitted EEG signals would be different.
In a series of tests, participants completed
seven different mental tasks with the device, including imagining their
finger moving up and down and choosing a personalized secret. Simple
actions like focusing on breathing or on a thought for ten seconds
resulted in successful authentication.
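Conceptually, authentication of this sort comes down to comparing a freshly recorded feature vector against an enrolled template and accepting it only if the two are similar enough. Here is a minimal sketch in Python; the "EEG feature" numbers and the similarity threshold are invented for illustration and do not reflect the researchers' actual algorithm:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def authenticate(enrolled, sample, threshold=0.95):
    # Accept the sample only if it closely matches the enrolled template.
    return cosine_similarity(enrolled, sample) >= threshold

# Toy "EEG feature vectors" (e.g. band-power features); purely illustrative.
enrolled = [0.61, 0.22, 0.09, 0.08]
genuine  = [0.60, 0.23, 0.10, 0.07]   # same user repeating the passthought
impostor = [0.10, 0.15, 0.55, 0.20]   # different user, different brainwaves

print(authenticate(enrolled, genuine))   # True
print(authenticate(enrolled, impostor))  # False
```

The intuition matches the article's claim: even with the same passthought, a different brain produces a feature vector far from the enrolled template, so the match fails.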
The key to passthoughts, researchers found, is finding a mental task
that users won’t mind repeating on a daily basis. Most participants
found it difficult to imagine performing an action from their favorite
sport because it was unnatural to imagine movement without using their
muscles. Preferable passthoughts were those in which subjects had to
count objects of a specific color or imagine singing a song.
The idea of mind-reading is pretty convenient, but if the devices
aren't accessible, people will refuse to use them no matter how accurate
the system is, researchers explain.
Everybody
knows a computer is a machine made of metal and plastic, with microchip
cores turning streams of electrons into digital reality.
A century from now, though, computers could look quite different.
They might be made from neurons and chemical baths, from bacterial
colonies and pure light, unrecognizable to our old-fashioned 21st
century eyes.
Far-fetched? A little bit. But a computer is just a tool for
manipulating information. That's not a task wedded to some particular
material form. After all, the first computers were people,
and many people alive today knew a time when fingernail-sized
transistors, each representing a single bit of information, were a great
improvement on unreliable vacuum tubes.
On the following pages, Wired takes a look at some very non-traditional computers.
Slime Computation
"The great appeal of non-traditional computing is that I can connect
the un-connectable and link the un-linkable," said Andy Adamatzky,
director of the Unconventional Computing Center at the University of the
West of England. He's made computers from electrified liquid crystals, chemical goo and colliding particles, but is best known for his work with Physarum, the lowly slime mold.
Amoeba-like creatures that live in decaying logs and leaves, slime
molds are, at different points in their lives, single-celled organisms
or part of slug-like protoplasmic blobs made from the fusion of millions
of individual cells. The latter form is assumed when slime molds search
for food. In the process they perform surprisingly complicated feats of
navigation and geometric problem-solving.
Slime molds are especially adept at finding solutions to tricky network problems, such as finding efficient designs for Spain's motorways and the Tokyo rail system. Adamatzky and colleagues plan to take this one step further: Their Physarum chip will be "a distributed biomorphic computing device built and operated by slime mold," they wrote in the project description.
"A living network of protoplasmic tubes acts as an active non-linear
transducer of information, while templates of tubes coated with
conductor act as fast information channels," the researchers explain.
"Combined with conventional electronic components in a hybrid chip, Physarum networks will radically improve the performance of digital and analog circuits."
Image: Adamatzky et al./International Journal of General Systems
The Power of the Blob
Inspired by slime mold's problem-solving abilities, Adamatzky and
Jeff Jones, also of the University of the West of England, programmed
the rules for its behavior into a computational model of chemical
attraction. That Russian nesting doll of emulations -- slime mold
interpreted as a program embodied in chemistry translated into a program
-- produced a series of experiments described in one superbly named
paper: "Computation of the Travelling Salesman Problem by a Shrinking
Blob."
In that paper, released March 25 on arXiv,
Jones and Adamatzky's simulated chemical goo solves a classic,
deceptively challenging mathematical problem, finding the shortest route
between many points. For a few points this is simple, but many points
become intractably difficult to compute -- but not for the blob (above).
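To see why the problem gets hard, note that a brute-force solver must check every possible ordering of the points: with one city fixed, that is (n-1)! tours, about 3.6 million for just 11 points. The short Python sketch below does exactly that exhaustive search (it is a plain brute-force illustration, not the blob model); the example coordinates are made up:

```python
import itertools
import math

def tour_length(points, order):
    # Total length of a closed tour visiting points in the given order.
    return sum(
        math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def brute_force_tsp(points):
    # Fix the first city and try every permutation of the rest:
    # (n-1)! candidate tours, which explodes factorially as n grows.
    rest = range(1, len(points))
    best = min(
        ((0,) + perm for perm in itertools.permutations(rest)),
        key=lambda order: tour_length(points, order),
    )
    return best, tour_length(points, best)

# Four corners of a square plus one point above it.
points = [(0, 0), (0, 2), (2, 2), (2, 0), (1, 3)]
order, length = brute_force_tsp(points)
print(order, round(length, 3))
```

Five points are trivial; doubling to ten multiplies the work by over 15,000, which is precisely the combinatorial wall the shrinking blob sidesteps with an approximate, physical heuristic.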
Video: Jeff Jones and Andrew Adamatzky
Crystal Calculations
For decades, scientists who study strange materials known as complex fluids, which switch easily between different phases of matter, have been fascinated by the extraordinary geometries formed by liquid crystals at different temperatures and pressures.
Those geometries are stored information, and the interaction of
crystals is a form of computation. By running an electric current through a
thin film (above) of liquid crystals, researchers led by Adamatzky were able to perform basic computational math and logic.
It's hard to keep up with the accomplishments of synthetic
biologists, who every week seem to announce some new method of turning
life's building blocks into pieces for cellular computers. Yet even in
this crowded field, last week's announcement by Stanford University
researchers of a protein-based transistor stood out.
Responsible for conducting logic operations, the transistor, dubbed a
"transcriptor," is the last of three components -- the others, already
developed, are rewritable memory and information transmission --
necessary to program cells as computers. Synthetic biologist Drew Endy,
the latest study's lead author, envisions plant-based environmental
monitors, programmed tissues and even medical devices that "make Fantastic Voyage come true," he said.
In the image above, Endy's cellular "buffer gates" flash red or
yellow according to their informational state. In the image below, a
string of DNA programmed by synthetic biologist Eric Winfree of the
California Institute of Technology runs synthetic code -- the A's, C's,
T's and G's -- with its molecules.
Most designs for molecular computers are based on human notions of
what a computer should be. Yet as applied mathematician Hajo
Broersma of the Netherlands' University of Twente and his colleagues wrote of their work,
"the simplest living systems have a complexity and sophistication that
dwarfs manmade technology" -- and they weren't even designed to be that
way. Evolution generated them.
In the NASCENCE project, short for "NAnoSCale Engineering for Novel Computation using Evolution,"
Broersma and colleagues plan to exploit evolution's ability to use
combinations of molecules and their emergent properties in unexpected,
incredibly powerful ways. They hope to develop a system that interfaces a
digital computer with nano-scale particle networks, then use the
computer to set algorithmic goals towards which evolution will guide the
particles.
"We want to develop alternative approaches for situations or problems
that are challenging or impossible to solve with conventional methods
and models of computation," they write. One imagines computer chips with
geometries typically seen in molecular structures, such as the E. coli ribosome and RNA seen here; success, predict Broersma's team, could lay "the foundations of the next industrial revolution."
The Large Hadron Collider's 17 miles of track make it the world's
largest particle accelerator. Could it also become the world's largest
computer?
Not anytime soon, but it's at least possible to wonder. Another of
Adamatzky's pursuits is what he calls "collision-based computing," in
which computationally modeled particles course through a digital cyclotron,
with their interactions used to carry out calculations. "Data and
results are like balls travelling in an empty space," he said.
In the image below, simulated particle collisions evolve over time.
Above, protons collide during the Large Hadron Collider's ATLAS
experiment.
The idea of computers that harness the spooky powers of quantum
physics -- such as entanglement, in which far-flung particles are linked
across space and time,
so that altering one instantaneously affects the other -- has been
around for years. And though quantum computers are still decades from
reality, achievements keep piling up: entanglement made visible to the naked eye, performed with ever-greater numbers of particles, and used to control mechanical objects.
The latest achievement, described March 31 in Nature Photonics by physicist Edo Waks of the University of Maryland and colleagues, involves the control of photons
with logic gates made with quantum dots, or crystal semiconductors
controlled by lasers and magnetism. The results "represent an important
step towards solid-state quantum networks," the researchers wrote.
Image: Schematic of the quantum dot-controlled photon. (Kim et al./Nature Photonics)
Frozen Light
If quantum computers running on entangled photons are still far-off,
there's another, non-quantum possibility for light-based computing.
Clouds of ultra-cold atoms, frozen to temperatures just above absolute
zero -- the point at which all motion ceases -- might be used to slow and control light, harnessing it inside an optical computer chip.
Image: Schematic of controlled light. (Anne Goodsell/Harvard University)
The Quantum Brain
It's easy to think of minds as computers,
and accurate in the sense that brains are information-processing
systems. They are also, however, exponentially more complex and
sophisticated than any engineered device.
Even as quantum computing remains a far-off dream, some scientists think quantum physics underlies our thoughts. The question is far from settled,
but quantum processes have been observed in a variety of non-human
cells, raising the alluring possibility of a role in thought.
"Quantum computation occurs in the human mind, but only at the
unconscious level," said theoretical physicist Paola Zizzi of Italy's
University of Padua. "As quantum computation is much faster than
classical computation, the unconscious thought is then much faster than
the conscious thought and in a sense, it 'prepares' the latter."
If identified in our brains, quantum thinking could be used to
inspire computer designs not yet dreamed of. Broadly speaking, that's a
motivation of many in the non-conventional computing community, said
Hector Zenil, a computer scientist at Sweden's Karolinska University and
editor of A Computable Universe: Understanding and Exploring Nature as Computation.
Zenil isn't convinced by quantum claims for the brain, but he sees a
world suffused by informational processes. Researchers like himself and
Zizzi are trying to "use the computational principles that nature may
use in order to conceive new kinds of computing," said Zenil.
In A Computable Universe,
Zenil and others take the idea of computation as an abstract process,
capable of being performed on any system that can store and manipulate
information, to its logical extreme. Computers might not only be made
from chemicals and cells and light, they say; the universe itself could be a computer, processing the information of which our everyday experiences -- and everything else -- are composed.
This is a tricky idea -- if the universe is computed, what's the
computer? -- and for obvious reasons difficult to test, though Zenil
thinks it's possible. In his work on the algorithmic aspects of
existence, he's developed measures of the statistical distributions one
would expect to see if reality is in fact computed.
If that seems a scary proposition, in which life would play out in
linear mechanical ordination, rest assured: mechanistic, predetermined
processes don't hold sway in a universal computer. Some aspects of
existence will necessarily be "undecidable," impossible to describe with
algorithms or predict beforehand. Ghosts will still live in the
machine.
Image: The Milky Way galaxy/NASA/ESA/Q.D. Wang/Jet Propulsion Laboratory/S. Stolovy
A tech team from Tokyo University of
Agriculture and Technology in Japan has unveiled a smell-o-vision TV
that will bring a sense of reality to our favorite TV programs. The
team created a “smelling screen” that emits scents at the spot where the
corresponding on-screen object appears. The screen generates scents from gel
pellets, releasing them through four air streams at the corners of the display. The scents
are blown at varying and controlled rates to make it seem as though the
scent is coming directly from the object on-screen.
This isn’t the first time “smell-o-vision” technology has
appeared in the tech world; however, it is the first time that it has
been implemented into television screens. The smell-o-vision technology
was first implemented in a 1960s film titled “Scent of Mystery”. The
film launched in 3 customized movie theaters in New York City, Los
Angeles, and Chicago. In the film, there were scents dispersed to the
audience during certain moments in the film. Unfortunately, the film was
a huge failure due to the malfunctioning of the scent mechanism. There
were delayed actions between the specific scenery and the scents, and
the mechanisms made a loud noise while releasing the scents.
This Japanese team seems to have addressed the problem,
however, and successfully implemented the smell-o-vision technology in
televisions. Right now, only one scent can be released at a time,
but the team hopes to make it possible for multiple smells to
be emitted simultaneously. They’re hoping to do so by implementing
a cartridge-like system that will allow them to alternate smells more
easily.
While the smell-o-vision can make movies and TV shows more engaging
and real, it also has somewhat of a negative effect. Advertisement
agencies can take advantage of the technology as well, and when they do,
all hell will break loose. If you thought that steak in that commercial
looked good now, wait until you can actually smell it. With this new
technology, restaurants will be able to more effectively rake in
customers with their advertisements. We’ll keep you posted as soon as
more information regarding the smell-o-vision is unveiled.
At SXSW this afternoon, Google provided developers with a first
glance at the Google Glass Mirror API, the main interface between Google
Glass, Google’s servers and the apps that developers will write for
them. In addition, Google showed off a first round of applications that
work on Glass, including how Gmail works on the device, as well as
integrations from companies like the New York Times, Evernote, Path and
others.
The Mirror API is essentially a REST API,
which should make developing for it very easy for most developers. The
Glass device talks to Google’s servers; the developers’
applications then get the data from there and also push it to Glass
through Google’s APIs. All of this data is then presented on Glass
through what Google calls “timeline cards.” These cards can include
text, images, rich HTML and video. Besides single cards, Google also
lets developers use what it calls bundles, which are basically sets of
cards that users can navigate using their voice or the touchpad on the
side of Glass.
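In practice, posting a timeline card amounts to a single authenticated HTTP POST carrying a small JSON body. The sketch below only builds the request rather than sending it; the access token and card text are placeholders, and the exact endpoint and field names should be checked against Google's documentation:

```python
import json

# Placeholder OAuth token; a real Glass app would obtain one via OAuth 2.0.
ACCESS_TOKEN = "ya29.EXAMPLE"

def build_card_request(text, html=None):
    # The Mirror API is plain REST: a timeline card is just a JSON object
    # POSTed to Google's servers, which relay it to the Glass device.
    card = {"text": text}
    if html is not None:
        card["html"] = html  # cards may also carry rich HTML
    return {
        "method": "POST",
        "url": "https://www.googleapis.com/mirror/v1/timeline",
        "headers": {
            "Authorization": "Bearer " + ACCESS_TOKEN,
            "Content-Type": "application/json",
        },
        "body": json.dumps(card),
    }

req = build_card_request("Hello from a Glass app")
print(req["url"])
print(req["body"])
```

Sending the same structure with an `html` field instead of bare text would produce one of the richer cards described above; bundles are simply sets of such cards grouped together.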
It looks like sharing to Google+ is a built-in feature of the Mirror
API, but as Google’s Timothy Jordan noted in today’s presentation,
developers can always add their own sharing options, as well. Other
built-in features seem to include voice recognition, access to the
camera and a text-to-speech engine.
Glass Rules
Because Glass is a new and unique form factor, Jordan also noted,
Google is setting a few rules for Glass apps. They shouldn’t, for
example, show full news stories but only headlines, as everything else
would be too distracting. For longer stories, developers can always just
use Glass to read text to users.
Essentially, developers should make sure that they don’t annoy users
with too many notifications, and the data they send to Glass should
always be relevant. Developers should also make sure that everything
that happens on Glass should be something the user expects, said Jordan.
Glass isn’t the kind of device, he said, where a push notification
about an update to your app makes sense.
Using Glass With Gmail, Evernote, Path and Others
As
part of today’s presentation, Jordan also detailed some Glass apps
Google has been working on itself, and apps that some of its partners
have created. The New York Times app, for example, shows headlines and
then lets you listen to a summary of the article by telling Glass to
“read aloud.” Google’s own Gmail app uses voice recognition to answer
emails (and it obviously shows you incoming mail, as well). Evernote’s
Skitch can be used to take and share photos, and Jordan also showed a
demo of social network Path running on Glass to share your location.
So far, there is no additional information about the Mirror API on
any of Google’s usual sites, but we expect the company to release more
information shortly and will update this post once we hear more.
A new robot unveiled this
week highlights the psychological and technical challenges of designing
a humanoid that people actually want to have around.
Like all little boys, Roboy likes to show off.
He can say a few words. He can shake hands and wave. He is
learning to ride a tricycle. And - every parent's pride and joy - he has
a functioning musculoskeletal anatomy.
But when Roboy is unveiled this Saturday at the Robots on
Tour event in Zurich, he will be hoping to charm the crowd as well as
wow them with his skills.
"One of the goals is for Roboy to be a messenger of a new
generation of robots that will interact with humans in a friendly way,"
says Rolf Pfeifer from the University of Zurich - Roboy's
parent-in-chief.
As manufacturers get ready to market robots for the home it has become essential for them to overcome the public's suspicion of them. But designing a robot that is fun to be with - as well as useful and safe - is quite difficult.
-----
The uncanny valley: three theories
Researchers have speculated about why we might feel uneasy in the presence of realistic robots.
They remind us of corpses or zombies
They look unwell
They do not look and behave as expected
-----
Roboy's main technical innovation
is a tendon-driven design that mimics the human muscular system.
Instead of whirring motors in the joints like most robots, he has around
70 imitation muscles, each containing motors that wind interconnecting
tendons. Consequently, his movements will be much smoother and less
robotic.
But although the technical team was inspired by human beings,
it chose not to create a robot that actually looked like one. Instead
of a skin-like covering, Roboy has a shiny white casing that proudly
reveals his electronic innards.
Behind this design is a long-standing hypothesis about how people feel in the presence of robots.
In 1970, the Japanese roboticist Masahiro Mori speculated that
the more lifelike robots become, the more human beings feel familiarity
and empathy with them - but that a robot too similar to a human
provokes feelings of revulsion.
Mori called this sudden dip in human beings' comfort levels the "uncanny valley".
"There are quite a number of studies that suggest that as
long as people can clearly see that the robot is a machine, even if they
project their feelings into it, then they feel comfortable," says
Pfeifer.
Roboy was styled as a boy - albeit quite a brawny one - to
lower his perceived threat levels to humans. His winsome smile - on a
face picked by Facebook users from a selection - hasn't hurt the team in
their search for corporate sponsorship, either.
But the image problem of robots is not just about the way they look. An EU-wide survey last year found that although most Europeans have a positive view of robots, they feel they should know their place.
Eighty-eight per cent of respondents agreed with the
statement that robots are "necessary as they can do jobs that are too
hard or dangerous for people", such as space exploration, warfare and
manufacturing. But 60% thought that robots had no place in the care of
children, elderly people and those with disabilities.
The computer scientist and psychologist Noel Sharkey has,
however, found 14 companies in Japan and South Korea that are in the
process of developing childcare robots.
South Korea has already tried out robot prison guards, and three years ago launched a plan to deploy more than 8,000 robot English-language teachers in kindergartens.
-----
A robot buddy?
In the film Robot and Frank, set in the near-future, ageing
burglar Frank is provided with a robot companion to be a helper and
nurse when he develops dementia
The story plays out rather like a futuristic buddy movie -
although he is initially hostile to the robot, Frank is soon programming
him to help him in his schemes, which are not entirely legal
-----
Cute, toy-like robots are already available for the home.
Take the Hello Kitty robot, which has been on the market for
several years and is still available for around $3,000 (£2,000).
Although it can't walk, it can move its head and arms. It also has
facial recognition software that allows it to call children by name and
engage them in rudimentary conversation.
A tongue-in-cheek customer review of the catbot on Amazon reveals a certain amount of scepticism.
"Hello Kitty Robo me was babysit," reads the review.
"Love me hello kitty robo, thank robo for make me talk
good... Use lots battery though, also only for rich baby, not for no
money people."
-----
Asimo
At just 130cm high, Honda's Asimo jogs around on bended knee
like a mechanical version of Dobby, the house elf from Harry Potter. He
can run, climb up and down stairs and pour a bottle of liquid in a cup.
Since 1986, Honda have been working on humanoids with the ultimate aim of providing an aid to those with mobility impairments.
-----
The product description says the
robot is "a perfect companion for times when your child needs a little
extra comfort and friendship" and "will keep your child happily
occupied". In other words, it's something to turn on to divert your
infant for a few moments, but it is not intended as a replacement
child-minder.
An ethical survey
of "robot nannies" by Noel and Amanda Sharkey suggests that as robots
become more sophisticated parents may be increasingly tempted to hand
them too much responsibility.
The survey also raises the question of what long-term effects
will result from children forming an emotional bond with a lump of
plastic. They cite one case study in which a 10-year-old girl, who had
been given a robot doll for several weeks, reported that "now that she's
played with me a lot more, she really knows me".
Noel Sharkey says that he loves the idea of children playing
with robots but has serious concerns about them being brought up by
them. "[Imagine] the kind of attachment disorders they would develop,"
he says.
But despite their limitations, humanoid robots might yet
prove invaluable in narrow, fixed roles in hospitals, schools and homes.
"Something really interesting happens between some kids with
autism and robots that doesn't happen between those children and other
humans," says Maja J Mataric, a roboticist at the University of Southern
California. She's found that such children respond positively to
humanoids and she is trying to work out how they can be used
therapeutically.
In their study, the Sharkeys make the observation that robots have one
advantage over adult humans. They can have physical contact with
children - something now frowned upon or forbidden in schools.
"These restrictions would not apply to a
robot," they write, "because it could not be accused of having sexual
intent and so there are no particular ethical concerns. The only concern
would be the child's safety - for example, not being crushed by a
hugging robot."
When it comes to robots, there is such a thing as too much love.
"If you were having a mechanical assistant in the home that
was powerful enough to be useful, it would necessarily be powerful
enough to be dangerous," says Peter Robinson of Cambridge University.
"Therefore it'd better have really good understanding and
communication."
His team are developing robots with sophisticated facial
recognition. These machines won't just be able to tell Bill from Brenda
but they will be able to infer from his expression whether Bill is
feeling confused, tired, playful or in physical agony.
Roboy's muscles and tendons may actually make him a safer
robot to have around. His limbs have more elasticity than a conventional
robot's, allowing for movement even when he has a power failure.
Rolf Pfeifer has a favourite question which he puts to those sceptical about using robots in caring situations.
"If you couldn't walk upstairs any more, would you want a person to carry you or would you want to take the elevator?"
Most people opt for the elevator, which is - if you think about it - a kind of robot.
Pfeifer believes robots will enter our homes. What is not yet
clear is whether the future lies in humanoid servants with a wide range
of limited skills or in intelligent, responsive appliances designed to
do specific tasks, he says.
"I think the market will ultimately determine what kind of robot we're going to have."
We've had a bit of a love / hate relationship with the Google
Chromebook since the first one crossed our laps back in 2011 -- the Samsung Series 5.
We loved the concept, but hated the very limited functionality provided
by your $500 investment. Since then, the series of barebones laptops
has progressed, and so too has the barebones OS they run, leading to our
current favorite of the bunch: the 2012 Samsung Chromebook.
In that laptop's review, we concluded that "$249 seems like an
appropriate price for this sort of device." So, then, imagine our
chagrin when Google unveiled a very similar sort of device, but
one that comes with a premium. A very hefty premium. It's a high-end,
halo sort of product with incredible build quality, an incredible screen
and an incredible price. Is a Chromebook that starts at more than five
times the cost of its strongest competition even worth considering?
Let's do the math.
Hardware
Wow. This is certainly a departure. If you're going to charge an
obscene premium for a laptop with an incredibly limited OS, you'd better
produce something that is incredibly well-made. In that regard, the
Chromebook Pixel is a complete success. If you'll forgive us just one
cliche, Google has gone from zero to hero with the Pixel. It's truly
something to behold.
First impressions are of a laptop with
surprising density. Apple's MacBook Pros, with their precisely hewn
aluminum exteriors, have long been the benchmark against which other
laptops are measured when it comes to a sense of solidity. In its first
attempt, Google has managed to match that feeling of innate integrity
with the Pixel, and in some ways go beyond it.
It's all machined
aluminum, anodized in a dark, almost gunmetal color that successfully
bridges the gap between sophisticated and cool. Everything is very
angular; vertical sides terminate abruptly at the horizontal plane that
makes up the typing surface. In fact, the only thing not bridged by
right angles is the cylindrical hinge running nearly the entire width of
the machine, but thankfully the edges of the entire laptop are just
rounded enough to keep it from digging into your wrists uncomfortably.
Battle scars received while typing have become a bit of an annoyance in
many modern, aluminum-bodied machines.
A good, quick test of a laptop's rigidity is to open it up, grab it
on both sides of the keyboard and try to twist. On a flimsy product
you'll hear some uncomfortable-sounding noises coming from beneath the
keys and, if you're really unlucky, you might send a letter or two
flying. Not so with the Pixel. The torsional rigidity is impressive for a
machine that is as thin, and as light, as this.
To put some numbers on that, the laptop measures 16mm (0.62 inch) in
thickness and 3.35 pounds (1.52kg) in heft. That compares very favorably
to the 13-inch MacBook Pro with Retina, the one that we would most
closely pit this against, which is 19mm (0.75 inch) thick and weighs
3.57 pounds (1.62kg). So it's thinner and lighter, and with a very
similar 12.85-inch, 2,560 x 1,700 display (which we'll thoroughly
discuss momentarily), but with lower performance. It is, however, on par
with the 13-inch MacBook Air when it comes to speed, and is only
slightly thicker (0.06 inch) and heavier (0.39 pound).
A
dual-core Intel 1.8GHz Core i5 chip is the one and only processor on
offer here, paired with 4GB of DDR3 RAM and generally providing more
than enough oomph to drive the very minimalist operating system, which
is installed on either a 32 or 64GB SSD. The larger option is only
available if you opt for the $1,449 laptop, which also adds
Verizon-compatible LTE to the mix (along with GPS). Either model sports
dual-band MIMO 802.11a/b/g/n along with Bluetooth 3.0. For those who
like to keep it physical, there are two USB ports on the left (sadly
just 2.0) situated next to a Mini DisplayPort and a 3.5mm headphone
jack. On the right is an SD card reader, along with the SIM card tray --
assuming you paid for the WWAN upgrade.
For those who aren't
interested in making use of that headphone jack, there are what Google
calls "powerful speakers" built in here -- though they're hard to spot.
They're integrated somewhere below the keyboard and, believe it or not,
that "powerful" description is quite apt. You won't be giving your
neighbors anything to complain about if these are cranked to maximum
volume, nor do you need to concern yourself about cracking the masonry
thanks to the bass, but the output here is respectably loud and
good-sounding. These speakers are at least on par with your average
mid-range Bluetooth unit, meaning you'll have one less thing to pack.
For the receiving end, Google has also integrated an array of
microphones throughout the machine to help with active noise
cancellation, including one positioned to detect (and eliminate)
keyboard clatter when you're typing whilst in a Hangout or the like.
Without the ability to selectively disable this microphone we can't be
sure how great an effect it had, but we can say that plenty of
QWERTY-based noise got through in our test calls. Google, though, has
indicated it will continue to refine the behavior of that mic, so
there's hope for improvement.
Integrated in the center-top of the bezel is a webcam, next to a small status LED to
let you know when Big Brother is watching. One final piece is the power
plug, a largish wall wart that takes a cue from Apple by including a
removable section. Here you can slot in either a flip-out, two-prong end
or a longer, three-prong cable. The inspiration is obvious, but we're
not complaining. This lets you have both a short, easy-to-pack version
when you're traveling light and a longer but rather more clunky version
for those times when you need a bit more reach.
We do, however,
wish Google had also taken inspiration from Apple and Microsoft and
included some sort of magnetic power connector. We found that the small
plug, with its traditional, single-prong-style connector, had a tendency
to slowly work its way out of the laptop when the cable had any tension
from the left. Thankfully, a bright glowing light on the connector lets
you know when the laptop is charged or charging -- and thus when the
thing has slid out far enough to lose connection.
Keyboard and trackpad
Island-style keyboards continue to be all the rage and, for the most
part, Google makes no exception for its latest Chromebook. The primary
keys float in a slightly recessed area, comfortably sized and
comfortably spaced. Each has great feel and great resistance. Typing on
this machine is a joy.
However, the row of function keys resting atop the number keys
(discrete buttons for adjusting volume, brightness and the like) is a
different story. These are flush with each
other and far stiffer than the normal keys. This isn't much of a
bother, since you won't be using them nearly as frequently as the rest,
but butting them right up against each other makes them difficult to
find by touch. Thankfully, all are backlit, so locating them in the dark
is no problem.
We also wished for dedicated Home and End keys,
after finding the Chrome OS alternative of Ctrl + Alt + Up or Down to be
a bit of a handful. Regardless, you'll quickly learn to type around
these relatively minor shortcomings and enjoy the great keyboard.
Thankfully, the trackpad is equally good.
It's a glass unit,
darkly colored and positioned in the center of the wrist rest, which
leaves it slightly offset to the right relative to the space bar. It has
a matte coating but still feels quite smooth, resulting in a very nice
swiping sensation indeed. Of course, with a 12.85-inch touch-sensitive
display, you may find yourself using it less frequently than you think.
Display
Again, up top is a 12.85-inch, 2,560 x 1,700 IPS LCD panel that we
can't look at without thinking of the very similar 13.3-inch, 2,560 x
1,600 panel on the 13-inch MacBook Pro with Retina display. It's smaller
but packs an extra 100 pixels vertically, giving it a slightly higher
pixel density of 239 ppi. Naturally, that's far from the full story
here, and those who are really into proportions will know that
resolution equates to a 3:2 aspect ratio. In other words: it's rather
tall.
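Those figures check out, by the way: pixel density is just the diagonal pixel count divided by the diagonal size in inches, and the aspect ratio falls out of the same panel specs quoted above. A quick back-of-the-envelope sketch in Python:

```python
import math

# Chromebook Pixel panel specs, as quoted in the review
width_px, height_px = 2560, 1700
diagonal_in = 12.85

# Pixel density = diagonal resolution / diagonal size in inches
diagonal_px = math.hypot(width_px, height_px)
ppi = diagonal_px / diagonal_in
print(round(ppi))  # 239

# The resolution reduces to 128:85, which is within half a percent of 3:2
ratio = width_px / height_px
print(round(ratio, 2))  # 1.51
```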
A 16:9 aspect ratio (or something close to it) is the
prevailing trend among non-Macs these days, but even when acknowledging
that, this one feels particularly tall. Still, we didn't exactly mind
it. As mentioned above, the keyboard is plenty roomy, and given that
Chrome OS isn't particularly friendly to multi-window
multi-tasking (manually justifying windows is a real chore) we were
rarely left wanting a wider display.
That was, really, our only
minor reservation about this panel. Otherwise we have nothing but love
for the thing. It is, of course, a ridiculously high resolution, which
makes pixels basically disappear. Indeed the simple, clean and stark
Chrome OS looks great when rendered with such clarity, but we couldn't
help but lament the occasional excess of white space that's becoming
common across many of Google's web apps. For a display with a pixel
density this high, it feels somewhat under-utilized.
That is
until, of course, you boot up the 4K sample footage Google thoughtfully
pre-installed on the machine, which looks properly mind-blowing -- even
if it is only being rendered at slightly higher than half its native
resolution.
This is a glossy panel, tucked behind a pane of Gorilla Glass,
so glare may be a bit of a problem if your work setup has bright lights
positioned behind you. Still, reflectivity seemed to be on par with the
latest, optically bonded panels -- that is to say, far from the
"mirror, mirror" effect provided by many of the earlier gloss displays.
Contrast is quite good from all angles, though the color accuracy drops
off if you look at it from too high or too low, with everything quickly
taking on a slightly pink cast.
And, finally, this is indeed
a touch-enabled panel, something we didn't know we needed on a
Chromebook -- and frankly we're still not sure we do. We'll discuss that
in more detail in the software section below.
Performance and battery life
Again we're dealing with a 1.8GHz Intel Core i5 processor here, a
bit on the mild side compared to most higher-end laptops. Still, it
proves to be more than enough to run the lightweight Chrome OS. That's
paired with 4GB of DDR3 RAM and, predictably, integrated graphics
courtesy of Intel's HD Graphics 4000.
It's no barnstormer, but it
runs a browser with aplomb. And, really, that's about all it's likely to
do with the limited selection of apps available for Chrome. Everything
we threw at it ran fine, though after extended sessions we did notice
heavier websites started to get a little bit stuttery. It's nothing that
rebooting the browser didn't fix.
High-def videos play smoothly,
though when pushing the pixels (or running games), the machine does get
fairly warm. The fan vents are below the hinge: a thin sliver of an
opening that thankfully doesn't seem to dump a lot of hot air into your
lap. It's noticeable, but it isn't particularly loud or annoying and
again, since you likely won't be doing too much taxing stuff here, don't
expect to hear it all that often.
When it comes to battery life,
Google estimates the 59Wh battery will provide "up to" five hours of
continuous use. And, indeed it may. On our standard battery run-down
test, which loops a video at fixed brightness, the machine clocked in at
four hours and eight minutes for the WiFi model. The LTE model, with
its LTE antenna on, came in about 30 minutes shorter at 3:34.
These numbers are rather poor, unfortunately. The 13-inch MacBook Pro
with Retina clocks in at more than six hours on the same battery test,
while both the 13-inch MacBook Air and the latest Samsung Chromebook
score about 30 minutes more than even that.
Connectivity
As mentioned above, both Chromebook Pixel models include dual-band MIMO
802.11a/b/g/n, which means you'll be sucking down bits at an optimal
rate more or less regardless of what sort of router you're connecting
to.
Stepping up to the $1,449 LTE version of course means you can
walk away from those routers. That machine includes a Qualcomm MDM9600
chipset to receive on LTE band 13, intended for Verizon in the US only.
So, then, we tested it in the US in two different LTE markets on both
coasts. Speeds varied widely from location to location, but in general
matched or exceeded the speeds we saw from other Verizon-compatible
mobile devices.
In terms of more practical connectivity concerns,
it's worth noting that the modem takes about 30 seconds to reconnect
after the laptop resumes from its suspended state, which is a bit
annoying but certainly no slower than your average LTE USB modem. Also,
Verizon is kindly including 100MB of data each month for free for your
first two years of Chromebook ownership, but after that you'll be stuck
paying up for one of Verizon's tiered data plans.
Oh, and the
Pixel lacks an Ethernet port and doesn't come with an adapter. We tried a
few standard USB Ethernet adapters, though, and all worked without a hitch.
Software
As we concluded in our review of the most recent version,
Chrome OS has come a long, long way since that first Chromebook crossed
our laps. What we have now is a far more sophisticated and
comprehensive experience than we did a few years ago, but it's still
incredibly limited compared to the broader world of desktop operating
systems.
Simple tasks like file management can be a real chore if
you're doing anything other than moving a file into a subdirectory. And
while the OS itself has a refreshingly simple visual style, it's also
very stark and, frankly, a somewhat wasteful design. Not to keep harping
on the file explorer, but each file in a list is separated by a sea of
white big enough to basically double the effective height of the list.
When you're skimming through a big ol' list of files in a directory, it
takes a lot more scrolling than should be necessary given the resolution of this
display.
At least Google made the scrolling easy. As mentioned
above, the trackpad is quite good and very responsive. Multi-finger
gestures work so well that you might not be inclined to reach
up to that touch panel. But, you should, because the experience is
generally good as well, though you'll rarely be doing anything more than
scrolling webpages or documents. There's not really a whole lot more
Chrome OS can do, but even in games like Cut the Rope and Angry Birds, touch was just as good as... well, as it is on an Android tablet.
That said, it's disappointing that Google didn't introduce any
gestures to the OS to match its newfound touch compatibilities. In fact,
you can't pinch-zoom in the image viewer or on most pages in
the Chrome browser -- only on specifically pinch-friendly websites (like
Google Maps). There are no three- or four-finger gestures for switching
apps, and swiping in from the bezels does nothing. Except, that is, for
a swipe up from the bottom, which alternately shows or hides the
launcher bar.
Again, we won't restate the entire review of Chrome
OS, but it's important to note at least briefly that functionality here
is still very minimal. There are built-in apps for viewing photos and
videos, for browsing files, for taking photos from the integrated
webcam, an app for taking notes and... the web browser. That, of course,
is the most important part. Suffice to say, if you can't do all your
work from inside of an instance of Chrome on some other platform (like
Windows or Mac), you probably won't be able to do it here, either.
Still, we did want to point out one important part of the software, and
that is that it's easily replaceable. The bootloader is not locked and we've
already seen the thing rocking Linux -- and looking quite good
while doing it. So, if you happen to be looking for an incredibly
well-designed laptop to run that most noble of open-source operating
systems, this could be it.
Pricing and competition
We can keep the pricing bit short, because there are only two options
here. For $1,299, you can get yourself the WiFi model with 32GB of local
SSD storage. For $1,449 you step up to the LTE model, which throws in
64GB of storage in a bid to sweeten the deal.
Should that still
be too bitter for your tastes -- and we're thinking there's a very good
chance it will be -- Google has included plenty of other incentives that
are at least mildly saccharine. First among these are 12 free Gogo
passes for in-flight connectivity, each one worth about $14 for a total
of $168. The other, rather more compelling add-in, is 1TB of online
storage free for three years.
That, believe it or not, is worth a
whopping $1,800, which means that if you were planning to rent that
much cloud storage for three years anyway, you'd actually be better off
just buying a Pixel. The laptop would, effectively, be a nice, free toy.
For everyone not
interested in storing copious quantities of stuff in the cloud, both
price points are rather dear, to put it mildly. As ever, it's difficult
to compare a Chromebook to other laptops on the market thanks to the
limited functionality provided by the OS. So, we'll focus primarily on
hardware comparisons, and as we mentioned above, we find ourselves
inclined to compare this to the 13-inch MacBook Pro with Retina display.
That machine, with a full operating system and a faster, 2.5GHz Core
i5 processor, starts at $1,499. That, though, has 128GB of SSD storage, twice
that of the biggest Pixel. We could also see many comparing this
against the 13-inch MacBook Air, which offers the same CPU, integrated
graphics and 4GB of RAM for $1,199. It's lacking the high-res
screen but it perhaps makes up for that with, again, 128GB of storage.
On the PC side of things, that resolution is unmatched, but the other
specs certainly aren't. We recently had reasonably good feelings about Samsung's Series 5 UltraTouch,
a 13-incher packing a similar Core i5 CPU and 4GB of RAM, but with a
500GB platter-based hard disk. An SSD isn't an option, but the $849
price is certainly more palatable.
Again, none of these is an
apples-to-apples comparison, as the Pixel offers a touchscreen,
something all the Macs lack, and offers LTE connectivity, thus making it
even more of a rare bird on the laptop scene. Whether these unique
attributes, plus the various goodies Google is throwing in, turn this
into a compelling proposition compared to the competition is something
you'll have to decide for yourself.
Wrap-up
Again we reach the dreaded wrap-up section on a Chromebook review. It's
simply never easy to classify these machines. In some regards, the
Pixel is even harder to pigeonhole than its predecessors. The level of
quality and attention to detail here is quite remarkable for what is,
we'll again remind you, Google's first swing at building a laptop.
Boot-ups are quick, performance is generally good and, of course,
there's that display.
But, with one single statistic, Google has
made the Chromebook Pixel even easier to write off than any of its
quirky predecessors: price. For an MSRP that is on par with some of the
best laptops in the world, the Pixel doesn't provide anywhere near as
much potential when it comes to functionality. It embraces a world where
everyone is always connected and everything is done on the web -- a
world that few people currently live in.
The Chromebook Pixel, then, is a lot like the Nexus Q:
it's a piece of gorgeous hardware providing limited functionality at a
price that eclipses the (often more powerful) competition. It's a lovely
thing that everyone should try to experience but, sadly, few should
seriously consider buying.
The frosted-glass doors on the 11th floor of Google’s NYC
headquarters part and a woman steps forward to greet me. This is an
otherwise normal specimen of humanity. Normal height, slender build; her
eyes are bright, inquisitive. She leans in to shake my hand and at that
moment I become acutely aware of the device she’s wearing in the place
you would expect eyeglasses: a thin strip of aluminum and plastic with a
strange, prismatic lens just below her brow. Google Glass.
What was little more than an experiment 18 months ago, and still a
total oddity as recently as a year ago, is now starting to look like a real
product. One that could be in the hands (or on the heads, rather) of
consumers by the end of this year. A completely new kind of computing
device; wearable, designed to reduce distraction, created to allow you
to capture and communicate in a way that is supposed to feel completely
natural to the wearer. It’s the anti-smartphone, explicitly fashioned to
blow apart our notions of how we interact with technology.
But as I release that handshake and study the bizarre device
resting on my greeter’s brow, my mind begins to fixate on a single
question: who would want to wear this thing in public?
Finding Glass
The Glass project was started "about three years ago" by an engineer
named Babak Parviz as part of Google’s X Lab initiative, the lab also
responsible for — amongst other things — self-driving cars and neural
networks. Unlike those epic, sci-fi R&D projects at Google, Glass is
getting real much sooner than anyone expected. The company offered
developers an option to buy into an early adopter strategy called the
Explorer Program during its I/O conference last year, and just this week
it extended that opportunity to people in the US
in a Twitter campaign which asks potential users to explain how they
would put the new technology to use. Think of it as a really aggressive
beta — something Google is known for.
I was about to beta test Glass myself. But first, I had questions.
Seated in a surprisingly bland room — by Google’s whimsical office
standards — I find myself opposite two of the most important players in
the development of Glass, product director Steve Lee and lead industrial
designer Isabelle Olsson. Steve and Isabelle make for a convincing pair
of spokespeople for the product. He’s excitable, bouncy even, with big
bright eyes that spark up every time he makes a point about Glass.
Isabelle is more reserved, but speaks with incredible fervency about the
product. And she has extremely red hair. Before we can even start
talking about Glass, Isabelle and I are in a heated conversation about
how you define the color navy blue. She’s passionate about design — a
condition that seems to be rather contagious at Google these days — and
it shows.
Though the question of design is at the front of my mind, a picture
of why Glass exists at all begins to emerge as we talk, and it’s clearly
not about making a new fashion accessory. Steve tries to explain it to
me.
"Why are we even working on Glass? We all know that people love to be
connected. Families message each other all the time, sports fanatics
are checking live scores for their favorite teams. If you’re a frequent
traveler you have to stay up to date on flight status or if your gate
changes. Technology allows us to connect in that way. A big problem
right now are the distractions that technology causes. If you’re a
parent — let’s say your child’s performance, watching them do a soccer
game or a musical. Often friends will be holding a camera to capture
that moment. Guess what? It’s gone. You just missed that amazing game."
Isabelle chimes in, "Did you see that Louis C.K. stand up when he was
telling parents, ‘your kids are better resolution in real life?’"
Everyone laughs, but the point is made.
Human beings have developed a new problem since the advent of the
iPhone and the following mobile revolution: no one is paying attention
to anything they’re actually doing. Everyone seems to be looking down at
something or through something. Those perfect moments watching your
favorite band play or your kid’s recital are either being captured via
the lens of a device that sits between you and the actual experience, or
being interrupted by constant notifications. Pings from the outside
world, breaking into what used to be whole, personal moments.
Steve goes on. "We wondered, what if we brought technology closer to
your senses? Would that allow you to more quickly get information and
connect with other people but do so in a way — with a design — that gets
out of your way when you’re not interacting with technology? That’s
sort of what led us to Glass." I can’t stop looking at the lens above
his right eye. "It’s a new wearable technology. It’s a very ambitious
way to tackle this problem, but that’s really sort of the underpinning
of why we worked on Glass."
I get it. We’re all distracted. No one can pay attention. We’re
missing all of life’s moments. Sure, it’s a problem, but it’s a new
problem, and this isn’t the first time we’ve been distracted by a new
technology. Hell, they used to think car radios would send drivers
careening off of the highways. We’ll figure out how to manage our
distraction, right?
Maybe, but obviously the Glass team doesn’t want to wait to find out.
Isabelle tells me about the moment the concept clicked for her. "One
day, I went to work — I live in SF and I have to commute to Mountain
View and there are these shuttles — I went to the shuttle stop and I saw
a line of not 10 people but 15 people standing in a row like this," she
puts her head down and mimics someone poking at a smartphone. "I don’t
want to do that, you know? I don’t want to be that person. That’s when
it dawned on me that, OK, we have to make this work. It’s bold. It’s
crazy. But we think that we can do something cool with it."
Bold and crazy sounds right, especially after Steve tells me that the
company expects to have Glass on the market as a consumer device by the
end of this year.
Google-level design
Forget about normal eyeglasses for a moment. Forget about chunky
hipster glasses. Forget about John Lennon’s circle sunglasses. Forget
The Boys of Summer; forget how she looks with her hair slicked back and
her Wayfarers on. Pretend that stuff doesn’t exist. Just humor me.
The design of Glass is actually really beautiful. Elegant,
sophisticated. They look human and a little bit alien all at once.
Futuristic but not out of time — like an artifact from the 1960s,
someone trying to imagine what 2013 would be like. This is Apple-level
design. No, in some ways it’s beyond what Apple has been doing recently.
It’s daring, inventive, playful, and yet somehow still ultimately
simple. The materials feel good in your hand and on your head, solid but
surprisingly light. Comfortable. If Google keeps this up, soon we’ll be
saying things like "this is Google-level design."
Even the packaging seems thoughtful.
The system itself is made up of only a few basic pieces. The main
body of Glass is a soft-touch plastic that houses the brains, battery,
and counterweight (which sits behind your ear). There’s a thin metal
strip that creates the arc of the glasses, with a set of rather typical
pad arms and nose pads which allow the device to rest on your face.
Google is making the first version of the device in a variety of
colors. Described uncreatively, those colors are: gray,
orange, black, white, and light blue. I joke around with Steve and
Isabelle about what I think the more creative names would be. "Is the
gray one Graphite? Hold on, don’t tell me. I’m going to guess." I go
down the list. "Tomato? Onyx? Powder — no Avalanche, and Seabreeze."
Steve and Isabelle laugh. "That’s good," Isabelle says.
But seriously. Shale. Tangerine. Charcoal. Cotton. Sky. So close.
That conversation leads into discussion of the importance of color in
a product that you wear every day. "It’s one of those things, you think
like, ‘oh, whatever, it is important,’ but it’s a secondary thing. But
we started to realize how people get attached to the device… a lot of it
is due to the color," Isabelle tells me.
And there is something to it. When I saw the devices in the different
colors, and when I tried on Tangerine and Sky, I started to get
emotional about which one was more "me." It’s not like how you feel
about a favorite pair of sunglasses, but it evokes a similar response.
They’re supposed to feel like yours.
Isabelle came to the project and Google from Yves Behar’s design
studio. She joined the Glass team when their product was little more
than a bizarre pair of white eyeglass frames with comically large
circuit boards glued to either side. She shows me — perhaps ironically —
a Chanel box with the original prototype inside, its prism lens limply
dangling from the right eye, a gray ribbon cable strewn from one side to
the other. The breadboard version.
It was Isabelle’s job to make Glass into something that you could wear, even if maybe you still weren’t sure you wanted to wear it. She gets that there are still challenges.
The Explorer edition which the company will ship out has an
interchangeable sunglass accessory which twists on or off easily, and I
must admit makes Glass look slightly more sane. I also learn that the
device actually comes apart, separating that center metal rim from the
brains and lens attached on the right. The idea is that you could attach
another frame fitted for Glass that would completely alter the look of
the device while still allowing for the heads-up functionality. Steve
and Isabelle won’t say if they’re working with partners like Ray-Ban or
Tom Ford (the company that makes my glasses), but the New York Times
just reported that Google is speaking to Warby Parker, and I’m inclined
to believe that particular rumor. It’s obvious the company realizes the
need for this thing to not just look wearable — Google needs people to
want to wear it.
So yes, Glass looks beautiful to me, but I still don’t want to wear it.
Topolsky in Mirrorshades
Finally I get a chance to put the device on and find out what using
Glass in the real world actually feels like. This is the moment I’ve
been waiting for all day. It’s really happening.
When you activate Glass, there’s supposed to be a small screen that
floats in the upper right-hand corner of your field of vision, but I don’t see
the whole thing right away. Instead I’m getting a ghost of the upper
portion, and the bottom half seems to melt away at the corner of my eye.
Steve and Isabelle adjust the nose pad and suddenly I see the glowing box. Victory.
It takes a moment to adjust to this spectral screen in your vision,
and it’s especially odd the first time it disappears and you want it to
reappear but don’t know how to make that happen. Luckily, that really
only happened once, at least for me.
Here’s what you see: the time is displayed, with a small amount of
text underneath that reads "ok glass." That’s how you get Glass to wake
up to your voice commands. Actually, it’s a two-step process. First you
have to touch the side of the device (which is actually a touchpad), or
tilt your head upward slowly, a gesture which tells Glass to wake up.
Once you’ve done that, you start issuing commands by speaking "ok glass"
first, or scroll through the options using your finger along the side
of the device. You can scroll items by moving your finger backward or
forward along the strip, select by tapping, and move "back" by
swiping down. Most of the big interaction is done by voice, however.
The device gets data through Wi-Fi on its own, or it can tether via
Bluetooth to an Android device or iPhone and use its 3G or 4G data while
out and about. There’s no cellular radio in Glass, but it does have a
GPS chip.
Let me start by saying that using it is actually nearly identical to
what the company showed off in its newest demo video. That’s not CGI —
it’s what Glass is actually like to use. It’s clean, elegant, and makes
relative sense. The screen is not disruptive, you do not feel burdened
by it. It is there and then it is gone. It’s not shocking. It’s not
jarring. It’s just this new thing in your field of vision. And it’s
actually pretty cool.
Glass does all sorts of basic stuff after you say "ok glass." Things
you’ll want to do right away with a camera on your face. "Take a
picture" snaps a photo. "Record a video" records ten seconds of video.
If you want more you can just tap the side of the device. Saying "ok
glass, Google" gets you into search, which plugs heavily into what
Google has been doing with Google Now and its Knowledge Graph. Most of
the time when you ask Glass questions you get hyper-stylized cards full
of information, much like you do in Google Now on Android.
The natural language search works most of the time, but when it
doesn’t, it can be confusing, leaving you with text results that seem
like a dead-end. And Glass doesn’t always hear you correctly, or the
pace it’s expecting you to speak at doesn’t line up with reality. I
struggled repeatedly with Glass when issuing voice commands that seemed
to come too fast for the device to interpret. When I got it right
however, Glass usually responded quickly, serving up bits of information
and jumping into action as expected.
Some of the issues stemmed from a more common problem: no data. A
good data connection is obviously key for the device to function
properly, and when taking Glass outside for a stroll, losing data or
experiencing slow data on a phone put the headset into a near-unusable
state.
Steve and Isabelle know the experience isn’t perfect. In fact, they
tell me that the team plans to issue monthly updates to the device when
the Explorer program starts rolling. This is very much a work in
progress.
But the most interesting parts of Glass for many people won’t be its
search functionality, at least not just its basic ability to pull data
up. Yes, it can tell you how old Brad Pitt is (49 for those keeping
count), but Google is more interested in what it can do for you in the
moment. Want the weather? It can do that. Want to get directions? It can
do that and display a realtime, turn-by-turn overlay. Want to have a
Google Hangout with someone that allows them to see what you’re seeing?
Yep, it does that.
But the feature everyone is going to go crazy with — and the feature
you probably most want to use — is Glass’ ability to take photos and
video with a "you are there" view. I won’t lie, it’s amazingly powerful
(and more than a little scary) to be able to just start recording video
or snapping pictures with a couple of flicks of your finger or simple
voice commands.
At one point during my time with Glass, we all went out to navigate
to a nearby Starbucks — the camera crew I’d brought with me came along.
As soon as we got inside however, the employees at Starbucks asked us to
stop filming. Sure, no problem. But I kept Glass’ video recorder
going, all the way through placing my order and getting my coffee. Yes, you can
see a light in the prism when the device is recording, but I got the
impression that most people had no idea what they were looking at. The
cashier seemed to be on the verge of asking me what I was wearing on my
face, but the question never came. He certainly never asked me to stop
filming.
Once those Explorer editions are out in the world, you can expect a
slew of use (and misuse) in this department. Maybe misuse is the wrong
word here. Steve tells me that part of the Explorer program is to find
out how people want to (and will) use Glass. "It’s really important," he
says, "what we’re trying to do is expand the community that we have for
Glass users. Currently it’s just our team and a few other Google people
testing it. We want to expand that to people outside of Google. We
think it’s really important, actually, for the development of Glass
because it’s such a new product and it’s not just a piece of software.
We want to learn from people how it’s going to fit into their
lifestyle." He gets the point. "It’s a very intimate device. We’d like
to better understand how other people are going to use it. We think
they’ll have a great opportunity to influence and shape the opportunity
of Glass by not only giving us feedback on the product, but by helping
us develop social norms as well."
I ask if it’s their attempt to define "Glass etiquette." Will there
be the Glass version of Twitter’s RT? "That’s what the Explorer program
is about," Steve says. But that’s not going to answer questions about
what’s right and wrong to do with a camera that doesn’t need to be held
up to take a photo, and often won’t even be noticed by its owner’s
subjects. Will people get comfortable with that? Are they supposed to?
The privacy issue is going to be a big hurdle for Google with Glass.
Almost as big as the hurdle it has to jump over to convince normal
people to wear something as alien and unfashionable as Glass seems right
now.
But what’s it actually like to have Glass on? To use it when you’re walking around? Well, it’s kind of awesome.
Think of it this way — if you get a text message or have an incoming
call when you’re walking down a busy street, there are something like
two or three things you have to do before you can deal with that
situation. Most of them involve you completely taking your attention off
of your task at hand: walking down the street. With Glass, that
information just appears to you, in your line of sight, ready for you to
take action on. And taking that action is little more than touching the
side of Glass or tilting your head up — nothing that would take you
away from your main task of not running into people.
It’s a simple concept that feels powerful in practice.
The same is true for navigation. When I get out of trains in New York
I am constantly jumping right into Google Maps to figure out where I’m
headed. Even after more than a decade in the city, I seem to never be
able to figure out which way to turn when I exit a subway station. You
still have to grapple with asking for directions with Glass, but
removing the barrier of being completely distracted by the device in
your hand is significant, and actually receiving directions as you walk
is even more significant. In the city, Glass makes you feel more
powerful, better equipped, and definitely less diverted.
I will admit that wearing Glass made me feel self-conscious, and
maybe it’s just my paranoia acting up (or the fact that I look like a
huge weirdo), but I felt people staring at me. Everyone who I made eye
contact with while in Glass seemed to be just about to say "hey, what
the hell is that?" and it made me uncomfortable.
Steve claims that when those questions do come, people are excited to
find out what Glass is. "We’ve been wearing this for almost a year now
out in public, and it’s been so interesting and exciting to do that.
Before, we were super excited about it and confident in our design, but
you never know until you start wearing it out and about. Of course my
friends would joke with me ‘oh no girls are going to talk to you now,
they’ll think it’s strange.’ The exact opposite happened."
I don’t think Glass is right for every situation. It’s easy to see
how it’s amazing for parents to capture all of the adorable things their
kids are doing, or for skydivers and rock climbers who clearly don’t
have their hands free and also happen to be having life changing
experiences. And yes, it’s probably helpful if you’re in Thailand and
need directions or translation — but this might not be that great at a
dinner party, or on a date, or watching a movie. In fact, it could make
those situations very awkward, or at the least, change them in ways you
might not like.
Sometimes you want to be distracted in the old-fashioned ways. And
sometimes, you want people to see you — not a device you’re wearing on
your face. One that may or may not be recording them right this second.
And that brings me back to the start: who would want to wear this thing in public?
Not if, but when
Honestly, I started to like Glass a lot when I was wearing it. It
wasn’t uncomfortable and it brought something new into view (both
literally and figuratively) that has tremendous value and potential. I
don’t think my face looks quite right without my glasses on, and I
didn’t think it looked quite right while wearing Google Glass, but after
a while it started to feel less and less not-right. And that’s
something, right?
The sunglass attachment Google is shipping with the device goes a
long way to normalizing the experience. A partnership with someone like
Ray-Ban or Warby Parker would go further still. It’s actually easy to
see now — after using it, after feeling what it’s like to be in public
with Glass on — how you could get comfortable with the device.
Is it ready for everyone right now? Not really. Does the Glass team
still have a huge distance to cover in making the experience work just
the way it should every time you use it? Definitely.
But I walked away convinced that this wasn’t just one of Google’s
weird flights of fancy. The more I used Glass the more it made sense to
me; the more I wanted it. If the team had told me I could sign up to
have my current glasses augmented with Glass technology, I would have
put pen to paper (and money in their hands) right then and there. And
it’s that kind of stuff that will make the difference between this being
a niche device for geeks and a product that everyone wants to
experience.
After a few hours with Glass, I’ve decided that the question is no longer ‘if,’ but ‘when?’
Korean trams and buses are moving away from overhead power wires and high-voltage third rails—literally.
Researchers at the Korea Advanced Institute of Science and Technology
(KAIST) have made major advances in wireless power transfer for mass
transit systems. The fruits of their labor, systems called On-line
Electric Vehicles (OLEV), are already being road tested around Korea.
At its heart, the technology uses inductive coupling
to wirelessly transmit electricity from power cables embedded in
roadways to pick-up coils installed under the floor of electric
vehicles.
Engineers say the transmitting technology supplies 180 kW of stable,
constant power at 60 kHz to passing vehicles that are equipped with
receivers. The initial OLEV models received
100 kW of power at 20 kHz through an almost eight-inch air gap. They
have recorded 85 percent transmission efficiency through testing so far.
(A concept drawing for an OLEV tram. Courtesy KAIST.)
The wireless electricity that powers the vehicle’s motors and systems
is also used to charge an on-board battery that supplies energy to the
vehicle when it is away from the power line.
KAIST plans to start deploying the OLEV technology to tramlines in May and high-speed trains in September.
“We have greatly improved the OLEV technology from the early
development stage by increasing its power transmission density by more
than three times,” said Dong-Ho Cho, the director of KAIST’s Center for
Wireless Power Transfer Technology Business Development, in a release.
“The size and weight of the power pickup modules have been reduced as
well. We were able to cut down the production costs for major OLEV
components, the power supply, and the pickup system, and in turn, OLEV
is one step closer to being commercialized.”
The institute announced that buses equipped with the wireless power
transfer technology are already used daily by students on the KAIST
campus in Daejeon, while others are undergoing road tests in Seoul. Two more OLEV buses will begin trial operations in the city of Gumi in July.
Proponents say that the technology banishes overhead power lines and
rails for electric trams and buses, dramatically lowers the costs of
railway wear and tear and allows smaller tunnels to be built for
electric vehicle infrastructure, lowering construction costs.
(An OLEV shuttle bus that provides rides to students and faculty on
the KAIST campus in Daejeon. Courtesy Hyung-Joon Jeon/KAIST.)
Top Image: KAIST and Korea Railroad Research
Institute displayed wireless power transfer technology to the public on
Feb. 13 by testing it on railroad tracks at Osong Station in Korea.
Photo courtesy Hyung-Joon Jeon/KAIST.