I am pleased to announce that we released the Kinect for Windows
software development kit (SDK) 1.8 today. This is the fourth update to
the SDK since we first released it commercially one and a half years
ago. Since then, we’ve seen numerous companies using Kinect for Windows
worldwide, and more than 700,000 downloads of our SDK.
We build each version of the SDK with our customers in mind—listening
to what the developer community and business leaders tell us they want
and traveling around the globe to see what these dedicated teams do, how
they do it, and what they most need out of our software development
kit.
Kinect for Windows SDK 1.8 includes some key features and samples that the community has been asking for, including:
New background removal. An API removes the background behind the active user so that it can be replaced with an artificial background. This green-screening effect was one of the top requests we've heard in recent months. It is especially useful for advertising, augmented reality gaming, training and simulation, and other immersive experiences that place the user in a different virtual environment.
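As an illustration of what the green-screening step amounts to (a sketch of the technique, not the SDK's actual API), the compositing reduces to a masked copy, assuming the runtime has already produced a per-pixel mask of the active user:

```python
import numpy as np

def replace_background(color_frame, user_mask, new_background):
    """Green-screen compositing: keep the user's pixels, swap the rest.

    color_frame:    HxWx3 uint8 frame from the color camera
    user_mask:      HxW bool array, True where a pixel belongs to the
                    active user (derived from the depth/player data)
    new_background: HxWx3 uint8 image of the same size
    """
    output = new_background.copy()
    output[user_mask] = color_frame[user_mask]
    return output
```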
Realistic color capture with Kinect Fusion. A new
Kinect Fusion API scans the color of the scene along with the depth
information so that it can capture the color of the object along with
its three-dimensional (3D) model. The API also produces a texture map
for the mesh created from the scan. This feature provides a full-fidelity 3D model of a scan, including color, which can be used for full-color 3D printing or to create accurate 3D assets for games, CAD, and other applications.
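Kinect Fusion performs the color capture internally; as a sketch of the underlying idea, each reconstructed vertex can be projected into the registered color image and the pixel there sampled as the vertex color. The intrinsics matrix K and the registration are assumptions here, and this is not the Kinect Fusion API itself:

```python
import numpy as np

def color_mesh_vertices(vertices, K, color_image):
    """Sample an RGB color for each mesh vertex from the color camera.

    vertices:    Nx3 points in the color camera's coordinate frame
    K:           3x3 camera intrinsics matrix
    color_image: HxWx3 uint8 frame registered to the depth data
    """
    proj = vertices @ K.T                      # project onto image plane
    px = (proj[:, :2] / proj[:, 2:3]).astype(int)
    h, w = color_image.shape[:2]
    px[:, 0] = px[:, 0].clip(0, w - 1)         # clamp to image bounds
    px[:, 1] = px[:, 1].clip(0, h - 1)
    return color_image[px[:, 1], px[:, 0]]     # one RGB triple per vertex
```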
Improved tracking robustness with Kinect Fusion.
This algorithm makes it easier to scan a scene. With this update, Kinect
Fusion is better able to maintain its lock on the scene as the camera
position moves, yielding more reliable and consistent scanning.
HTML interaction sample. This sample demonstrates
implementing Kinect-enabled buttons, simple user engagement, and the use
of a background removal stream in HTML5. It allows developers to use
HTML5 and JavaScript to implement Kinect-enabled user interfaces, which
was not possible previously—making it easier for developers to work in
whatever programming languages they prefer and integrate Kinect for
Windows into their existing solutions.
Multiple-sensor Kinect Fusion sample. This sample
shows developers how to use two sensors simultaneously to scan a person
or object from both sides—making it possible to construct a 3D model
without having to move the sensor or the object! It demonstrates the
calibration between two Kinect for Windows sensors, and how to use
Kinect Fusion APIs with multiple depth snapshots. It is ideal for retail
experiences and other public kiosks where no attendant is available to scan by hand.
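The sample itself is the authoritative reference; as a sketch of the underlying idea, once calibration has recovered the rigid transform between the two sensors, merging their scans reduces to mapping one point cloud into the other's coordinate frame. The transform (R, t) is assumed to come from that calibration step:

```python
import numpy as np

def merge_point_clouds(points_a, points_b, R, t):
    """Merge two sensors' scans into one coordinate frame.

    points_a, points_b: Nx3 / Mx3 arrays of 3D points from sensors A and B
    R (3x3), t (3,):    rigid transform taking B's frame into A's frame,
                        recovered once during calibration
    """
    points_b_in_a = points_b @ R.T + t   # rotate, then translate B's points
    return np.vstack([points_a, points_b_in_a])
```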
Adaptive UI sample. This sample demonstrates how to
build an application that adapts itself depending on the distance
between the user and the screen—from gesturing at a distance to touching
a touchscreen. The algorithm in this sample uses the physical
dimensions and positions of the screen and sensor to determine the best
ergonomic position on the screen for touch controls as well as ways the
UI can adapt as the user approaches the screen or moves further away
from it. As a result, the touch interface and visual display adapt to
the user’s position and height, which enables users to interact with
large touch screen displays comfortably. The display can also be adapted
for more than one user.
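As a rough sketch of the distance-dependent core of this idea (the thresholds below are invented for illustration and are not the sample's actual values):

```python
def choose_ui_mode(user_distance_m):
    """Pick an interaction mode from the user's distance to the screen.

    Thresholds are illustrative only, not the sample's actual values.
    """
    if user_distance_m < 0.6:
        return "touch"         # within arm's reach of the touchscreen
    elif user_distance_m < 2.0:
        return "near-gesture"  # cursor-style pointing, mid-size targets
    else:
        return "far-gesture"   # coarse gestures, largest targets

for d in (0.4, 1.5, 3.0):
    print(d, "m ->", choose_ui_mode(d))
```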
We also have updated our Human Interface Guidelines (HIG) with
guidance to complement the new Adaptive UI sample, including the
following:
Design a transition that reveals or hides additional information without obscuring the anchor points in the overall UI.
Design UI where users can accomplish all tasks for each goal within a single range.
My team and I believe that communicating naturally with computers
means being able to gesture and speak, just like you do when
communicating with people. We believe this is important to the evolution
of computing, and are committed to helping this future come faster by
giving our customers the tools they need to build truly innovative
solutions. There are many exciting applications being created with
Kinect for Windows, and we hope these new features will make those
applications better and easier to build. Keep up the great work, and
keep us posted!
Cards are fast becoming the best design pattern for mobile devices.
We are currently witnessing a re-architecture of the web, away from
pages and destinations, towards completely personalised experiences
built on an aggregation of many individual pieces of content. Content
being broken down into individual components and re-aggregated is the
result of the rise of mobile technologies, billions of screens of all
shapes and sizes, and unprecedented access to data from all kinds of
sources through APIs and SDKs. This is driving the web away from many
pages of content linked together, towards individual pieces of content
aggregated together into one experience.
The aggregation depends on:
The person consuming the content and their interests, preferences, behaviour.
Their location and environmental context.
Their friends’ interests, preferences and behaviour.
The targeted advertising ecosystem.
If the predominant medium of our time is set to be the portable
screen (think phones and tablets), then the predominant design pattern
is set to be cards. The signs are already here…
Twitter is moving to cards
Twitter recently launched Cards,
a way to attach multimedia inline with tweets. Now the NYT should
care more about how their story appears on the Twitter card (right hand
in image above) than on their own web properties, because the likelihood
is that the content will be seen more often in card format.
Google is moving to cards
With Google Now,
Google is rethinking information distribution, away from search, to
personalised information pushed to mobile devices. Their design pattern
for this is cards.
Everyone is moving to cards
Pinterest (above left) is built around cards. The new Discover feature on Spotify
(above right) is built around cards. Much of Facebook now consists of cards. Many parts of iOS 7 are now card-based, for example the app switcher and AirDrop.
The list goes on. The most exciting thing is that despite these many early card-based designs, I think we’re only getting started. Cards are an incredible design pattern, and they have been around for a long time.
Cards give bursts of information
Cards as an information dissemination medium have been around for a
very long time. Imperial China used them in the 9th century for
games. Trade cards in 17th century London helped people find businesses.
In 18th century Europe footmen of aristocrats used cards to introduce
the impending arrival of the distinguished guest. For hundreds of years
people have handed around business cards.
We send birthday cards, greeting cards. My wallet is full of debit
cards, credit cards, my driving licence card. During my childhood, I was
surrounded by games with cards. Top Trumps, Pokemon, Panini sticker
albums and swapsies. Monopoly, Cluedo, Trivial Pursuit. Before computer
technology, air traffic controllers used cards to manage the planes in
the sky. Some still do.
Cards are a great medium for communicating quick stories. Indeed the great (and terrible) films of our time are all storyboarded using a card-like format. Each card representing a scene. Card, Card, Card. Telling the story. Think about flipping through printed photos, each photo telling its own little tale. When we travelled we sent back postcards.
What about commerce? Cards are the predominant pattern for coupons.
Remember cutting out the corner of the breakfast cereal box? Or being
handed coupon cards as you walk through a shopping mall? Circulars, sent
out to hundreds of millions of people every week are a full page
aggregation of many individual cards. People cut them out and stick them
to their fridge for later.
Cards can be manipulated
In addition to their reputable past as an information medium, the most important thing about cards is that they are almost infinitely manipulable. See the simple example above from Samuel Couto.
Think about cards in the physical world. They can be turned over to
reveal more, folded for a summary and expanded for more details, stacked
to save space, sorted, grouped, and spread out to survey more than one.
When designing for screens, we can take advantage of all these
things. In addition, we can take advantage of animation and movement. We
can hint at what is on the reverse, or that the card can be folded out.
We can embed multimedia content, photos, videos, music. There are so
many new things to invent here.
Cards are perfect for mobile devices and varying screen sizes
Remember, mobile devices are the heart and soul of the future of your
business, no matter who you are and what you do. On mobile devices,
cards can be stacked vertically, like an activity stream on a phone.
They can be stacked horizontally, adding a column as a tablet is turned
90 degrees. They can be a fixed or variable height.
Cards are the new creative canvas
It’s already clear that product and interaction designers will
heavily use cards. I think the same is true for marketers and creatives
in advertising. As social media continues to rise, and continues to
fragment into many services, taking up more and more of our time,
marketing dollars will inevitably follow. The consistent thread through
these services, the predominant canvas for creativity, will be card
based. Content consumption on Facebook, Twitter, Pinterest, Instagram,
Line, you name it, is all built on the card design metaphor.
I think there is no getting away from it. Cards are the next big
thing in design and the creative arts. To me that’s incredibly exciting.
MakerBot
is best known for its 3D printers, turning virtual products into real
ones, but the company’s latest hardware to go on sale, the MakerBot
Digitizer, takes things in the opposite direction. Announced back in March,
and on sale from today for $1,400, the Digitizer takes a real-world
object and, by spinning it on a rotating platform in front of a camera,
maps out a digital model that can then be saved, shared, modified, and
even 3D printed itself.
Although the process itself involves some complicated technology and
data-crunching, MakerBot claims that users themselves should be able to
scan in an object in just a couple of clicks. The company includes its
own MakerWare for Digitizer software, which creates files suitable for
both the firm’s own 3D printers and generic 3D files for other hardware.
Calibration is a matter of dropping the included glyph block on the
rotating platter and having the camera run through some preconfigured
tests. After that, you center the object you’re hoping to scan, select whether it’s lightly colored, medium, or dark, and then wait until the process is done.
That takes approximately twelve minutes per
object, MakerBot says, so don’t think of this as the 3D scanner
equivalent of a photocopier. The camera itself is 1.3 megapixels and is paired with two Class 1 lasers for mapping out objects; overall resolution is good for 0.5mm of detail and +/- 2.0mm of dimensional accuracy. Maximum object size is 20.3cm in diameter and the same in height.
Once you’ve actually run something through the scanner, the core grid
file can be shared directly from the app to Thingiverse.com, or edited
and combined with other 3D files to make a new object altogether.
The MakerBot Digitizer Desktop Scanner is available for order now, priced at $1,400.
GAMES CONSOLE AND PHONE MAKER Microsoft has hinted that it will bring its Xbox Kinect camera technology to its Windows Phone 8 mobile operating system following its acquisition of Nokia.
During a conference call on Tuesday, Microsoft operating systems group VP Terry Myerson hinted that the firm will work with Nokia to integrate Kinect camera technology into future Windows Phone devices.
Speaking about the Nokia buyout, Myerson said, "In the area of imaging, the Nokia Lumia 1020 has no equal. We are excited to bring this together with our Kinect camera technology to delight our customers."
Myerson didn't elaborate further, so it's unclear when Microsoft will
introduce this technology and what it will be capable of doing.
However, this isn't the first time we've heard about Kinect technology possibly coming to Windows Phone devices. Previous rumours speculated that the integration would allow users of Windows Phone devices to control their handsets using both voice and gestures.
It's possible that the Kinect integration could also enable more
immersive gaming on Windows Phone devices and facilitate deeper
integration with Microsoft's Xbox One games console.
While all this sounds promising, Kinect integration could also be a
bad thing for the Windows Phone ecosystem, with Microsoft having been
accused of spying on its users with Kinect in the wake of recent NSA
surveillance revelations.
Now you can drive your remote-controlled car over the horizon.
Robotics
engineer Taylor Alexander needed to lift a nuclear cooling tower off
its foundation using 19 high-strength steel cables, and the Android app
that was supposed to accomplish it, for which he’d just paid a developer
$20,000, was essentially worthless. Undaunted and on deadline—the tower
needed a new foundation, and delays meant millions of dollars in
losses—he rewrote the app himself. That’s when he discovered just how hard it is to connect to sensors via the standard long-distance industrial wireless protocol, known as Zigbee.
It
took him months of hacking just to create a system that could send him a
single number—which represented the strain on each of the cables—from
the sensors he was using. Surely, he thought, there must be a better
way. And that’s when he realized that the solution to his problem would
also unlock the potential of what’s known as the “internet of things”
(the idea that every object we own, no matter how mundane, is connected
to the internet and can be monitored and manipulated via the internet,
whether it’s a toaster, a lightbulb or your car).
The result is an in-the-works project called Flutter.
It’s what Taylor calls a “second network”—an alternative to Wi-Fi that
can cover 100 times as great an area, with a range of 3,200 feet, using
relatively little power, and is either the future of the way that all
our connected devices will talk to each other or a reasonable prototype
for it.
Flutter’s range is 3,200 feet in open air, but multiple Flutters can also cover even larger areas in a “mesh” network.
“We
have Wi-Fi in our homes, but it’s not a good network for our things,”
says Taylor. Wi-Fi was designed for applications that require fast
connections, like streaming video, but it’s vulnerable to interference
and has a limited range—often, not enough even to cover an entire house.
For
applications with a very limited range—for example anything on your
body that you might want to connect with your smartphone—Bluetooth, the
wireless protocol used by keyboards and smart watches, is good enough.
For industrial applications, the Zigbee standard has been in use for at
least a decade. But there are two problems with Zigbee: the first is
that, as Alexander discovered, it’s difficult to use. The second is that Zigbee devices are not open source, which makes them difficult to integrate with the sort of projects that hardware startups might want to create.
Making
Flutter cheap means that hobbyists can connect that many more
devices—say, all the lights in a room, or temperature and moisture
sensors in a greenhouse. No one is quite sure what the internet of
things will lead to because the enabling technologies, including cheap
wireless radios like Flutter, have yet to become widespread. The present-day internet of things is a bit like where personal computers were around the time Steve Wozniak and Steve Jobs were showing off their Apple I at the Homebrew Computer Club: it’s mostly hobbyists, with a few big corporations sniffing around the periphery.
Flutter radios connect to tiny Arduino computers, the de facto control and processing platform for many startup and open-source hardware projects.
“I think the internet of things is not going to start with products, but projects,” says Taylor. His goal is to use the current crowd-funding effort for Flutter
to pay for the coding of the software protocol that will run Flutter,
since the microchips it uses are already available from manufacturers.
The resulting software will allow Flutter to create a “mesh network,”
which would allow individual Flutter radios to re-transmit data from any
other Flutter radio that’s in range, potentially giving hobbyists or
startups the ability to cover whole cities with networks of Flutter
radios and their attached sensors.
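Flutter’s mesh protocol is still to be written, but the re-transmission idea can be illustrated with a toy flooding model. Everything below is hypothetical (real mesh protocols add hop limits, routing tables, and radio scheduling):

```python
class Radio:
    """Toy model of mesh flooding, not Flutter's actual protocol."""

    def __init__(self, name):
        self.name = name
        self.neighbors = []   # radios within direct range
        self.seen = set()     # message ids already forwarded

    def receive(self, msg_id, payload):
        if msg_id in self.seen:
            return            # avoid endless re-broadcast loops
        self.seen.add(msg_id)
        print(f"{self.name} got: {payload}")
        for n in self.neighbors:
            n.receive(msg_id, payload)  # hop beyond direct range

# a -- b -- c: c hears a's message even though they are not neighbors
a, b, c = Radio("a"), Radio("b"), Radio("c")
a.neighbors, b.neighbors = [b], [a, c]
a.receive(1, "sensor reading")
```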
Taylor’s
ultimate goal is to create a system that answers the fundamental needs
of all objects in the internet of things, including good range, low
power consumption, and just enough speed to get the job done—up to 600
kilobits a second, or about 1/20th the speed of a typical home Wi-Fi
connection. One reason for that slow speed is that lower-frequency signals, transmitted in the 915 MHz band in which Flutter operates, travel farther. These speeds are more than sufficient when the goal is
transmitting sensor readings, which are typically very short strings of
data.
Between
1981 and 1982, renowned photographer Ira Nowinski hiked all over the
Bay Area, taking hundreds of photos of arcades. In all, he snapped
around 700 images, and in awesome news for retro gaming fans, many of them are now available for viewing, courtesy of their acquisition by
Stanford University's library.
Once you're
done looking at the games, and in particular that cruisey arcade that's
nearly all cocktail units, get a load of the fashion. While
arcades still exist today, they sure don't have the same diversity of
clientèle you see here, like Mr. Texas on the Pac-Man cabinet up top.
In 1970, then, Newsweek explained that “over the last twenty years, the United States has become (…) one of the most surveillance-prone and data-conscious countries in world history. Big merchants, small merchants, the tax administration, law-enforcement institutions, census bureaus, sociologists, banks, schools, medical centers, employers, federal agencies, insurance companies (…) all doggedly seek out, store, and use every scrap of information they can find on each of the 205 million Americans, as individuals and as groups.” In short, “very soon, a person’s entire life and history will be available at the click of a computer. We will end up in 1984 before we ever reach that year,” an American jurist predicted at the time.
The article strings together anecdotes in which data collection in the name of security collides with the protection of privacy. For example, the librarian visited by employees of the IRS, the American tax agency, who asked her to supply the names of “users of militant and subversive material” – a book on explosives, say, or a biography of Che Guevara. Or the army file tracking “potential overt disturbers of the peace,” “in addition to the 7 million routine files” on citizens’ loyalty or criminal status.
The chapter on wiretapping is just as telling: legal wiretaps, “used cautiously during the Second World War to track spies and saboteurs, had become such a routine practice of the FBI and the police by the end of the 1950s that they were reportedly carried out against every local bookmaker.” After some politicians expressed their indignation, “Congress specified in 1968 that the Department of Justice, the FBI, and the police could conduct electronic surveillance only with a court order.” But the federal government still reserves the right to carry out clandestine wiretaps, without a court order, in the interest of “national security,” Newsweek explains, before adding – somewhat incongruously, seen from today: “The growing distrust of the telephone represents the only real protection of privacy.”
Alongside this vast movement of data collection, Americans became more and more sensitive to their right to privacy, the weekly explains. That does not prevent a majority of them today from approving the surveillance of telephone communications, with 62% considering it important that the federal government investigate possible “terrorist” threats, even at the cost of encroaching on privacy, according to a poll published on June 10.
A growing number of smartphones are shipping with NFC, or Near Field
Communication technology. This lets you send information between devices
by tapping them together. For example you can share a photo with a
friend or make a mobile payment from a digital wallet app.
But a team of researchers is showing off a way you can transmit more than just data — you can also transmit power.
For instance, you could pair a low-power E Ink display with your
smartphone and send across pictures and enough power to flip through a
few of those images.
This lets you use the E Ink screen as a secondary, low-power display
for your smartphone. E Ink only uses power when you refresh the screen, so you only need a tiny bit of power to display an image, which can then remain on screen indefinitely without any additional power.
So if you have directions, a map, phone number, or a photo that you
want to be able to look at continuously without running down your
smartphone battery, you can tap the phone against the E Ink screen to
quickly charge the secondary display and then transfer a screenshot.
Then you can slide your phone back in your pocket while the phone
number, address, or other data stays on the screen.
You can’t transmit a lot of energy over an NFC connection
this way, so you’re not exactly going to wirelessly charge your iPod
touch using this kind of setup. But it’s an interesting demo of how NFC,
E Ink, and smartphones can work together.
The demo is courtesy of a team at Intel, the University of Massachusetts, and the University of Washington.
IBM on Thursday announced a new computer programming framework that
draws inspiration from the way the human brain receives data, processes
it, and instructs the body to act upon it while requiring relatively
tiny amounts of energy to do so.
"Dramatically different from traditional software, IBM's new programming
model breaks the mold of sequential operation underlying today's von
Neumann architectures and computers. It is instead tailored for a new
class of distributed, highly interconnected, asynchronous, parallel,
large-scale cognitive computing architectures," IBM said in a statement
introducing recent advances made by its Systems of Neuromorphic Adaptive
Plastic Scalable Electronics (SyNAPSE) project.
IBM and research partners Cornell University and iniLabs have completed
the second phase of the approximately $53 million project. With $12
million in new funding from the Defense Advanced Research Projects
Agency (DARPA), IBM said work is set to commence on Phase 3, which will
involve an ambitious plan to develop intelligent sensor networks built
on a "brain-inspired chip architecture" using a "scalable,
interconnected, configurable network of 'neurosynaptic cores'."
"Architectures and programs are closely intertwined and a new
architecture necessitates a new programming paradigm," Dr. Dharmendra
Modha, principal investigator and senior manager, IBM Research, said in a statement.
"We are working to create a FORTRAN for synaptic computing chips. While
complementing today's computers, this will bring forth a fundamentally
new technological capability in terms of programming and applying
emerging learning systems."
Going forward, work on the project will focus on honing a programming
language for the SyNAPSE chip architecture first shown by IBM in 2011,
with an agenda of using the new framework to deal with "big data"
problems more efficiently.
IBM listed the following tools and systems it has developed with its partners towards this end:
Simulator: A multi-threaded, massively parallel and highly
scalable functional software simulator of a cognitive computing
architecture comprising a network of neurosynaptic cores.
Neuron Model: A simple, digital, highly parameterized spiking
neuron model that forms a fundamental information processing unit of
brain-like computation and supports a wide range of deterministic and
stochastic neural computations, codes, and behaviors. A network of such
neurons can sense, remember, and act upon a variety of spatio-temporal,
multi-modal environmental stimuli.
Programming Model: A high-level description of a "program" that is based on composable, reusable building blocks called "corelets." Each corelet represents a complete blueprint of a network of neurosynaptic cores that specifies a base-level function. The inner workings of a corelet are hidden so that only its external inputs and outputs are exposed to other programmers, who can concentrate on what the corelet does rather than how it does it. Corelets can be combined to produce new corelets that are larger, more complex, or have added functionality (see the sketch after this list).
Library: A cognitive system store containing designs and
implementations of consistent, parameterized, large-scale algorithms and
applications that link massively parallel, multi-modal, spatio-temporal
sensors and actuators together in real-time. In less than a year, the
IBM researchers have designed and stored over 150 corelets in the
program library.
Laboratory: A novel teaching curriculum that spans the
architecture, neuron specification, chip simulator, programming
language, application library and prototype design models. It also
includes an end-to-end software environment that can be used to create
corelets, access the library, experiment with a variety of programs on
the simulator, connect the simulator inputs/outputs to
sensors/actuators, build systems, and visualize/debug the results.
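IBM has not published the corelet language itself in this announcement, but the composition idea described under the Programming Model above can be mirrored in a toy Python sketch. Everything here (the class, the wiring scheme, the example corelets) is invented for illustration:

```python
class Corelet:
    """A black box exposing only named inputs and outputs, hiding its
    internal network of neurosynaptic cores (toy model, not IBM's)."""

    def __init__(self, name, inputs, outputs):
        self.name = name
        self.inputs = inputs      # exposed input pins
        self.outputs = outputs    # exposed output pins

    def compose(self, other, wiring):
        """Wire this corelet's outputs into another's inputs, yielding
        a larger corelet whose internals are again hidden."""
        for src, dst in wiring.items():
            assert src in self.outputs and dst in other.inputs
        return Corelet(self.name + "+" + other.name,
                       self.inputs, other.outputs)

# For example, an edge detector feeding a motion classifier:
edges = Corelet("edge_detect", ["pixels"], ["edges"])
motion = Corelet("motion_classify", ["edges"], ["label"])
pipeline = edges.compose(motion, {"edges": "edges"})
print(pipeline.name, pipeline.inputs, pipeline.outputs)
```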
Lately there’s been a spate of articles about breakthroughs in battery technology. Better batteries are important for any number of reasons: electric cars, smoothing out variations in the power grid, cell phones, and laptops that don’t need to be recharged daily.
All of these nascent technologies are important, but some of them
leave me cold, and in a way that seems important. It’s relatively easy
to invent new technology, but a lot harder to bring it to market. I’m
starting to understand why. The problem isn’t just commercializing a new
technology — it’s everything that surrounds that new technology.
Take an article like Battery Breakthrough Offers 30 Times More Power, Charges 1,000 Times Faster.
For the purposes of argument, let’s assume that the technology works;
I’m not an expert on the chemistry of batteries, so I have no reason to
believe that it doesn’t. But then let’s take a step back and think about
what a battery does. When you discharge a battery, you’re using a
chemical reaction to create electrical current (which is moving
electrical charge). When you charge a battery, you’re reversing that
reaction: you’re essentially taking the current and putting that back in
the battery.
So, if a battery is going to store 30 times as much power and charge
1,000 times faster, that means that the wires that connect to it need to
carry 30,000 times more current. (Let’s ignore questions like “faster
than what?,” but most batteries I’ve seen take between two and eight
hours to charge.) It’s reasonable to assume that a new battery
technology might be able to store electrical charge more efficiently,
but the charging process is already surprisingly efficient: on the order of 50% to 80%, but possibly much higher for a lithium battery.
So improved charging efficiency isn’t going to help much — if charging a
battery is already 50% efficient, making it 100% efficient only
improves things by a factor of two. How big are the wires for an
automobile battery charger? Can you imagine wires big enough to handle
thousands of times as much current? I don’t think Apple is going to make
any thin, sexy laptops if the charging cable is made from 0000 gauge
wire (roughly 1/2 inch thick, capacity of 195 amps at 60 degrees C). And
I certainly don’t think, as the article claims, that I’ll be able to
jump-start my car with the battery in my cell phone — I don’t have any
idea how I’d connect a wire with the current-handling capacity of a
jumper cable to any cell phone I’d be willing to carry, nor do I want a
phone that turns into an incendiary firebrick when it’s charged, even if
I only need to charge it once a year.
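To make the back-of-the-envelope arithmetic explicit, here is a minimal sketch of the reasoning above, assuming charging happens at a fixed voltage so that current scales as stored energy divided by charge time:

```python
# Headline claims, taken at face value
capacity_factor = 30    # "30 times more power" (stored energy)
speedup = 1000          # "charges 1,000 times faster" (1 / charge time)

# At a fixed voltage, current ~ energy / time, so the factors multiply
current_factor = capacity_factor * speedup
print(current_factor)   # 30000: the wires must carry 30,000x the current

# For scale: a phone that charges at 1 A today would need 30,000 A;
# 0000-gauge wire is rated around 195 A at 60 degrees C
print(30_000 / 195)     # ~154 such cables in parallel
```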
Here’s an older article that’s much more in touch with reality: Battery breakthrough could bring electric cars to all.
The claims are much more limited: these new batteries deliver 2.5
times as much energy with roughly the same weight as current batteries.
But more than that, look at the picture. You don’t get a sense of the
scale, but notice that the tabs extending from the batteries (no doubt
the electrical contacts) are relatively large in relation to the
battery’s body, certainly larger in relation to the battery’s size than
the terminal posts on a typical auto battery. And even more, the terminals are flat, which maximizes surface area, helping with both heat dissipation (a big issue at high current) and contact area (to transfer power more efficiently). That’s what I like to see, and that’s what makes me think that this is a breakthrough that, while less dramatic, isn’t being over-hyped by irresponsible reporting.
I’m not saying that the problems presented by ultra-high capacity
batteries aren’t solvable. I’m sure that the researchers are well aware
of the issues. Sadly, I’m not so surprised that the reporters who wrote
about the research didn’t understand the issues, resulting in some
rather naive claims about what the technology could accomplish. I can
imagine that there are ways to distribute current within the batteries
that might solve some of the current carrying issues. (For example, high
terminal voltages with an internal voltage divider network that
distributes current to a huge number of cells). As we used to say in
college, “That’s an engineering problem” — but it’s an engineering
problem that’s certainly not trivial.
This argument isn’t intended to dump cold water on battery research,
nor is it really to complain about the press coverage (though it was
relatively uninformed, to put it politely, about the realities of moving
electrical charge around). There’s a bigger point here: innovation is
hard. It’s not just about the conceptual breakthrough that lets you put
30 times as much charge in one battery, or 64 times as many power-hungry
CPUs on one Google Glass frame. It’s about everything else that
surrounds the breakthrough: supplying power, dissipating heat,
connecting wires, and more. The cost of innovation has plummeted in
recent years, and that will allow innovators to focus more on the hard
problems and less on peripheral issues like design and manufacturing.
But the hard problems remain hard.