Between
1981 and 1982, renowned photographer Ira Nowinski hiked all over the
Bay Area, taking hundreds of photos of arcades. In all, he snapped
around 700 images, and in awesome news for retro gaming fans, many of
them are now available for viewing, courtesy of their acquisition by
Stanford University's library.
Once you're
done looking at the games, and in particular that cruisey arcade that's
nearly all cocktail units, get a load of the fashion. While
arcades still exist today, they sure don't have the same diversity of
clientèle you see here, like Mr. Texas on the Pac-Man cabinet up top.
In 1970, then, Newsweek explained that "during the last twenty years, the
United States has become (…) one of the most spied-upon and
data-conscious countries in world history. Big merchants, small
merchants, the tax administration, law-enforcement institutions,
census bureaus, sociologists, banks, schools, medical centers,
employers, federal agencies, insurance companies (…), all doggedly
seek out, store, and use every scrap of information they can find on
each of the 205 million Americans, as individuals and as groups." In short, "very
soon, a person's entire life and history will be available at the
click of a computer. We will end up in 1984 before we ever reach
that year," an American legal scholar predicted at the time.
The article runs through a series of anecdotes in which data collection
in the name of security collides with the protection of privacy. For example,
the librarian who was visited by employees of the IRS,
the American tax agency, asking her to hand over the
names of "users of militant and subversive material" – books on explosives, or a biography of Che Guevara. Or the army file cataloguing "potential overt disturbers of the peace," "in addition to the 7 million routine files" on citizens' loyalty or criminal records.
The chapter on wiretapping is just as telling: legal wiretaps, "used sparingly
during the Second World War to track spies and
saboteurs, had become such a routine practice of the FBI and the
police by the end of the 1950s that they were reportedly conducted against
every neighborhood bookmaker." Following the outrage of some politicians,
"Congress specified in 1968 that the Justice Department, the FBI, and
the police could conduct electronic surveillance only with
a court order." But the federal government still reserves
the right to carry out clandestine wiretaps, without a court
order, in the interest of "national security," Newsweek explains, before adding, somewhat incongruously from today's vantage point: "Growing distrust of the telephone represents the only real protection of privacy."
Alongside this vast movement of data collection, Americans have become
increasingly sensitive to their right to privacy, the weekly
explains. Which does not stop a majority of them today from approving
the surveillance of telephone communications, with 62% considering it
important that the federal government investigate possible
"terrorist" threats, even at the expense of privacy, according to a poll published on June 10.
A growing number of smartphones are shipping with NFC, or Near Field
Communication technology. This lets you send information between devices
by tapping them together. For example, you can share a photo with a
friend or make a mobile payment from a digital wallet app.
But a team of researchers is showing off a way you can transmit more than just data — you can also transmit power.
For instance, you could pair a low-power E Ink display with your
smartphone and send across pictures and enough power to flip through a
few of those images.
This lets you use the E Ink screen as a secondary, low-power display
for your smartphone. E Ink only draws power when you refresh the screen,
so you need just a tiny bit of energy to paint an image, which can
then stay displayed indefinitely without any additional power.
So if you have directions, a map, phone number, or a photo that you
want to be able to look at continuously without running down your
smartphone battery, you can tap the phone against the E Ink screen to
quickly charge the secondary display and then transfer a screenshot.
Then you can slide your phone back in your pocket while the phone
number, address, or other data stays on the screen.
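The energy budget here can be sketched with rough numbers. The figures below are illustrative assumptions, not measurements from this demo: NFC energy harvesting on the order of tens of milliwatts, and an E Ink refresh costing a few millijoules.

```python
# Back-of-the-envelope budget for powering E Ink refreshes over NFC.
# Both constants are illustrative assumptions, not demo measurements.

NFC_HARVEST_W = 0.02      # assumed harvested power: ~20 mW
EINK_REFRESH_J = 0.005    # assumed energy per full refresh: ~5 mJ

def tap_seconds(refreshes, harvest_w=NFC_HARVEST_W, refresh_j=EINK_REFRESH_J):
    """Seconds the phone must stay tapped to bank enough energy
    for the given number of screen refreshes."""
    return refreshes * refresh_j / harvest_w

# Flipping through a handful of images needs only about a second of contact:
print(tap_seconds(5))   # 5 * 0.005 / 0.02 = 1.25 seconds
```

Under these assumptions, a brief tap is plenty for a few refreshes, which is consistent with the "flip through a few of those images" claim above.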
You can’t transmit a lot of energy over an NFC connection
this way, so you’re not exactly going to wirelessly charge your iPod
touch using this kind of setup. But it’s an interesting demo of how NFC,
E Ink, and smartphones can work together.
The demo is courtesy of a team at Intel, the University of Massachusetts, and the University of Washington.
IBM on Thursday announced a new computer programming framework that
draws inspiration from the way the human brain receives data, processes
it, and instructs the body to act upon it while requiring relatively
tiny amounts of energy to do so.
"Dramatically different from traditional software, IBM's new programming
model breaks the mold of sequential operation underlying today's von
Neumann architectures and computers. It is instead tailored for a new
class of distributed, highly interconnected, asynchronous, parallel,
large-scale cognitive computing architectures," IBM said in a statement
introducing recent advances made by its Systems of Neuromorphic Adaptive
Plastic Scalable Electronics (SyNAPSE) project.
IBM and research partners Cornell University and iniLabs have completed
the second phase of the approximately $53 million project. With $12
million in new funding from the Defense Advanced Research Projects
Agency (DARPA), IBM said work is set to commence on Phase 3, which will
involve an ambitious plan to develop intelligent sensor networks built
on a "brain-inspired chip architecture" using a "scalable,
interconnected, configurable network of 'neurosynaptic cores'."
"Architectures and programs are closely intertwined and a new
architecture necessitates a new programming paradigm," Dr. Dharmendra
Modha, principal investigator and senior manager, IBM Research, said in a statement.
"We are working to create a FORTRAN for synaptic computing chips. While
complementing today's computers, this will bring forth a fundamentally
new technological capability in terms of programming and applying
emerging learning systems."
Going forward, work on the project will focus on honing a programming
language for the SyNAPSE chip architecture first shown by IBM in 2011,
with an agenda of using the new framework to deal with "big data"
problems more efficiently.
IBM listed the following tools and systems it has developed with its partners towards this end:
Simulator: A multi-threaded, massively parallel and highly
scalable functional software simulator of a cognitive computing
architecture comprising a network of neurosynaptic cores.
Neuron Model: A simple, digital, highly parameterized spiking
neuron model that forms a fundamental information processing unit of
brain-like computation and supports a wide range of deterministic and
stochastic neural computations, codes, and behaviors. A network of such
neurons can sense, remember, and act upon a variety of spatio-temporal,
multi-modal environmental stimuli.
Programming Model: A high-level description of a "program"
that is based on composable, reusable building blocks called "corelets."
Each corelet represents a complete blueprint of a network of
neurosynaptic cores that specifies a base-level function. Inner
workings of a corelet are hidden so that only its external inputs and
outputs are exposed to other programmers, who can concentrate on what
the corelet does rather than how it does it. Corelets can be combined to
produce new corelets that are larger, more complex, or have added
functionality.
Library: A cognitive system store containing designs and
implementations of consistent, parameterized, large-scale algorithms and
applications that link massively parallel, multi-modal, spatio-temporal
sensors and actuators together in real-time. In less than a year, the
IBM researchers have designed and stored over 150 corelets in the
program library.
Laboratory: A novel teaching curriculum that spans the
architecture, neuron specification, chip simulator, programming
language, application library and prototype design models. It also
includes an end-to-end software environment that can be used to create
corelets, access the library, experiment with a variety of programs on
the simulator, connect the simulator inputs/outputs to
sensors/actuators, build systems, and visualize/debug the results.
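IBM hasn't published the corelet language itself here, so the following is a hypothetical Python sketch of the composition idea described under "Programming Model": a corelet hides its internal network and exposes only named input and output ports, and larger corelets are built by wiring smaller ones together. The class and port names are invented for illustration.

```python
# Hypothetical sketch of "corelet" composition: internals are hidden,
# and only named input/output ports are exposed to other programmers.

class Corelet:
    def __init__(self, name, inputs, outputs, fn):
        self.name = name
        self.inputs = list(inputs)    # exposed input port names
        self.outputs = list(outputs)  # exposed output port names
        self._fn = fn                 # hidden internal network/behavior

    def run(self, **in_spikes):
        return self._fn(**in_spikes)

    def compose(self, other, wiring, name=None):
        """Build a larger corelet by feeding this corelet's outputs into
        `other`'s inputs, per wiring: {my_output: other_input}."""
        def fn(**in_spikes):
            mid = self.run(**in_spikes)
            fed = {dst: mid[src] for src, dst in wiring.items()}
            return other.run(**fed)
        return Corelet(name or f"{self.name}->{other.name}",
                       self.inputs, other.outputs, fn)

# Two toy corelets: a thresholding "neuron" and an inverter.
spike = Corelet("spike", ["x"], ["s"],
                lambda x: {"s": 1 if x >= 0.5 else 0})
invert = Corelet("invert", ["a"], ["y"],
                 lambda a: {"y": 1 - a})

# Composition yields a new, larger corelet whose internals stay hidden.
net = spike.compose(invert, {"s": "a"})
print(net.run(x=0.9))   # {'y': 0}
print(net.run(x=0.1))   # {'y': 1}
```

The point of the sketch is the encapsulation: a caller of `net` sees only the `x` input and `y` output, never the internal wiring, which matches the "what it does rather than how it does it" description above.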
Lately there’s been a spate of articles about breakthroughs in
battery technology. Better batteries are important, for any of a number
of reasons: electric cars, smoothing out variations in the power grid,
cell phones, and laptops that don’t need to be recharged daily.
All of these nascent technologies are important, but some of them
leave me cold, and in a way that seems important. It’s relatively easy
to invent new technology, but a lot harder to bring it to market. I’m
starting to understand why. The problem isn’t just commercializing a new
technology — it’s everything that surrounds that new technology.
Take an article like Battery Breakthrough Offers 30 Times More Power, Charges 1,000 Times Faster.
For the purposes of argument, let’s assume that the technology works;
I’m not an expert on the chemistry of batteries, so I have no reason to
believe that it doesn’t. But then let’s take a step back and think about
what a battery does. When you discharge a battery, you’re using a
chemical reaction to create electrical current (which is moving
electrical charge). When you charge a battery, you’re reversing that
reaction: you’re essentially taking the current and putting that back in
the battery.
So, if a battery is going to store 30 times as much power and charge
1,000 times faster, that means that the wires that connect to it need to
carry 30,000 times more current. (Let's set aside the question of "faster
than what?"; most batteries I've seen take between two and eight
hours to charge.) It's reasonable to assume that a new battery
technology might be able to store electrical charge more efficiently,
but the charging process is already surprisingly efficient: on the order of 50% to 80%, but possibly much higher for a lithium battery.
So improved charging efficiency isn’t going to help much — if charging a
battery is already 50% efficient, making it 100% efficient only
improves things by a factor of two. How big are the wires for an
automobile battery charger? Can you imagine wires big enough to handle
thousands of times as much current? I don’t think Apple is going to make
any thin, sexy laptops if the charging cable is made from 0000 gauge
wire (roughly 1/2 inch thick, capacity of 195 amps at 60 degrees C). And
I certainly don’t think, as the article claims, that I’ll be able to
jump-start my car with the battery in my cell phone — I don’t have any
idea how I’d connect a wire with the current-handling capacity of a
jumper cable to any cell phone I’d be willing to carry, nor do I want a
phone that turns into an incendiary firebrick when it’s charged, even if
I only need to charge it once a year.
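The scaling argument above is simple arithmetic, sketched here with illustrative numbers (the 195 A ampacity for 0000 gauge wire comes from the paragraph above; the 10 A baseline charger current is an assumption):

```python
# The current-scaling arithmetic from the paragraphs above.
# Charging power P = V * I; at fixed voltage, current scales directly
# with (energy stored) / (charge time).

capacity_factor = 30      # claimed: 30x more stored energy
speed_factor = 1000       # claimed: charges 1,000x faster
current_factor = capacity_factor * speed_factor
print(current_factor)     # 30000

# Against a hypothetical 10 A charger, that naive scaling demands:
baseline_amps = 10
print(baseline_amps * current_factor)   # 300000 A, vs. ~195 A
                                        # ampacity for 0000 gauge wire

# And better charging efficiency can't close the gap: going from 50%
# efficient to a perfect 100% only buys a factor of two.
print(1.00 / 0.50)        # 2.0
```

That three-orders-of-magnitude gap between the required current and what heavy-gauge wire can carry is the crux of the skepticism here.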
Here’s an older article that’s much more in touch with reality: Battery breakthrough could bring electric cars to all.
The claims are much more limited: these new batteries deliver 2.5
times as much energy with roughly the same weight as current batteries.
But more than that, look at the picture. You don’t get a sense of the
scale, but notice that the tabs extending from the batteries (no doubt
the electrical contacts) are relatively large in relation to the
battery’s body, certainly larger in relation to the battery’s size than
the terminal posts on a typical auto battery. What's more, the
terminals are flat, which maximizes surface area, improving both heat
dissipation (a big issue at high current) and electrical contact (to
transfer power more efficiently). That's what I like to see, and that's
what makes me think that this is a breakthrough that, while less
dramatic, isn’t being over-hyped by irresponsible reporting.
I’m not saying that the problems presented by ultra-high capacity
batteries aren’t solvable. I’m sure that the researchers are well aware
of the issues. Sadly, I’m not so surprised that the reporters who wrote
about the research didn’t understand the issues, resulting in some
rather naive claims about what the technology could accomplish. I can
imagine that there are ways to distribute current within the batteries
that might solve some of the current carrying issues. (For example, high
terminal voltages with an internal voltage divider network that
distributes current to a huge number of cells). As we used to say in
college, “That’s an engineering problem” — but it’s an engineering
problem that’s certainly not trivial.
This argument isn’t intended to dump cold water on battery research,
nor is it really to complain about the press coverage (though it was
relatively uninformed, to put it politely, about the realities of moving
electrical charge around). There’s a bigger point here: innovation is
hard. It’s not just about the conceptual breakthrough that lets you put
30 times as much charge in one battery, or 64 times as many power-hungry
CPUs on one Google Glass frame. It’s about everything else that
surrounds the breakthrough: supplying power, dissipating heat,
connecting wires, and more. The cost of innovation has plummeted in
recent years, and that will allow innovators to focus more on the hard
problems and less on peripheral issues like design and manufacturing.
But the hard problems remain hard.
With the wearable device known as MYO,
there’s no need for the computer to see you to understand your
commands. Instead, this armband connects to your device – Mac and
Windows for now, Android and iOS soon – with Bluetooth and reads
gestures you make with your hand and arm through muscle fluctuations.
This armband is already out in the wild – the full “second wave” for the
public comes in early 2014.
While devices like Leap Motion
require that you stay within a pre-determined space that its sensors
can "see", MYO only needs to stay within Bluetooth range. This means
you'll be kicking it at up to 100 meters (330 ft) with standard
Bluetooth, or 50 meters with Bluetooth LE (Low Energy), which works
over the shorter range.
The armband is a “one size fits all” sort of situation, and the final
form hasn’t exactly taken shape as yet. The mock-ups you see here are
about as close as the team has gotten to a real final form, while demo
videos still show a slightly less finessed iteration.
The consumer edition of this device will have a
pre-set selection of gestures easily recognized by the sensors in the
device. As the developers at Thalmic Labs suggest, this is only done to make things simple right out of the box:
“We want you to have the best experience with your MYO!
In order to do this, at the general consumer level we are going to be
providing you with set gestures that are easily recognized. This is not
due to a technical limitation but instead will allow you to have a more
seamless user experience out of the box. You will then be able to decide
what each gesture represents.”
Thalmic Labs is a North America-based company founded only in 2012,
made up of 21 creative minds and headed by Canadian entrepreneur
Stephen Lake. MYO is currently up for pre-order for $149 and, again,
ships in "early 2014."