IBM has been shipping computers for more than 65 years, and it is finally on the verge of creating a true electronic brain.
Big Blue is announcing today that it, along with four universities and the Defense Advanced Research Projects Agency (DARPA), has created the basic design of an experimental computer chip that emulates the way the brain processes information.
IBM’s so-called cognitive computing chips could one day simulate and emulate the brain’s ability to sense, perceive, interact and recognize — all tasks that humans can currently do much better than computers can.
Dharmendra Modha (pictured below right) is the principal investigator of the DARPA project, called Synapse (Systems of Neuromorphic Adaptive Plastic Scalable Electronics, or SyNAPSE). He is also a researcher at the IBM Almaden Research Center in San Jose, Calif.
“This is the seed for a new generation of computers, using a combination of supercomputing, neuroscience, and nanotechnology,” Modha said in an interview with VentureBeat. “The computers we have today are more like calculators. We want to make something like the brain. It is a sharp departure from the past.”
If it eventually leads to commercial brain-like chips, the project could turn computing on its head, overturning the conventional style of computing that has ruled since the dawn of the information age and replacing it with something much more like a thinking artificial brain. The eventual applications could have a huge impact on business, science and government. The idea is to create computers that are better at handling real-world sensory problems than today’s machines. IBM could also build a better Watson, the computer that became the world champion on the game show Jeopardy earlier this year.
We wrote about the project when IBM announced it in November 2008, and again when it hit its first milestone in November 2009. Now the researchers have completed phase one of the project, which was to design a fundamental computing unit that could be replicated over and over to form the building blocks of an actual brain-like computer.
Richard Doherty, an analyst at the Envisioneering Group, has been briefed on the project, and he said there is “nothing even close” to this project’s level of sophistication in cognitive computing.
This new computing unit, or core, is analogous to the brain. It has “neurons,” or digital processors that compute information. It has “synapses” which are the foundation of learning and memory. And it has “axons,” or data pathways that connect the tissue of the computer.
While it sounds simple enough, the computing unit is radically different from the way most computers operate today. Modern computers are based on the von Neumann architecture, named after computing pioneer John von Neumann and his work from the 1940s.
In von Neumann machines, memory and processor are separated and linked via a data pathway known as a bus. Over the past 65 years, von Neumann machines have gotten faster by sending more and more data at higher speeds across the bus, as processor and memory interact. But the speed of a computer is often limited by the capacity of that bus, leading some computer scientists to call it the “von Neumann bottleneck.”
With the human brain, the memory is located with the processor (at least, that’s how it appears, based on our current understanding of what is admittedly a still-mysterious three pounds of meat in our heads).
The brain-like processors with integrated memory don’t operate fast at all, sending signals at a mere 10 hertz, far slower than the 5-gigahertz computer processors of today. But the human brain does an awful lot of work in parallel, sending signals out in all directions and getting its neurons to work simultaneously. With more than 10 billion neurons and 10 trillion connections (synapses) between them, the brain adds up to an enormous amount of computing power.
IBM wants to emulate that architecture with its new chips.
“We are now doing a new architecture,” Modha said. “It departs from von Neumann in a variety of ways.”
The research team has built its first brain-like computing units, each with 256 neurons, an array of 256 by 256 (or a total of 65,536) synapses, and 256 axons. (A second chip had 262,144 synapses.) In other words, each unit has the basic building blocks of processor, memory, and communications. This unit, or core, can be built with just a few million transistors (some of today’s fastest microchips are built with billions of transistors).
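The published description suggests a crossbar-style core: spikes arriving on axons are weighted by a 256-by-256 synapse array and integrated by 256 neurons that fire once a threshold is crossed. The sketch below illustrates that idea with a simple leaky integrate-and-fire model in Python. IBM has not published the chip's actual neuron model or parameters, so every number and name here is illustrative only.

```python
import numpy as np

# Illustrative only: a 256-neuron core with a 256x256 synapse crossbar,
# loosely inspired by the published description of the chip. The real
# hardware's neuron model and learning rules are not public.
N_AXONS, N_NEURONS = 256, 256
rng = np.random.default_rng(0)

connected = rng.random((N_AXONS, N_NEURONS)) < 0.05    # sparse connectivity
weights = rng.normal(0.5, 0.1, (N_AXONS, N_NEURONS))   # synaptic strengths
potential = np.zeros(N_NEURONS)                        # membrane potentials
THRESHOLD, LEAK = 1.0, 0.9

def tick(axon_spikes):
    """Advance the core by one event-driven time step.

    axon_spikes: boolean array of length N_AXONS marking which axons
    delivered a spike this step. Returns the indices of neurons that fired.
    """
    global potential
    if axon_spikes.any():          # nothing to do when the input is quiet
        potential += (weights * connected)[axon_spikes].sum(axis=0)
    fired = potential >= THRESHOLD
    potential[fired] = 0.0         # reset neurons that spiked
    potential *= LEAK              # leak toward rest
    return np.flatnonzero(fired)

# Example: a burst of activity on a handful of axons
spikes = np.zeros(N_AXONS, dtype=bool)
spikes[[3, 17, 42]] = True
print(tick(spikes))
```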
Modha said that this new kind of computing will likely complement, rather than replace, von Neumann machines, which have become good at solving problems involving math, serial processing, and business computations. The disadvantage is that those machines aren’t scaling up to handle big problems well any more. They are using too much power and are harder to program.
The more powerful a computer gets, the more power it consumes, and manufacturing requires extremely precise and expensive technologies. And the more components are crammed together onto a single chip, the more they “leak” power, even in stand-by mode. So they are not so easily turned off to save power.
The advantage of the human brain is that it operates on very low power and it can essentially turn off parts of the brain when they aren’t in use.
These new chips won’t be programmed in the traditional way. Cognitive computers are expected to learn through experiences, find correlations, create hypotheses, remember, and learn from the outcomes. They mimic the brain’s “structural and synaptic plasticity.” The processing is distributed and parallel, not centralized and serial.
With no set programming, the computing cores that the researchers have built can mimic the event-driven brain, which wakes up to perform a task.
Modha said the cognitive chips could get by with far less power consumption than conventional chips.
The so-called “neurosynaptic computing chips” recreate a phenomenon known in the brain as “spiking” between neurons and synapses. The system can handle complex tasks such as playing a game of Pong, the original computer game from Atari, Modha said.
Two prototype chips have already been fabricated and are being tested. Now the researchers are about to embark on phase two, where they will build a computer. The goal is to create a computer that not only analyzes complex information from multiple senses at once, but also dynamically rewires itself as it interacts with the environment, learning from what happens around it.
The chips themselves have no actual biological pieces. They are fabricated from digital silicon circuits that are inspired by neurobiology. The technology uses 45-nanometer silicon-on-insulator complementary metal oxide semiconductors. In other words, it uses a very conventional chip manufacturing process. One of the cores contains 262,144 programmable synapses, while the other contains 65,536 learning synapses.
Besides playing Pong, the IBM team has tested the chip on solving problems related to navigation, machine vision, pattern recognition, associative memory (where you remember one thing that goes with another thing) and classification.
Eventually, IBM will combine the cores into a full integrated system of hardware and software. IBM wants to build a computer with 10 billion neurons and 100 trillion synapses, Modha said. That’s comparable in scale to the human brain. The complete system will consume one kilowatt of power and will occupy less than two liters of volume (the size of our brains), Modha predicts. By comparison, today’s fastest IBM supercomputer, Blue Gene, has 147,456 processors, more than 144 terabytes of memory, occupies a huge, air-conditioned cabinet, and consumes more than 2 megawatts of power.
As a hypothetical application, IBM said that a cognitive computer could monitor the world’s water supply via a network of sensors and tiny motors that constantly record and report data such as temperature, pressure, wave height, acoustics, and ocean tide. It could then issue tsunami warnings in case of an earthquake. Or, a grocer stocking shelves could use an instrumented glove that monitors sights, smells, texture and temperature to flag contaminated produce. Or a computer could absorb data and flag unsafe intersections that are prone to traffic accidents. Those tasks are too hard for traditional computers.
Synapse is funded with a $21 million grant from DARPA, and it involves six IBM labs, four universities (Cornell, the University of Wisconsin, the University of California at Merced, and Columbia) and a number of government researchers.
For phase 2, IBM is working with a team of researchers that includes Columbia University; Cornell University; University of California, Merced; and University of Wisconsin, Madison. While this project is new, IBM has been studying brain-like computing as far back as 1956, when it created the world’s first (512 neuron) brain simulation.
“If this works, this is not just a 5 percent leap,” Modha said. “This is a leap of orders of magnitude forward. We have already overcome huge conceptual roadblocks.”
In the fall of 1977, I experimented with a newfangled PC, a Radio Shack TRS-80. For data storage it used—I kid you not—a cassette tape player. Tape had a long history with computing; I had used the IBM 2420 9-track tape system on IBM 360/370 mainframes to load software and to back up data. Magnetic tape was common for storage in pre-personal computing days, but it had two main annoyances: it held tiny amounts of data, and it was slower than a slug on a cold spring morning. There had to be something better for those of us excited about technology. And there was: the floppy disk.
Welcome to the floppy disk family: 8”, 5.25” and 3.5”
In the mid-70s I had heard about floppy drives, but they were expensive, exotic equipment. I didn't know that IBM had decided as early as 1967 that tape drives, while fine for backups, simply weren't good enough to load software on mainframes. So it was that Alan Shugart assigned David L. Noble to lead the development of “a reliable and inexpensive system for loading microcode into the IBM System/370 mainframes using a process called Initial Control Program Load (ICPL).” From this project came the first 8-inch floppy disk.
Oh yes, before the 5.25-inch drives you remember came the 8-inch floppy. By 1978 I was using them on mainframes; later I would use them on Online Computer Library Center (OCLC) dedicated cataloging PCs.
The 8-inch drive began to show up in 1971. It enabled developers and users to stop using the dreaded paper tape (which was easy to fold, spindle, and mutilate, not to mention pirate) and the loathed IBM 5081 punch card. Everyone who had ever twisted some tape or—the horror!—dropped a deck of Hollerith cards was happy to adopt 8-inch drives.
Before floppy drives, we often had to enter data using punch cards.
Besides, the early single-sided 8-inch floppy could hold the data of up to 3,000 punch cards, or 80K to you and me. I know that's nothing today — this article uses up 66K with the text alone — but back then it was a big deal.
Some early model microcomputers, such as the Xerox 820 and Xerox Alto, used 8-inch drives, but these first generation floppies never broke through to the larger consumer market. That honor would go to the next generation of the floppy: the 5.25 inch model.
By 1972, Shugart had left IBM and founded his own company, Shugart Associates. In 1975, Wang, which at the time dominated the dedicated word-processor market, approached Shugart about creating a computer that would fit on top of a desk. To do that, Wang needed a smaller, cheaper floppy disk.
According to Don Massaro (PDF link), another IBMer who followed Shugart to the new business, Wang’s founder An Wang said, “I want to come out with a much lower-end word processor. It has to be much lower cost and I can't afford to pay you $200 for your 8" floppy; I need a $100 floppy.”
So, Shugart and company started working on it. According to Massaro, “We designed the 5 1/4" floppy drive in terms of the overall design, what it should look like, in a car driving up to Herkimer, New York to visit Mohawk Data Systems.” The design team stopped at a stationery store to buy cardboard while trying to figure out what size the diskette should be. “It's real simple, the reason why it was 5¼,” he said. “5 1/4 was the smallest diskette that you could make that would not fit in your pocket. We didn't want to put it in a pocket because we didn't want it bent, okay?”
Shugart also designed the diskette to be that size because an analysis of the cassette tape drives and their bays in microcomputers showed that a 5.25” drive was as big as you could fit into the PCs of the day.
According to another story from Jimmy Adkisson, a Shugart engineer, “Jim Adkisson and Don Massaro were discussing the proposed drive's size with Wang. The trio just happened to be doing their discussing at a bar. An Wang motioned to a drink napkin and stated 'about that size,' which happened to be 5 1/4-inches wide.”
Wang wasn’t the most important element in the success of the 5.25-inch floppy. George Sollman, another Shugart engineer, took an early model of the 5.25” drive to a Home Brew Computer Club meeting. “The following Wednesday or so, Don came to my office and said, 'There's a bum in the lobby,’” Sollman says. “‘And, in marketing, you're in charge of cleaning up the lobby. Would you get the bum out of the lobby?’ So I went out to the lobby and this guy is sitting there with holes in both knees. He really needed a shower in a bad way but he had the most dark, intense eyes and he said, 'I've got this thing we can build.'”
The bum's name was Steve Jobs and the “thing” was the Apple II.
Apple had also used cassette drives for its first computers. Jobs knew his computers needed a smaller, cheaper, and better portable data storage system. In late 1977, the Apple II was made available with optional 5.25” floppy drives manufactured by Shugart. One drive ordinarily held programs, while the other could be used to store your data. (Otherwise, you had to swap floppies back and forth when you needed to save a file.)
The PC that made floppy disks a success: 1977's Apple II
The floppy disk seems so simple now, but it changed everything. As IBM's history of the floppy disk states, this was a big advance in user-friendliness. “But perhaps the greatest impact of the floppy wasn’t on individuals, but on the nature and structure of the IT industry. Up until the late 1970s, most software applications for tasks such as word processing and accounting were written by the personal computer owners themselves. But thanks to the floppy, companies could write programs, put them on the disks, and sell them through the mail or in stores. 'It made it possible to have a software industry,' says Lee Felsenstein, a pioneer of the PC industry who designed the Osborne 1, the first mass-produced portable computer. Before networks became widely available for PCs, people used floppies to share programs and data with each other—calling it the 'sneakernet.'”
In short, it was the floppy disk that turned microcomputers into personal computers.
Which of these drives did you own?
The success of the Apple II made the 5.25” drive the industry standard. The vast majority of CP/M-80 PCs, from the late 70s to early 80s, used this size floppy drive. When the first IBM PC arrived in 1981, you had your choice of one or two 160 kilobyte (K — yes, kilobytes) floppy drives.
Throughout the early 80s, the floppy drive became the portable storage format. (Tape was quickly relegated to business backups.) At first, floppy disk drives were built with only one read/write head, but a second head was quickly incorporated. This meant that when the IBM XT PC arrived in 1983, double-sided floppies could hold up to 360K of data.
There were some bumps along the road to PC floppy drive compatibility. Some companies, such as DEC with its DEC Rainbow, introduced their own non-compatible 5.25” floppy drives. They were single-sided but with twice the density, and in 1983 a single box of 10 disks cost $45 — twice the price of the standard disks.
In the end, though, market forces kept the various non-compatible disk formats from splitting the PC market into separate blocks. (How the data was stored was another issue, however. Data stored on a CP/M system was unreadable on a PC-DOS drive, for example, so dedicated applications like Media Master promised to convert data from one format to another.)
That left lots of room for innovation within the floppy drive mainstream. In 1984, IBM introduced the IBM Advanced Technology (AT) computer. This model came with a high-density 5.25-inch drive, which could handle disks that held up to 1.2MB of data.
A variety of other floppy drives and disk formats were tried. These included 2.0, 2.5, 2.8, 3.0, 3.25, and 4.0-inch formats. Most quickly died off, but one, the 3.5” size — introduced by Sony in 1980 — proved to be a winner.
The 3.5” disk didn't really take off until 1982. Then the Microfloppy Industry Committee approved a variation of the Sony design, and the “new” 3.5” drive was quickly adopted by Apple for the Macintosh, by Commodore for the Amiga, and by Atari for its Atari ST PC. The mainstream PC market soon followed, and by 1988 the more durable 3.5” disks outsold 5.25” floppy disks. (During the transition, however, most of us configured our PCs to have both a 3.5” drive and a 5.25” drive, in addition to the by-now-ubiquitous hard disks. Still, most of us eventually ran into at least one situation in which we had a file on a 5.25” disk and no floppy drive to read it on.)
The one 3.5” diskette that everyone met at one time or another: An AOL install disk.
The first 3.5” disks could only hold 720K. But they soon became popular because of the more convenient pocket-size format and their somewhat sturdier construction (if you rolled an office chair over one of these, there was a chance the data might survive). Another variation of the drive, using Modified Frequency Modulation (MFM) encoding, pushed 3.5” diskette storage up to 1.44MB in IBM's PS/2 and Apple's Mac IIx computers in the mid-to-late 1980s.
By then, though floppy drives would continue to evolve, other portable technologies began to surpass them.
In 1991, Jobs introduced the extended-density (ED) 3.5” floppy on his NeXT computer line. These could hold up to 2.8MB. But it wasn't enough. A variety of other portable formats that could store more data came along, such as magneto-optical drives and Iomega's Zip drive, and they started pushing floppies out of business.
The real floppy killers, though, were read-writable CDs, DVDs, and, the final nail in the coffin: USB flash drives. Today, a 64GB flash drive can hold more data than every floppy disk I've ever owned all rolled together.
Today, you can still buy new 1.44MB floppy drives and floppy disks, but for the other formats you need to look to eBay or yard sales. If you really want a new 3.5” drive or disks, I'd get them sooner rather than later. Their day is almost done.
But, as they disappear even from memory, we should strive to remember just how vitally important floppy disks were in their day. Without them, our current computer world simply could not exist. Before the Internet was open to the public, it was floppy disks that let us create and trade programs and files. They really were what put the personal in “personal computing.”
Once upon a time, Steve Ballmer blasted Apple for asking its customers to pay $500 for an Apple logo. This was the “Apple Tax”, the price difference between the solid, professional workmanship of a laptop running on Windows and Apple’s needlessly elegant MacBooks.
Following last week’s verdict against Samsung, the kommentariat have raised the specter of an egregious new Apple Tax, one that Apple will levy on other smartphone makers, who will have no choice but to pass the burden on to you. The idea is this: Samsung’s loss means it will now have to compete against Apple with its dominant hand — a lower price tag — tied behind its back. This will allow Apple to exact higher prices for its iPhones (and iPads) and thus inflict even more pain and suffering on consumers.
There seems to be a moral aspect here, as if Apple should be held to a higher standard. Last year, Apple and Nokia settled an IP “misunderstanding” that also resulted in a “Tax”… but it was Nokia that played the T-Man role: Apple paid Nokia more than $600M plus an estimated $11.50 per iPhone sold. Where were the handwringers who now accuse Apple of abusing the patent system when the Nokia settlement took place? Where was the outrage against the “evil”, if hapless, Finnish company? (Amusingly, observers speculate that Nokia has made more money from these IP arrangements than from selling its own Lumia smartphones.)
Even where the moral tone is muted, the significance of the verdict (which you can read in full here) is over-dramatized. For instance, see this August 24th Wall Street Journal story sensationally titled After Verdict, Prepare for the ‘Apple Tax’:
After its stunning victory against rival device-maker Samsung Electronics Co., experts say consumers should expect smartphones, tablets and other mobile devices that license various Apple Inc. design and software innovations to be more expensive to produce.
“There may be a big Apple tax,” said IDC analyst Al Hilwa. “Phones will be more expensive.”
The reason is that rival device makers will likely have to pay to license the various Apple technologies the company sought to protect in court. The jury found that Samsung infringed as many as seven Apple patents, awarding $1.05 billion in damages.
The $1B sum awarded to Apple sounds impressive, but to the giants involved, it doesn’t really change much. Samsung’s annual marketing budget is about $2.75B (it covers washer-dryers and TVs, but it’s mostly smartphones), and, of course, Apple is sitting on a $100B+ cash hoard.
Then there’s the horror over the open-ended nature of the decision: Apple can continue to seek injunctions against products that infringe on its patents. From the NYT article:
…the decision could essentially force [Samsung] and other smartphone makers to redesign their products to be less Apple-like, or risk further legal defeats.
Certainly, injunctions could pose a real threat. They could remove competitors, make Apple more dominant, give it more pricing power to the consumer’s detriment… but none of this is a certainty. Last week’s verdict and any follow-up injunctions are sure to be appealed and appealed again until all avenues are exhausted. The Apple Tax won’t be enforced for several years, if ever.
And even if the “Tax” is assessed, will it have a deleterious impact on device manufacturers and consumers? Last year, about half of all Android handset makers — including ZTE, HTC, and Sharp — were handed a Microsoft Tax bill ($27 per phone in ZTE’s case), one that isn’t impeded by an obstacle course of appeals. Count Samsung in this group: The Korean giant reportedly agreed to pay Microsoft “between $10 and $15” for each Android smartphone or tablet computer it sells. Sell 100M devices and the tax bill owed to Ballmer and Co. exceeds $1B. Despite this onerous surcharge, Android devices thrive, and Samsung has quickly jumped to the lead in the Android handset race, according to Informa Telecoms & Media.
Amusingly, the Samsung verdict prompted this gloating tweet from Microsoft exec Bill Cox:
Windows Phone is looking gooooood right now.
(Or, as AllThingsD interpreted it: Microsoft to Samsung: Mind if I Revel in Your Misfortune for a Moment?)
The subtext is clear: Android handset makers should worry about threats to the platform and seek safe harbor with the “Apple-safe” Windows Phone 8. This will be a “goooood” thing all around: If more handset makers offer Windows Phone devices, there will be more choices, fewer opportunities for Apple to get “unfairly high” prices for its iDevices. The detrimental effects, to consumers, of the “Apple Tax” might not be so bad, after all.
The Samsung trial recalls the interesting peace agreement that Apple and Microsoft forged in 1997, when Microsoft “invested” $150M in Apple as a fig leaf for an IP settlement (see the end of the Quora article). The interesting part of the accord is the provision in which the companies agree that they won’t “clone” each other’s products. If Microsoft could arrange a cross-license agreement with Apple that includes an anti-cloning provision and eventually come up with its own original work (everyone agrees that Microsoft’s Modern UI is elegant and interesting, not just a knock-off), how come Samsung didn’t reach a similar arrangement and produce its own distinctive look and feel?
Microsoft and Apple saw that an armed peace was a better solution than constant IP conflicts. Can Samsung and Apple decide to do something similar and feed engineers rather than platoons of high-priced lawyers (the real winners in these battles)?
It’s a nice thought but I doubt it’ll happen. Gates and Jobs had known one another for a long time; there was animosity, but also familiarity. There is no such comfort between Apple and Samsung execs. There is, instead, a wide cultural divide.
Sheep to warn shepherds of wolf attacks by SMS
August 4, 2012
Testing is under way in Switzerland, where sheep could soon alert shepherds to an imminent wolf attack by text message.
"It's the first time that such a system has been tried outdoors," said biologist Jean-Marc Landry, who took part in testing on a Swiss meadow this week.
In the trial, reported by the country's news agency ATS, around 10 sheep were each equipped with a heart monitor before being targeted by a pair of wolfdogs, both of which were muzzled.
During the experiment, the change in the flock's heartbeat was found to be significant enough to imagine a system whereby the sheep could be fitted with a collar that releases a repellent to drive the wolf away, while also sending an SMS to the shepherd.
The device is aimed at owners of small flocks who lack the funds to keep a sheepdog, Landry said, adding that it could also be used in tourist areas where guard dogs are not popular.
A prototype collar is expected in the autumn and testing is planned in Switzerland and France in 2013. Other countries, including Norway, are said to be interested.
The issue of wolves is a divisive one in Switzerland, where the animals appear to be back after a 100-year absence. On July 27 a wolf killed two sheep in St Gall, the first such attack in the eastern canton.
(c) 2012 AFP
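The report does not describe the collar's internal logic, but the system it imagines is easy to picture in code: watch each sheep's heart rate for a sudden jump above its recent baseline, then release a repellent and text the shepherd. The sketch below is purely illustrative; the threshold, the sampling window, and the send_sms and release_repellent callables are hypothetical stand-ins, not details from the trial.

```python
from statistics import mean

BASELINE_WINDOW = 60   # number of recent samples to average (hypothetical)
ALARM_FACTOR = 1.5     # hypothetical: alarm if the rate jumps 50% above baseline

def check_sheep(readings_bpm, send_sms, release_repellent):
    """Illustrative collar logic: compare the latest heart-rate sample to a
    rolling baseline and raise the alarm on a sharp increase.

    readings_bpm: list of recent heart-rate samples for one sheep.
    send_sms, release_repellent: callables supplied by the collar firmware
    (hypothetical interfaces, not details of the reported prototype).
    """
    if len(readings_bpm) < BASELINE_WINDOW + 1:
        return False                      # not enough history yet
    baseline = mean(readings_bpm[-BASELINE_WINDOW - 1:-1])
    latest = readings_bpm[-1]
    if latest > baseline * ALARM_FACTOR:
        release_repellent()
        send_sms(f"Possible wolf attack: {latest:.0f} bpm vs baseline {baseline:.0f} bpm")
        return True
    return False

# Example with dummy actuators
alerts = []
check_sheep([70] * 60 + [130], alerts.append, lambda: None)
print(alerts)
```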
Facedeals: a new camera that can recognise shoppers from their Facebook pictures as they enter a shop, and then offer them discounts.
A promotional video created for the concept shows drinkers entering a bar, and then being offered cheap drinks as they are recognised.
'Facebook check-ins are a powerful mechanism for businesses to deliver discounts to loyal customers, yet few businesses—and fewer customers—have realized it,' said Nashville-based advertising agency Redpepper. The agency is already trialling the scheme at businesses close to its office.
'A search for businesses with active deals in our area turned up a measly six offers. The odds we’ll ever be at one of those six spots are low (a strip club and photography studio among them), and the incentives for a check-in are not nearly enticing enough for us to take the time. So we set out to evolve the check-in and sweeten the deal, making both irresistible. We call it Facedeals.'
The Facedeals camera can identify faces when people walk in by comparing them against the Facebook pictures of people who have signed up to the service.
Facebook recently hit the headlines when it bought face.com, an Israeli firm that pioneered the use of face recognition technology online. The social networking giant uses the software to recognise people in uploaded pictures, allowing it to accurately spot friends.
The software uses a complex algorithm to find the correct person from their Facebook pictures.
The camera requires people to have authorised the Facedeals app through their Facebook account. This verifies your most recent photo tags and maps the biometric data of your face. The system then learns what a user looks like as more pictures are approved. This data is then used to identify you in the real world.
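The article only outlines how the matching works, but the general pattern (compute face encodings from a user's approved photos, then compare faces seen by the in-store camera against those encodings) can be sketched with the open-source face_recognition library. This is a generic illustration of that pattern, not Redpepper's actual implementation; the file names and the tolerance value are assumptions.

```python
import face_recognition

# Enrollment: build face encodings from photos the user has approved
# (stand-ins for the tagged Facebook photos the app would verify).
approved_photos = ["tagged_photo_1.jpg", "tagged_photo_2.jpg"]   # hypothetical files
known_encodings = []
for path in approved_photos:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:                          # keep the first face found in each photo
        known_encodings.append(encodings[0])

# Recognition: check a frame captured by the camera at the shop entrance.
frame = face_recognition.load_image_file("door_camera_frame.jpg")  # hypothetical file
for encoding in face_recognition.face_encodings(frame):
    matches = face_recognition.compare_faces(known_encodings, encoding, tolerance=0.6)
    if any(matches):
        print("Enrolled customer recognised - offer them the deal")
```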
In a demonstration video, the firm behind the camera showed it being used to offer free drinks to customers if they signed up to the system.
After a lot of theorizing, postulating, and non-human trials, it looks like bionic eye implants are finally hitting the market — first in Europe, and hopefully soon in the US. These implants can restore sight to completely blind patients — though only if the blindness is caused by a faulty retina, as in macular degeneration (which millions of older people suffer from), diabetic retinopathy, or other degenerative eye diseases.
The first of these implants, the Argus II developed by Second Sight, is already available in Europe. For around $115,000, you get a 4-hour operation to install an antenna behind your eye, and a special pair of camera-equipped glasses that send signals to the antenna. The antenna is wired into your retina with around 60 electrodes, creating the equivalent of a 60-pixel display for your brain to interpret. The first users of the Argus II bionic eye report that they can see rough shapes, track the movement of objects, and slowly read large writing.
The second bionic eye implant, the Bio-Retina developed by Nano Retina, is a whole lot more exciting. The Bio-Retina costs less — around the $60,000 mark — and instead of an external camera, the vision-restoring sensor is actually placed inside the eye, on top of the retina. The operation only takes 30 minutes and can be performed under local anesthetic.
Basically, with macular degeneration and diabetic retinopathy, the light-sensitive rods and cones in your retina stop working. The Bio-Retina plops a 24×24-resolution (576-pixel!) sensor right on top of your damaged retina, and 576 electrodes on the back of the sensor implant themselves into the optic nerve. An embedded image processor converts the data from each of the pixels into electrical pulses that are coded in such a way that the brain can perceive different levels of grayscale.
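The article says the implant's image processor turns the data from each of the 576 pixels into electrical pulses coded to convey grayscale levels. Nano Retina has not published that encoding, but a common textbook approach is rate coding, in which brighter pixels produce more pulses per unit time. The sketch below illustrates that idea only; the 24x24 resolution comes from the article, everything else is assumed.

```python
import numpy as np

SENSOR_SHAPE = (24, 24)      # 576 pixels, as described for the Bio-Retina
MAX_PULSES_PER_FRAME = 20    # hypothetical ceiling on the pulse rate

def frame_to_pulse_counts(frame):
    """Map a grayscale frame (values 0-255) to a pulse count per electrode.

    Rate coding: the brighter the pixel, the more pulses that electrode emits
    during the frame. An illustration only, not the device's actual encoding.
    """
    frame = np.asarray(frame, dtype=float).reshape(SENSOR_SHAPE)
    return np.round(frame / 255.0 * MAX_PULSES_PER_FRAME).astype(int)

# Example: a synthetic left-to-right brightness gradient
gradient = np.tile(np.linspace(0, 255, SENSOR_SHAPE[1]), (SENSOR_SHAPE[0], 1))
print(frame_to_pulse_counts(gradient)[0, :6])   # pulse counts for six electrodes
```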
The best bit, though, is how the sensor is powered. The Bio-Retina system comes with a standard pair of corrective lenses that are modified so that they can fire a near-infrared laser beam through your iris to the sensor at the back of your eye. On the sensor there is a photovoltaic cell that produces up to three milliwatts — not a lot, but more than enough. The infrared laser is invisible and harmless. To see the Bio-Retina system in action, watch the demo video embedded below.
Human trials of the Bio-Retina are slated to begin in 2013 — but as with Second Sight's device, US approval could be a long time coming. It's easy enough to hop on a plane and visit one of the European clinics offering bionic eye implants, though. Moving forward, multiple research groups are working on bionic eyes with even more electrodes, and thus higher resolution, but there doesn't seem to be any progress on sensors or encoder chips that can create a color image. A lot of work is being done on understanding how the retina, optic nerve, and brain process and perceive images — so who knows what the future might hold.
Of course, Apple didn’t cut the iPad from whole cloth (which probably would have been linen). It was built upon decades of ideas, tests, products and more ideas. Before we explore the iPad’s story, it’s appropriate to consider the tablets and the pen-driven devices that preceded it.
So Popular So Quickly
Today the iPad is so popular that it’s easy to overlook that it’s only three years old. Apple has updated it just twice. Here’s a little perspective to reinforce the iPad’s tender age:
When President Barack Obama was inaugurated as America’s 44th president, there was no iPad.
In 2004, when the Boston Red Sox broke the Curse of the Bambino and won the World Series for the first time in 86 years, there was no iPad. Nor did it exist three years later, when they won the championship again.
Elisha Gray was an electrical engineer and inventor who lived in Ohio and Massachusetts between 1835 and 1901. Elisha was a wonderful little geek, and became interested in electricity while studying at Oberlin College. He collected nearly 70 patents in his lifetime, including that of the Telautograph [PDF].
The Telautograph let a person use a stylus that was connected to two rheostats, which managed the current produced by the amount of resistance generated as the operator wrote with the stylus. That electronic record was transmitted to a second Telautograph, reproducing the author’s writing on a scroll of paper. Mostly. Gray noted that, since the scroll of paper was moving, certain letters were difficult or impossible to produce. For example, you couldn’t “…dot an i or cross a t or underscore or erase a word.” Users had to get creative.
Still, the thing was a hit, and was used in hospitals, clinics, insurance firms, hotels (as communication between the front desk and housekeeping), banks and train dispatching. Even the US Air Force used the Telautograph to disseminate weather reports.
It’s true that the Telautograph is more akin to a fax machine than a contemporary tablet, yet it was the first electronic writing device to receive a patent, which was awarded in 1888.
Of course, ol’ Elisha is better known for arriving at the US patent office on Valentine’s Day, 1876, with what he described as an apparatus “for transmitting vocal sounds telegraphically” just two hours after Mr. Alexander Graham Bell showed up with a description of a device that accomplished the same feat. After years of litigation, Bell was legally declared the inventor of what we now call the telephone, even though the device described in his original patent application wouldn’t have worked (Gray’s would have). So Gray and Bell have an Edison/Tesla thing going on.
Back to tablets.
Research Continues
Research continued after the turn of the century. The US Patent Office awarded a patent to Mr. Hyman Eli Goldberg of Chicago in 1918 for his invention of the Controller. The patent described “a moveable element, a transmitting sheet, a character on said sheet formed of conductive ink and electrically controlled operating mechanism for said moveable element.” It’s considered the first patent awarded for a handwriting recognition user interface with a stylus.
Photo credit: Computer History Museum
Jumping ahead a bit, we find the Styalator (early 1950s) and the RAND tablet (1964). Both used a pen and a tablet-like surface for input. The RAND (above) is better known and cost an incredible $18,000. Remember, that’s 18 grand in 1960s money. Neither bears much resemblance to contemporary tablet computers: each consisted of a tablet surface and an electronic pen, and their massive bulk and price tags made them a feasible purchase for few.
Alan Kay and the Dynabook
In 1968, things got real. Almost. Computer scientist Alan Kay [1] described his concept for a computer meant for children. His “Dynabook” would be small, thin, lightweight and shaped like a tablet.
In a paper entitled “A Personal Computer For Children Of All Ages,” [PDF] Kay described his vision for the Dynabook:
“The size should be no larger than a notebook; weigh less than 4 lbs.; the visual display should be able to present 4,000 printing quality characters with contrast ratios approaching that of a book; dynamic graphics of reasonable quality should be possible; there should be removable local file storage of at least one million characters (about 500 ordinary book pages) traded off against several hours audio (voice/music) files.”
In the video below, Kay explains his thoughts on the original prototype:
That’s truly amazing vision. Alas, the Dynabook as Kay envisioned it was never produced.
Apple’s First Tablet
The first commercial tablet product from Apple appeared in 1979. The Apple Graphics Tablet was meant to complement the Apple II and use the “Utopia Graphics System” developed by musician Todd Rundgren. [2] That’s right, Todd Rundgren. The FCC soon found that it caused radio frequency interference, unfortunately, and forced Apple to discontinue production.
A revised version was released in the early 1980s, which Apple described like this:
“The Apple Graphics Tablet turns your Apple II system into an artist’s canvas. The tablet offers an exciting medium with easy to use tools and techniques for creating and displaying pictured/pixelated information. When used with the Utopia Graphics Tablet System, the number of creative alternatives available to you multiplies before your eyes.
The Utopia Graphics Tablet System includes a wide array of brush types for producing original shapes and functions, and provides 94 color options that can generate 40 unique brush shades. The Utopia Graphics Tablet provides a very easy way to create intricate designs, brilliant colors, and animated graphics.”
The GRiDpad
This early touchscreen device cost $2,370 in 1989 and reportedly inspired Jeff Hawkins to create the first Palm Pilot. Samsung manufactured the GRiDpad PenMaster, which weighed under 5 lbs., measured 11.5" x 9.3" x 1.48", and ran on a 386SL 20MHz processor with an 80387SX coprocessor. It had 20 MB of RAM, and the internal hard drive was available at 40 MB, 60 MB, 80 MB or 120 MB. DigiBarn has a nice GRiDpad gallery.
The Newton Message Pad
With Steve Jobs out of the picture, Apple launched its second pen-computing product, the Newton Message Pad. Released in 1993, the Message Pad was saddled with iffy handwriting recognition and poor marketing efforts. Plus, the size was odd; too big to fit comfortably in a pocket yet small enough to suggest that’s where it ought to go.
The Newton platform evolved and improved in the following years, but was axed in 1998 (I still use one, but I’m a crazy nerd).
Knight-Ridder and the Tablet Newspaper
This one is compelling. Back in 1994, media and Internet publishing company Knight-Ridder [3] produced a video demonstrating its faith in the digital newspaper. Its predictions are eerily accurate, except for this bold statement:
“Many of the technologists…assume that information is just a commodity and people really don’t care where that information comes from as long as it matches their set of personal interests. I disagree with that view. People recognize the newspapers they subscribe to…and there is a loyalty attached to those.”
Knight-Ridder got a lot right, but I’m afraid the bold statement quoted above missed the mark. Just ask any contemporary newspaper publisher.
The Late Pre-iPad Tablet Market
Many other devices appeared at this time, but what I call “The Late Pre-iPad Tablet Market” kicked off when Bill Gates introduced the Compaq tablet PC in 2001. That year, Gates made a bold prediction at COMDEX:
“‘The PC took computing out of the back office and into everyone’s office,’ said Gates. ‘The Tablet takes cutting-edge PC technology and makes it available wherever you want it, which is why I’m already using a Tablet as my everyday computer. It’s a PC that is virtually without limits – and within five years I predict it will be the most popular form of PC sold in America.’”
None of these devices, including those I didn’t mention, saw the success of the iPad. That must be due in large part to iOS. While hardware design was changing dramatically — flat, touch screen, lightweight, portable — the operating system was stagnant and inappropriate. When Gates released the Compaq tablet in 2001, it was running Windows XP. That system was built for a desktop computer and it simply didn’t work on a touch-based tablet.
Meanwhile, others dreamed of what could be, unhindered by the limitations of hardware and software. Or reality.
Tablets in Pop Culture
The most famous fictional tablet device must be Star Trek’s Personal Access Display Device, or “PADD.” The first PADDs appeared as large, wedge-shaped clipboards in the original Star Trek series and seemed to operate with a stylus exclusively. Kirk and other officers were always signing them with a stylus, as if the yeomen were interstellar UPS drivers and Kirk was receiving a lot of packages. [4]
As new Trek shows were developed, new PADD models appeared. The devices went multi-touch in The Next Generation, adopting the LCARS interface. A stylus was still used from time to time, though there was less signing. And signing. Aaand signing.
In Stanley Kubrick’s 2001: A Space Odyssey, David Bowman and Frank Poole use flat, tablet-like devices to send and receive news from Earth. In his novel, Arthur C. Clarke described the “Newspad” like this:
“When he tired of official reports and memoranda and minutes, he would plug his foolscap-sized Newspad into the ship’s information circuit and scan the latest reports from Earth. One by one he would conjure up the world’s major electronic papers; he knew the codes of the more important ones by heart, and had no need to consult the list on the back of his pad. Switching to the display unit’s short-term memory, he would hold the front page while he quickly searched the headlines and noted the items that interested him.
Each had its own two-digit reference; when he punched that, the postage-stamp-sized rectangle would expand until it neatly filled the screen and he could read it with comfort. When he had finished, he would flash back to the complete page and select a new subject for detailed examination.
Floyd sometimes wondered if the Newspad, and the fantastic technology behind it, was the last word in man’s quest for perfect communications. Here he was, far out in space, speeding away from Earth at thousands of miles an hour, yet in a few milliseconds he could see the headlines of any newspaper he pleased. (That very word ‘newspaper,’ of course, was an anachronistic hangover into the age of electronics.) The text was updated automatically on every hour; even if one read only the English versions, one could spend an entire lifetime doing nothing but absorbing the ever-changing flow of information from the news satellites.
It was hard to imagine how the system could be improved or made more convenient. But sooner or later, Floyd guessed, it would pass away, to be replaced by something as unimaginable as the Newspad itself would have been to Caxton or Gutenberg.”
The iPad was released in 2010, so Clarke missed reality by only nine years. Not bad for a book published in 1968.
Next Time: Apple Rumors Begin
In the next article in this series, I’ll pick things up in the early 2000s, when rumors of an Apple-branded tablet gained momentum. For now, I’ll leave you with this quote from an adamant Steve Jobs, taken from an AllThingsD conference in 2003:
“Walt Mossberg: A lot of people think given the success you’ve had with portable devices, you should be making a tablet or a PDA.
Steve Jobs: There are no plans to make a tablet. It turns out people want keyboards. When Apple first started out, people couldn’t type. We realized: Death would eventually take care of this. We look at the tablet and we think it’s going to fail. Tablets appeal to rich guys with plenty of other PCs and devices already. I get a lot of pressure to do a PDA. What people really seem to want to do with these is get the data out. We believe cell phones are going to carry this information. We didn’t think we’d do well in the cell phone business. What we’ve done instead is we’ve written what we think is some of the best software in the world to start syncing information between devices. We believe that mode is what cell phones need to get to. We chose to do the iPod instead of a PDA.”
We’ll pick it up from there next time. Until then, go and grab your iPad and give a quiet thanks to Elisha Gray, Hyman Eli Goldberg, Alan Kay, the Newton team, Charles Landon Knight and Herman Ridder, Bill Gates and yes, Stanley Kubrick, Arthur C. Clarke and Gene Roddenberry. Without them and many others, you might not be holding that wonderful little device.
1. More recently known as a co-developer of the One Laptop Per Child machine. The computer itself was inspired, in part, by Kay’s work on the Dynabook.
2. I’m really sorry for all the Flash on Todd’s site. It’s awful.
We present the first large-scale analysis of hardware failure rates on a million consumer PCs. We find that many failures are neither transient nor independent. Instead, a large portion of hardware-induced failures are recurrent: a machine that crashes from a fault in hardware is up to two orders of magnitude more likely to crash a second time. For example, machines with at least 30 days of accumulated CPU time over an 8-month period had a 1 in 190 chance of crashing due to a CPU subsystem fault. Further, machines that crashed once had a probability of 1 in 3.3 of crashing a second time. Our study examines failures due to faults within the CPU, DRAM and disk subsystems. Our analysis spans desktops and laptops, CPU vendor, overclocking, underclocking, generic vs. brand name, and characteristics such as machine speed and calendar age. Among our many results, we find that CPU fault rates are correlated with the number of cycles executed, underclocked machines are significantly more reliable than machines running at their rated speed, and laptops are more reliable than desktops.
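The headline claim, that a machine which has already crashed from a hardware fault is far more likely to crash again, follows from the two figures quoted: a baseline rate of 1 in 190 versus a recurrence rate of 1 in 3.3 for CPU subsystem faults. Treating those as a simple ratio (my reading of the numbers, not a calculation the abstract performs) gives roughly a 58-fold increase, approaching the two orders of magnitude cited for the worst cases:

```python
baseline = 1 / 190     # chance of a first CPU-subsystem crash (from the abstract)
recurrence = 1 / 3.3   # chance of a second crash, given a first one

print(f"relative increase: {recurrence / baseline:.0f}x")  # prints "relative increase: 58x"
```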
We've been writing a lot recently about how the private space industry is poised to make space cheaper and more accessible. But in general, this is for outfits such as NASA, not people like you and me.
Today, a company called NanoSatisfi is launching a Kickstarter project to send an Arduino-powered satellite into space, and you can send an experiment along with it.
Whether it's private industry or NASA or the ESA or anyone else, sending stuff into space is expensive. It also tends to take approximately forever to go from having an idea to getting funding to designing the hardware to building it to actually launching something. NanoSatisfi, a tech startup based out of NASA's Ames Research Center here in Silicon Valley, is trying to change all of that (all of it) by designing a satellite made almost entirely of off-the-shelf (or slightly modified) hobby-grade hardware, launching it quickly, and then using Kickstarter to give you a way to get directly involved.
ArduSat is based on the CubeSat platform, a standardized satellite framework that measures about four inches on a side and weighs under three pounds. It's just about as small and cheap as you can get when it comes to launching something into orbit, and while it seems like a very small package, NanoSatisfi is going to cram as much science into that little cube as it possibly can.
Here's the plan: ArduSat, as its name implies, will run on Arduino boards, which are open-source microcontrollers that have become wildly popular with hobbyists. They're inexpensive, reliable, and packed with features. ArduSat will be packing between five and ten individual Arduino boards, but more on that later. Along with the boards, there will be sensors. Lots of sensors, probably 25 (or more), all compatible with the Arduinos and all very tiny and inexpensive.
That's a lot of potential for science, but the entire Arduino sensor suite is only going to cost about $1,500. The rest of the satellite (the power system, control system, communications system, solar panels, antennae, etc.) will run about $50,000, with the launch itself costing about $35,000. This is where you come in.
NanoSatisfi is looking for Kickstarter funding to pay for just the launch of the satellite itself: the funding goal is $35,000. Thanks to some outside investment, it's able to cover the rest of the cost itself. And in return for your help, NanoSatisfi is offering you a chance to use ArduSat for your own experiments in space, which has to be one of the coolest Kickstarter rewards ever.
For a $150 pledge, you can reserve 15 imaging slots on ArduSat. You'll be able to go to a website, see the path that the satellite will be taking over the ground, and then select the targets you want to image. Those commands will be uploaded to the ArduSat, and when it's in the right spot in its orbit, it'll point its camera down at Earth and take a picture, which will then be emailed right to you. From space.
For $300, you can upload your own personal message to ArduSat, where it will be broadcast back to Earth from space for an entire day. ArduSat is in a polar orbit, so over the course of that day, it'll circle the Earth seven times and your message will be broadcast over the entire globe.
For $500, you can take advantage of the whole point of ArduSat and run your very own experiment for an entire week on a selection of ArduSat's sensors. You know, in space. Just to be clear, it's not like you're just having your experiment run on data that's coming back to Earth from the satellite. Rather, your experiment is uploaded to the satellite itself, and it actually runs on one of the Arduino boards on ArduSat in real time, which is why there are so many identical boards packed in there.
Now, NanoSatisfi itself doesn't really expect to get involved with a lot of the actual experiments that the ArduSat does: rather, it's saying "here's this hardware platform we've got up in space, it's got all these sensors, go do cool stuff." And if the stuff that you can do with the existing sensor package isn't cool enough for you, backers of the project will be able to suggest new sensors and new configurations, even for the very first-generation ArduSat.
To make sure you don't brick the satellite with buggy code, NanoSatisfi will have a duplicate satellite in a space-like environment here on Earth that it'll use to test out your experiment first. If everything checks out, your code gets uploaded to the satellite, runs in whatever timeslot you've picked, and then the results get sent back to you after your experiment is completed. Basically, you're renting time and hardware on this satellite up in space, and you can do (almost) whatever you want with that.
ArduSat has a lifetime of anywhere from six months to two years. None of this payload hardware (neither the sensors nor the Arduinos) is specifically space-rated or radiation-hardened or anything like that, and some of it will be exposed directly to space. There will be some backups and redundancy, but partly, this will be a learning experience to see what works and what doesn't. The next generation of ArduSat will take all of this knowledge and put it to good use making a more capable and more reliable satellite.
This, really, is part of the appeal of ArduSat: with a fast, efficient, and (relatively) inexpensive crowd-sourced model, there's a huge potential for improvement and growth. For example, if this Kickstarter goes bananas and NanoSatisfi runs out of room for people to get involved on ArduSat, no problem: it can just build and launch another ArduSat along with the first, jammed full of (say) fifty more Arduinos so that fifty more experiments can be run at the same time. Or it can launch five more ArduSats. Or ten more. From the decision to start developing a new ArduSat to the actual launch of that ArduSat is a period of just a few months. If enough of them get up there at the same time, there's potential for networking multiple ArduSats together up in space and even creating a cheap and accessible global satellite array. If this sounds like a lot of space junk in the making, don't worry: the ArduSats are set up in orbits that degrade after a year or two, at which point they'll harmlessly burn up in the atmosphere. And you can totally rent the time slot corresponding with this occurrence and measure exactly what happens to the poor little satellite as it fries itself to a crisp.
Longer term, there's also potential for making larger ArduSats with more complex and specialized instrumentation. Take ArduSat's camera: being a little tiny satellite, it only has a little tiny camera, meaning that you won't get much more detail than a few kilometers per pixel. In the future, though, NanoSatisfi hopes to boost that to 50 meters (or better) per pixel using a double- or triple-sized satellite that it'll call OptiSat. OptiSat will just have a giant camera or two, and in addition to taking high-resolution pictures of Earth, it'll also be able to be turned around to take pictures of other stuff out in space. It's not going to be the next Hubble, but remember, it'll be under your control.
NanoSatisfi's Peter Platzer holds a prototype ArduSat board, including the master controller, sensor suite, and camera. Photo: Evan Ackerman/DVICE
Assuming the Kickstarter campaign goes well, NanoSatisfi hopes to complete construction and integration of ArduSat by about the end of the year, and launch it during the first half of 2013. If you don't manage to get in on the Kickstarter, don't worry: NanoSatisfi hopes that there will be many more ArduSats with many more opportunities for people to participate in the idea. Having said that, you should totally get involved right now: there's no cheaper or better way to start doing a little bit of space exploration of your very own.
Check out the ArduSat Kickstarter video below, and head on through the link to reserve your spot on the satellite.
If you follow the world of Android tablets and phones, you may have heard a lot about Tegra 3 over the last year. Nvidia's chip currently powers many of the top Android tablets, and should be found in a few Android smartphones by the end of the year. It may even form the foundation of several upcoming Windows 8 tablets and possibly future phones running Windows Phone 8. So what is the Tegra 3 chip, and why should you care whether or not your phone or tablet is powered by one?
Nvidia's system-on-chip
Tegra is the brand for Nvidia's line of system-on-chip (SoC) products for phones, tablets, media players, automobiles, and so on. What's a system-on-chip? Essentially, it's a single chip that combines all the major functions needed for a complete computing system: CPU cores, graphics, media encoding and decoding, input-output, and even cellular or Wi-Fi communications and radios. The Tegra series competes with chips like Qualcomm's Snapdragon, Texas Instruments' OMAP, and Samsung's Exynos.
The first Tegra chip was a flop. It was used in very few products, notably the ill-fated Zune HD and Kin smartphones from Microsoft. Tegra 2, an improved dual-core processor, was far more successful but still never featured in enough devices to become a runaway hit.
Tegra 3 has been quite the success so far. It is found in a number of popular Android tablets like the Eee Pad Transformer Prime, and is starting to find its way into high-end phones like the global version of the HTC One X (the North American version uses a dual-core Snapdragon S4 instead, as Tegra 3 had not been qualified to work with LTE modems yet). Expect to see it in more Android phones and tablets internationally this fall.
4 + 1 cores
Tegra 3 is based on the ARM processor design and architecture, as are most phone and tablet chips today. There are many competing ARM-based SoCs, but Tegra 3 was one of the first to include four processor cores. There are now other quad-core SoCs from Texas Instruments and Samsung, but Nvidia's has a unique defining feature: a fifth low-power core.
All five of the processor cores are based on the ARM Cortex-A9 design, but the fifth core is made using a special low-power process that sips battery at low speeds but doesn't scale up to high speeds very well. It is limited to only 500MHz, while the other cores run up to 1.4GHz (or 1.5GHz in single-core mode).
When your phone or tablet is in sleep mode, or you're just performing very simple operations or using very basic apps, like the music player, Tegra 3 shuts down its four high-power cores and uses only the low-power core. It's hard to say if this makes it far more efficient than other ARM SoCs, but battery life on some Tegra 3 tablets has been quite good.
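Nvidia has not published the exact switching policy, but the idea is easy to model: below some demand threshold, park the four fast cores and run everything on the 500MHz companion core; above it, wake the fast cores. The toy model below uses the clock speeds quoted above, while the load threshold and the core-scaling rule are assumptions, not Nvidia's algorithm.

```python
COMPANION_MAX_MHZ = 500   # the low-power core's ceiling (from the article)
MAIN_MAX_MHZ = 1400       # the four fast cores' ceiling
LOAD_THRESHOLD = 0.25     # hypothetical cutoff for "light" workloads

def choose_cores(load):
    """Toy model of 4+1 core selection: light loads stay on the single
    low-power companion core, heavier loads wake the four fast cores.

    load: overall demand from 0.0 (idle) to 1.0 (flat out).
    Returns (which cores, how many, clock in MHz).
    """
    if load < LOAD_THRESHOLD:
        return "companion core", 1, COMPANION_MAX_MHZ
    active = min(4, 1 + int(load * 4))   # scale the fast cores with demand
    return "fast cores", active, MAIN_MAX_MHZ

print(choose_cores(0.1))   # music playback, idle screen: companion core only
print(choose_cores(0.9))   # gaming or heavy multitasking: all four fast cores
```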
Tegra 3 under a microscope. You can see the five CPU cores in the center.
Good, not great, graphics
Nvidia's heritage is in graphics processors. The company's claim to fame has been its GPUs for traditional laptops, desktops, and servers. You might expect Tegra 3 to have the best graphics processing power of any tablet or phone chip, but that doesn't appear to be the case. Direct graphics comparisons can be difficult, but there's a good case to be made that the A5X processor in the new iPad has a far more powerful graphics processor. Still, Tegra 3 has plenty of graphics power, and Nvidia works closely with game developers to help them optimize their software for the platform. Tegra 3 supports high-res display output (up to 2560 x 1600) and improved video decoding capabilities compared to earlier Tegra chips.
Do you need one?
The million-dollar question is: Does the Tegra 3 chip provide a truly better experience than other SoCs? Do you need four cores, or even "4 + 1"? The answer is no. Most smartphone and tablet apps don't make great use of multiple CPU cores, and making each core faster can often do more for the user experience than adding more cores. That said, you shouldn't avoid a product because it has a Tegra 3 chip, either. Its performance and battery life appear to be quite competitive in today's tablet and phone market. Increasingly, the overall quality of a product is determined by its design, size, weight, display quality, camera quality, and other features more than mere processor performance.
Consider PCWorld's review of the North American HTC One X; with the dual-core Snapdragon S4 instead of Tegra 3, performance was still very impressive.