Many of us have been waiting for the moment when 3D printers
would not only be offered ready-to-use without the need for DIY
assembly, but at a price comparable to a common computer. Well, get
excited, because that day has arrived.
Created by 3D Systems, the Cube
will retail for just $1,299 and is connected to a community of 3D
designers where you can find inspiration, or upload your own designs and
sell them in the Cubify marketplace. Admittedly, the MakerBot Replicator
is only a tad more expensive at $1,749, but just like the early
versions of the home Windows PC versus the Mac, the Cube wins on style
points for those who prefer a less industrial look and feel to their 3D
printer.
You can order the Cube 3D printer here and check out the design to fabrication process in the video below.
IBM researchers reverse engineered a macaque brain as a start to engineering one of their own
The gnomes at IBM’s research labs were not content to make merely a genius computer
that could beat any human at the game of Jeopardy! They had to go and
create a new kind of machine intelligence that mimics the actual human
brain.
Watson, the reigning Jeopardy! champ, is smart, but it’s still
recognizably a computer. This new stuff is something completely
different. IBM is setting out to build an electronic brain from the
ground up.
Cognitive computing, as the new field is called, takes computing
concepts to a whole new level. Earlier this week, Dharmendra Modha, who
works at IBM’s Almaden Research Center, regaled a roomful of analysts
with what cognitive computing can do and how IBM is going about making a
machine that thinks the way we do. His own blog on the subject is here.
First Modha described the challenges, which involve aspects of neuroscience, supercomputing, and nanotechnology.
The human brain integrates memory and processing together, weighs
less than 3 lbs, occupies about a two-liter volume, and uses less power
than a light bulb. It operates as a massively parallel distributed
processor. It is event driven, that is, it reacts to things in its
environment, uses little power when active and even less while resting.
It is a reconfigurable, fault-tolerant learning system. It is
excellent at pattern recognition and teasing out relationships.
A computer, on the other hand, has separate memory and processing.
It does its work sequentially for the most part and is run by a clock.
The clock, like a drum majorette in a military band, drives every
instruction and piece of data to its next location — musical chairs with
enough chairs. As clock rates increase to drive data faster, power
consumption goes up dramatically, and even at rest these machines need a
lot of electricity. More importantly, computers have to be
programmed. They are hard wired and fault prone. They are good at
executing defined algorithms and performing analytics.
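The clocked-versus-event-driven contrast can be sketched in a few lines of Python. This is purely illustrative (the function names are invented, and it is not IBM's design): a clocked machine pays for every tick, while an event-driven one pays only per event.

```python
# Illustrative sketch only -- invented names, not IBM's design. A clocked
# machine does a step of work on every tick, busy or idle; an event-driven
# machine wakes and computes only when something actually happens.

def clocked_run(ticks):
    """Clock-driven: the clock forces a step on every tick, idle or not."""
    steps = 0
    for _ in range(ticks):
        steps += 1            # work (and power) spent every tick regardless
    return steps

def event_driven_run(events):
    """Event-driven: wake and compute only when a spike event arrives."""
    steps = 0
    for _ in sorted(events):  # process events in time order
        steps += 1            # work spent only per event
    return steps

# Three spikes spread over 1,000 ticks: the clocked model does 1,000
# steps of work; the event-driven model does only 3.
print(clocked_run(1000))                  # 1000
print(event_driven_run([10, 400, 900]))   # 3
```

The gap between those two numbers is, roughly, why an event-driven architecture can approach brain-like power budgets while a clocked one cannot.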
With $41 million in funding from the Defense Advanced Research
Projects Agency (DARPA), the scientists at the Almaden lab set out to
make a brain in a project called Systems of Neuromorphic Adaptive
Plastic Scalable Electronics (SyNAPSE).
The rough analogy between a brain and a computer posits roles for
cell types — neurons, axons, and synapses — that correspond to machine
elements — processors, communications links, and memory. The matches
are not exact, as brain cells’ functions are less distinct from each
other than the computer elements. But the key is that the brain
elements all reside near each other, and activity in any given complex
is stimulated by activity from adjacent complexes. That is, thoughts
stimulate other thoughts.
Modha and his team set out to map and synthesize a wiring diagram for
the brain, no trivial task, as the brain has 22 billion neurons and 220
trillion synapses. In May 2009, the team managed to simulate a system
with 1 billion neurons, roughly the brain of a lower mammal. Except
that it operates at one-thousandth of real time, not enough to perform
what Modha called “the four Fs”: food, fight, flight, and mating.
But the structure of this machine is entirely different from today’s
commercial computers. The memory and processing elements are built
close together. It has no clock. Operations are asynchronous and event
driven; that is, they have no predetermined order or schedule. And
instead of being programmed, they learn. Just like us.
Part of getting the power down to brain-like levels is not storing
temporary results (caching, in industry jargon). Sensing stimulates
action, which is sensed and acted upon further. And so on.
The team recently built a smaller hardware version of the brain
simulation, one with just 256 neurons, 262,000 programmable synapses,
and 65,000 learning synapses. The good news is that this machine runs
at within an order of magnitude of the power that a real brain
consumes. With its primitive capabilities, this brainlette is capable
of spatial navigation, machine vision, pattern recognition, and
associative memory and can do evidence-based hypothesis generation. It
has a “mind’s eye” that can see a pattern, for example, a badly written
number, and generate a good guess as to what the actual number is.
Already better than our Precambrian ancestors.
Modha pointed out that this type of reasoning is a lot like that of a
typical right hemisphere in the brain: intuitive, parallel, synthetic.
Not content with half a brain, Modha envisions adding a typical von
Neumann-type computer, which acts more like a reasoning left hemisphere,
to the mix, and having the two share information, just like a real
brain.
When this brain is ready to go to market, I’m going to send my own on holiday and let Modha’s do my thinking for me.
Oh, and, by the way, in case you were wondering whether the SyNAPSE
project has caused Watson to be put out to pasture, nothing could be
further from the truth. Watson is alive and well and moving on to new,
more practical applications.
For example, since Jeopardy! contestants can’t “call a friend,” Watson
was constrained to the data that could be loaded directly into the
machine (no Internet searches), but in the latest application of Watson
technology — medical diagnoses — the Internet is easily added to the
corpus within the machine, allowing Watson to search a much wider range
of unstructured data before rendering an answer.
Watson had to hit the buzzer faster than the human contestants, but the
doctors seeking advice on a strange set of symptoms can easily wait a
half hour or longer. So, Watson can make more considered choices.
Watson at work is a serious tool.
All this genius is causing my brain to explode.
Disclosure: Endpoint has a consulting relationship with IBM.
Infamous for failing to commercialize the technologies it invented,
Xerox's R&D subsidiary has a new strategy for innovation: make
money.
Cheap trick: In a prototype, logic circuits and computer memory are printed together on a sheet of plastic.
PARC
Last month, a small Norwegian company called Thinfilm Electronics and PARC, the storied Silicon Valley research lab, jointly showed off a technological first—a plastic film that combined both printed transistors and printed digital memory.
Such flexible electronics could be an important component of future
products, such as food packaging that senses and records temperatures,
shock-sensing helmets, and smart toys. But the story of how
PARC's technology—the printed transistors—wound up paired with memory
technology from an obscure Norwegian company also provides a window onto
a 10-year struggle by Xerox to transform the way it commercializes
R&D ideas.
For most of its 40-year history, PARC (for Palo Alto Research Center)
was as famous for squandering new technologies as it was for inventing
them. The mouse, the graphical user interface, and the drop-down menu
were all born at PARC—but it was Apple and Microsoft that commercialized
them and made them cornerstone inventions of the PC industry.
The list of innovations lost hardly stops there. While Xerox did, of
course, commercialize PARC's blockbuster technology of laser printing,
other PARC inventions ultimately commercialized elsewhere include
Ethernet networking, the PDF file format, and electronic paper—created
at the research lab in 1975, long before the Amazon Kindle and other
e-books appeared.
By 2001, Xerox had seen enough. Facing poor financial results, its
then-CEO, Anne M. Mulcahy, vowed to return the company to profitability.
As part of that effort, Xerox reincorporated its cash-burning R&D
center as an independent company, simply called PARC, with a mandate to
turn a profit whether by licensing patents, through contract research,
or by creating partnerships with other firms.
The buzzword attached to the new era was "open innovation"; PARC's
researchers would now freely associate with the outside world to hone
ideas and work out how to commercialize them. "When PARC spun out in
2002, open, collaborative innovation became, in essence, the business
model for PARC," says Lawrence Lee, currently PARC's director of
strategy. "But we've only figured out what that means in practice over
the last couple of years."
PARC's advances in printing transistors came at around the same time
the lab was being reorganized, making the technology a key proving
ground for the new strategy. PARC at first hoped to develop organic
electronic displays, a potentially huge market, but the technology
proved difficult to manufacture, and it fell far short of silicon-based
displays in performance.
In the old days, the idea might have languished. Xerox headquarters
had often failed to embrace new inventions that didn't relate to the
company's core business of selling copiers. But following the "open
innovation" idea, PARC started shopping the technology to manufacturers,
telling them that printed transistors could also provide very cheap,
flexible sensors and computer logic for packaging, toys, and other uses.
Tamara St. Claire, PARC's vice president for global business
development, says manufacturers liked the idea but wanted to see what
she terms a "minimum viable product"—management-speak for something more
than a benchtop experiment. To develop one, in 2010 PARC formed a
"co-innovation engagement" with Thinfilm,
which was already making printed memory. The resulting prototype
circuit was the first to combine both printed transistors and memory,
according to PARC.
PARC now has partnerships with several other firms and
government agencies to use printed electronics in pressure-measuring
helmets as well as in packaging that can sense pressure, sound, light,
acceleration, or temperature. By doing so, it hopes to tap a market for
printed electronics that an analyst firm, IDTechEx, estimates could
reach $45 billion by 2021.
For PARC, the partnerships are signs that open innovation is working.
"There are plenty of great ideas at PARC, but you learn early on that
execution is often the hard part—execution and timing," says St. Claire.
"It's something you can say PARC is really starting to understand. You
almost have to be as innovative in the commercialization—especially when
you have game-changing technologies—as on the technology side."
PARC, which once served only Xerox, now has an expanding list of
technologies in development with outside partners that include Fujitsu,
Motorola, NEC Display Solutions, Microsoft, Samsung, SolFocus, and
Oracle. The change in strategy has helped turn it from a
multimillion-dollar financial sinkhole into a modest, but growing,
innovation business. In 2010, it was profitable on revenue of more than
$60 million, a spokesman says. PARC, which has 250 employees, is also
patenting at a fast clip, with about 150 patents filed per year since
2002.
The focus on doing business, not just having ideas, has also boosted
morale, says Teresa Amabile, an organizational psychologist at Harvard
Business School. "I've talked to a lot of scientists, technicians, and
engineers doing R&D inside companies ... and the people I talk to at
PARC [are] more strongly, intrinsically motivated than the average,"
she says. "They are driven by real passions and excitement for the
disruptive discoveries they are making, coupled with excitement for
seeing what they were doing actually being used in the world. That
combination is pretty unusual."
What if you could feel what's on your television screen? Tech company Senseg is working on a way for you to someday be able to do just that, and recently demonstrated a prototype tablet that is already able to make that magic happen.
The tech is made possible using an electrostatic-field-based system that allows different parts of the screen to produce varying degrees of friction. So, while you're touching a flat screen, it feels like you're touching something textured instead. Your traditional screen is turned into what Senseg is calling a "Feel Screen," allowing you to feel textures, contours and edges of things that are displayed in front of you.
Feel Screens don't rely on moving parts in the screen itself, and could be integrated into devices we use today such as smartphones, tablets, and televisions. Senseg's technology is still very much in prototype form, but could be headed our way in the next 24 months.
Through this signage at Promenade Temecula, the mall is notifying shoppers that their phones may be tracked as they move throughout the premises.
NEW YORK (CNNMoney) -- Attention holiday shoppers: your cell phone may be tracked this year.
Starting on Black Friday and running through New Year's Day, two U.S. malls -- Promenade Temecula in southern California and Short Pump Town Center in Richmond, Va. -- will track guests' movements by monitoring the signals from their cell phones.
While the data that's collected is anonymous, it can follow shoppers' paths from store to store.
The goal is for stores to answer questions like: How many Nordstrom shoppers also stop at Starbucks? How long do most customers linger in Victoria's Secret? Are there unpopular spots in the mall that aren't being visited?
While U.S. malls have long tracked how crowds move throughout their stores, this is the first time they've used cell phones.
But obtaining that information comes with privacy concerns.
The management company of both malls, Forest City Commercial Management, says personal data is not being tracked.
"We won't be looking at singular shoppers," said Stephanie Shriver-Engdahl, vice president of digital strategy for Forest City. "The system monitors patterns of movement. We can see, like migrating birds, where people are going to."
Still, the company is preemptively notifying customers by hanging small signs around the shopping centers. Consumers can opt out by turning off their phones.
The tracking system, called FootPath Technology, works through a series of antennas positioned throughout the shopping center that capture the unique identification number assigned to each phone (similar to a computer's IP address), and tracks its movement throughout the stores.
The system can't take photos or collect data on what shoppers have purchased. And it doesn't collect any personal details associated with the ID, like the user's name or phone number. That information is fiercely protected by mobile carriers, and often can be legally obtained only through a court order.
"We don't need to know who it is and we don't need to know anyone's cell phone number, nor do we want that," Shriver-Engdahl said.
Manufactured by a British company, Path Intelligence, this technology has already been used in shopping centers in Europe and Australia. And according to Path Intelligence CEO Sharon Biggar, hardly any shoppers decide to opt out.
"It's just not invasive of privacy," she said. "There are no risks to privacy, so I don't see why anyone would opt out."
Now, U.S. retailers including JCPenney (JCP, Fortune 500) and Home Depot (HD, Fortune 500) are also working with Path Intelligence to use its technology, Biggar said.
Home Depot has considered implementing the technology but is not currently using it in any stores, a company spokesman said. JCPenney declined to comment on its relationship with the vendor.
Some retail analysts say the new technology is nothing to be worried about. Malls have been tracking shoppers for years through people counters, security cameras, heat maps and even undercover researchers who follow shoppers around.
And some even say websites that track online shoppers are more invasive, recording not only a user's name and purchases, but then targeting them with ads even after they've left a site.
"It's important for shoppers to realize this sort of data is being collected anyway," Biggar said.
Whereas a website can track a customer who doesn't make a purchase, physical stores have been struggling to perfect this kind of research, Biggar said. By combining the data from FootPath with their own sales figures, stores will have better measurements to help them improve the shopping experience.
"We can now say, you had 100 people come to this product, but no one purchased it," Biggar said. "From there, we can help a retailer narrow down what's going wrong."
But some industry analysts worry about the broader implications of this kind of technology.
"Most of this information is harmless and nobody ever does anything nefarious with it," said Sucharita Mulpuru, retail analyst at Forrester Research. "But the reality is, what happens when you start having hackers potentially having access to this information and being able to track your movements?"
Last year, hackers hit AT&T, exposing the unique ID numbers and e-mail addresses of more than 100,000 iPad 3G owners. To make it harder for hackers to get at this information, Path Intelligence scrambles those numbers twice.
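The article doesn't say exactly how Path Intelligence "scrambles those numbers twice," but the general idea can be sketched with two rounds of salted one-way hashing. Everything below (function names, salts, sample phone ID) is hypothetical, for illustration only:

```python
import hashlib

def scramble(value: str, salt: str) -> str:
    """One-way SHA-256 hash of an identifier, mixed with a salt."""
    return hashlib.sha256((salt + value).encode()).hexdigest()

def double_scramble(phone_id: str, salt1: str, salt2: str) -> str:
    # Two independent rounds: even someone who obtains the first-stage
    # output still cannot recover the original phone identifier.
    return scramble(scramble(phone_id, salt1), salt2)

# Hypothetical phone ID and salts, for illustration only.
token = double_scramble("353490069873319", "site-salt", "vendor-salt")

# The same phone always maps to the same token, so one path through the
# mall can be linked together -- but the token can't be reversed to
# reveal the phone's real ID.
assert token == double_scramble("353490069873319", "site-salt", "vendor-salt")
assert token != double_scramble("353490069873320", "site-salt", "vendor-salt")
```

This is why the system can follow "migrating birds" patterns of movement without ever knowing whose phone it is watching.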
"I'm sure as more people get more cell phones, it's probably inevitable that it will continue as a resource," Mulpuru said. "But I think the future is going to have to be opt in, not opt out."
Personal comment:
One step further. I guess we have to be thankful to be given the ability to opt out of the system by 'just' switching off our cell phones!!!
The Kilobots are an inexpensive system for testing synchronized and collaborative behavior in a very large swarm of robots. Photo courtesy of Michael Rubenstein
The Kilobots are coming. Computer scientists and engineers at Harvard University have developed and licensed technology that will make it easy to test collective algorithms on hundreds, or even thousands, of tiny robots.
Called Kilobots, the quarter-sized bug-like devices scuttle around on three toothpick-like legs, interacting and coordinating their own behavior as a team. A June 2011 Harvard Technical Report demonstrated a collective of 25 machines implementing swarming behaviors such as foraging, formation control, and synchronization.
Once up and running, the machines are fully autonomous, meaning there is no need for a human to control their actions.
The communicative critters were created by members of the Self-Organizing Systems Research Group led by Radhika Nagpal, the Thomas D. Cabot Associate Professor of Computer Science at the Harvard School of Engineering and Applied Sciences (SEAS) and a Core Faculty Member at the Wyss Institute for Biologically Inspired Engineering at Harvard. Her team also includes Michael Rubenstein, a postdoctoral fellow at SEAS; and Christian Ahler, a fellow of SEAS and the Wyss Institute.
Thanks to a technology licensing deal with the K-Team Corporation, a Swiss manufacturer of high-quality mobile robots, researchers and robotics enthusiasts alike can now take command of their own swarm.
One key to achieving high-value applications for multi-robot systems in the future is the development of sophisticated algorithms that can coordinate the actions of tens to thousands of robots.
"The Kilobot will provide researchers with an important new tool for understanding how to design and build large, distributed, functional systems," says Michael Mitzenmacher, Area Dean for Computer Science at SEAS.
The name "Kilobot" does not refer to anything nefarious; rather, it describes the researchers' goal of quickly and inexpensively creating a collective of a thousand bots.
Inspired by nature, such swarms resemble social insects, such as ants and bees, that can efficiently search for and find food sources in large, complex environments, collectively transport large objects, and coordinate the building of nests and other structures.
For reasons of time, cost, and simplicity, the algorithms being developed today in research labs are validated only in computer simulation or with a few dozen robots at most.
In contrast, the design by Nagpal's team allows a single user to easily oversee the operation of a large Kilobot collective, including programming, powering on, and charging all robots, all of which would be difficult (if not impossible) using existing robotic systems.
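One of the swarm behaviors demonstrated in the report, synchronization, is easy to sketch in simulation. The sketch below is a textbook local-averaging consensus model, not the actual Kilobot firmware, and all names are invented:

```python
import random

def synchronize(phases, rounds=1000, rate=0.5):
    """Each robot nudges its clock phase toward the average of its two
    ring neighbors, using only local communication -- no global view."""
    n = len(phases)
    for _ in range(rounds):
        nxt = []
        for i in range(n):
            neighbor_avg = (phases[(i - 1) % n] + phases[(i + 1) % n]) / 2
            nxt.append(phases[i] + rate * (neighbor_avg - phases[i]))
        phases = nxt
    return phases

random.seed(0)
start = [random.random() for _ in range(25)]   # 25 robots, as in the report
synced = synchronize(start)
spread = max(synced) - min(synced)
print(f"phase spread after syncing: {spread:.2e}")   # effectively zero
```

Real Kilobots achieve the same effect by exchanging infrared messages with their physical neighbors; the averaging rule above is the standard abstraction of that process, and it illustrates the appeal of the platform: behavior that looks simple for 25 simulated robots gets much harder with a thousand physical ones.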
So, what can you do with a thousand tiny little bots?
Robot swarms might one day tunnel through rubble to find survivors, monitor the environment and remove contaminants, and self-assemble to form support structures in collapsed buildings.
They could also be deployed to autonomously perform construction in dangerous environments, to assist with pollination of crops, or to conduct search and rescue operations.
For now, the Kilobots are designed to provide scientists with a physical testbed for advancing the understanding of collective behavior and realizing its potential to deliver solutions for a wide range of challenges.
-----
Personal comment:
This reminds me of a project I worked on back in 2007, called "Variable Environment", which involved swarm-based robots called "e-pucks" developed at EPFL. The e-pucks reacted autonomously to human activity around them.
The future of augmented-reality technology is here - as long as you're a rabbit. Bioengineers have placed the first contact lenses containing electronic displays into the eyes of rabbits as a first step on the way to proving they are safe for humans. The bunnies suffered no ill effects, the researchers say.
The first version may only have one pixel, but higher resolution lens displays - like those seen in Terminator - could one day be used as satnav enhancers, showing you directional arrows for example, or flash up texts and emails - perhaps even video. In the shorter term, the breakthrough also means people suffering from conditions like diabetes and glaucoma may find they have a novel way to monitor their conditions.
In February, New Scientist revealed the litany of research projects underway in the field of contact lens enhancement. While one company has fielded a contact lens technology using a surface-mounted strain gauge to assess glaucoma risk, none have built in a display, or the lenses needed for focused projection onto the retina - and then tested it in vivo. They have now.
"We have demonstrated the operation of a contact lens display powered by a remote radiofrequency transmitter in free space and on a live rabbit," says a US and Finnish team led by Babak Parviz of the University of Washington in Seattle.
"This verifies that antennas, radio chips, control circuitry, and micrometre-scale light sources can be integrated into a contact lens and operated on live eyes."
The test lens was powered remotely using a 5-millimetre-long antenna printed on the lens to receive gigahertz-range radio-frequency energy from a transmitter placed ten centimetres from the rabbit's eye. To focus the light on the rabbit's retina, the contact lens itself was fabricated as a Fresnel lens - in which a series of concentric annular sections is used to generate the ultrashort focal length needed.
They found their lens LED glowed brightly up to a metre away from the radio source in free space, but needed to be 2 centimetres away when the lens was placed in a rabbit's eye and the wireless reception was affected by body fluids. All the 40-minute-long tests on live rabbits were performed under general anaesthetic and showed that the display worked well - and fluorescence tests showed no damage or abrasions to the rabbit's eyes after the lenses were removed.
While making a higher resolution display is next on their agenda, there are uses for this small one, say the researchers: "A display with a single controllable pixel could be used in gaming, training, or giving warnings to the hearing impaired."
"This is clearly way off in the future. But we're aware of the research that is ongoing in this field and we're watching the technology's potential for biosensing and drug delivery applications in particular," says a spokesperson for the British Contact Lens Association in London.
Focus in future will be on HTML5 as mobile world shifts towards
non-proprietary open standards – and now questions will linger over use
of Flash on desktop
Adobe is killing off development of its
mobile Flash plugin, and laying off 750 staff as part of broader
restructuring. Photograph: Paul Sakuma/AP
Mobile Flash is being killed off. The plugin that launched a thousand online forum arguments and a technology standoff between Apple and the format's creator, Adobe,
will no longer be developed for mobile browsers, the company said in a
note that will accompany a financial briefing to analysts.
Instead the company will focus on development around HTML5
technologies, which enable modern browsers to do essentially the same
functions as Flash did but without relying on Adobe's proprietary
technologies, and which can be implemented across platforms.
The existing plugins for the Android and BlackBerry platforms will be given bug fixes and security updates, the company said in a statement first revealed by ZDNet. But further development will end.
The
decision also raises a question mark over the future of Flash on
desktop PCs. Security vulnerabilities in Flash on the desktop have been
repeatedly exploited to infect PCs in the past 18 months, while Microsoft
has also said that the default browser in its forthcoming Windows 8
system, expected at the end of 2012, will not include the Flash plugin
by default. Apple, which in the third quarter captured 5% of the world
PC market, does not include Flash in its computers by default.
John Nack, a principal product manager at Adobe, commented on his personal blog
(which does not necessarily reflect Adobe views) that: "Adobe saying
that Flash on mobile isn't the best path forward [isn't the same as]
Adobe conceding that Flash on mobile (or elsewhere) is bad technology.
Its quality is irrelevant if it's not allowed to run, and if it's not
allowed to run, then Adobe will have to find different ways to meet
customers' needs."
Around 250m iOS (iPhone, iPod Touches and iPad)
devices have been sold since 2007. There are no clear figures for how
many are now in use. More recently Larry Page, chief executive of
Google, said that a total of 190m Android devices have been activated.
It is not clear how many of those include a Flash plugin in the browser.
"Our
future work with Flash on mobile devices will be focused on enabling
Flash developers to package native apps with Adobe Air for all the major
app stores," Adobe said in the statement. "We will no longer adapt
Flash Player for mobile devices to new browser, OS version or device
configurations.
"Some of our source code licensees may opt to
continue working on and releasing their own implementations. We will
continue to support the current Android and PlayBook configurations with
critical bug fixes and security updates."
The decision comes as
Adobe plans to cut 750 staff, principally in North America and Europe.
An Adobe spokesperson declined to give any figures for the extent of
layoffs in the UK. The company reiterated its expectation that it will
meet revenue targets for the fourth quarter.
The reversal by Adobe
– and its decision to focus on the open HTML5 platform for mobile –
brings to an end a long and tumultuous row between Apple and Adobe over
the usefulness of Flash on the mobile platform. The iPhone launched in
2007 without Flash capability, as did the iPad in 2010.
Steve
Jobs, then Apple's chief executive, and Apple's engineers insisted that
Flash was a "battery hog" and introduced security and stability flaws;
Adobe countered that it was broadly implemented in desktop PCs and used
widely on the web.
Jobs's antagonism was partly driven, his
biography reveals, by Adobe's reluctance after he rejoined Apple in 1996
to port its movie-editing programs to the Mac and to keep its Photoshop
suite comparable on the Mac platform with the Windows one.
But
Jobs also insisted that mobile Flash failed in the role of providing a
good user experience, and also would restrict Apple's ability to push
forward on the iOS platform. Studies of browser crash reports by Apple's
teams showed that Flash was responsible for a significant proportion of
user problems; Apple was also not satisfied that a Flash plugin would be
available for the first iPhone in 2007 which would not consume more
battery power than would be acceptable.
Jobs managed to persuade
Eric Schmidt, then Google's chief executive and a member of the Apple
board, to get YouTube to make videos available in the H.264 format
without a Flash "wrapper", as was then used for the desktop
implementation.
But the disagreements between Apple and Adobe
intensified, especially when Android devices began appearing which did
use the Flash plugin. Apple refused to use it, and banned apps from its
App Store which tried to use or include Flash.
In "Thoughts on Flash",
an open letter published by Jobs in April 2010, he asserted that "Flash
was created during the PC era – for PCs and mice. Flash is a successful
business for Adobe, and we can understand why they want to push it
beyond PCs. But the mobile era is about low power devices, touch
interfaces and open web standards – all areas where Flash falls short.
"New
open standards created in the mobile era, such as HTML5, will win on
mobile devices (and PCs too). Perhaps Adobe should focus more on
creating great HTML5 tools for the future, and less on criticizing Apple
for leaving the past behind."
The first human genome cost $3 billion to complete; now we can sequence the entire population of Chicago for the same price
The mythical "$1,000 genome" is almost upon us, said Jonathan Rothberg, CEO of sequencing technology company Ion Torrent, at MIT's Emerging Technology conference. If his prediction comes true, it will represent an astonishing triumph in rapid technological development. The rate at which genome sequencing has become more affordable is faster than Moore's law. (You can read a Q&A TR did with Rothberg earlier this year here, and a profile of his company here.)
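A quick back-of-the-envelope comparison shows what the "faster than Moore's law" claim entails. The $3 billion and $1,000 price points come from the article; the 2001-2011 span and the 18-month doubling period are assumptions made here for illustration:

```python
# Rough arithmetic on the "faster than Moore's law" claim. The $3 billion
# and $1,000 figures come from the article; the 2001-2011 span and the
# 18-month doubling period are illustrative assumptions.
first_cost, now_cost = 3e9, 1e3
years = 2011 - 2001

fold_drop = first_cost / now_cost     # how much cheaper sequencing got
moore_fold = 2 ** (years / 1.5)       # Moore's law: ~2x every 18 months

print(f"sequencing: {fold_drop:,.0f}x cheaper in {years} years")
print(f"Moore's law over the same span: about {moore_fold:,.0f}x")
# Cost per genome fell about 3,000,000-fold, versus roughly a 100-fold
# gain from Moore's law over the same decade.
```

Under those assumptions, sequencing costs fell tens of thousands of times faster than transistor density improved.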
"By this time next year sequencing human genomes as fast and cheap as bacterial genome," said Rothberg. (Earlier, he'd commented that his company can now do an entire bacterial genome in about two hours.)
I was in the room on October 19 when he said it, and I would have thought it pure hubris were it not for Rothberg's incredible track record in this area, from founding successful previous-generation sequencing company 454 Life Sciences to recent breakthroughs made with the same technology he proposes will get us to the $1,000 genome.
The Personal Genome Machine is already showing up in clinical labs, even doctors' offices
The key to this breakthrough, says Rothberg, is that the PGM does not rely on conventional wet chemistry to sequence DNA. Instead, it works almost entirely through conventional microchip technology, which means Ion Torrent is leveraging decades of investment in conventional transistors and chips.
So what's the age of the $1,000 genome look like? Until we know what more of those genes actually correlate with, for most of us it won't be so different from the present.
"Right now [we] don't have very many correlations between those 3 billion base pairs [of the human genome] and outcomes or medicines," says Rothberg. He predicts it will take at least 10 years of clinical experiments with full genome sequencing to get us to the point where we can begin to unlock its value.
"And it will be 20 years before we understand cancer at same level as HIV and can come up with combinations of medicine [tailored] for each individual," says Rothberg.