Friday, August 26. 2011
MicroGen Chips to Power Wireless Sensors Through Environmental Vibrations
Via Daily Tech -----
MicroGen Systems says its chips differ from other vibrational energy-harvesting devices because they have low manufacturing costs and use a nontoxic material instead of PZT, which contains lead.
(TOP) Prototype wireless sensor battery with four energy-scavenging chips. (BOTTOM) One chip with a vibrating cantilever. (Source: MicroGen Systems)
MicroGen Systems is in the midst of creating energy-scavenging chips that will convert environmental vibrations into electricity to power wireless sensors.
Wednesday, August 24. 2011
The First Industrial Evolution
Via Big Think by Dominic Basulto -----
If the first industrial revolution was all about mass manufacturing and machine power replacing manual labor, the First Industrial Evolution will be about the ability to evolve your personal designs online and then print them using popular 3D printing technology. Once these 3D printing technologies enter the mainstream, they could lead to a fundamental change in the way that individuals - even those without any design or engineering skills - are able to create beautiful, state-of-the-art objects on demand in their own homes. It is, quite simply, the democratization of personal manufacturing on a massive scale.

At the Cornell Creative Machines Lab, it's possible to glimpse what's next for the future of personal manufacturing. Researchers led by Hod Lipson created the website Endless Forms (a clever allusion to Charles Darwin's famous last line in The Origin of Species) to "evolve" everyday objects and then bring them to life using 3D printing technologies. Even without any technical or design expertise, it's possible to create and print forms ranging from lamps to mushrooms to butterflies. You literally "evolve" printable, 3D objects through a process that echoes the principles of evolutionary biology. In fact, to create this technology, the Cornell team studied how living things like oak trees and elephants evolve over time.

3D printing capabilities, once limited to the laboratory, are now hitting the mainstream. Consider the fact that MakerBot Industries just landed $10 million from VC investors. In the future, each of us may have a personal 3D printer in the home, ready to print out personal designs on demand.
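Endless Forms doesn't expose its internals in this article, but the interactive-evolution loop it alludes to can be sketched in a few lines of Python: keep a population of candidate shape parameters, let a person pick favorites, and breed the next generation from those picks. Everything below (the toy genome, the mutate and pick_favorites hooks) is illustrative, not the Cornell implementation.

```python
import random

def evolve(population, pick_favorites, mutate, generations=10):
    """Minimal interactive-evolution loop: the user repeatedly selects the
    designs they like, and the next generation is bred from those picks."""
    for _ in range(generations):
        parents = pick_favorites(population)          # human-in-the-loop selection
        population = [mutate(random.choice(parents))  # offspring = mutated copies
                      for _ in range(len(population))]
    return population

# Toy genome: a list of numbers that a 3D modeller could turn into a shape.
def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

if __name__ == "__main__":
    pop = [[random.random() for _ in range(8)] for _ in range(12)]
    # Stand-in for a human clicking on favourites: keep the 3 smallest genomes.
    favourites = lambda p: sorted(p, key=sum)[:3]
    final = evolve(pop, favourites, mutate)
    print(len(final), "evolved designs ready to send to a 3D printer")
```

In the real site the selection step is a person clicking on shapes they like, and the genome drives a generative 3D model rather than a bare list of numbers, but the loop itself is this simple.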
Wait a second, what's going on here? Objects using humans to evolve themselves? 3D Printers? Someone's been drinking the Kool-Aid, right?
What if that gorgeous iPad 2 you're holding in your hand was actually "evolved" and not "designed"? What if it is the object that controls the design, and not the designer that controls the object? Hod Lipson, an expert on self-aware robots and a pioneer of the 3D printing movement, has claimed that we are on the brink of the second industrial revolution. However, if objects really do "evolve," is it more accurate to say that we are on the brink of the First Industrial Evolution?

The final frontier, of course, is not the ability of humans to print out beautifully evolved objects on demand using 3D printers in their homes (although that's quite cool). The final frontier is self-aware objects independently "evolving" humans and then printing them out as they need them. Sound far-fetched? Well, it's now possible to print 3D human organs and 3D human skin. When machine intelligence progresses to a certain point, what's to stop independent, self-aware machines from printing human organs? The implications - for both atheists and true believers - are perhaps too overwhelming even to consider.

Friday, August 19. 2011
IBM unveils cognitive computing chips, combining digital 'neurons' and 'synapses'
Via Kurzweil -----
IBM researchers unveiled today a new generation of experimental computer chips designed to emulate the brain's abilities for perception, action and cognition. In a sharp departure from the traditional von Neumann concepts used in designing and building computers, IBM's first neurosynaptic computing chips recreate the phenomena between spiking neurons and synapses in biological systems, such as the brain, through advanced algorithms and silicon circuitry. The technology could use many orders of magnitude less power and space than today's computers, the researchers say. Its first two prototype chips have already been fabricated and are currently undergoing testing.

Called cognitive computers, systems built with these chips won't be programmed the same way traditional computers are today. Rather, cognitive computers are expected to learn through experiences, find correlations, create hypotheses, and remember - and learn from - the outcomes, mimicking the brain's structural and synaptic plasticity.

"This is a major initiative to move beyond the von Neumann paradigm that has been ruling computer architecture for more than half a century," said Dharmendra Modha, project leader for IBM Research. "Future applications of computing will increasingly demand functionality that is not efficiently delivered by the traditional architecture. These chips are another significant step in the evolution of computers from calculators to learning systems, signaling the beginning of a new generation of computers and their applications in business, science and government."

Neurosynaptic chips

IBM is combining principles from nanoscience, neuroscience, and supercomputing as part of a multi-year cognitive computing initiative. IBM's long-term goal is to build a chip system with ten billion neurons and a hundred trillion synapses, while consuming merely one kilowatt of power and occupying less than two liters of volume.
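The article doesn't describe the internals of IBM's cores, but the "spiking neurons and synapses" they recreate are commonly modeled in software as leaky integrate-and-fire units: weighted input spikes accumulate on a leaky membrane potential, and the neuron emits a spike when a threshold is crossed. The sketch below is that textbook model with made-up weights, not IBM's circuit design.

```python
class LIFNeuron:
    """Textbook leaky integrate-and-fire neuron: inputs are weighted spikes,
    the membrane potential leaks over time, and the neuron fires when the
    potential crosses a threshold."""
    def __init__(self, weights, threshold=1.0, leak=0.9):
        self.weights = weights
        self.threshold = threshold
        self.leak = leak
        self.potential = 0.0

    def step(self, spikes):
        # Apply the leak, then accumulate the weighted input spikes.
        self.potential = self.potential * self.leak + sum(
            w * s for w, s in zip(self.weights, spikes))
        if self.potential >= self.threshold:
            self.potential = 0.0   # reset after firing
            return 1               # output spike
        return 0

neuron = LIFNeuron(weights=[0.4, 0.3, 0.5])
for t, spikes in enumerate([[1, 0, 0], [1, 1, 0], [0, 0, 1]]):
    print(t, neuron.step(spikes))
```

The point of the hardware is that this event-driven behaviour is baked into the silicon rather than simulated instruction by instruction, which is where the claimed power savings come from.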
IBM has two working prototype designs. Both cores were fabricated in 45 nm SOI CMOS and contain 256 neurons. One core contains 262,144 programmable synapses and the other contains 65,536 learning synapses. The IBM team has successfully demonstrated simple applications like navigation, machine vision, pattern recognition, associative memory and classification.

IBM's overarching cognitive computing architecture is an on-chip network of lightweight cores, creating a single integrated system of hardware and software. It represents a potentially more power-efficient architecture that has no set programming, integrates memory with processor, and mimics the brain's event-driven, distributed and parallel processing.

Visualization of the long distance network of a monkey brain (credit: IBM Research)

SyNAPSE

The company and its university collaborators also announced they have been awarded approximately $21 million in new funding from the Defense Advanced Research Projects Agency (DARPA) for Phase 2 of the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project. The goal of SyNAPSE is to create a system that not only analyzes complex information from multiple sensory modalities at once, but also dynamically rewires itself as it interacts with its environment - all while rivaling the brain's compact size and low power usage. For Phase 2 of SyNAPSE, IBM has assembled a world-class multi-dimensional team of researchers and collaborators to achieve these ambitious goals. The team includes Columbia University; Cornell University; University of California, Merced; and University of Wisconsin, Madison.

Why Cognitive Computing

Future chips will be able to ingest information from complex, real-world environments through multiple sensory modes and act through multiple motor modes in a coordinated, context-dependent manner. For example, a cognitive computing system monitoring the world's water supply could contain a network of sensors and actuators that constantly record and report metrics such as temperature, pressure, wave height, acoustics and ocean tide, and issue tsunami warnings based on its decision making. Similarly, a grocer stocking shelves could use an instrumented glove that monitors sights, smells, texture and temperature to flag bad or contaminated produce. Making sense of real-time input flowing at an ever-dizzying rate would be a Herculean task for today's computers, but would be natural for a brain-inspired system.

"Imagine traffic lights that can integrate sights, sounds and smells and flag unsafe intersections before disaster happens or imagine cognitive co-processors that turn servers, laptops, tablets, and phones into machines that can interact better with their environments," said Dr. Modha.

IBM has a rich history in the area of artificial intelligence research going all the way back to 1956, when IBM performed the world's first large-scale (512 neuron) cortical simulation. Most recently, IBM Research scientists created Watson, an analytical computing system that specializes in understanding natural human language and provides specific answers to complex questions at rapid speeds.

Thursday, August 11. 2011
Does Facial Recognition Technology Mean the End of Privacy?
Via Big Think by Dominic Basulto -----
At the Black Hat security conference in Las Vegas, researchers from Carnegie Mellon demonstrated how the same facial recognition technology used to tag Facebook photos could be used to identify random people on the street.
This facial recognition technology, when combined with geo-location, could fundamentally change our notions of personal privacy. In Europe, facial recognition technology has already stirred up its share of controversy, with German regulators threatening to sue Facebook for up to half a million dollars for violating European privacy rules. But it's not only Facebook - both Google (with PittPatt) and Apple (with Polar Rose) are also putting the finishing touches on new facial recognition technologies that could make it easier than ever before to connect our online and offline identities.

If the eyes are the window to the soul, then your face is the window to your personal identity. And it's for that reason that privacy advocates in both Europe and the USA are up in arms about the new facial recognition technology. What seems harmless at first - the ability to identify your friends in photos - could be something much more dangerous in the hands of anyone other than your friends, for one simple reason: your face is the key to linking your online and offline identities. It's one thing for law enforcement officials to have access to this technology, but what if your neighbor suddenly has the ability to snoop on you?

The researchers at Carnegie Mellon showed how a combination of simple technologies - a smart phone, a webcam and a Facebook account - was enough to identify people after only a three-second visual search. Once hackers can put together a face and the basics of a personal profile - like a birthday and hometown - they can start piecing together details like your Social Security number and bank account information.
Forget being fingerprinted; it could be far worse to be Faceprinted. It's like the scene from The Terminator, where Arnold Schwarzenegger is able to identify his targets by employing a futuristic form of facial recognition technology. Well, the future is here.

Imagine a complete stranger taking a photo of you and immediately connecting that photo to every element of your personal identity, then using that to stalk you (or your wife or your daughter). It happened to reality TV star Adam Savage: when he uploaded a photo of his SUV parked outside his home to his Twitter page, he didn't realize that it included geo-tagging metadata. Within hours, people knew the exact location of his home.

Or imagine walking into a store, and the sales floor staff doing a quick visual search using a smart phone camera, finding out what your likes and interests are via Facebook or Google, and then tailoring their sales pitch accordingly. It's targeted advertising, taken to the extreme.
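The Adam Savage incident comes down to EXIF metadata: many phones quietly embed GPS coordinates in every photo they take. As a rough illustration of how little effort it takes to read that data back out, here is a minimal sketch assuming a recent version of the Pillow library; the file name is hypothetical.

```python
from PIL import Image

def gps_from_photo(path):
    """Return the EXIF GPSInfo block (tag 0x8825) of an image, if present.
    Latitude and longitude are stored as degree/minute/second rationals."""
    exif = Image.open(path).getexif()
    return exif.get_ifd(0x8825) or None

gps = gps_from_photo("suv_photo.jpg")  # hypothetical file name
if gps:
    print("This photo is geo-tagged:", gps)
else:
    print("No GPS metadata found")
```

Stripping that block before uploading (or disabling geo-tagging on the phone) is all it takes to avoid broadcasting your home address with every picture.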
Which raises the important question: is privacy a right or a privilege? Now that we're all celebrities in the Internet age, it doesn't take much to extrapolate that soon we'll all have the equivalent of Internet paparazzi incessantly snapping photos of us and intruding into our daily lives. Cookies, spiders, bots and spyware will seem positively old school by then. The people with money, privilege and clout will be able to erect barriers around their personal lives, living behind the digital equivalent of a gated community. The rest of us? We'll live our lives in public.

Geeks Without Frontiers Pursues Wi-Fi for Everyone
Via OStatic -----
Recently, you may have heard about new efforts to bring online access to regions where it has been economically nonviable before. This idea is not new, of course. The One Laptop Per Child (OLPC) initiative was squarely aimed at the same goal until it ran into some significant hiccups. One of the latest moves on this front comes from Geeks Without Frontiers, which has a stated goal of positively impacting one billion lives with technology over the next 10 years.

The organization, sponsored by Google and The Tides Foundation, is working on low-cost, open source Wi-Fi solutions for "areas where legacy broadband models are currently considered to be uneconomical." According to an announcement from Geeks Without Frontiers: "GEEKS expects that this technology, built mainly by Cozybit, managed by GEEKS and I-Net Solutions, and sponsored by Google, Global Connect, Nortel, One Laptop Per Child, and the Manna Energy Foundation, will enable the development and rollout of large-scale mesh Wi-Fi networks for at least half of the traditional network cost. This is a major step in achieving the vision of affordable broadband for all."

It's notable that One Laptop Per Child is among the sponsors of this initiative. The organization has open sourced key parts of its software platform, and could have natural synergies with a global Wi-Fi effort. "By driving down the cost of metropolitan and village scale Wi-Fi networks, millions more people will be able to reap the economic and social benefits of significantly lower cost Internet access," said Michael Potter, one of the founders of the GEEKS initiative.

The Wi-Fi technology that GEEKS is pursuing is mesh networking - specifically open80211s (o11s), which implements AMPE (Authenticated Mesh Peering Exchange), enabling multiple authenticated nodes to encrypt traffic between themselves. Mesh networks are essentially widely distributed wireless networks built from many repeaters throughout a specific location. You can read much more about the open80211s standard here.

The GEEKS initiative has significant backers and, with sponsorship from OLPC, will probably benefit from good advice on the topic of bringing advanced technologies to disadvantaged regions of the world. The effort will be worth watching.

Tuesday, August 09. 2011
The Subjectivity of Natural Scrolling
Via Slash Gear -----
Apple released its new OS X Lion for Mac computers recently, and there was one controversial change that had the technorati chatting nonstop. In the new Lion OS, Apple changed the direction of scrolling.

I use a MacBook Pro (among other machines; I'm OS agnostic). On my MacBook, I scroll by placing two fingers on the trackpad and moving them up or down. On the old system, moving my fingers down meant the object on the screen moved up. My fingers are controlling the scroll bars. Moving down means I am pulling the scroll bars down, revealing more of the page below what is visible. So, the object moves upwards. On the new system, moving my fingers down means the object on screen moves down. My fingers are now controlling the object. If I want the object to move up, and reveal more of what is beneath, I move my fingers up, and the content rises on screen.
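Stripped of the philosophy, the change is a sign flip: classic scrolling applies your finger movement to the scroll bar, so the content moves the opposite way, while natural scrolling applies it to the content itself. A toy sketch of the two mappings (my own illustration, not Apple's code):

```python
def content_motion(finger_dy, natural=True):
    """Direction the on-screen content moves for a finger movement of
    finger_dy (positive = fingers move down the trackpad)."""
    return finger_dy if natural else -finger_dy

print(content_motion(+10, natural=True))   # +10: content follows the fingers down
print(content_motion(+10, natural=False))  # -10: fingers drag the scroll bar, content moves up
```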
The scroll bars are still there, but Apple has, by default, hidden them in many apps. You can make them reappear by hunting through the settings menu and turning them back on, but when they do come back, they are much thinner than they used to be, without the arrows at the top and bottom. They are also a bit buggy at the moment. If I try to click and drag the scrolling indicator, the page often jumps around, as if I had missed and clicked on the empty space above or below the scroll bar instead of directly on it. This doesn’t always happen, but it happens often enough that I have trained myself to avoid using the scroll bars this way.
Some disclosure: my day job is working for Samsung. We make Windows computers that compete with Macs. I work in the phones division, but my work machine is a Samsung laptop running Windows. My MacBook is a holdover from my days as a tech journalist. When you become a tech journalist, you are issued a MacBook by force and stripped of whatever you were using before.

"Natural scrolling will seem familiar to those of you not frozen in an iceberg since World War II"

I am not criticizing or endorsing Apple's new natural scrolling in this column. In fact, in my own usage, there are times when I like it, and times when I don't. Those emotions are usually found in direct proportion to the amount of NyQuil I took the night before and how hot it was outside when I walked my dog. I have found no other correlation.

The new natural scrolling method will probably seem familiar to those of you not frozen in an iceberg since World War II. It is the same direction you use for scrolling on most touchscreen phones, and most tablets. Not all, of course. Some phones and tablets still use styli, and these phones often let you scroll by dragging scroll bars with the pointer. But if you have an Android or an iPhone or a Windows Phone, you're familiar with the new method.

My real interest here is to examine how the user is placed in the conversation between your fingers and the object on screen. I have heard the argument that the new method tries, and perhaps fails, to emulate the touchscreen experience by manipulating objects as if they were physical. On touchscreen phones, this is certainly the case. When we touch something on screen, like an icon or a list, we expect it to react in a physical way. When I drag my finger to the right, I want the object beneath to move with my finger, just as a piece of paper would move with my finger when I drag it.

This argument postulates a problem with Apple's natural scrolling because of the literal distance between your fingers and the objects on screen. Also, the angle has changed. The plane of your hands and the surface on which they rest are at an oblique angle of more than 90 degrees from the screen and the object at hand. Think of a magic wand. When you wave a magic wand with the tip facing out before you, do you imagine the spell shooting forth parallel to the ground, or do you imagine the spell shooting directly upward? In our imagination, we do want a direct correlation between the position of our hands and the reaction on screen; this is true. However, is this what we were getting before? Not really.

The difference between classic scrolling and 'natural' scrolling seems to be the difference between manipulating a concept and manipulating an object. Scroll bars are not real, or at least they do not correspond to any real thing that we would experience in the physical world. When you read a tabloid, you do not scroll down to see the rest of the story. You move your eyes. If the paper will not fit comfortably in your hands, you fold it. But scrolling is not like folding. It is smoother. It is continuous. Folding is a way of breaking the object into two conceptual halves. Ask a print newspaper reporter (and I will refrain from old media mockery here) about the part of the story that falls "beneath the fold." That part had better not be as important as the top half, because it may never get read.

Natural scrolling correlates more strongly with moving an actual object. It is like reading a newspaper on a table.
Some of the newspaper may extend over the edge of the table and bend downward, making it unreadable. When you want to read it, you move the paper upward. In the same way, when you want to read more of the NYTimes.com site, you move your fingers upward.

"Is it better to create objects on screen that appropriate the form of their physical world counterparts?"

The argument should not be over whether one is more natural than the other. Let us not forget that we are using an electronic machine. This is not a natural object. The content onscreen is only real insofar as pixels light up and are arranged into a recognizable pattern. Those words are not real; they are the absence of light, in varying degrees if you have anti-aliasing cranked up, around recognizable patterns that our eyes and brain interpret as letters and words.

The argument should be over which is the more successful design for a laptop or desktop operating system. Is it better to create objects on screen that appropriate the form of their physical world counterparts? Should a page in Microsoft Word look like a piece of paper? Should an icon for a hard disk drive look like a hard disk? What percentage of people using a computer have actually seen a hard disk drive? What if your new ultraportable laptop uses a set of interconnected solid state memory chips instead? Does the drive icon still look like a drive?

Or is it better to create objects on screen that do not hew to the physical world? Certainly their form should suggest their function in order to be intuitive and useful, but they do not have to be photorealistic interpretations. They can suggest function through a more abstract image, or simply by their placement and arrangement.

"How should we represent a Web browser, a feature that has no counterpart in real life?"

In the former system, the computer interface becomes a part of the user's world. The interface tries to fit in with symbols that are already familiar. I know what a printer looks like, so when I want to set up my new printer, I find the picture of the printer and I click on it. My email icon is a stamp. My music player icon is a CD. Wait, where did my CD go? I can't find my CD?! What happened to my music!?!? Oh, there it is. Now it's just a circle with a musical note. I guess that makes sense, since I hardly use CDs any more.

In the latter system, the user becomes part of the interface. I have to learn the language of the interface design. This may sound like it is automatically more difficult than the former method of photorealism, but that may not be true. After all, when I want to change the brightness of my display, will my instinct really be to search for a picture of a cog and gears? And how should we represent a Web browser, a feature that has no counterpart in real life? Are we wasting processing power and time trying to create objects that look three-dimensional on a two-dimensional screen?

I think the photorealistic approach, and Apple's new natural scrolling, may be the more personal way to design an interface. Apple is clearly thinking of the intimate relationship between the user and the objects that we touch. It is literally a sensual relationship, in that we use a variety of our senses. We touch. We listen. We see. But perhaps I do not need, nor do I want, to have this relationship with my work computer. I carry my phone with me everywhere. I keep my tablet very close to me when I am using it. With my laptop, I keep some distance. I am productive. We have a lot to get done.
DNA circuits used to make neural network, store memories
Via ars technica By Kyle Niemeyer -----
Even as some scientists and engineers develop improved versions of current computing technology, others are looking into drastically different approaches. DNA computing offers the potential of massively parallel calculations with low power consumption and at small sizes. Research in this area has been limited to relatively small systems, but a group from Caltech recently constructed DNA logic gates using over 130 different molecules and used the system to calculate the square roots of numbers. Now, the same group has published a paper in Nature that shows an artificial neural network, consisting of four neurons, created using the same DNA circuits.

The artificial neural network approach taken here is based on the perceptron model, also known as a linear threshold gate. This models the neuron as having many inputs, each with its own weight (or significance). The neuron is fired (or the gate is turned on) when the sum of each input times its weight exceeds a set threshold. These gates can be used to construct compact Boolean logic circuits, and other circuits can be constructed to store memory.

As we described in the last article on this approach to DNA computing, the authors represent their implementation with an abstraction called "seesaw" gates. This allows them to design circuits where each element is composed of two base-paired DNA strands, and the interactions between circuit elements occur as new combinations of DNA strands pair up. The ability of strands to displace each other at a gate (based on things like concentration) creates the seesaw effect that gives the system its name.

In order to construct a linear threshold gate, three basic seesaw gates are needed to perform different operations. Multiplying gates combine a signal and a set weight in a seesaw reaction that uses up fuel molecules as it converts the input signal into output signal. Integrating gates combine multiple inputs into a single summed output, while thresholding gates (which also require fuel) send an output signal only if the input exceeds a designated threshold value. Results are read using reporter gates that fluoresce when given a certain input signal.

To test their designs with a simple configuration, the authors first constructed a single linear threshold circuit with three inputs and four outputs - it compared the value of a three-bit binary number to four numbers. The circuit output the correct answer in each case.

For the primary demonstration of their setup, the authors had their linear threshold circuit play a computer game that tests memory. They used their approach to construct a four-neuron Hopfield network, where all the neurons are connected to the others and, after training (tuning the weights and thresholds), patterns can be stored or remembered. The memory game consists of three steps: 1) the human chooses a scientist from four options (in this case, Rosalind Franklin, Alan Turing, Claude Shannon, and Santiago Ramon y Cajal); 2) the human "tells" the memory network the answers to one or more of four yes/no (binary) questions used to identify the scientist (such as "Did the scientist study neural networks?" or "Was the scientist British?"); and 3) after eight hours of thinking, the DNA memory guesses the answer and reports it through fluorescent signals. They played this game 27 times in total, out of 81 possible question/answer combinations (3^4).
You may be wondering why there are three options to a yes/no question - the state of the answers is actually stored using two bits, so that the neuron can be unsure about answers (those that the human hasn't provided, for example) using a third state. Out of the 27 experimental cases, the neural network was able to correctly guess all but six, and these were all cases where two or more answers were not given. In the best cases, the neural network was able to guess correctly with only one answer and, in general, it was successful when two or more answers were given. Like the human brain, this network was able to recall memory using incomplete information (and, as with humans, that may have been a lucky guess). The network was also able to determine when inconsistent answers were given (i.e., answers that don't match any of the scientists).

These results are exciting - simulating the brain using biological computing. Unlike traditional electronics, DNA computing components can easily interact and cooperate with our bodies or other cells - who doesn't dream of being able to download information into their brain (or anywhere in their body, in this case)? Even the authors admit that it's difficult to predict how this approach might scale up, but I would expect to see a larger demonstration from this group or another in the near future.
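The "seesaw" chemistry is specific to the paper, but the linear threshold gate abstraction it implements is easy to state in code. The sketch below is the generic perceptron model described above, wired into a tiny four-neuron Hopfield-style update loop; the weights and threshold are made up for illustration and are not the authors' trained values.

```python
def linear_threshold_gate(inputs, weights, threshold):
    """Fire (return 1) when the weighted sum of the inputs reaches the
    threshold -- the perceptron model the DNA 'seesaw' gates implement."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Toy 4-neuron Hopfield-style recall: each neuron reads the other neurons'
# current states; these weights are illustrative only.
weights = [
    [0, 1, -1, 1],
    [1, 0, 1, -1],
    [-1, 1, 0, 1],
    [1, -1, 1, 0],
]
state = [1, 0, 0, 1]          # partial/uncertain answers to the yes/no questions
for _ in range(3):            # iterate until the network settles
    state = [linear_threshold_gate(state, weights[i], threshold=1)
             for i in range(4)]
print(state)
```

With trained weights, repeated passes pull a partial answer pattern toward the nearest stored memory, which is the recall behaviour the DNA network demonstrates chemically over eight hours instead of microseconds.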
Wednesday, August 03. 2011
A DIY UAV That Hacks Wi-Fi Networks, Cracks Passwords, and Poses as a Cell Phone Tower
Via POPSCI By Clay Dillow -----
Just a Boy and His Cell-Snooping, Password-Cracking, Hacktastic Homemade Spy Drone (via Rabbit-Hole)
Last year at the Black Hat and Defcon security conferences in Las Vegas, a former Air Force cyber security contractor and a former Air Force engineering systems consultant displayed their 14-pound, six-foot-long unmanned aerial vehicle, WASP (Wireless Aerial Surveillance Platform). Last year it was a work in progress, but next week when they unveil an updated WASP they’ll be showing off a functioning homemade spy drone that can sniff out Wi-Fi networks, autonomously crack passwords, and even eavesdrop on your cell phone calls by posing as a cell tower. WASP is built from a retired Army target drone, and creators Mike Tassey and Richard Perkins have crammed all kinds of technology aboard, including an HD camera, a small Linux computer packed with a 340-million-word dictionary for brute-forcing passwords as well as other network hacking implements, and eleven different antennae. Oh, and it’s autonomous; it requires human guidance for takeoff and landing, but once airborne WASP can fly a pre-set route, looping around an area looking for poorly defended data.
And on top of that, the duo has taught their WASP a new way to surreptitiously gather intel from the ground: pose as a GSM cell phone tower to trick phones into connecting through WASP rather than their carriers - a trick Tassey and Perkins learned from another security hacker at Defcon last year.

Tassey and Perkins say they built WASP to show just how easy it is, and just how vulnerable you are. "We wanted to bring to light how far the consumer industry has progressed, to the point where the public has access to technologies that put companies, and even governments, at risk from this new threat vector that they're not aware of," Perkins told Forbes. Consider yourself warned. For details on the WASP design - including pointers on building your own - check out Tassey and Perkins' site here.
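For a sense of what a "340-million-word dictionary for brute-forcing passwords" actually does, the core loop is just hashing candidate words and comparing them against a captured hash. Real Wi-Fi attacks target captured WPA handshakes with dedicated tools, but the principle is the same; the word list and password below are made up for illustration.

```python
import hashlib

def dictionary_attack(target_sha256, wordlist):
    """Try each candidate word and return the one whose SHA-256 matches."""
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_sha256:
            return word
    return None

# Illustrative only: a tiny word list and the hash of the password "sunshine".
words = ["password", "letmein", "sunshine", "dragon"]
target = hashlib.sha256(b"sunshine").hexdigest()
print(dictionary_attack(target, words))   # -> "sunshine"
```

The scary part of WASP is not the algorithm, which is trivial, but that the dictionary, the compute, and the antennas are all circling overhead on autopilot.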
Monday, August 01. 2011
World's first 'printed' plane snaps together and flies
Via CNET By Eric Mach -----
English engineers have produced what is believed to be the world's first printed plane. I'm not talking a nice artsy lithograph of the Wright Bros.' first flight. This is a complete, flyable aircraft spit out of a 3D printer.

The SULSA began life in something like an inkjet and wound up in the air. (Credit: University of Southampton)

The SULSA (Southampton University Laser Sintered Aircraft) is an unmanned air vehicle that emerged, layer by layer, from a nylon laser sintering machine that can fabricate plastic or metal objects. In the case of the SULSA, the wings, access hatches, and the rest of the structure of the plane were all printed. As if that weren't awesome enough, the entire thing snaps together in minutes, no tools or fasteners required. The electric plane has a wingspan of just under 7 feet and a top speed of 100 mph.

Jim Scanlon, one of the project leads at the University of Southampton, explains in a statement that the technology allows products to go from conception to reality much more quickly and cheaply. "The flexibility of the laser-sintering process allows the design team to revisit historical techniques and ideas that would have been prohibitively expensive using conventional manufacturing," Scanlon says. "One of these ideas involves the use of a Geodetic structure... This form of structure is very stiff and lightweight, but very complex. If it was manufactured conventionally it would require a large number of individually tailored parts that would have to be bonded or fastened at great expense."

So apparently when it comes to 3D printing, the sky is no longer the limit. Let's just make sure someone double-checks the toner levels before we start printing the next international space station.

-----
Personal comments: industrial production tools given back to the people? See also the Fab@Home project.
Posted by Christian Babski in Hardware, Physical computing, Technology at 10:20
Defined tags for this entry: 3d printing, hardware, physical computing, rapid prototyping, technology
Thursday, July 28. 2011
Decoding DNA With Semiconductors
Via New York Times By Christopher Capozziello -----
The inventor Jonathan Rothberg with a semiconductor chip used in the Ion Torrent machine.
The inventor of a new machine that decodes DNA with semiconductors has used it to sequence the genome of Gordon Moore, co-founder of Intel, a leading chip maker. The inventor, Jonathan Rothberg of Ion Torrent Systems in Guilford, Conn., is one of several pursuing the goal of a $1,000 human genome, which he said he could reach by 2013 because his machine is rapidly being improved. "Gordon Moore worked out all the tricks that gave us modern semiconductors, so he should be the first person to be sequenced on a semiconductor," Dr. Rothberg said.

At $49,000, the new DNA decoding device is cheaper than its several rivals. Its promise rests on the potential of its novel technology to be improved faster than those of machines based on existing techniques. Manufacturers are racing to bring DNA sequencing costs down to the point where a human genome can be decoded for $1,000, the sum at which enthusiasts say genome sequencing could become a routine part of medical practice.

But the sequencing of Dr. Moore's genome also emphasizes how far technology has run ahead of the ability to interpret the information it generates. Dr. Moore's genome has a genetic variant that denotes a "56 percent chance of brown eyes," one that indicates a "typical amount of freckling" and another that confers "moderately higher odds of smelling asparagus in one's urine," Dr. Rothberg and his colleagues reported Wednesday in the journal Nature. There are also two genetic variants in Dr. Moore's genome said to be associated with "increased risk of mental retardation" - a risk evidently never realized. The clinical value of this genomic information would seem to be close to nil.

Dr. Rothberg said he agreed that few genes right now yield useful genetic information and that it will be a 10- to 15-year quest to really understand the human genome. For the moment his machine is specialized for analyzing much smaller amounts of information, like the handful of genes highly active in cancer. The Ion Torrent machine requires only two hours to sequence DNA, although sample preparation takes longer. The first two genomes of the deadly E. coli bacteria that swept Europe in the spring were decoded on the company's machines.

The earliest DNA sequencing method depended on radioactivity to mark the four different units that make up genetic material, but as the system was mechanized, engineers switched to fluorescent chemicals. The new device is the first commercial system to decode DNA directly on a semiconductor chip and to work by detecting a voltage change, rather than light. About 1.2 million miniature wells are etched into the surface of the chip and filled with beads holding the DNA strands to be sequenced. A detector in the floor of the well senses the acidity of the solution in each well, which rises each time a new unit is added to the DNA strands on the bead. The cycle is repeated every few seconds until each unit in the DNA strand has been identified.

Several years ago, Dr. Rothberg invented another DNA sequencing machine, called the 454, which was used to sequence the genome of James Watson, the co-discoverer of the structure of DNA. Dr. Rothberg said he was describing how the machine had "read" Dr. Watson's DNA to his young son Noah, who asked why he did not invent a machine to read minds. Dr. Rothberg said he began his research with the idea of making a semiconductor chip that could detect an electrical signal moving across a slice of neural tissue. He then realized the device he had developed was more suited to sequencing DNA.
George Church, a genome technologist at the Harvard Medical School, said he estimated the cost to sequence Dr. Moore’s genome at $2 million. This is an improvement on the $5.7 million it cost in 2008 to sequence Dr. Watson’s genome on the 454 machine, but not nearly as good as the $3,700 spent by Complete Genomics to sequence Dr. Church’s genome and others in 2009. Dr. Rothberg said he had already reduced the price of his chips to $99 from $250, and today could sequence Dr. Moore’s genome for around $200,000. Because of Moore’s Law — that the number of transistors placeable on a chip doubles about every two years — further reductions in the cost of the DNA sequencing chip are inevitable, Dr. Rothberg said. Stephan Schuster, a genome biologist at Penn State, said his two Ion Torrent machines were “outstanding,” and enabled a project that would usually have taken two months to be completed in five days. There is now “a race to the death as to who can sequence faster and cheaper, always with the goal of human resequencing in mind,” Dr. Schuster said.
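The detection scheme described above - a signal each time a nucleotide is incorporated, repeated flow after flow - lends itself to a very simple simulation. The sketch below is a toy model of sequencing-by-flow, not Ion Torrent's actual chemistry or signal processing; the template strand is made up.

```python
TEMPLATE = "TACGGA"               # strand to be sequenced (made up)
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def flow_sequence(template, flows="TACG" * 10):
    """Toy model of semiconductor sequencing: each flow of a single nucleotide
    incorporates as many bases as match the template, and the proton release
    (here, simply the count) is the measured signal."""
    pos, read = 0, []
    for base in flows:
        count = 0
        while pos < len(template) and COMPLEMENT[template[pos]] == base:
            pos += 1
            count += 1
        if count:
            read.append(base * count)   # signal height encodes run length
    return "".join(read)

print(flow_sequence(TEMPLATE))   # -> "ATGCCT", the complementary read
```

In the real machine the "count" is a pH change sensed as a voltage, and the hard work is distinguishing a run of two identical bases from a run of three in a noisy signal.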