Friday, June 29. 2012

Google's 16,000-brain neural network just wants to watch cat videos

Via DVICE
-----

The techno-wizards over at Google X, the company's R&D laboratory working on its self-driving cars and Project Glass, linked 16,000 processors together to form a neural network and then had it go forth and try to learn on its own. Turns out, massive digital networks are a lot like bored humans poking at iPads.

The pretty amazing takeaway here is that this 16,000-processor neural network, spread out over 1,000 linked computers, was not told to look for any one thing, but instead discovered on its own that cat pictures formed a recurring pattern. This happened after Google presented the network with image stills from 10 million random YouTube videos. The images were small thumbnails, and Google's network sorted through them to try to learn something about them. What it found — and we have ourselves to blame for this — was that there were a hell of a lot of cat faces.

"We never told it during the training, 'This is a cat,'" Jeff Dean, a Google fellow working on the project, told the New York Times. "It basically invented the concept of a cat. We probably have other ones that are side views of cats."

The network itself does not know what a cat is the way you and I do. (It wouldn't, for instance, feel embarrassed being caught watching something like this in the presence of other neural networks.) What it does realize, however, is that there is something it can recognize as being the same thing, and if we gave it the word, it might well refer to that thing as "cat."

So, what's the big deal? Your computer at home is more than powerful enough to sort images. Where Google's neural network differs is that it looked at these 10 million images, recognized a pattern of cat faces, and then pieced together the idea that it was looking at something specific and distinct. It had a digital thought.

Andrew Ng, a computer scientist at Stanford University who is co-leading the study with Dean, spoke to the benefit of a self-teaching neural network: "The idea is that instead of having teams of researchers trying to find out how to find edges, you instead throw a ton of data at the algorithm and you let the data speak and have the software automatically learn from the data."

The size of the network is important, too, and the human brain is "a million times larger in terms of the number of neurons and synapses" than Google X's simulated mind, according to the researchers. "It'd be fantastic if it turns out that all we need to do is take current algorithms and run them bigger," Ng added, "but my gut feeling is that we still don't quite have the right algorithm yet."
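Ng's "throw a ton of data at the algorithm" approach is, at heart, unsupervised feature learning: the network is trained only to reconstruct its unlabeled input, and feature detectors (a "cat face" unit, in Google's case) emerge as a side effect. Below is a minimal Python sketch of that idea, a tiny single-layer autoencoder on stand-in random data; it assumes nothing about Google's actual system, which was a far larger, deeper sparse autoencoder.

```python
# Minimal sketch (not Google's code) of unsupervised feature learning:
# an autoencoder is trained only to reconstruct unlabeled input, and
# its hidden units become feature detectors as a side effect.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 64))    # stand-in for unlabeled 8x8 patches

n_hidden = 16
W1 = rng.standard_normal((64, n_hidden)) * 0.1   # encoder weights
W2 = rng.standard_normal((n_hidden, 64)) * 0.1   # decoder weights
lr = 0.01

for epoch in range(200):
    H = np.tanh(X @ W1)        # hidden features; no labels anywhere
    X_hat = H @ W2             # attempted reconstruction of the input
    err = X_hat - X            # reconstruction error is the only signal
    dW2 = H.T @ err                     # backprop through the decoder
    dH = (err @ W2.T) * (1 - H**2)      # backprop through tanh
    dW1 = X.T @ dH
    W1 -= lr * dW1 / len(X)
    W2 -= lr * dW2 / len(X)
    if epoch % 50 == 0:
        print(epoch, float((err**2).mean()))

# Columns of W1 are the learned features; at Google's scale, one unit
# ended up responding selectively to cat faces.
```

The point of the sketch is the absence of labels: the only training signal is how well the network reproduces its own input.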
Wednesday, April 11. 2012

The history of supercomputers

Via Christian Babski
-----

Extreme Tech has just published an interesting article on the history of supercomputing that is worth reading. It is a bit spec-oriented, but it still gives a good overview of the supercomputer [r]evolution.

Thursday, April 05. 2012

MIT Project Aims to Deliver Printable, Mass-Market Robots

Via Wired
-----

Insect printable robot. Photo: Jason Dorfman, CSAIL/MIT
Printers can make mugs, chocolate and even blood vessels. Now, MIT scientists want to add robo-assistants to the list of printable goodies.

Today, MIT announced a new project, "An Expedition in Computing Printable Programmable Machines," that aims to give everyone a chance to have his or her own robot. Need help peering into that unreasonably hard-to-reach cabinet, or wiping down your grimy 15th-story windows? Walk on over to robo-Kinko's, and within 24 hours you could have a fully programmed, working origami bot doing your dirty work.

"No system exists today that will take, as specification, your functional needs and will produce a machine capable of fulfilling that need," MIT robotics engineer and project manager Daniela Rus said.

Unfortunately, the earliest you'd be able to get your hands on an almost-instant robot might be 2017. The MIT scientists, along with collaborators at Harvard University and the University of Pennsylvania, received a $10 million grant from the National Science Foundation for the five-year project. Right now, it's at a very early stage of development.

So far, the team has prototyped two mechanical helpers: an insect-like robot and a gripper. The six-legged, tick-like printable robot could be used to check your basement for gas leaks or to play with your cat, Rus says. And the gripper claw, which picks up objects, might be helpful in manufacturing, or for people with disabilities, she says. The two prototypes cost about $100 and took about 70 minutes to build. The real cost to customers will depend on the robot's specifications, its capabilities and the types of parts required for it to work.

The researchers want to create a one-size-fits-most platform to circumvent the high costs and special hardware and software often associated with robots. If their project works out, you could go to a local robo-printer, pick a design from a catalog and customize a robot according to your needs. Perhaps down the line you could even order in your designer bot through an app. Their approach to machine building could "democratize access to robots," Rus said. She envisions producing devices that could detect toxic chemicals, aid science education in schools, and help around the house.

Although bringing robots to the masses sounds like a great idea (a sniffing bot to find lost socks would come in handy), there are still several potential roadblocks to consider — for example, how users, especially novice ones, will interact with the printable robots. "Maybe this novice user will issue a command that will break the device, and we would like to develop programming environments that have the capability of catching these bad commands," Rus said. As it stands now, a robot would come pre-programmed to perform a set of tasks, but if a user wanted more advanced actions, he or she could build those up from the bot's basic capabilities. That advanced set of commands could be programmed on a computer and beamed wirelessly to the robot. And as voice-parsing systems get better, Rus thinks you might be able to simply tell your robot to do your bidding.

Durability is another issue. Would these robots be single-use only? If so, trekking to robo-Kinko's every time you needed a bot to look behind the fridge might get old. These are all considerations the scientists will be grappling with in the lab. They'll have at least five years to tease out some solutions.

In the meantime, it's worth noting that other groups are also building robots using printers. German engineers printed a white robotic spider last year. The arachnoid carried a camera and equipment to assess chemical spills. And at Drexel University, paleontologist Kenneth Lacovara and mechanical engineer James Tangorra are trying to create a robotic dinosaur from dino-bone replicas. The 3D-printed bones are scaled versions of laser-scanned fossils. By the end of 2012, Lacovara and Tangorra hope to have a fully mobile robotic dinosaur, which they want to use to study how dinosaurs, like the large sauropods, moved.

Lacovara thinks the MIT project is an exciting and promising one: "If it's a plug-and-play system, then it's feasible," he said. But "obviously, it [also] depends on the complexity of the robot." He's seen complex machines with working gears printed in one piece, he says.

Right now, the MIT researchers are developing an API that would facilitate custom robot design, and writing algorithms for the assembly process and operations (a toy sketch of what such an API might look like follows this article). If their project works out, we could all have a bot to call our own in a few years. Who said print was dead?
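Rus's "specification in, machine out" pipeline has no published API yet, so the sketch below is purely hypothetical: RobotSpec, Design and compile_design are invented names, meant only to illustrate how declared functional needs might map to a printable parts list and fold pattern.

```python
# Purely hypothetical sketch of a "functional needs in, machine out"
# API. RobotSpec, Design and compile_design are invented names for
# illustration; MIT's actual (unreleased) API will differ.
from dataclasses import dataclass, field

@dataclass
class RobotSpec:
    """The user's functional needs, stated declaratively."""
    task: str                      # e.g. "inspect", "grip"
    locomotion: str = "legged"     # "legged", "wheeled", "none"

@dataclass
class Design:
    parts: list = field(default_factory=list)
    fold_pattern: str = ""

def compile_design(spec: RobotSpec) -> Design:
    """Map a functional spec to a printable parts list and fold plan."""
    design = Design()
    if spec.locomotion == "legged":
        design.parts += ["6x leg linkage", "2x drive motor"]
    if spec.task == "grip":
        design.parts += ["gripper claw", "1x servo"]
    design.parts += ["controller board", "battery"]
    design.fold_pattern = f"origami layout for {len(design.parts)} parts"
    return design

# A gas-leak inspection bot, like the tick-like prototype:
print(compile_design(RobotSpec(task="inspect")))
```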
Thursday, March 29. 2012

LG flexible epaper devices promised for April launch

Via Slash Gear
-----

LG Display has launched a new 6-inch flexible epaper display that the company expects to show up in bendable products by the beginning of next month. The panel, a 1024 x 768 monochrome sheet, can be bent up to 40 degrees without breaking; in addition, because LG Display has used a flexible plastic substrate rather than the more traditional glass, it's less than half the weight of a traditional epaper panel.

That means lighter gadgets that are actually more durable, since the panels should be more resilient to drops or bumps. They can be thinner, too: the plastic panel is a third slimmer than glass equivalents, at just 0.7mm thick. LG Display says it can drop its new screen from 1.5m (the average height a device is held at when being used for reading, apparently) without any resulting damage. The company also hit the screen with a plastic hammer, leaving no scratches or breaks, ETNews reports.

LG isn't the only company working on flexible screens this year. Samsung has already confirmed that it is looking at launching devices using flexible AMOLED panels in 2012, though it's unclear whether the screens will actually fold or bend, or simply be used to wrap around smartphones for new types of UI.

The first products using the LG Display flexible panel are on track for a release in the European market in early April, the company claims. No word on which vendors will be offering them, nor how pricing will compare to traditional glass-substrate epaper.
Tuesday, March 27. 2012

TI Demos OMAP5 WiFi Display Mirroring on Development Platform

Via AnandTech
-----

On our last day at MWC 2012, TI pulled me aside for a private demonstration of WiFi Display functionality it had only just recently gotten working on its OMAP 5 development platform. The demo showed WiFi Display mirroring between the development device's 720p display and an adjacent notebook acting as the WiFi Display sink.

TI emphasized that what's different about its WiFi Display implementation is that it works on the display framebuffer natively rather than on a memory copy, which would introduce delay and take up space. In addition, the encoder being used is the IVA-HD accelerator performing the WiFi Display specification's mandatory H.264 baseline Level 3.1 encode, not a software encoder running on the application processor. The demo mirrored the development tablet's 720p display, but TI says it could easily do 1080p as well; that would, however, require a 1080p framebuffer to snoop on the host device. Latency between the development platform and the display sink was just 15 ms, essentially one frame at 60 Hz.

The demonstration worked live over the air at TI's MWC booth and used a WiLink 8 series WLAN combo chip. There was some stuttering; however, this is understandable given that this demo was using TCP (live implementations will use UDP) and, of course, just how crowded the 2.4 and 5 GHz spectrum is at these conferences. In addition, TI collaborated with Screenovate for its application development and WiFi Display optimization secret sauce, which I'm guessing has to do with adaptive bitrate or possibly more. Moving beyond 480p software-encoded WiFi Display is just one more obvious piece of the puzzle that will eventually enable smartphones and tablets to obviate standalone streaming devices.

-----
Personal Comment: A fairly obvious but interesting step forward, as mobile device users more and more ask to be able to beam, or 'to TV', their device's screen... which should eventually turn any (mobile) device into a full-duplex video-broadcasting device (user interaction included!)... and one may then manage to get rid of some cables in the same sitting?!
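TI's latency figure is easy to sanity-check: at a 60 Hz refresh rate, one frame lasts 1/60 of a second, so 15 ms is just under a single frame of delay. A quick sketch of the arithmetic:

```python
# Sanity check of TI's quoted figure: at 60 Hz, one frame lasts 1/60 s,
# so a 15 ms mirroring delay is slightly less than one displayed frame.
refresh_hz = 60
frame_time_ms = 1000 / refresh_hz   # ~16.7 ms per displayed frame
latency_ms = 15                     # TI's measured mirroring latency
print(f"one frame at {refresh_hz} Hz = {frame_time_ms:.1f} ms")
print(f"15 ms = {latency_ms / frame_time_ms:.2f} frames of delay")
```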
Monday, March 26. 2012

New Samsung sensor captures image, depth simultaneously

Via electronista
-----

Samsung has developed a new camera sensor technology that can simultaneously capture an image and its depth. The breakthrough could potentially be applied to smartphones and other devices as an alternative method of control, where hand gestures could be used to carry out functions without having to touch a screen or other input. According to Tech-On, it uses a CMOS sensor with red, blue and green pixels, combined with an additional z-pixel for capturing depth.

The new Samsung sensor can capture images at a resolution of 1,920x720 using its traditional RGB array, while it can also capture a depth image at a resolution of 480x360 with the z-pixel. It achieves its depth capabilities through a special process whereby the z-pixel is located beneath the RGB pixel array. Samsung's boffins then placed a special barrier between the RGB and z pixels, allowing the light they capture to give the effect that the z-pixel is three times its actual size.
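Since the two outputs differ in resolution (1,920x720 RGB versus 480x360 depth), any application pairing them, such as gesture control, has to align the two grids first. A minimal sketch, assuming simple nearest-neighbour upsampling; the shapes come from the article, but the alignment method is an assumption, not Samsung's documented pipeline:

```python
# Minimal sketch: pair the sensor's 1,920x720 RGB frame with its
# 480x360 depth frame by nearest-neighbour upsampling of the depth map.
# Shapes follow the article; the alignment method is our assumption.
import numpy as np

rgb = np.zeros((720, 1920, 3), dtype=np.uint8)   # RGB frame (rows, cols, ch)
depth = np.random.rand(360, 480)                 # z-pixel depth frame

# The scale factor differs per axis: 720/360 = 2 rows, 1920/480 = 4 cols.
depth_up = np.repeat(np.repeat(depth, 2, axis=0), 4, axis=1)
assert depth_up.shape == rgb.shape[:2]

# Every RGB pixel now has a depth estimate, e.g. for gesture control:
near_mask = depth_up < 0.2   # pixels closer than some chosen threshold
print("near pixels:", int(near_mask.sum()))
```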
-----
Personal Comment: Some additional information on BSI (Backside Illumination) / FSI (Frontside Illumination):

Monday, February 06. 2012

The Great Disk Drive in the Sky: How Web giants store big—and we mean big—data

Tuesday, January 24. 2012

A tale of Apple, the iPhone, and overseas manufacturing

Via CNET
-----

(Credit: Apple)

A new report on Apple offers up an interesting detail about the evolution of the iPhone and gives a fascinating--and unsettling--look at the practice of overseas manufacturing. The article, an in-depth report by Charles Duhigg and Keith Bradsher of The New York Times, is based on interviews with, among others, "more than three dozen current and former Apple employees and contractors--many of whom requested anonymity to protect their jobs."

The piece uses Apple and its recent history to look at why the success of some U.S. firms hasn't led to more U.S. jobs--and to examine issues regarding the relationship between corporate America and Americans (as well as people overseas). One of the questions it asks is: Why isn't more manufacturing taking place in the U.S.? And Apple's answer--and the answer one might get from many U.S. companies--appears to be that it's simply no longer possible to compete by relying on domestic factories and the ecosystem that surrounds them.

The iPhone detail crops up relatively early in the story, in an anecdote about then-Apple CEO Steve Jobs, and it leads directly into questions about offshore labor practices. In 2007, a little over a month before the iPhone was scheduled to appear in stores, Mr. Jobs beckoned a handful of lieutenants into an office. For weeks, he had been carrying a prototype of the device in his pocket, and its plastic screen had been scratched by his keys; he demanded a scratch-proof glass screen, and fast.

A tall order. And another anecdote suggests that Jobs' staff went overseas to fill it--along with other requirements for the top-secret phone project (code-named, the Times says, "Purple 2"):
One former executive described how the company relied upon a Chinese factory to revamp iPhone manufacturing just weeks before the device was due on shelves. Apple had redesigned the iPhone's screen at the last minute, forcing an assembly-line overhaul. New screens began arriving at the plant near midnight.

That last quote, like several others in the story, leaves one feeling almost impressed by the no-holds-barred capabilities of these manufacturing plants--impressed and queasy at the same time. Here's another quote, from Jennifer Rigoni, Apple's worldwide supply demand manager until 2010: "They could hire 3,000 people overnight," she says, speaking of Foxconn City, Foxconn Technology's complex of factories in China. "What U.S. plant can find 3,000 people overnight and convince them to live in dorms?"

The article says that cheap and willing labor was indeed a factor in Apple's decision, in the early 2000s, to follow most other electronics companies in moving manufacturing overseas. But, it says, supply chain management, production speed, and flexibility were bigger incentives. "The entire supply chain is in China now," the article quotes a former high-ranking Apple executive as saying. "You need a thousand rubber gaskets? That's the factory next door. You need a million screws? That factory is a block away. You need that screw made a little bit different? It will take three hours."

It also makes the point that other factors come into play. Apple analysts, the Times piece reports, had estimated that in the U.S. it would take the company as long as nine months to find the 8,700 industrial engineers it would need to oversee the workers assembling the iPhone. In China it wound up taking 15 days.

The article and its sources paint a vivid picture of how much easier it is for companies to get things made overseas (which is why so many U.S. firms go that route--Apple is by no means alone in this). But the underlying humanitarian issues nag at the reader. Perhaps there's hope--at least for overseas workers--in last week's news that Apple has joined the Fair Labor Association, and that it will be providing more transparency when it comes to the making of its products.

As for manufacturing returning to the U.S.? The Times piece cites an unnamed guest at President Obama's 2011 dinner with Silicon Valley bigwigs. Obama had asked Steve Jobs what it would take to produce the iPhone in the States, and why that work couldn't return. The Times' source quotes Jobs as having said, in no uncertain terms, "Those jobs aren't coming back."

Apple, by the way, would not provide a comment to the Times about the article. And Foxconn disputed the story about employees being awakened at midnight to work on the iPhone, saying strict regulations about working hours would have made such a thing impossible.
Monday, January 23. 2012

Quantum physics enables perfectly secure cloud computing
Via eurekalert
-----

Researchers have succeeded in combining the power of quantum computing with the security of quantum cryptography and have shown that perfectly secure cloud computing can be achieved using the principles of quantum mechanics. They have performed an experimental demonstration of quantum computation in which the input, the data processing, and the output remain unknown to the quantum computer. The international team of scientists will publish the results of the experiment, carried out at the Vienna Center for Quantum Science and Technology (VCQ) at the University of Vienna and the Institute for Quantum Optics and Quantum Information (IQOQI), in the forthcoming issue of Science.

Quantum computers are expected to play an important role in future information processing, since they can outperform classical computers at many tasks. Considering the challenges inherent in building quantum devices, it is conceivable that future quantum computing capabilities will exist only in a few specialized facilities around the world, much like today's supercomputers. Users would then interact with those specialized facilities to outsource their quantum computations. The scenario follows the current trend of cloud computing: central remote servers are used to store and process data; everything is done in the "cloud." The obvious challenge is to make globalized computing safe and to ensure that users' data stays private.

The latest research, to appear in Science, reveals that quantum computers can provide an answer to that challenge. "Quantum physics solves one of the key challenges in distributed computing. It can preserve data privacy when users interact with remote computing centers," says Stefanie Barz, lead author of the study. This newly established fundamental advantage of quantum computers enables the delegation of a quantum computation from a user who does not hold any quantum computational power to a quantum server, while guaranteeing that the user's data remain perfectly private. The quantum server performs calculations, but has no means of finding out what it is doing, a functionality not known to be achievable in the classical world.

The scientists in the Vienna research group have demonstrated the concept of "blind quantum computing" in an experiment: they performed the first known quantum computation during which the user's data stayed perfectly encrypted. The experimental demonstration uses photons, or "light particles," to encode the data. Photonic systems are well suited to the task because quantum computation operations can be performed on them, and they can be transmitted over long distances.

The process works as follows. The user prepares qubits, the fundamental units of quantum computers, in a state known only to himself and sends these qubits to the quantum computer. The quantum computer entangles the qubits according to a standard scheme. The actual computation is measurement-based: the processing of quantum information is implemented by simple measurements on qubits. The user tailors measurement instructions to the particular state of each qubit and sends them to the quantum server. Finally, the results of the computation are sent back to the user, who can interpret and utilize them. Even if the quantum computer or an eavesdropper tries to read the qubits, they gain no useful information without knowing the initial state; they are "blind."
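The protocol demonstrated here is the Broadbent-Fitzsimons-Kashefi "blind computing" scheme, in which the user hides each computation angle phi inside the measurement instruction delta = phi + theta + r*pi, where the preparation angle theta and the bit r are secrets known only to the user. A minimal numerical sketch, assuming the standard parameterization of that protocol (all angles multiples of pi/4), showing why the instructions leak nothing about the program:

```python
# Minimal sketch (not the experiment itself): verify that the blinded
# instruction delta = phi + theta + r*pi is uniformly distributed
# whatever the secret computation angle phi is, so the server learns
# nothing about the program it runs. Angles are multiples of pi/4,
# as in the Broadbent-Fitzsimons-Kashefi protocol.
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
angles = [k * np.pi / 4 for k in range(8)]   # allowed angle set

def blind_instruction(phi):
    theta = rng.choice(angles)               # secret preparation angle
    r = rng.integers(0, 2)                   # secret outcome flip
    return (phi + theta + r * np.pi) % (2 * np.pi)

for phi in (0.0, np.pi / 4, 3 * np.pi / 4): # three different "programs"
    slots = [round(blind_instruction(phi) / (np.pi / 4)) % 8
             for _ in range(80_000)]
    # All 8 possible instructions occur about equally often for every
    # phi, so observing delta tells the server nothing.
    print(f"phi = {phi:.3f}:", sorted(Counter(slots).values()))
```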
The research at the Vienna Center for Quantum Science and Technology (VCQ) at the University of Vienna and at the Institute for Quantum Optics and Quantum Information (IQOQI) of the Austrian Academy of Sciences was undertaken in collaboration with the scientists who originally invented the protocol, based at the University of Edinburgh, the Institute for Quantum Computing (University of Waterloo), the Centre for Quantum Technologies (National University of Singapore), and University College Dublin.

Publication: Stefanie Barz, Elham Kashefi, Anne Broadbent, Joseph Fitzsimons, Anton Zeilinger, Philip Walther, "Demonstration of Blind Quantum Computing," Science. DOI: 10.1126/science.1214707
Tuesday, January 17. 2012

HTML brings old boot sessions back to us

For any of us who think the past was better... or to show newcomers that, not so long ago, a computer was not expected to be always switched on!