Entries tagged as programming

Thursday, February 23. 2012
MOTHER artificial intelligence forces nerds to do the chores… or else
Via Slash Gear
-----
If you’ve ever been inside a dormitory full of computer science undergraduates, you know what horrors come of young men free of responsibility. To help combat the lack of homemaking skills in nerds everywhere, a group of them banded together to create MOTHER, a combination of home automation, basic artificial intelligence and gentle nagging designed to keep a domicile running at peak efficiency. And also possibly kill an entire crew of space truckers if they should come in contact with a xenomorphic alien – but that code module hasn’t been installed yet.

The project comes from the LVL1 Hackerspace, a group of like-minded programmers and engineers. The aim is to create an AI suited to a home environment that detects issues and gets its users (i.e. the people living in the home) to fix them. Through an array of digital sensors, MOTHER knows when the trash needs to be taken out, when the door is left unlocked, et cetera. If something isn’t done soon enough, MOTHER nags its users until it is. MOTHER’s abilities are technically limitless – since it’s all based on open source software, those with the skill, inclination and hardware components can add functions at any time. Some of the more humorous additions already in the project include an instant dubstep command.

You can build your own MOTHER (boy, there’s a sentence I never thought I’d be writing) by reading through the official Wiki and assembling the right software, sensors, servers and the like. Or you could just invite your mom over and take your lumps. Your choice.
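The pattern described above (poll sensors, match conditions, nag) is simple to sketch in software. Below is a minimal sketch with hypothetical sensor names, thresholds, and a print-based notifier; the actual LVL1 code lives on the project wiki and is wired to physical hardware:

```python
# A minimal sketch of the sensor-polling/nagging pattern described above.
# Sensor names, thresholds, and the notify() transport are hypothetical;
# the real MOTHER project reads physical sensors through its own stack.
import time

# Each rule pairs a condition check with a nag message.
RULES = [
    (lambda s: s["trash_level"] > 0.9, "The trash needs to be taken out."),
    (lambda s: not s["door_locked"], "The front door is unlocked."),
]

def read_sensors():
    """Stub standing in for real hardware reads (e.g. via an Arduino)."""
    return {"trash_level": 0.95, "door_locked": False}

def notify(message):
    """Stub: the real system might use speech, lights, or e-mail."""
    print("MOTHER says:", message)

def main_loop(poll_seconds=60, iterations=1):
    for _ in range(iterations):
        state = read_sensors()
        for check, message in RULES:
            if check(state):
                notify(message)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    main_loop(poll_seconds=0)  # run a single poll for demonstration
```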
Posted by Christian Babski in Hardware, Innovation&Society, Network, Programming at 10:39
Defined tags for this entry: ai, arduino, hardware, innovation&society, network, programming, sensors
Monday, February 13. 2012
Computer Algorithm Used To Make Movie For Sundance Film Festival
Via Singularity Hub
-----
Indie movie makers can be a strange bunch, pushing the envelope of their craft and often losing us along the way. In any case, if you’re going to produce something unintelligible anyway, why not let a computer do it? Eve Sussman and the Rufus Corporation did just that. She and lead actor Jeff Wood traveled to the Kazakh coast of the Caspian Sea for two years of filming. But instead of a movie with a beginning, middle and end, they shot 3,000 individual and unrelated clips. To the clips they added 80 voice-overs and 150 pieces of music, mixed it all together and put it in a computer. A program on her Mac G5 tower, known at Rufus as the “serendipity machine,” then splices the bits together to create a final product.

As you might imagine, the resultant film doesn’t always make sense. But that’s part of the fun! As the Rufus Corporation writes on their website, “The unexpected juxtapositions create a sense of suspense alluding to a story that the viewer composes.” It’s a clever experiment even if some viewers end up wanting to gouge their eyes out after a sitting.

And there is some method to their madness. The film, titled “whiteonwhite:algorithmicnoir,” is centered on a geophysicist named Holz (played by Wood) who’s stuck in a gloomy, 1970s-looking city operated by the New Method Oil Well Cementing Company. Distinct scenes such as wiretapped conversations or a job interview for Mr. Holz are (hopefully) woven together by distinct voiceovers and dialogues. When the scenes and audio are entered into the computer they’re tagged with keywords. The program then pieces them together in a way similar to Pandora’s stringing together of like music. If a clip is tagged “white,” the computer will randomly select from tens of other clips also having the “white” tag. The final product is intended to be a kind of “dystopian futuropolis.” What that means, however, changes with each viewing as no two runs are the same. Watching the following trailer, I actually got a sense…um, I think…of a story.
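The tag-matching splice described above is easy to sketch: pick a clip, then repeatedly jump to a random clip that shares a tag with it. A toy illustration with invented clip names and tags (the Rufus Corporation's actual program is not public):

```python
# A sketch of the tag-driven splicing described above: starting from a seed
# clip, repeatedly jump to a random clip sharing at least one tag. Clip
# names and tags here are invented for illustration.
import random

CLIPS = {
    "clip_01": {"white", "interview"},
    "clip_02": {"white", "oil"},
    "clip_03": {"wiretap", "white"},
    "clip_04": {"oil", "city"},
}

def next_clip(current, clips):
    """Pick a random other clip sharing at least one tag with the current one."""
    tags = clips[current]
    candidates = [name for name, t in clips.items()
                  if name != current and tags & t]
    return random.choice(candidates) if candidates else current

def splice(seed, length, clips):
    sequence = [seed]
    for _ in range(length - 1):
        sequence.append(next_clip(sequence[-1], clips))
    return sequence

print(splice("clip_01", 5, CLIPS))  # a different "cut" on every run
```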
Rufus Corporation says the movie was “inspired by Suprematist quests for transcendence, pure space and artistic higher ground.” I have no idea what that means but I hope they’ve achieved it. Beautiful things can happen when computers create art. And it’s only a matter of time before people attempt the same sort of thing with novel writing. Just watching the trailer, it’s hard to tell if the movie’s any good or not. I missed the showings at the Sundance Film Festival, but even so, they probably didn’t resemble the trailer anyway. And that’s okay, because that’s the whole point.

[image credits: Rufus Corporation and PRI via YouTube] [video credit: PRI via YouTube]
Tuesday, February 07. 2012
It's Kinect day! The Kinect For Windows SDK v1 is out!
Via Channel 9
-----
As everyone reading this blog, and those in the Kinect for Windows space, knows, today is a big day. What was just a cool peripheral for the Xbox 360 last year has grown into the Kinect for Windows SDK and now a dedicated Kinect for Windows hardware device, and it has taken the world by storm. In the last year we've seen some simply amazing ideas and projects, many highlighted here in the Kinect for Windows Gallery, from health to education to musical expression to simply just fun. And that was all with beta software and a device meant for a gaming console. With a fully supported, dedicated device, an updated SDK, and a license that allows use in commercial products, today the world changes again. Welcome to the Kinect for Windows SDK v1! Kinect for Windows is now available!
If you're like me, you want to know more about what's new... So here's a snip from the Kinect for Windows SDK v1 Release Notes:
Thursday, January 19. 2012
The faster-than-fast Fourier transform
Via PhysOrg
-----
The Fourier transform is one of the most fundamental concepts in the information sciences. It’s a method for representing an irregular signal — such as the voltage fluctuations in the wire that connects an MP3 player to a loudspeaker — as a combination of pure frequencies. It’s universal in signal processing, but it can also be used to compress image and audio files, solve differential equations and price stock options, among other things.

The reason the Fourier transform is so prevalent is an algorithm called the fast Fourier transform (FFT), devised in the mid-1960s, which made it practical to calculate Fourier transforms on the fly. Ever since the FFT was proposed, however, people have wondered whether an even faster algorithm could be found.

At the Association for Computing Machinery’s Symposium on Discrete Algorithms (SODA) this week, a group of MIT researchers will present a new algorithm that, in a large range of practically important cases, improves on the fast Fourier transform. Under some circumstances, the improvement can be dramatic — a tenfold increase in speed. The new algorithm could be particularly useful for image compression, enabling, say, smartphones to wirelessly transmit large video files without draining their batteries or consuming their monthly bandwidth allotments.

Like the FFT, the new algorithm works on digital signals. A digital signal is just a series of numbers — discrete samples of an analog signal, such as the sound of a musical instrument. The FFT takes a digital signal containing a certain number of samples and expresses it as the weighted sum of an equivalent number of frequencies. “Weighted” means that some of those frequencies count more toward the total than others. Indeed, many of the frequencies may have such low weights that they can be safely disregarded. That’s why the Fourier transform is useful for compression. An eight-by-eight block of pixels can be thought of as a 64-sample signal, and thus as the sum of 64 different frequencies. But as the researchers point out in their new paper, empirical studies show that on average, 57 of those frequencies can be discarded with minimal loss of image quality.

Heavyweight division
Signals whose Fourier transforms include a relatively small number of heavily weighted frequencies are called “sparse.” The new algorithm determines the weights of a signal’s most heavily weighted frequencies; the sparser the signal, the greater the speedup the algorithm provides. Indeed, if the signal is sparse enough, the algorithm can simply sample it randomly rather than reading it in its entirety.

“In nature, most of the normal signals are sparse,” says Dina Katabi, one of the developers of the new algorithm. Consider, for instance, a recording of a piece of chamber music: The composite signal consists of only a few instruments each playing only one note at a time. A recording, on the other hand, of all possible instruments each playing all possible notes at once wouldn’t be sparse — but neither would it be a signal that anyone cares about.

The new algorithm — which associate professor Katabi and professor Piotr Indyk, both of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), developed together with their students Eric Price and Haitham Hassanieh — relies on two key ideas. The first is to divide a signal into narrower slices of bandwidth, sized so that a slice will generally contain only one frequency with a heavy weight.
In signal processing, the basic tool for isolating particular frequencies is a filter. But filters tend to have blurry boundaries: One range of frequencies will pass through the filter more or less intact; frequencies just outside that range will be somewhat attenuated; frequencies outside that range will be attenuated still more; and so on, until you reach the frequencies that are filtered out almost perfectly. If it so happens that the one frequency with a heavy weight is at the edge of the filter, however, it could end up so attenuated that it can’t be identified. So the researchers’ first contribution was to find a computationally efficient way to combine filters so that they overlap, ensuring that no frequencies inside the target range will be unduly attenuated, but that the boundaries between slices of spectrum are still fairly sharp.

Zeroing in
Once they’ve isolated a slice of spectrum, however, the researchers still have to identify the most heavily weighted frequency in that slice. In the SODA paper, they do this by repeatedly cutting the slice of spectrum into smaller pieces and keeping only those in which most of the signal power is concentrated. But in an as-yet-unpublished paper, they describe a much more efficient technique, which borrows a signal-processing strategy from 4G cellular networks. Frequencies are generally represented as up-and-down squiggles, but they can also be thought of as oscillations; by sampling the same slice of bandwidth at different times, the researchers can determine where the dominant frequency is in its oscillatory cycle.

Two University of Michigan researchers — Anna Gilbert, a professor of mathematics, and Martin Strauss, an associate professor of mathematics and of electrical engineering and computer science — had previously proposed an algorithm that improved on the FFT for very sparse signals. “Some of the previous work, including my own with Anna Gilbert and so on, would improve upon the fast Fourier transform algorithm, but only if the sparsity k” — the number of heavily weighted frequencies — “was considerably smaller than the input size n,” Strauss says. The MIT researchers’ algorithm, however, “greatly expands the number of circumstances where one can beat the traditional FFT,” Strauss says. “Even if that number k is starting to get close to n — to all of them being important — this algorithm still gives some improvement over FFT.”
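The compression argument above is easy to demonstrate. The following sketch does not implement the MIT team's sparse FFT (which avoids even reading most of the input); it uses NumPy's ordinary FFT only to show that keeping the few heavily weighted frequencies of a sparse signal preserves it almost exactly:

```python
# Illustration of the sparsity premise, not of the new algorithm itself:
# a sparse signal survives aggressive truncation of its spectrum.
import numpy as np

np.random.seed(0)
n = 64                              # e.g. an 8x8 pixel block as a 64-sample signal
t = np.arange(n)
# A sparse signal: three heavily weighted frequencies plus slight noise.
signal = (2.0 * np.sin(2 * np.pi * 3 * t / n)
          + 1.0 * np.sin(2 * np.pi * 7 * t / n)
          + 0.5 * np.sin(2 * np.pi * 12 * t / n)
          + 0.01 * np.random.randn(n))

spectrum = np.fft.fft(signal)
k = 6                               # keep only the k largest coefficients
threshold = np.sort(np.abs(spectrum))[-k]
compressed = np.where(np.abs(spectrum) >= threshold, spectrum, 0)

reconstructed = np.fft.ifft(compressed).real
error = np.linalg.norm(signal - reconstructed) / np.linalg.norm(signal)
print(f"kept {k}/{n} coefficients, relative error {error:.3%}")
```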
Wednesday, January 11. 2012
Holographic code
Via johncook.com
-----
In a hologram, information about each small area of the image is scattered throughout the hologram. You can’t say this little area of the hologram corresponds to this little area of the image. At least that’s what I’ve heard; I don’t really know how holograms work.

I thought about holograms the other day when someone was describing some source code with deeply nested templates. He told me “You can’t just read it. You can only step through the code with a debugger.” I’ve run into similar code. The execution sequence of the code at run time is almost unrelated to the sequence of lines in the source code. The run time behavior is scattered through the source code like image information in a hologram.

Holographic code is an advanced anti-pattern. It’s more likely to result from good practice taken to an extreme than from bad practice. Somewhere along the way, programmers learn the “DRY” principle: Don’t Repeat Yourself. This is good advice, within reason. But if you wring every bit of redundancy out of your code, you end up with something like Huffman-encoded source. In fact, DRY is very much a compression algorithm. In moderation, it makes code easier to maintain. But carried too far, it makes reading your code like reading a zip file. Sometimes a little redundancy makes code much easier to read and maintain. Code is like wine: a little dryness is good, but too much is bitter or sour.

Note that functional-style code can be holographic just like conventional code. A pure function is self-contained in the sense that everything the function needs to know comes in as arguments, i.e. there is no dependence on external state. But that doesn’t mean that everything the programmer needs to know is in one contiguous chunk of code. If you have to jump all over your code base to understand what’s going on anywhere, you have holographic code, regardless of what style it was written in. However, I imagine functional programs would usually be less holographic.

Monday, January 09. 2012
NoSQL’s great, but bring your A game
Via GIGAOM
-----
MongoDB might be a popular choice in NoSQL databases, but it’s not perfect — at least out of the box. At last week’s MongoSV conference in Santa Clara, Calif., a number of users, including from Disney, Foursquare and Wordnik, shared their experiences with the product. The common theme: NoSQL is necessary for a lot of use cases, but it’s not for companies afraid of hard work.

If you’re in the cloud, avoid the disk
As he explained, Wordnik actually launched on Amazon Web Services and used MySQL, but the database hit a wall at around a billion records, he said. So, Wordnik switched to MongoDB, which solved the scaling problem but caused its own disk I/O problems that resulted in a major performance slowdown. So, Wordnik ported everything back onto some big physical servers, which drastically improved performance. And then came the scalability problem again, only this time it was in terms of infrastructure. So, it was back to the cloud. But this time, Wordnik got smart and tuned the application to account for the strengths and weaknesses of MongoDB (“Your app should be smarter than your database,” he says), and MongoDB to account for the strengths and weaknesses of the cloud. Among his observations was that in the cloud, virtual disks have virtual performance, “meaning it’s not really there.” Luckily, he said, you can design to take advantage of virtual RAM. It will fill up fast if you let it, though, and there’s trouble brewing if requests start hitting the disk. “If you hit indexes on disk,” he warned, “mute your pager.”
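One practical takeaway from the "mute your pager" warning is to watch total index size against available RAM. A rough sketch using pymongo, with a placeholder connection string and an assumed RAM figure (in production you would pull the host's real memory size rather than hard-code it):

```python
# A sketch of the "keep indexes in RAM" check implied above, using pymongo.
# Connection string, database name, and the RAM figure are placeholders.
from pymongo import MongoClient

AVAILABLE_RAM_BYTES = 8 * 1024**3        # assumed: 8 GB on this node

client = MongoClient("mongodb://localhost:27017")
db = client["mydb"]

# Sum index sizes across all collections in the database.
total_index_bytes = sum(
    db.command("collStats", name).get("totalIndexSize", 0)
    for name in db.list_collection_names()
)

if total_index_bytes > AVAILABLE_RAM_BYTES:
    print("WARNING: indexes (%.1f GB) exceed RAM; reads may hit disk."
          % (total_index_bytes / 1024**3))
else:
    print("Indexes fit in RAM (%.1f GB)." % (total_index_bytes / 1024**3))
```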
EBS also brings along issues of its own. At least once a day, Bethea said, queued reads and writes to EBS start backing up excessively, and the only solution is to “kill it with fire.” What that means changes depending on the problem, but it generally means stopping the MongoDB process and rebuilding the affected replica set from scratch.

Monitor everything
Curt Stevens of the Disney Interactive Media Group explained how his team monitors the large MongoDB deployment that underpins Disney’s online games. MongoDB actually has its own tool called the Mongo Monitoring System that Stevens said he swears by, but it isn’t always enough. It shows traffic and performance patterns over time, which is helpful, but only the starting point.
Oh, and just monitoring everything isn’t enough when you’re talking about a large-scale system, Stevens said. You have to have alerts in place to tell you when something’s wrong, and you have to monitor the monitors. If MMS or any other monitoring tools go down, you might think everything is just fine while the kids trying to have a magical Disney experience online are paying the price.

By the numbers
If you’re wondering what kind of performance and scalability requirements forced these companies to MongoDB, and then to customize it so heavily, here are some statistics:
For more technical details about their trials and tribulations with MongoDB, all three presentations are available online, along with the rest of the conference’s talks.

Personal Comments: Here are some basics and information on NoSQL: Wiki, NoSQL Databases, MongoDB

Tuesday, November 01. 2011
Multi-Device Web Design: An Evolution
Via LUKEW
By Luke Wroblewski
-----
As mobile devices have continued to evolve and spread, so has the process of designing and developing Web sites and services that work across a diverse range of devices. From responsive Web design to future friendly thinking, here's how I've seen things evolve over the past year and a half.

If you haven't been keeping up with all the detailed conversations about multi-device Web design, I hope this overview and set of resources can quickly bring you up to speed. I'm only covering the last 18 months because it has been a very exciting time with lots of new ideas and voices. Prior to these developments, most multi-device Web design problems were solved with device detection and many still are. But the introduction of Responsive Web Design really stirred things up.

Responsive Web Design
Responsive Web Design is a combination of fluid grids and images with media queries to change layout based on the size of a device viewport. It uses feature detection (mostly on the client) to determine available screen capabilities and adapt accordingly. RWD is most useful for layout but some have extended it to interactive elements as well (although this often requires Javascript). Responsive Web Design allows you to use a single URL structure for a site, thereby removing the need for separate mobile, tablet, desktop, etc. sites.

For a short overview read Ethan Marcotte's original article. For the full story read Ethan Marcotte's book. For a deeper dive into the philosophy behind RWD, read over Jeremy Keith's supporting arguments. To see a lot of responsive layout examples, browse around the mediaqueri.es site.

Challenges
Responsive Web Design isn't a silver bullet for mobile Web experiences. Not only does client-side adaptation require a careful approach, but it can also be difficult to optimize source order, media, third-party widgets, URL structure, and application design within a RWD solution.

Jason Grigsby has written up many of the reasons RWD doesn't instantly provide a mobile solution, especially for images. I've documented (with concrete examples) why we opted for separate mobile and desktop templates in my last startup, a technique that's also employed by many Web companies like Facebook, Twitter, Google, etc. In short, separation tends to give greater ability to optimize specifically for mobile.

Mobile First Responsive Design
Mobile First Responsive Design takes Responsive Web Design and flips the process around to address some of the media query challenges outlined above. Instead of starting with a desktop site, you start with the mobile site and then progressively enhance to devices with larger screens.

The Yiibu team was one of the first to apply this approach and wrote about how they did it. Jason Grigsby has put together an overview and analysis of where Mobile First Responsive Design is being applied. Brad Frost has a more high-level write-up of the approach. For a more in-depth technical discussion, check out the thread about mobile-first media queries on the HTML5 Boilerplate project.

Techniques
Many folks are working through the challenges of designing Web sites for multiple devices. This includes detailed overviews of how to set up Mobile First Responsive Design markup, style sheet, and Javascript solutions. Ethan Marcotte has shared what it takes for teams of developers and designers to collaborate on a responsive workflow based on lessons learned on the Boston Globe redesign. Scott Jehl outlined what Javascript is doing (PDF) behind the scenes of the Globe redesign (hint: a lot!).
Stephanie Rieger assembled a detailed overview (PDF) of a real-world mobile first responsive design solution for hundreds of devices. Stephan Hay put together a pragmatic overview of designing with media queries.

Media adaptation remains a big challenge for cross-device design. In particular, images, videos, data tables, fonts, and many other "widgets" need special care. Jason Grigsby has written up the situation with images and compiled many approaches for making images responsive. A number of solutions have also emerged for handling things like videos and data tables.

Server Side Components
Combining Mobile First Responsive Design with server side component (not full page) optimization is a way to extend client-side only solutions. With this technique, a single set of page templates define an entire Web site for all devices, but key components within that site have device-class specific implementations that are rendered server side. Done right, this technique can deliver the best of both worlds without the challenges that can hamper each (see the sketch at the end of this entry).

I've put together an overview of how a Responsive Design + Server Side Components structure can work with concrete examples. Bryan Rieger has outlined an extensive set of thoughts on server-side adaptation techniques and Lyza Gardner has a complete overview of how all these techniques can work together. After analyzing many client-side solutions to dynamic images, Jason Grigsby outlined why using a server-side solution is probably the most future friendly.

Future Thinking
If all the considerations above seem like a lot to take in to create a Web site, they are. We are in a period of transition and still figuring things out. So expect to be learning and iterating a lot. That's both exciting and daunting. It also prepares you for what's ahead. We've just begun to see the onset of cheap networked devices of every shape and size. The zombie apocalypse of devices is coming. And while we can't know exactly what the future will bring, we can strive to design and develop in a future-friendly way so we are better prepared for what's next.

Resources
I referenced lots of great multi-device Web design resources above. Here they are in one list. Read them in order and rock the future Web!
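To make the server-side component idea concrete, here is a deliberately crude sketch: a shared page shell with one component swapped per device class. The user-agent test and markup variants are invented for illustration; real implementations use proper device-detection libraries:

```python
# A minimal sketch of server-side components: one page template, but a key
# component (here, a photo gallery) rendered per device class.
def device_class(user_agent):
    """Crude, illustrative classifier; real systems use detection libraries."""
    ua = user_agent.lower()
    if "mobile" in ua or "android" in ua or "iphone" in ua:
        return "mobile"
    return "desktop"

GALLERY_VARIANTS = {
    "mobile": '<ul class="gallery-strip"><!-- small images, swipe UI --></ul>',
    "desktop": '<div class="gallery-grid"><!-- large images, hover UI --></div>',
}

def render_page(user_agent):
    # The page shell is shared; only the component implementation varies.
    gallery = GALLERY_VARIANTS[device_class(user_agent)]
    return "<html><body><h1>Photos</h1>%s</body></html>" % gallery

print(render_page("Mozilla/5.0 (iPhone) Mobile Safari"))
print(render_page("Mozilla/5.0 (Windows NT 6.1) Firefox"))
```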
Tuesday, September 20. 2011
Google offers Ice Cream Sandwich guidance to Android app devs
The next major version of Google's Android platform, codenamed Ice Cream Sandwich (ICS), is expected to reach the market in October or November. ICS is expected to bring some significant changes to Android because it will unify Google's tablet and phone variants into a single environment. Although the SDK is not available yet, Google has published some technical guidance to help third-party application developers start preparing for the ICS transition.

An entry posted this week on the Android developer blog describes some steps that developers can take to better accommodate the breadth of screen sizes supported by ICS. The documentation also provides some insight into how several elements of the Honeycomb user interface could be translated to phone-sized screens in ICS. For example, it includes mockups that show the distinctive Honeycomb action bar on a tablet and a phone. It's not clear, however, if the mockups accurately depict the user experience that will be delivered in ICS.

This seems to suggest that tablet user interfaces developed with standard Honeycomb APIs will largely work on phones in ICS without requiring much modification by third-party developers. That's good news, especially if it ends up being equally true the other way, which would allow phone applications built for ICS to look and feel more native on tablet devices. Google's existing Fragments framework will also help simplify interface scalability by making it easy to transition data-driven applications between single-pane and multi-pane layouts.

Developers who use Fragments and stick to the standard Honeycomb user interface components are on the right track for the upcoming ICS release, but developers who have built more complicated tablet-specific user interfaces or haven't stayed within the boundaries imposed by the documented APIs might face some challenges. Honeycomb applications that were designed only for the tablet screen size might not scale down correctly on phones. That's a problem, because Android's versioning model doesn't prevent old applications from running on devices with new versions of the operating system—it's going to be possible for users to install Honeycomb tablet applications on ICS phones, even in cases where the result is going to be a very broken application. In cases where third-party developers can't adapt their tablet software to work well at phone sizes, Google suggests changing the application manifest file to block the application from being installed on devices with small screens.

Another challenge is the large body of legacy devices that aren't going to be updated to ICS. Developers who want to reach the largest possible audience will have to refrain from using the new APIs, which means that it will be harder for them to take advantage of the platform's increasingly sophisticated capabilities for scaling across different screen sizes. Google has already partially addressed this issue by backporting the Fragments framework and making it available as a static library for older versions of the operating system. It might be beneficial for them to go a step further and do the same with the Action Bar and other critical user interface components that will be designed to scale seamlessly in ICS.

It's going to take some time for the Android application ecosystem to sort all of this out after ICS is released, but Google's approach seems reasonably practical.
In theory, developers who are solely targeting ICS APIs might not have to make a significant development investment to get their applications working well across tablet and phone form factors.

Monday, September 12. 2011
Narrative Science Is a Computer Program That Can Write
Via Nexus 404
By J Angelo Racoma
-----
Automatic content generators are the scourge of most legitimate writers and publishers, especially if these take some original content, spin it around and generate a mashup of different content that is obviously based on something else. An app made by computer science and journalism experts involves artificial intelligence that writes like a human, though.

Developers at Northwestern University's Intelligent Information Laboratory have come up with a program called Narrative Science, which composes articles and reports based on data, facts, and styles plugged in (a toy sketch of this data-to-text idea appears at the end of this entry). The application represents more than 10 years of work by Kris Hammond and Larry Birnbaum, who are both professors of journalism and computer science at the university.

Artificial vs Human Intelligence
Currently being used by 20 companies, examples of work done by Narrative Science include post-game reports for collegiate athletics events and articles for medical journals, in which the software can compose an entire, unique article in about 60 seconds or so. What's striking is that even language experts say you won't know the difference between the software and an actual human, in terms of style, tone and usage. The developers have recently received $6 million in venture capital, which is indicative that the technology has potential in business and revenue-generating applications.

AI today is increasingly becoming sophisticated in terms of understanding and generating language. AI has gone a long way from spewing out pre-encoded responses from a list of sentences and words in reaction to keywords. Narrative Science can actually compose paragraphs using a human-like voice. The question here is whether the technology will undermine traditional journalism. Will AI simply assist humans in doing research and delivering content? Or will AI eventually replace human beings in reporting the news, generating editorials and even communicating with other people?

What Does it Mean for Journalism and the Writing Profession?
Perhaps the main indicator here will be cost. Narrative Science charges about $10 for each 500-word article, which is not really far from what human copy writers might charge for content. Once this technology becomes popular with newspapers and other publications, will this mean writers and journalists — tech bloggers included — need to find a new career? It seems it's not just the manufacturing industry that's prone to being replaced by machines.

Maybe we can just input a few keywords like iPhone, iOS, Jailbreak, Touchscreen, Apple and the like, and the Narrative Science app will be able to create an entirely new rumor about the upcoming iPhone 5, for instance! The potential is great, although the possibility for abuse is also there. Think of spammers and scammers using the software to create more appealing emails that recipients are more likely to act on. Still, with tools like these, it's only up to us humans to up the ante in terms of quality. And yes, in case you're wondering, a real human did write this post.

-----
Personal comments: One book to read about this: "Exemplaire de démonstration" by Philippe Vasset
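Narrative Science's system is proprietary, but the general data-to-text idea described above can be caricatured in a few lines: structured facts go in, templated prose comes out. Everything below (team names, scores, wording rules) is invented for illustration:

```python
# A toy illustration of data-to-text generation, not Narrative Science's
# actual (proprietary) system: pick wording from the data, fill a template.
def game_recap(data):
    margin = abs(data["home_score"] - data["away_score"])
    verb = "edged" if margin <= 3 else "beat"   # wording chosen from the facts
    return ("{home} {verb} {away} {hs}-{as_}, led by {star}'s "
            "{points} points.").format(
                home=data["home"], verb=verb, away=data["away"],
                hs=data["home_score"], as_=data["away_score"],
                star=data["star_player"], points=data["star_points"])

facts = {"home": "Northwestern", "away": "Illinois",
         "home_score": 71, "away_score": 68,
         "star_player": "J. Smith", "star_points": 24}
print(game_recap(facts))
```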
Tuesday, August 09. 2011
DNA circuits used to make neural network, store memories
Via ars technica
By Kyle Niemeyer
-----
Even as some scientists and engineers develop improved versions of current computing technology, others are looking into drastically different approaches. DNA computing offers the potential of massively parallel calculations with low power consumption and at small sizes. Research in this area has been limited to relatively small systems, but a group from Caltech recently constructed DNA logic gates using over 130 different molecules and used the system to calculate the square roots of numbers. Now, the same group has published a paper in Nature that shows an artificial neural network, consisting of four neurons, created using the same DNA circuits.

The artificial neural network approach taken here is based on the perceptron model, also known as a linear threshold gate (a software sketch of this gate appears at the end of this entry). This models the neuron as having many inputs, each with its own weight (or significance). The neuron is fired (or the gate is turned on) when the sum of each input times its weight exceeds a set threshold. These gates can be used to construct compact Boolean logic circuits, and other circuits can be constructed to store memory.

As we described in the last article on this approach to DNA computing, the authors represent their implementation with an abstraction called "seesaw" gates. This allows them to design circuits where each element is composed of two base-paired DNA strands, and the interactions between circuit elements occur as new combinations of DNA strands pair up. The ability of strands to displace each other at a gate (based on things like concentration) creates the seesaw effect that gives the system its name.

In order to construct a linear threshold gate, three basic seesaw gates are needed to perform different operations. Multiplying gates combine a signal and a set weight in a seesaw reaction that uses up fuel molecules as it converts the input signal into output signal. Integrating gates combine multiple inputs into a single summed output, while thresholding gates (which also require fuel) send an output signal only if the input exceeds a designated threshold value. Results are read using reporter gates that fluoresce when given a certain input signal.

To test their designs with a simple configuration, the authors first constructed a single linear threshold circuit with three inputs and four outputs—it compared the value of a three-bit binary number to four numbers. The circuit output the correct answer in each case.

For the primary demonstration of their setup, the authors had their linear threshold circuit play a computer game that tests memory. They used their approach to construct a four-neuron Hopfield network, where all the neurons are connected to the others and, after training (tuning the weights and thresholds), patterns can be stored or remembered. The memory game consists of three steps: 1) the human chooses a scientist from four options (in this case, Rosalind Franklin, Alan Turing, Claude Shannon, and Santiago Ramon y Cajal); 2) the human “tells” the memory network the answers to one or more of four yes/no (binary) questions used to identify the scientist (such as “Did the scientist study neural networks?” or "Was the scientist British?"); and 3) after eight hours of thinking, the DNA memory guesses the answer and reports it through fluorescent signals. They played this game 27 total times, for a total of 81 possible question/answer combinations (3⁴).
You may be wondering why there are three options to a yes/no question—the state of the answers is actually stored using two bits, so that the neuron can be unsure about answers (those that the human hasn't provided, for example) using a third state.

Out of the 27 experimental cases, the neural network was able to correctly guess all but six, and these were all cases where two or more answers were not given. In the best cases, the neural network was able to correctly guess with only one answer and, in general, it was successful when two or more answers were given. Like the human brain, this network was able to recall memory using incomplete information (and, as with humans, that may have been a lucky guess). The network was also able to determine when inconsistent answers were given (i.e. answers that don’t match any of the scientists).

These results are exciting—simulating the brain using biological computing. Unlike traditional electronics, DNA computing components can easily interact and cooperate with our bodies or other cells—who doesn’t dream of being able to download information into your brain (or anywhere in your body, in this case)? Even the authors admit that it’s difficult to predict how this approach might scale up, but I would expect to see a larger demonstration from this group or another in the near future.
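Since the entry leans on the linear threshold gate model, here is a minimal software sketch of it. The weights, threshold, and test inputs are invented for illustration; the paper's contribution is implementing such gates in DNA, not in code:

```python
# A sketch of the linear threshold gate (perceptron) model described above.
# Weights and threshold are invented; the paper's four-neuron network is
# trained so its weights store the four scientists' answer patterns.
def linear_threshold_gate(inputs, weights, threshold):
    """Fire (return 1) when the weighted sum of inputs meets the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Example: a 3-input gate that fires when at least two inputs are on.
weights, threshold = [1, 1, 1], 2
for inputs in [(1, 1, 0), (1, 0, 0), (1, 1, 1)]:
    print(inputs, "->", linear_threshold_gate(inputs, weights, threshold))
```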