The Kilobots are an inexpensive system for testing synchronized and collaborative behavior in a very large swarm of robots. Photo courtesy of Michael Rubenstein
The Kilobots are coming. Computer scientists and engineers at Harvard University have developed and licensed technology that will make it easy to test collective algorithms on hundreds, or even thousands, of tiny robots.
Called Kilobots, the quarter-sized bug-like devices scuttle around on three toothpick-like legs, interacting and coordinating their own behavior as a team. A June 2011 Harvard Technical Report demonstrated a collective of 25 machines implementing swarming behaviors such as foraging, formation control, and synchronization.
Once up and running, the machines are fully autonomous, meaning there is no need for a human to control their actions.
The communicative critters were created by members of the Self-Organizing Systems Research Group led by Radhika Nagpal, the Thomas D. Cabot Associate Professor of Computer Science at the Harvard School of Engineering and Applied Sciences (SEAS) and a Core Faculty Member at the Wyss Institute for Biologically Inspired Engineering at Harvard. Her team also includes Michael Rubenstein, a postdoctoral fellow at SEAS; and Christian Ahler, a fellow of SEAS and the Wyss Institute.
Thanks to a technology licensing deal with the K-Team Corporation, a Swiss manufacturer of high-quality mobile robots, researchers and robotics enthusiasts alike can now take command of their own swarm.
One key to achieving high-value applications for multi-robot systems in the future is the development of sophisticated algorithms that can coordinate the actions of tens to thousands of robots.
"The Kilobot will provide researchers with an important new tool for understanding how to design and build large, distributed, functional systems," says Michael Mitzenmacher, Area Dean for Computer Science at SEAS.
The name "Kilobot" does not refer to anything nefarious; rather, it describes the researchers' goal of quickly and inexpensively creating a collective of a thousand bots.
Inspired by nature, such swarms resemble social insects, such as ants and bees, that can efficiently search for and find food sources in large, complex environments, collectively transport large objects, and coordinate the building of nests and other structures.
For reasons of time, cost, and simplicity, the algorithms being developed in research labs today are validated only in computer simulation or on a few dozen robots at most.
In contrast, the design by Nagpal's team allows a single user to easily oversee the operation of a large Kilobot collective, including programming, powering on, and charging all robots, all of which would be difficult (if not impossible) using existing robotic systems.
So, what can you do with a thousand tiny little bots?
Robot swarms might one day tunnel through rubble to find survivors, monitor the environment and remove contaminants, and self-assemble to form support structures in collapsed buildings.
They could also be deployed to autonomously perform construction in dangerous environments, to assist with pollination of crops, or to conduct search and rescue operations.
For now, the Kilobots are designed to provide scientists with a physical testbed for advancing the understanding of collective behavior and realizing its potential to deliver solutions for a wide range of challenges.
-----
Personal comment:
This reminds me of a project I worked on back in 2007, called "Variable Environment", which involved swarm-based robots called "e-pucks" developed at EPFL. The e-pucks reacted autonomously to human activity around them.
At the beginning of last week, I launched GreedAndFearIndex
- a SaaS platform that automatically reads thousands of financial news
articles daily to deduce what companies are in the news and whether
financial sentiment is positive or negative.
It’s an app built largely on Scala, with MongoDB and Akka playing prominent roles so that it can handle massive amounts of data on a relatively small and cheap amount of hardware.
The app itself took about 4-5 weeks to build, although the underlying
technology in terms of web crawling, data cleansing/normalization, text
mining, sentiment analysis, name recognition, language grammar
comprehension such as subject-action-object resolution and the
underlying “God”-algorithm that underpins it all took considerably
longer to get right.
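To give a flavour of how such a pipeline hangs together on that stack, here is a minimal sketch in Scala with Akka: a toy lexicon-based scorer and one actor stage. All names are hypothetical, and the real system's text mining and sentiment models are of course far more involved than counting words.

    import akka.actor.{Actor, ActorSystem, Props}

    // Hypothetical message type: one crawled and cleaned news article.
    case class Article(url: String, headline: String, body: String)

    // Toy lexicon-based sentiment scorer: positive minus negative word counts.
    object SentimentLexicon {
      private val positive = Set("surge", "growth", "profit", "beat", "upgrade")
      private val negative = Set("loss", "downgrade", "fraud", "miss", "default")

      def score(text: String): Int = {
        val words = text.toLowerCase.split("\\W+")
        words.count(positive.contains) - words.count(negative.contains)
      }
    }

    // One pipeline stage as an actor, so thousands of articles a day can be
    // scored concurrently on modest hardware; results would normally be
    // written to MongoDB rather than printed.
    class SentimentActor extends Actor {
      def receive: Receive = {
        case a: Article =>
          val s = SentimentLexicon.score(a.headline + " " + a.body)
          println(s"${a.url} -> sentiment $s")
      }
    }

    object PipelineDemo extends App {
      val system = ActorSystem("news-pipeline")
      val scorer = system.actorOf(Props[SentimentActor](), "scorer")
      scorer ! Article("http://example.com/acme", "Acme profit surges", "Strong growth beat forecasts.")
    }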
Doing it all was not only lots of late nights of coding, but also
reading more academic papers than I ever did at university, not only on
machine learning but also on neuroscience and research on the human
neocortex.
What I am getting at is that financial news and sentiment analysis
might be a good showcase and the beginning, but it is only part of a
bigger picture and problem to solve.
Unlocking True Machine Intelligence & Predictive Power
The human brain is an amazing pattern matching & prediction machine - in terms of being able to pull together, associate, correlate and understand causation between disparate, seemingly unrelated strands of information it is unsurpassed in nature, and it also makes much of what has passed for “Artificial Intelligence” look like a joke.
However, the human brain is also severely limited: it is slow, its immediate memory is small, and we can famously only keep track of 7 (±2) things at any one time unless we put considerable effort into it. We are awash in amounts of data, information and noise that our brain has not yet evolved to deal with.
So what I’m working on is not really a SaaS sentiment analysis tool; it is the first step towards a bigger picture (which, admittedly, I may not solve, or not solve in my lifetime):
What if we could make machines match our own ability to find patterns
based on seemingly unrelated data, but far quicker and with far more
than 5-9 pieces of information at a time?
What if we could accurately predict the movements of financial
markets, the best price point for a product, the likelihood of natural
disasters, the spreading patterns of infectious diseases or even unlock
the secrets of solving disease and aging themselves?
The Enablers
I see a number of enablers that are making this future a real possibility within my lifetime:
Advances in neuroscience: our understanding of the human brain is getting better year by year; the fact that we can now look inside the brain on a very small scale, and that we are starting to build a basic understanding of the neocortex, will be key to the future of machine learning. Computer Science and Neuroscience must intermingle to a higher degree to further both fields.
Cloud Computing, parallelism & increased computing power: computing power is cheaper than ever with the cloud, the software to take advantage of multi-core computers is finally starting to arrive, and Moore’s law is still advancing as ever (the latest generation of MacBook Pros has roughly 2.5 times the performance of my barely two-year-old MBP).
“Big Data”: the data needed to both train and apply the next generation of machine learning algorithms is abundantly available to us. It is no longer locked away in the silos of corporations or the pages of paper archives; it’s available and accessible to anyone online.
Crowdsourcing: there are two things that are very time intensive when working with machine learning - training the algorithms and, once in production, providing them with feedback (“on the job training”) to continually improve and correct them. The internet and crowdsourcing lower those barriers immensely. Digg, Reddit, Tweetmeme and DZone are all early examples of simplistic crowdsourcing with little learning, but ones where participants have a personal interest in taking part. Combine that with machine learning and you have a very powerful tool at your disposal.
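To make the “on the job training” idea concrete, here is a small, purely illustrative sketch in Scala of a scorer that nudges its word weights whenever crowd feedback says a prediction was wrong (a perceptron-style update; the names and the learning rule are assumptions, not the actual GreedAndFearIndex code):

    // Perceptron-style scorer trained incrementally from crowd feedback.
    object CrowdTrainedScorer {
      private var weights = Map.empty[String, Double]
      private val learningRate = 0.1

      private def features(text: String): Seq[String] =
        text.toLowerCase.split("\\W+").toSeq.filter(_.nonEmpty)

      // +1 means positive sentiment, -1 negative.
      def predict(text: String): Int = {
        val score = features(text).map(w => weights.getOrElse(w, 0.0)).sum
        if (score >= 0) 1 else -1
      }

      // label is the crowd's verdict (+1 or -1); only update on mistakes.
      def feedback(text: String, label: Int): Unit =
        if (predict(text) != label)
          features(text).foreach { w =>
            weights = weights.updated(w, weights.getOrElse(w, 0.0) + learningRate * label)
          }
    }

    // Usage: the system guesses, a reader corrects it, the model adapts.
    // CrowdTrainedScorer.feedback("profit warning wipes out gains", -1)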
Babysteps & The Perfect Storms
All things considered, I think we are getting closer to the perfect storm that takes machine intelligence out of the dark ages, where it has lingered far too long, and into a brave new world where one day we may struggle to distinguish machine from man and artificial intelligence from biological intelligence.
It will be a road fraught with setbacks, trial and error where the
errors will seem insurmountable, but we’ll eventually get there one
babystep at a time. I’m betting on it and the first natural step is
predictive analytics & adaptive systems able to automatically detect
and solve problems within well-defined domains.
Outside of its remarkable sales, the real star of the iPhone 4S show has been Siri, Apple’s new voice recognition software. The intuitive assistant is the closest thing to A.I. we’ve seen on a smartphone to date.
Over the weekend I noted
that Siri has some resemblance to the IBM supercomputer, Watson, and
speculated that someday Watson would be in our pockets while the
supercomputers of the future might look a lot more like the Artificial
Intelligence we’ve read about in science fiction novels today, such as
the mysterious Wintermute from William Gibson’s Neuromancer.
Over at Wired, John Stokes explains how Siri and the Apple cloud could lead to the advent of a real Artificial Intelligence:
In
the traditional world of canned, chatterbot-style “AI,” users had to
wait for a software update to get access to new input/output pairs. But
since Siri is a cloud application, Apple’s engineers can continuously
keep adding these hard-coded input/output pairs to it. Every time an
Apple engineer thinks of a clever response for Siri to give to a
particular bit of input, that engineer can insert the new pair into
Siri’s repertoire instantaneously, so that the very next instant every
one of the service’s millions of users will have access to it. Apple
engineers can also take a look at the kinds of queries that are popular
with Siri users at any given moment, and add canned responses based on
what’s trending.
In this way, we can expect Siri’s repertoire of clever
comebacks to grow in real-time through the collective effort of hundreds
of Apple employees and tens or hundreds of millions of users, until it
reaches the point where an adult user will be able to carry out a
multipart exchange with the bot that, for all intents and purposes,
looks like an intelligent conversation.
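A minimal sketch of the model Stokes describes - a server-side table of canned input/output pairs that engineers can extend at runtime, with every user seeing the new pair immediately - might look like this in Scala (the code and example pairs are purely illustrative, not Apple's implementation):

    import scala.collection.concurrent.TrieMap

    object CannedResponses {
      // pattern -> response; in a real service this would live in a shared,
      // cloud-hosted store rather than in a single process.
      private val pairs = TrieMap[String, String](
        "what is the meaning of life" -> "42, obviously.",
        "do you love me"              -> "I'm not capable of love, but I can set a reminder."
      )

      // An engineer adds a new pair; it becomes available to all users at once.
      def add(input: String, response: String): Unit =
        pairs.update(normalize(input), response)

      def reply(utterance: String): Option[String] =
        pairs.get(normalize(utterance))

      private def normalize(s: String): String =
        s.toLowerCase.replaceAll("[^a-z0-9 ]", "").trim
    }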
Meanwhile, the technology undergirding the software and iPhone
hardware will continue to improve. Now, this may not be the AI we had in
mind, but it also probably won’t be the final word in Artificial
Intelligence either. Other companies, such as IBM, are working to
develop other ‘cognitive computers’ as well.
And while the Singularity may indeed be far, far away, it’s still exciting to see how some forms of A.I. may emerge at least in part through cloud-sourcing.
While voice control has been part of Android since the dawn of time, Siri
came along and ruined the fun with its superior search and
understanding capabilities. However, an industrious team of folks from Dexetra.com, led by Narayan Babu, built a Siri-alike in just 8 hours during a hackathon.
Iris allows you to search on various subjects including conversions,
art, literature, history, and biology. You can ask it “What is a fish?”
and it will reply with a paragraph from Wikipedia focusing on our finned
friends.
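The “paragraph from Wikipedia” step can be approximated with a call to the public MediaWiki extracts API; the sketch below is an assumption about how such a lookup could work in Scala, not Dexetra's actual code, and a real app would parse the JSON instead of returning it raw:

    import java.net.URLEncoder
    import scala.io.Source

    object WikipediaLookup {
      // Fetches the plain-text intro section for a topic, e.g. "Fish".
      def introFor(topic: String): String = {
        val title = URLEncoder.encode(topic, "UTF-8")
        val url = "https://en.wikipedia.org/w/api.php" +
          "?action=query&prop=extracts&exintro=1&explaintext=1&format=json&titles=" + title
        val src = Source.fromURL(url, "UTF-8")
        try src.mkString finally src.close()  // raw JSON containing an "extract" field
      }
    }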
The app will soon be available from the Android Marketplace, but I tried it recently and found it a bit sparse but quite cool. It uses Android’s speech-to-text functions to understand basic questions, and Narayan and his buddies are improving the app all the time.
The coolest thing? They finished the app in eight hours.
When we started seeing results, everyone got excited and
started a high speed coding race. In no time, we added Voice input,
Text-to-speech, also a lot of heuristic humor into Iris. Not until late
evening we decided on the name “iris.”, which would be Siri in reverse.
And we also reverse engineered a crazy expansion – Intelligent Rival
Imitator of Siri. We were still in the fun mode, but when we started
using it the results were actually good, really good.
You can grab the early, early beta APK here
but I recommend waiting for the official version to arrive this week.
It just goes to show you that amazing things can pop up everywhere.
Ahh, Watson. Your performance on Jeopardy
let the world know that computers were about more than just storing and
processing data the way computers always have. Watson showed us all
that computers were capable of thinking in very human ways, which is
both an extremely exciting and mildly frightening prospect.
A research group at MIT
has been working on a project along the same lines — a computer that
can process information in a human-like manner and then apply what it’s
learned to a specific situation. In this case, the information was the
instruction manual for the classic PC game Civilization. After
reading the manual, the computer was ready to do battle with the game’s
AI. The result: 79% of the time, the computer was victorious.
This is an undeniably impressive development, but we’re clearly not
in any real danger until the computer decides to man up and play without reading the instructions like any real gamer
would. MIT tried that as well, and while a 46% success rate doesn’t
look all that good percentage-wise, it’s pretty darn amazing when you
remember this is a computer playing Civilization with no
orientation of any kind. I’ve got plenty of friends that couldn’t
compete with that, though they all insist it’s because the game was
boring and they hated it.
The ultimate goal of the project was to prove that computers were
capable of processing natural language the way we do — and actually
learning from it, not merely spitting out responses the way an
intelligent voice response (IVR) system does, for example. A system like
this could one day power something like a tricorder, diagnosing
symptoms based on a cavernous cache of medical data. Don’t worry,
doctors, it’s going to be a while before computers actually replace you.