Next week at the World Cup, a
paralyzed volunteer from the Association for Assistance to Disabled
Children will walk onto the field and open the tournament with a
ceremonial kick. This modern miracle is made possible by a robotic
exoskeleton that will move the user's limbs, taking commands directly
from his or her thoughts.
This demonstration is the debut of the Walk Again Project,
a consortium of more than 150 scientists and engineers from around the
globe who have come together to show off recent advances in the field of
brain-machine interfaces, or BMI. The paralyzed person inside the suit will be
wearing an electroencephalographic (EEG) headset that records brainwave
activity. A backpack computer will translate those electrical signals
into commands the exoskeleton can understand. As the robotic frame
moves, it also sends its own signals back to the body, restoring not
just the ability to walk, but the sensation as well.
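To make that pipeline concrete, here is a minimal sketch in Python of a decode-and-command loop of the kind described: read a window of EEG samples, reduce it to a feature, and map the feature to a locomotion command. This is an illustration only, not the project's software; the sampling rate, thresholds, and command set are all invented.

    import numpy as np

    COMMANDS = {0: "stand", 1: "walk", 2: "kick"}

    def band_power(window: np.ndarray, fs: int, lo: float, hi: float) -> float:
        """Average spectral power of the EEG window between lo and hi Hz."""
        freqs = np.fft.rfftfreq(window.shape[-1], d=1.0 / fs)
        power = np.abs(np.fft.rfft(window)) ** 2
        return float(power[..., (freqs >= lo) & (freqs <= hi)].mean())

    def decode(window: np.ndarray, fs: int = 256) -> str:
        """Toy decoder: map motor-band (8-30 Hz) power to a command."""
        p = band_power(window, fs, 8.0, 30.0)
        if p < 1.0:
            return COMMANDS[0]                   # rest: stand still
        return COMMANDS[1] if p < 5.0 else COMMANDS[2]

    # the backpack computer would run this continuously, streaming each
    # decoded command to the exoskeleton's controller
    print(decode(np.random.randn(256)))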
Just how well the wearer will walk and kick is uncertain. The project has been criticized by other neuroscientists as an exploitative spectacle that uses the disabled to promote research that may not be the best path
for restoring health to paralyzed patients. And just weeks before the
project is set to debut on television to hundreds of millions of fans,
it still hasn’t been tested outdoors and is awaiting final parts and
assembly. It's not even clear which of the eight people from the
study will be the one inside the suit.
The point of the project is not
to show finished research, however, or sell a particular technology.
The Walk Again Project is meant primarily to inspire. It's a
demonstration that we’re on the threshold of achieving science fiction:
technologies that will allow humans to truly step into the cyborg era.
It’s only taken a little over two centuries to get there.
The past
Scientists have been studying
the way electricity interacts with our biology since 1780, when Luigi
Galvani made the legs of a dead frog dance by zapping them with a spark,
but the modern history behind the technology that allows our brains to
talk directly to machines goes back to the 1950s and John Lilly. He
implanted several hundred electrodes into different parts of a monkey’s
brain and used these implants to apply shocks, causing different body
parts to move. A decade later in 1963, professor Jose Delgado of Yale
tested this theory again like a true Spaniard, stepping into the ring to
face a charging bull, which he stopped in its tracks with a zap to the brain.
In 1969, professor Eberhard Fetz was able to isolate and record the
firing of a single neuron onto a microelectrode he had implanted into
the brain of a monkey. Fetz learned that primates could actually tune
their brain activity to better interact with the implanted machine. He
rewarded them with banana pellets every time they triggered the
microelectrode, and the primates quickly improved in their ability to
activate this specific section of their brain. This was a critical
observation, demonstrating the brain’s unique plasticity, its ability to
create fresh pathways to fit a new language.
Today, BMI research has
advanced to not only record the neurons firing in primates’ brains, but
to understand what actions the firing of those neurons represents. "I
spend my life chasing the storms that emanate from the hundreds of
billions of cells that inhabit our brains," explained Miguel Nicolelis, PhD, one of the founders of the Center for Neuroengineering
at Duke University and the driving force behind the Walk Again Project.
"What we want to do is listen to these brain symphonies and try to
extract from them the messages they carry."
Nicolelis and his colleagues at
Duke were able to record brain activity and match it to actions. From
there they could translate that brain activity into instructions a
computer could understand. Beginning in the year 2000, Nicolelis and
his colleagues at Duke made a series of breakthroughs. In the most well
known, they implanted a monkey with an array of microelectrodes that
could record the firing of clusters of neurons in different parts of the
brain. The monkey stood on a treadmill and began to walk. On the other
side of the planet, a robot in Japan received the signal emanating from
the primate’s brain and began to walk.
Primates
in the Duke lab learned to control robotic arms using only their
thoughts. And like in the early experiments done by Fetz, the primates
showed a striking ability to improve the control of these new limbs.
"The brain is a remarkable instrument," says professor Craig Henriquez,
who helped to found the Duke lab. "It has the ability to rewire itself,
to create new connections. That’s what gives the BMI paradigm its power.
You are not limited just by what you can physically engineer, because
the brain evolves to better suit the interface."
The present
After his success with
primates, Nicolelis was eager to apply the advances in BMI to people.
But there were some big challenges in the transition from lab animals to
human patients, namely that many people weren’t willing to undergo
invasive brain surgery for the purposes of clinical research. "There is
an open question of whether you need to have implants to get really fine
grained control," says Henriquez. The Walk Again Project hopes to
answer that question, at least partially. While it is based on research
in animals that required surgery, it will be using only external EEG
headsets to gather brain activity.
The fact that these patients
were paralyzed presented another challenge. Unlike the lab monkeys, who
could move their own arms and observe how the robot arm moved in
response, these participants can’t move their legs, and many can no
longer recall the subconscious thought process that goes into putting
one foot in front of the other. The first step was
building up the pathways in the brain that would send mental commands
to the BMI to restore locomotion.
To train the patients in this
new way of thinking about movement, researchers turned to virtual
reality. Each subject was given an EEG headset and an Oculus Rift.
Inside the head-mounted display, the subjects saw a virtual avatar of
themselves from the waist down. When they thought about walking, the
avatar legs walked, and this helped the brain to build new connections
geared towards controlling the exoskeleton. "We also simulate the
stadium, and the roar of the crowd," says Regis Kopper, who runs Duke’s
VR lab. "To help them prepare for the stress of the big day."
Once
the VR training had established a baseline for sending commands to the
legs, there was a second hurdle. Much of walking happens at the level of
reflex, and without the peripheral nervous system that helps people
balance, coordinate, and adjust to the terrain, walking can be a very
challenging task. That’s why even the most advanced robots have trouble navigating stairs
or narrow hallways that would seem simple to humans. If the patients
were going to successfully walk or kick a ball, it wasn’t enough that
they be able to move the exoskeleton’s legs — they had to feel them as
well.
The breakthrough was a special
shirt with vibrating pads on its forearms. As the robot walked, the
contact of its heel and toe on the ground made corresponding sensations
occur along parts of the right and left arms. "The brain essentially
remapped one part of the body onto another," says Henriquez. "This
restored what we call proprioception, the spatial awareness humans need
for walking."
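The remapping itself is simple to sketch. Below is a toy Python version with an invented four-pad layout per arm; the project's actual pad placement and tuning are not public at this level of detail, so everything here is an assumption.

    # pad indices along each forearm: 0 = near the elbow ... 3 = near the wrist
    LEFT_ARM, RIGHT_ARM = "left", "right"

    def foot_to_arm(contacts: dict) -> dict:
        """Map foot-contact pressures (0..1) to per-pad vibration levels (0..1).

        contacts = {"left_heel": p, "left_toe": p, "right_heel": p, "right_toe": p}
        Heel strikes drive the elbow-end pads and toe-off drives the wrist-end
        pads, so each stride sweeps a sensation down the corresponding arm.
        """
        levels = {LEFT_ARM: [0.0] * 4, RIGHT_ARM: [0.0] * 4}
        for side in (LEFT_ARM, RIGHT_ARM):
            levels[side][0] = contacts[f"{side}_heel"]    # heel -> elbow end
            levels[side][3] = contacts[f"{side}_toe"]     # toe  -> wrist end
            # interpolate the middle pads so the sweep feels continuous
            levels[side][1] = 0.66 * levels[side][0] + 0.33 * levels[side][3]
            levels[side][2] = 0.33 * levels[side][0] + 0.66 * levels[side][3]
        return levels

    print(foot_to_arm({"left_heel": 1.0, "left_toe": 0.0,
                       "right_heel": 0.0, "right_toe": 0.8}))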
In recent weeks all eight of
the test subjects have successfully walked using the exoskeleton, with
one completing an astonishing 132 steps. The plan is to have the
volunteer who works best with the exoskeleton perform the opening kick.
But the success of the very public demonstration is still up in the air.
The suit hasn’t been completely finished and it has yet to be tested in
an outdoor environment. The group won't confirm who exactly will be
wearing the suit. Nicolelis, for his part, isn’t worried. Asked when he
thought the entire apparatus would be ready, he replied: "Thirty minutes
before."
The future
The Walk Again project may be
the most high-profile example of BMI, but there have been a string of
breakthrough applications in recent years. A patient at the University of Pittsburgh
achieved unprecedented levels of fine motor control with a robotic arm
controlled by brain activity. The Rehabilitation Institute of Chicago
introduced the world’s first mind controlled prosthetic leg. For now the use of advanced BMI technologies is largely confined to academic and medical research, but some projects, like DARPA’s Deka arm,
have received FDA approval and are beginning to move into the real
world. As it improves in capability and comes down in cost, BMI may
open the door to a world of human enhancement that would see people
merging with machines, not to restore lost capabilities, but to augment
their own abilities with cyborg power-ups.
"From the standpoint of
defense, we have a lot of good reasons to do it," says Alan Rudolph, a
former DARPA scientist and Walk Again Project member. Rudolph, for
example, worked on the Big Dog,
and says BMI may allow human pilots to control mechanical units with
their minds, giving them the ability to navigate uncertain or dynamic
terrain in a way that has so far been impossible while keeping soldiers
out of harm’s way. Our thoughts might control a robot on the surface of
Mars or a microsurgical bot navigating the inside of the human body.
There is a subculture of DIY biohackers and grinders
who are eager to begin adopting cyborg technology and who are willing,
at least in theory, to amputate functional limbs if it’s possible to
replace them with stronger, more functional, mechanical ones. "I know
what the limits of the human body are like," says Tim Sarver, a member
of the Pittsburgh biohacker collective Grindhouse Wetwares. "Once you’ve
seen the capabilities of a 5,000 psi hydraulic system, it’s no
comparison."
For now, this sci-fi vision
all starts with a single kick on the World Cup pitch, but our inevitable
cyborg future is indeed coming. A recent demonstration
at the University of Washington enabled one person’s thoughts to
control the movements of another person’s body — a brain-to-brain
interface — and it holds the key to BMI’s most promising potential
application. "In this futuristic scenario, voluntary electrical brain
waves, the biological alphabet that underlies human thinking, will
maneuver large and small robots, control airships from afar," wrote
Nicolelis. "And perhaps even allow for the sharing of thoughts and
sensations from one individual to another."
Small cubes with no exterior moving parts can propel themselves forward,
jump on top of each other, and snap together to form arbitrary shapes.
A prototype of a new modular robot, with its innards exposed and its flywheel — which gives it the ability to move independently — pulled out. Photo: M. Scott Brauer
In 2011, when an MIT senior named John Romanishin proposed a new design
for modular robots to his robotics professor, Daniela Rus, she said,
“That can’t be done.”
Two years later, Rus showed her colleague
Hod Lipson, a robotics researcher at Cornell University, a video of
prototype robots, based on Romanishin’s design, in action. “That can’t
be done,” Lipson said.
In November, Romanishin — now a research
scientist in MIT’s Computer Science and Artificial Intelligence
Laboratory (CSAIL) — Rus, and postdoc Kyle Gilpin will establish once
and for all that it can be done, when they present a paper describing
their new robots at the IEEE/RSJ International Conference on Intelligent
Robots and Systems.
Known as M-Blocks, the robots are cubes with
no external moving parts. Nonetheless, they’re able to climb over and
around one another, leap through the air, roll across the ground, and
even move while suspended upside down from metallic surfaces.
Inside
each M-Block is a flywheel that can reach speeds of 20,000 revolutions
per minute; when the flywheel is braked, it imparts its angular momentum
to the cube. On each edge of an M-Block, and on every face, are
cleverly arranged permanent magnets that allow any two cubes to attach
to each other.
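A back-of-envelope model explains why braking the flywheel can tip a cube over an edge: the wheel's angular momentum is dumped into the cube, and the cube flips if the resulting rotational energy can lift its center of mass over the pivot. The numbers below are assumed for illustration, not the M-Blocks' real specifications.

    import math

    m, s = 0.14, 0.05        # cube mass (kg) and side length (m), assumed
    I_fly = 2.0e-5           # flywheel moment of inertia (kg*m^2), assumed
    rpm = 20_000             # flywheel speed quoted in the article

    omega_fly = rpm * 2 * math.pi / 60     # rad/s
    L = I_fly * omega_fly                  # angular momentum dumped by the brake

    # solid cube pivoting about one edge: I = m*s^2/6 + m*(s/sqrt(2))^2
    I_edge = (2.0 / 3.0) * m * s ** 2
    omega_cube = L / I_edge                # crude: all of L goes into the pivot

    # the cube tips if its spin energy can lift the center of mass from s/2
    # up to s/sqrt(2), the high point of the roll over the edge
    energy = 0.5 * I_edge * omega_cube ** 2
    barrier = m * 9.81 * s * (math.sqrt(2) - 1) / 2

    print(f"L = {L:.4f} kg m^2/s ->", "flips" if energy > barrier else "stays put")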
“It’s one of these things that the
[modular-robotics] community has been trying to do for a long time,”
says Rus, a professor of electrical engineering and computer science and
director of CSAIL. “We just needed a creative insight and somebody who
was passionate enough to keep coming at it — despite being discouraged.”
Embodied abstraction
As Rus explains,
researchers studying reconfigurable robots have long used an abstraction
called the sliding-cube model. In this model, if two cubes are face to
face, one of them can slide up the side of the other and, without
changing orientation, slide across its top.
The sliding-cube
model simplifies the development of self-assembly algorithms, but the
robots that implement them tend to be much more complex devices. Rus’
group, for instance, previously developed a modular robot called the Molecule,
which consisted of two cubes connected by an angled bar and had 18
separate motors. “We were quite proud of it at the time,” Rus says.
According
to Gilpin, existing modular-robot systems are also “statically stable,”
meaning that “you can pause the motion at any point, and they’ll stay
where they are.” What enabled the MIT researchers to drastically
simplify their robots’ design was giving up on the principle of static
stability.
“There’s a point in time when the cube is essentially
flying through the air,” Gilpin says. “And you are depending on the
magnets to bring it into alignment when it lands. That’s something
that’s totally unique to this system.”
That’s also what made Rus
skeptical about Romanishin’s initial proposal. “I asked him to build a
prototype,” Rus says. “Then I said, ‘OK, maybe I was wrong.’”
Sticking the landing
To
compensate for its static instability, the researchers’ robot relies on
some ingenious engineering. On each edge of a cube are two cylindrical
magnets, mounted like rolling pins. When two cubes approach each other,
the magnets naturally rotate, so that north poles align with south, and
vice versa. Any face of any cube can thus attach to any face of any
other.
The cubes’ edges are also beveled, so when two cubes are
face to face, there’s a slight gap between their magnets. When one cube
begins to flip on top of another, the bevels, and thus the magnets,
touch. The connection between the cubes becomes much stronger, anchoring
the pivot. On each face of a cube are four more pairs of smaller
magnets, arranged symmetrically, which help snap a moving cube into
place when it lands on top of another.
As with any modular-robot
system, the hope is that the modules can be miniaturized: the ultimate
aim of most such research is hordes of swarming microbots that can
self-assemble, like the “liquid steel” androids in the movie “Terminator
II.” And the simplicity of the cubes’ design makes miniaturization
promising.
But the researchers believe that a more refined
version of their system could prove useful even at something like its
current scale. Armies of mobile cubes could temporarily repair bridges
or buildings during emergencies, or raise and reconfigure scaffolding
for building projects. They could assemble into different types of
furniture or heavy equipment as needed. And they could swarm into
environments hostile or inaccessible to humans, diagnose problems, and
reorganize themselves to provide solutions.
Strength in diversity
The
researchers also imagine that among the mobile cubes could be
special-purpose cubes, containing cameras, or lights, or battery packs,
or other equipment, which the mobile cubes could transport. “In the vast
majority of other modular systems, an individual module cannot move on
its own,” Gilpin says. “If you drop one of these along the way, or
something goes wrong, it can rejoin the group, no problem.”
“It’s
one of those things that you kick yourself for not thinking of,”
Cornell’s Lipson says. “It’s a low-tech solution to a problem that
people have been trying to solve with extraordinarily high-tech
approaches.”
“What they did that was very interesting is they
showed several modes of locomotion,” Lipson adds. “Not just one cube
flipping around, but multiple cubes working together, multiple cubes
moving other cubes — a lot of other modes of motion that really open the
door to many, many applications, much beyond what people usually
consider when they talk about self-assembly. They rarely think about
parts dragging other parts — this kind of cooperative group behavior.”
In
ongoing work, the MIT researchers are building an army of 100 cubes,
each of which can move in any direction, and designing algorithms to
guide them. “We want hundreds of cubes, scattered randomly across the
floor, to be able to identify each other, coalesce, and autonomously
transform into a chair, or a ladder, or a desk, on demand,” Romanishin
says.
A new robot unveiled this
week highlights the psychological and technical challenges of designing
a humanoid that people actually want to have around.
Like all little boys, Roboy likes to show off.
He can say a few words. He can shake hands and wave. He is
learning to ride a tricycle. And - every parent's pride and joy - he has
a functioning musculoskeletal anatomy.
But when Roboy is unveiled this Saturday at the Robots on
Tour event in Zurich, he will be hoping to charm the crowd as well as
wow them with his skills.
"One of the goals is for Roboy to be a messenger of a new
generation of robots that will interact with humans in a friendly way,"
says Rolf Pfeifer from the University of Zurich - Roboy's
parent-in-chief.
As manufacturers get ready to market robots for the home, it has become essential for them to overcome the public's suspicion of the machines. But designing a robot that is fun to be with - as well as useful and safe - is quite difficult.
-----
The uncanny valley: three theories
Researchers have speculated about why we might feel uneasy in the presence of realistic robots.
They remind us of corpses or zombies
They look unwell
They do not look and behave as expected
-----
Roboy's main technical innovation
is a tendon-driven design that mimics the human muscular system.
Instead of whirring motors in the joints like most robots, he has around
70 imitation muscles, each containing motors that wind interconnecting
tendons. Consequently, his movements will be much smoother and less
robotic.
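To see why tendons behave differently from gears, consider this illustrative Python model of a single tendon-driven joint with a series-elastic element. The dimensions and stiffness are made up, and Roboy's real controllers are more elaborate; the point is that an external load makes the joint lag the motor slightly, the way muscle does.

    import math

    SPOOL_RADIUS = 0.006   # m, radius of the motor's winding spool, assumed
    JOINT_RADIUS = 0.02    # m, moment arm of the tendon at the joint, assumed
    STIFFNESS = 800.0      # N/m, series-elastic element, assumed

    def joint_angle(motor_turns: float, load_torque: float = 0.0) -> float:
        """Joint angle (radians) for a given number of motor spool turns.

        The tendon shortens by the length wound onto the spool; an external
        load stretches the elastic element, so the joint lags the motor.
        """
        wound = motor_turns * 2 * math.pi * SPOOL_RADIUS     # tendon taken up, m
        stretch = load_torque / (JOINT_RADIUS * STIFFNESS)   # spring give, m
        return (wound - stretch) / JOINT_RADIUS

    print(f"no load:   {math.degrees(joint_angle(1.0)):.1f} deg")
    print(f"with load: {math.degrees(joint_angle(1.0, load_torque=0.5)):.1f} deg")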
But although the technical team was inspired by human beings,
it chose not to create a robot that actually looked like one. Instead
of a skin-like covering, Roboy has a shiny white casing that proudly
reveals his electronic innards.
Behind this design is a long-standing hypothesis about how people feel in the presence of robots.
In 1970, the Japanese roboticist Masahiro Mori speculated that
the more lifelike robots become, the more human beings feel familiarity
and empathy with them - but that a robot too similar to a human
provokes feelings of revulsion.
Mori called this sudden dip in human beings' comfort levels the "uncanny valley".
"There are quite a number of studies that suggest that as
long as people can clearly see that the robot is a machine, even if they
project their feelings into it, then they feel comfortable," says
Pfeifer.
Roboy was styled as a boy - albeit quite a brawny one - to
lower his perceived threat levels to humans. His winsome smile - on a
face picked by Facebook users from a selection - hasn't hurt the team in
their search for corporate sponsorship, either.
But the image problem of robots is not just about the way they look. An EU-wide survey last year found that although most Europeans have a positive view of robots, they feel the machines should know their place.
Eighty-eight per cent of respondents agreed with the
statement that robots are "necessary as they can do jobs that are too
hard or dangerous for people", such as space exploration, warfare and
manufacturing. But 60% thought that robots had no place in the care of
children, elderly people and those with disabilities.
The computer scientist and psychologist Noel Sharkey has,
however, found 14 companies in Japan and South Korea that are in the
process of developing childcare robots.
South Korea has already tried out robot prison guards, and three years ago launched a plan to deploy more than 8,000 robot English-language teachers in kindergartens.
-----
A robot buddy?
In the film Robot and Frank, set in the near-future, ageing
burglar Frank is provided with a robot companion to be a helper and
nurse when he develops dementia
The story plays out rather like a futuristic buddy movie -
although he is initially hostile to the robot, Frank is soon programming
him to help him in his schemes, which are not entirely legal
-----
Cute, toy-like robots are already available for the home.
Take the Hello Kitty robot, which has been on the market for
several years and is still available for around $3,000 (£2,000).
Although it can't walk, it can move its head and arms. It also has
facial recognition software that allows it to call children by name and
engage them in rudimentary conversation.
A tongue-in-cheek customer review of the catbot on Amazon reveals a certain amount of scepticism.
"Hello Kitty Robo me was babysit," reads the review.
"Love me hello kitty robo, thank robo for make me talk
good... Use lots battery though, also only for rich baby, not for no
money people."
-----
Asimo
At just 130cm high, Honda's Asimo jogs around on bended knee
like a mechanical version of Dobby, the house elf from Harry Potter. He
can run, climb up and down stairs and pour a bottle of liquid into a cup.
Since 1986, Honda have been working on humanoids with the ultimate aim of providing an aid to those with mobility impairments.
-----
The product description says the
robot is "a perfect companion for times when your child needs a little
extra comfort and friendship" and "will keep your child happily
occupied". In other words, it's something to turn on to divert your
infant for a few moments, but it is not intended as a replacement
child-minder.
An ethical survey
of "robot nannies" by Noel and Amanda Sharkey suggests that as robots
become more sophisticated, parents may be increasingly tempted to hand
them too much responsibility.
The survey also raises the question of what long-term effects
will result from children forming an emotional bond with a lump of
plastic. They cite one case study in which a 10-year-old girl, who had
been given a robot doll for several weeks, reported that "now that she's
played with me a lot more, she really knows me".
Noel Sharkey says that he loves the idea of children playing
with robots but has serious concerns about them being brought up by
them. "[Imagine] the kind of attachment disorders they would develop,"
he says.
But despite their limitations, humanoid robots might yet
prove invaluable in narrow, fixed roles in hospitals, schools and homes.
"Something really interesting happens between some kids with
autism and robots that doesn't happen between those children and other
humans," says Maja J Mataric, a roboticist at the University of Southern
California. She's found that such children respond positively to
humanoids and she is trying to work out how they can be used
therapeutically.
In their study, the Sharkeys make the observation that robots have one
advantage over adult humans. They can have physical contact with
children - something now frowned upon or forbidden in schools.
"These restrictions would not apply to a
robot," they write, "because it could not be accused of having sexual
intent and so there are no particular ethical concerns. The only concern
would be the child's safety - for example, not being crushed by a
hugging robot."
When it comes to robots, there is such a thing as too much love.
"If you were having a mechanical assistant in the home that
was powerful enough to be useful, it would necessarily be powerful
enough to be dangerous," says Peter Robinson of Cambridge University.
"Therefore it'd better have really good understanding and
communication."
His team are developing robots with sophisticated facial
recognition. These machines won't just be able to tell Bill from Brenda;
they will also be able to infer from Bill's expression whether he is
feeling confused, tired, playful or in physical agony.
Roboy's muscles and tendons may actually make him a safer
robot to have around. His limbs have more elasticity than a conventional
robot's, allowing for movement even when he has a power failure.
Rolf Pfeifer has a favourite question which he puts to those sceptical about using robots in caring situations.
"If you couldn't walk upstairs any more, would you want a person to carry you or would you want to take the elevator?"
Most people opt for the elevator, which is - if you think about it - a kind of robot.
Pfeifer believes robots will enter our homes. What is not yet
clear is whether the future lies in humanoid servants with a wide range
of limited skills or in intelligent, responsive appliances designed to
do specific tasks, he says.
"I think the market will ultimately determine what kind of robot we're going to have."
If you’re terrified of the
possibility that humanity will be dismembered by an insectoid master
race, equipped with robotic exoskeletons (or would that be
exo-exoskeletons?), look away now. Researchers at the University of
Tokyo have strapped a moth into a robotic exoskeleton, with the moth
successfully controlling the robot to reach a specific location inside a
wind tunnel.
In all, fourteen male silkmoths were tested, and
they all showed a scary aptitude for steering a robot. In the tests, the
moths had to guide the robot towards a source of female sex pheromone.
The researchers even introduced a turning bias — where one of the
robot’s motors is stronger than the other, causing it to veer to one
side — and yet the moths still reached the target.
As you can see
in the photo above, the actual moth-robot setup is one of the most
disturbing and/or awesome things you’ll ever see. In essence, the
polystyrene (styrofoam) ball acts like a trackball mouse. As the
silkmoth walks towards the female pheromone, the ball rolls around.
Sensors detect these movements and fire off signals to the robot’s drive
motors. At this point you should watch the video below — and also not
think too much about what happens to the moth when it’s time to remove
the glued-on stick from its back.
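The trackball-to-motors mapping is simple enough to sketch. The Python function below is an illustration with invented scaling, not the experiment's actual code; it shows how a turning bias favors one motor until the moth steers against it.

    def drive_from_ball(forward: float, turn: float, bias: float = 1.0):
        """Convert measured ball motion into differential motor commands.

        forward: ball rotation as the moth walks toward the pheromone source
        turn:    ball yaw rate (positive = moth turning left)
        bias:    > 1.0 strengthens the left motor, mimicking the deliberate
                 asymmetry the researchers introduced
        """
        left = (forward - turn) * bias
        right = forward + turn
        return left, right

    # with the bias alone the robot veers right; once the moth turns left
    # (turn > 0), the commands even out, just as in the experiment
    print(drive_from_ball(forward=0.1, turn=0.0, bias=1.2))
    print(drive_from_ball(forward=0.1, turn=0.02, bias=1.2))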
Fortunately, the Japanese
researchers aren’t actually trying to construct a moth master race: In
reality, it’s all about the moth’s antennae and sensory-motor system.
The researchers are trying to improve the performance of autonomous
robots that are tasked with tracking the source of chemical leaks and
spills. “Most chemical sensors, such as semiconductor sensors, have a
slow recovery time and are not able to detect the temporal dynamics of
odours as insects do,” says Noriyasu Ando, the lead author of the
research. “Our results will be an important indication for the selection
of sensors and models when we apply the insect sensory-motor system to
artificial systems.”
Of course, another possibility is that we simply keep the moths. After all,
why should we spend time and money on an artificial system when mother
nature, as always, has already done the hard work for us? In much the
same way that miners used canaries and border police use sniffer dogs,
why shouldn’t robots be controlled by insects? The silkmoth is graced
with perhaps the most sensitive olfactory system in the world. For now
it might only be sensitive to not-so-useful scents like the female sex
pheromone, but who’s to say that genetic engineering won’t allow for silkmoths that can sniff out bombs or drugs or chemical spills?
Who
nose: Maybe genetically modified insects with robotic exoskeletons are
merely an intermediary step towards real nanobots that fly around,
fixing, cleaning, and constructing our environment.
(Washington, DC) – Governments should pre-emptively ban fully autonomous
weapons because of the danger they pose to civilians in armed conflict,
Human Rights Watch said in a report released today. These future
weapons, sometimes called “killer robots,” would be able to choose and
fire on targets without human intervention.
The United Kingdom’s Taranis
combat aircraft, whose prototype was unveiled in 2010, is designed to
strike distant targets, “even in another continent.” While the Ministry
of Defence has stated that humans will remain in the loop, the Taranis
exemplifies the move toward increased autonomy.
The 50-page report, “Losing Humanity: The Case Against Killer Robots,”
outlines concerns about these fully autonomous weapons, which would
inherently lack human qualities that provide legal and non-legal checks
on the killing of civilians. In addition, the obstacles to holding
anyone accountable for harm caused by the weapons would weaken the law’s
power to deter future violations.
“Giving machines the power to decide who lives and dies on the battlefield would take technology too far,” said Steve Goose,
Arms Division director at Human Rights Watch. “Human control of robotic
warfare is essential to minimizing civilian deaths and injuries.”
The South Korean SGR-1 sentry
robot, a precursor to a fully autonomous weapon, can detect people in
the Demilitarized Zone and, if a human grants the command, fire its
weapons. The robot is shown here during a test with a surrendering enemy
soldier.
“Losing Humanity” is the first major publication about fully
autonomous weapons by a nongovernmental organization and is based on
extensive research into the law, technology, and ethics of these
proposed weapons. It is jointly published by Human Rights Watch and the
Harvard Law School International Human Rights Clinic.
Human Rights Watch and the International Human Rights Clinic called for
an international treaty that would absolutely prohibit the development,
production, and use of fully autonomous weapons. They also called on
individual nations to pass laws and adopt policies as important measures
to prevent development, production, and use of such weapons at the
domestic level.
Fully autonomous weapons do not yet exist, and major powers, including
the United States, have not made a decision to deploy them. But
high-tech militaries are developing or have already deployed precursors
that illustrate the push toward greater autonomy for machines on the
battlefield. The United States is a leader in this technological
development. Several other countries – including China, Germany, Israel,
South Korea, Russia, and the United Kingdom – have also been involved.
Many experts predict that full autonomy for weapons could be achieved in
20 to 30 years, and some think even sooner.
“It is essential to stop the development of killer robots before they
show up in national arsenals,” Goose said. “As countries become more
invested in this technology, it will become harder to persuade them to
give it up.”
Fully autonomous weapons could not meet the requirements of
international humanitarian law, Human Rights Watch and the Harvard
clinic said. They would be unable to distinguish adequately between
soldiers and civilians on the battlefield or apply the human judgment
necessary to evaluate the proportionality of an attack – whether
civilian harm outweighs military advantage.
These robots would also undermine non-legal checks on the killing of
civilians. Fully autonomous weapons could not show human compassion for
their victims, and autocrats could abuse them by directing them against
their own people. While replacing human troops with machines could save
military lives, it could also make going to war easier, which would
shift the burden of armed conflict onto civilians.
Finally, the use of fully autonomous weapons would create an
accountability gap. Trying to hold the commander, programmer, or
manufacturer legally responsible for a robot’s actions presents
significant challenges. The lack of accountability would undercut the
ability to deter violations of international law and to provide victims
meaningful retributive justice.
While most militaries maintain that for the immediate future humans
will retain some oversight over the actions of weaponized robots, the
effectiveness of that oversight is questionable, Human Rights Watch and
the Harvard clinic said. Moreover, military statements have left the
door open to full autonomy in the future.
“Action is needed now, before killer robots cross the line from science fiction to feasibility,” Goose said.
Soon you can get your hands on the Mobot modular
robot for a very reasonable $270 a module (pre-orders
available now). A number of connection plates and
attachments will also be available, and I
guess you can 3D print your own stuff.
Mobot by Barobo.com
I like the gripper that is powered and controlled by the
rotating faceplate. I am sure the same concept can be
used to 3D print some cool things in the future.
A 3D-printed connector would be an awesome thing and definitely
worth paying for.
In general, it seems to be a very competent modular
robotics system. It uses a snap together connector,
making it simple and fast to use, but maybe not as
strong as a system that screws together.
There is a Graphical User Interface RobotController,
and you can program it with the C/C++ interpreter Ch
so everyone from beginner to hard-core hacker should
be able to do some really cool stuff.
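Barobo's own examples are written in Ch, so purely to give a flavor of module programming, here is a hypothetical Python-style sketch; the Mobot class and its methods are stand-ins and do not correspond to Barobo's actual API.

    class Mobot:
        """Hypothetical stand-in for one module with rotating faceplates."""
        def __init__(self, name: str):
            self.name = name
            self.angles = [0.0] * 4

        def move_joint(self, joint: int, degrees: float) -> None:
            self.angles[joint] += degrees
            print(f"{self.name}: joint {joint} -> {self.angles[joint]:.0f} deg")

    # an inchworm-style gait composed from alternating joint motions
    bot = Mobot("module-1")
    for _ in range(3):
        bot.move_joint(1, 45)     # lift the front faceplate
        bot.move_joint(2, 45)     # drag the rear forward
        bot.move_joint(1, -45)    # plant the front again
        bot.move_joint(2, -45)    # straighten out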
Insect printable robot. Photo: Jason Dorfman, CSAIL/MIT
Printers can make mugs, chocolate and even blood vessels. Now, MIT scientists want to add robo-assistants to the list of printable goodies.
Today, MIT announced a new project, “An Expedition in Computing
Printable Programmable Machines,” that aims to give everyone a chance to
have his or her own robot.
Need help peering into that unreasonably hard-to-reach cabinet, or
wiping down your grimy 15th-story windows? Walk on over to robo-Kinko’s
to print, and within 24 hours you could have a fully programmed working origami bot doing your dirty work.
“No system exists today that will take, as specification, your
functional needs and will produce a machine capable of fulfilling that
need,” MIT robotics engineer and project manager Daniela Rus said.
Unfortunately, the very earliest you’d be able to get your hands on
an almost-instant robot might be 2017. The MIT scientists, along with
collaborators at Harvard University and the University of Pennsylvania,
received a $10 million grant from the National Science Foundation for
the 5-year project. Right now, it’s at very early stages of development.
So far, the team has prototyped two mechanical helpers: an
insect-like robot and a gripper. The 6-legged tick-like printable robot
could be used to check your basement for gas leaks or to play with your
cat, Rus says. And the gripper claw, which picks up objects, might be
helpful in manufacturing, or for people with disabilities, she says.
Printable gripper. Photo: Jason Dorfman, CSAIL/MIT
The two prototypes cost about $100 and took about 70 minutes to
build. The real cost to customers will depend on the robot’s
specifications, its capabilities and the types of parts that are
required for it to work.
The researchers want to create a one-size-fits-most platform to
circumvent the high costs and special hardware and software often
associated with robots. If their project works out, you could go to a
local robo-printer, pick a design from a catalog and customize a robot
according to your needs. Perhaps down the line you could even order-in
your designer bot through an app.
Their approach to machine building could “democratize access to
robots,” Rus said. She envisions producing devices that could detect
toxic chemicals, aid science education in schools, and help around the
house.
Although bringing robots to the masses sounds like a great idea (a
sniffing bot to find lost socks would come in handy), there are still
several potential roadblocks to consider — for example, how users,
especially novice ones, will interact with the printable robots.
“Maybe this novice user will issue a command that will break the
device, and we would like to develop programming environments that have
the capability of catching these bad commands,” Rus said.
As it stands now, a robot would come pre-programmed to perform a set
of tasks, but if a user wanted more advanced actions, he or she could
build up those actions using the bot’s basic capabilities. That advanced
set of commands could be programmed in a computer and beamed wirelessly
to the robot. And as voice parsing systems get better, Rus thinks you
might be able to simply tell your robot to do your bidding.
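As a sketch of that idea, here is a tiny Python command layer that checks a user's composed program against the robot's advertised primitives before anything is executed, the kind of guard Rus describes for catching bad commands. The primitives and limits are invented for illustration.

    PRIMITIVES = {"walk", "turn", "grip", "release"}
    LIMITS = {"walk": 2.0, "turn": 180.0, "grip": 1.0}   # max safe argument

    def validate(step: tuple) -> None:
        """Raise ValueError for commands outside the robot's capabilities."""
        name, arg = step
        if name not in PRIMITIVES:
            raise ValueError(f"unknown primitive: {name}")
        if name in LIMITS and not 0 <= arg <= LIMITS[name]:
            raise ValueError(f"{name}({arg}) exceeds safe range")

    def run(program: list) -> None:
        """Check the whole program before sending anything to the robot."""
        for step in program:
            validate(step)                      # catch bad commands up front...
        for name, arg in program:
            print(f"executing {name}({arg})")   # ...then beam them over

    # a composed 'fetch' behavior built from the basic capabilities
    run([("walk", 1.5), ("grip", 0.8), ("turn", 180.0), ("walk", 1.5),
         ("release", 0.0)])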
Durability is another issue. Would these robots be single-use only?
If so, trekking to robo-Kinko’s every time you needed a bot to look
behind the fridge might get old. These are all considerations the
scientists will be grappling with in the lab. They’ll have at least five
years to tease out some solutions.
In the meantime, it’s worth noting that other groups are also building robots using printers. German engineers printed a white robotic spider last year. The arachnoid carried a camera and equipment to assess chemical spills.
And at Drexel University, paleontologist Kenneth Lacovara and mechanical engineer James Tangorra are trying to create a robotic dinosaur from dino-bone replicas.
The 3-D-printed bones are scaled versions of laser-scanned fossils. By
the end of 2012, Lacovara and Tangorra hope to have a fully mobile
robotic dinosaur, which they want to use to study how dinosaurs, like
large sauropods, moved.
Lacovara thinks the MIT project is an exciting and promising one:
“If it’s a plug-and-play system, then it’s feasible,” he said. But
“obviously, it [also] depends on the complexity of the robot.” He’s seen
complex machines with working gears printed in one piece, he says.
Right now, the MIT researchers are developing an API that would
facilitate custom robot design and writing algorithms for the assembly
process and operations.
If their project works out, we could all have a bot to call our own in a few years. Who said print was dead?
The Kilobots are an inexpensive system for testing synchronized and collaborative behavior in a very large swarm of robots. Photo courtesy of Michael Rubenstein
The Kilobots are coming. Computer scientists and engineers at Harvard University have developed and licensed technology that will make it easy to test collective algorithms on hundreds, or even thousands, of tiny robots.
Called Kilobots, the quarter-sized bug-like devices scuttle around on three toothpick-like legs, interacting and coordinating their own behavior as a team. A June 2011 Harvard Technical Report demonstrated a collective of 25 machines implementing swarming behaviors such as foraging, formation control, and synchronization.
Once up and running, the machines are fully autonomous, meaning there is no need for a human to control their actions.
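To get a feel for one of those behaviors, here is a toy firefly-style synchronization loop in Python: each simulated bot flashes on a timer, and seeing neighbors flash nudges its own clock forward until the whole group blinks together. It illustrates the algorithm family, not the Kilobots' shipped firmware; all parameters are invented.

    import random

    N, PERIOD, KICK, STEPS = 25, 100, 1, 6000
    phases = [random.randrange(PERIOD) for _ in range(N)]
    best = 0

    for t in range(STEPS):
        flashed = set()
        for i in range(N):
            phases[i] += 1
            if phases[i] >= PERIOD:
                phases[i] = 0                    # timer expired: flash and reset
                flashed.add(i)
        if flashed:
            kick = KICK * len(flashed)           # bigger groups pull harder
            for i in range(N):
                if i not in flashed:
                    phases[i] += kick
                    if phases[i] >= PERIOD:
                        phases[i] = 0            # pushed past threshold:
                        flashed.add(i)           # flash now and join the group
        if t >= STEPS - PERIOD:                  # measure during the last cycle
            best = max(best, len(flashed))

    print(f"largest group flashing together in the final cycle: {best} of {N}")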
The communicative critters were created by members of the Self-Organizing Systems Research Group led by Radhika Nagpal, the Thomas D. Cabot Associate Professor of Computer Science at the Harvard School of Engineering and Applied Sciences (SEAS) and a Core Faculty Member at the Wyss Institute for Biologically Inspired Engineering at Harvard. Her team also includes Michael Rubenstein, a postdoctoral fellow at SEAS; and Christian Ahler, a fellow of SEAS and the Wyss Institute.
Thanks to a technology licensing deal with the K-Team Corporation, a Swiss manufacturer of high-quality mobile robots, researchers and robotics enthusiasts alike can now take command of their own swarm.
One key to achieving high-value applications for multi-robot systems in the future is the development of sophisticated algorithms that can coordinate the actions of tens to thousands of robots.
"The Kilobot will provide researchers with an important new tool for understanding how to design and build large, distributed, functional systems," says Michael Mitzenmacher, Area Dean for Computer Science at SEAS.
The name "Kilobot" does not refer to anything nefarious; rather, it describes the researchers' goal of quickly and inexpensively creating a collective of a thousand bots.
Inspired by nature, such swarms resemble social insects, such as ants and bees, that can efficiently search for and find food sources in large, complex environments, collectively transport large objects, and coordinate the building of nests and other structures.
For reasons of time, cost, and simplicity, the algorithms being developed today in research labs are validated only in computer simulation or on a few dozen robots at most.
In contrast, the design by Nagpal's team allows a single user to easily oversee the operation of a large Kilobot collective, including programming, powering on, and charging all robots, all of which would be difficult (if not impossible) using existing robotic systems.
So, what can you do with a thousand tiny little bots?
Robot swarms might one day tunnel through rubble to find survivors, monitor the environment and remove contaminants, and self-assemble to form support structures in collapsed buildings.
They could also be deployed to autonomously perform construction in dangerous environments, to assist with pollination of crops, or to conduct search and rescue operations.
For now, the Kilobots are designed to provide scientists with a physical testbed for advancing the understanding of collective behavior and realizing its potential to deliver solutions for a wide range of challenges.
-----
Personal comment:
This reminds me of a project I worked on back in 2007, called "Variable Environment", which involved swarm-based robots called "e-pucks" developed at EPFL. The e-pucks reacted autonomously to human activity around them.
The code itself is very simple (albeit not very good; it was written after I got back from clubbing at @JWZ’s club [DNA Lounge]). There is something about a lack of sleep that makes Perl code and regexes seem like a good idea. If, despite the previous warnings, you still want to look at the code, https://github.com/holdenk/holdensmagicalunicorn is the place to go. It works by doing a GitHub search for all the README files in markdown format and then running a limited spell checker on them. Documents with a known misspelled word are flagged and output to a file. Thanks to the wonderful GitHub API, the next steps are easy. The bot forks the repo, clones it locally, performs the spelling correction, commits, pushes, and submits a pull request.
The spelling correction is based on Pod::Spell::CommonMistakes, which maps a very restricted set of known misspellings to their corrections.
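That correction step is easy to sketch (shown here in Python rather than Perl); the word map below is a tiny stand-in for the module's real dictionary, and a README is only worth forking and patching when at least one word matched.

    import re

    COMMON_MISTAKES = {          # tiny stand-in dictionary
        "recieve": "receive",
        "teh": "the",
        "seperate": "separate",
    }

    def correct_readme(text: str):
        """Return (corrected_text, n_fixes) for one README."""
        fixes = 0
        def swap(match):
            nonlocal fixes
            fixes += 1
            return COMMON_MISTAKES[match.group(0).lower()]
        pattern = re.compile(
            r"\b(" + "|".join(COMMON_MISTAKES) + r")\b", re.IGNORECASE)
        return pattern.sub(swap, text), fixes

    fixed, n = correct_readme("You will recieve teh files in a seperate zip.")
    print(n, "fixes ->", fixed)   # 3 fixes -> corrected sentence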
Writing a “future directions” section always seems like such a cliche, but here it is anyway. The code as it stands is really simple: for example, it only handles one repo of a given name, the dictionary is small, etc. The next version should probably also try to submit corrections only against the canonical repo. Future plans include extending the dictionary. In the longer term I think it would be awesome to attempt to detect really simple bugs in actual code (things like memcpy(dest,0,0)).
Comments, suggestions, and patches always appreciated. - holdenkarau (although I’m going to be AFK at Burning Man for a while; you can find me @ 6:30 & D)