False-color micrograph of Caenorhabditis elegans (Science Photo Library/Corbis)
If the brain is a collection of electrical signals, then, if you could catalog all those signals digitally, you might be able to upload your brain into a computer, thus achieving digital immortality.
While the plausibility—and ethics—of this upload for humans can be debated, some people are forging ahead in the field of whole-brain emulation. There are massive efforts to map the connectome—all the connections in
the brain—and to understand how we think. Simulating brains could lead
us to better robots and artificial intelligence, but the first steps
need to be simple.
So, one group of scientists started with the roundworm Caenorhabditis elegans, a critter whose genes and simple nervous system we know intimately.
The OpenWorm project
has mapped the connections between the worm’s 302 neurons and simulated
them in software. (The project’s ultimate goal is to completely
simulate C. elegans as a virtual organism.) Recently, they put that software program in a simple Lego robot.
The worm’s body parts and neural networks now have
LegoBot equivalents: The worm’s nose neurons were replaced by a sonar
sensor on the robot. The motor neurons running down both sides of the
worm now correspond to motors on the left and right of the robot, explains Lucy Black for I Programmer. She writes:
---
It is claimed that the robot behaved in ways that are similar to observed C. elegans. Stimulation
of the nose stopped forward motion. Touching the anterior and posterior
touch sensors made the robot move forward and back accordingly.
Stimulating the food sensor made the robot move forward.
---
Timothy Busbice, a founder of the OpenWorm project, posted a video of the Lego-Worm-Bot stopping and backing up.
The simulation isn’t exact—for example, the program simplifies the thresholds needed to trigger a "neuron" firing. But the behavior is impressive
considering that no instructions were programmed into this robot. All it
has is a network of connections mimicking those in the brain of a
worm.
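To make the idea concrete, here is a minimal sketch of how a fixed table of connections plus a simple firing threshold can turn sensor readings into motor commands. The six-neuron wiring, weights, and threshold below are invented for illustration; this is not the OpenWorm connectome or its code.

```python
import numpy as np

# Toy network: 2 sensor, 2 inter, 2 motor neurons. All weights are made up.
N = 6
SENSOR, MOTOR_L, MOTOR_R = [0, 1], 4, 5

# connectome[i, j] = synaptic weight from neuron i to neuron j
connectome = np.zeros((N, N))
connectome[0, 2] = 1.0                  # "nose" sonar -> interneuron
connectome[1, 3] = 1.0                  # touch sensor -> interneuron
connectome[2, MOTOR_L] = -1.5           # nose activity inhibits forward drive
connectome[2, MOTOR_R] = -1.5
connectome[3, MOTOR_L] = 1.0            # touch activity drives the motors
connectome[3, MOTOR_R] = 1.0

THRESHOLD = 0.5                         # simplified firing threshold

def step(activity, sensor_input):
    """One update: inject sensor input, propagate through the connectome,
    and fire any neuron whose summed input crosses the threshold."""
    activity = activity.copy()
    activity[SENSOR] = sensor_input
    summed = activity @ connectome
    return (summed > THRESHOLD).astype(float), summed[[MOTOR_L, MOTOR_R]]

activity = np.zeros(N)
for t, (sonar, touch) in enumerate([(0, 1), (0, 1), (1, 1), (1, 1), (1, 1)]):
    activity, motors = step(activity, [sonar, touch])
    print(f"t={t} sonar={sonar} touch={touch} motor drive L/R = {motors}")
```

After a few propagation steps, activity from the "nose" input reaches the motor neurons and pushes the forward drive negative, which is the stop-on-nose-stimulation behavior described above.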
Of course, the goal of uploading our brains assumes that we aren’t already living in a computer simulation.
Hear out the logic: Technologically advanced civilizations will
eventually make simulations that are indistinguishable from reality. If
that can happen, odds are it has. And if it has, there are probably
billions of simulations making their own simulations. Work out that
math, and "the odds are nearly infinity to one that we are all living in
a computer simulation," writes Ed Grabianowski for io9.
Aiming to do for Machine Learning what MySQL did for database servers, U.S. and UK-based PredictionIO has raised $2.5 million in seed funding from a raft of investors including Azure Capital, QuestVP, CrunchFund (of which TechCrunch founder Mike Arrington is a Partner), Stanford University’s StartX Fund, France-based Kima Ventures,
IronFire, Sood Ventures and XG Ventures. The additional capital will be
used to further develop its open source Machine Learning server, which
significantly lowers the barriers for developers to build more
intelligent products, such as recommendation or prediction engines,
without having to reinvent the wheel.
Being an open source company — after pivoting from offering a “user behavior prediction-as-a-service” under its old TappingStone product name — PredictionIO
plans to generate revenue in the same way MySQL and other open source
products do. “We will offer an Enterprise support license and, probably,
an enterprise edition with more advanced features,” co-founder and CEO
Simon Chan tells me.
The problem PredictionIO is setting out to solve is that building Machine Learning into products is expensive and time-consuming — and in some instances is only really within the reach of major, heavily funded tech companies, such as Google or Amazon, which can afford a large team of PhDs/data scientists. By utilising the startup’s open
source Machine Learning server, startups or larger enterprises no longer
need to start from scratch, while also retaining control over the
source code and the way in which PredictionIO integrates with their
existing wares.
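In practice the integration looks roughly like the sketch below: an application streams user events to an event server and then queries a deployed engine for recommendations. The URLs, port numbers, field names, and access key here are placeholders chosen for illustration, not necessarily PredictionIO's exact API — consult its documentation for the real endpoints.

```python
import requests

# Hypothetical endpoints and field names, for illustration only.
EVENT_SERVER = "http://localhost:7070/events.json"
ENGINE_SERVER = "http://localhost:8000/queries.json"
ACCESS_KEY = "YOUR_APP_ACCESS_KEY"   # placeholder

# 1. Record a user action so the engine has behaviour data to learn from.
requests.post(
    EVENT_SERVER,
    params={"accessKey": ACCESS_KEY},
    json={"event": "view", "entityType": "user", "entityId": "u42",
          "targetEntityType": "item", "targetEntityId": "item_17"},
)

# 2. Ask the deployed recommendation engine for suggestions for that user.
resp = requests.post(ENGINE_SERVER, json={"user": "u42", "num": 4})
print(resp.json())   # e.g. a ranked list of item ids with scores
```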
In fact, the degree of flexibility and reassurance an open source
product offers is the very reason why PredictionIO pivoted away from a
SaaS model and chose to open up its codebase. It did so within a couple
of months of launching its original TappingStone product. Fail fast, as
they say.
“We changed from TappingStone (Machine Learning as a Service) to
PredictionIO (open source server) in the first 2 months once we built
the first prototype,” says Chan. “As developers ourselves, we realise
that Machine Learning is useful only if it’s customizable to each unique
application. Therefore, we decided to open source the whole product.”
The pivot appears to be working, too, and not just validated by
today’s funding. To date, Chan says its open source Machine Learning
server is powering thousands of applications with 4000+ developers
engaged with the project. “Unlike other data science tools that focus on
solving data researchers’ problems, PredictionIO is built for every
programmer,” he adds.
Other competitors Chan cites include “closed ‘black box’ MLaaS services or software”, such as Google Prediction API, Wise.io, BigML, and Skytree.
Examples of who is currently using PredictionIO include Le Tote, a
clothing subscription/rental service that is using PredictionIO to
predict customers’ fashion preferences, and PerkHub, which is using
PredictionIO to personalize product recommendations in the weekly ‘group
buying’ emails they send out.
Can computers learn to read? We think so. "Read the Web" is a
research project that attempts to create a computer system that learns
over time to read the web. Since January 2010, our computer system
called NELL (Never-Ending Language Learner) has been running
continuously, attempting to perform two tasks each day:
First, it attempts to "read," or extract facts from text found in hundreds of millions of web pages (e.g., playsInstrument(George_Harrison, guitar)).
Second, it attempts to improve its reading competence, so that tomorrow it can extract more facts from the web, more accurately.
So far, NELL has accumulated over 50 million candidate beliefs by
reading the web, and it is considering these at different levels of
confidence. NELL has high confidence in 2,132,551 of these beliefs —
these are displayed on this website. It is not perfect, but NELL is
learning. You can track NELL's progress on Twitter at @cmunell, browse and download its knowledge base, read more about our technical approach, or join the discussion group.
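The core bookkeeping idea — candidate beliefs stored as relation triples with confidence scores, promoted once they cross a threshold — can be sketched in a few lines. The scores and threshold below are invented, and this is not NELL's actual implementation.

```python
from dataclasses import dataclass

# Toy version of a candidate-belief store: relation(subject, value) triples
# carrying a confidence score; only high-confidence candidates are promoted.
@dataclass
class Belief:
    relation: str
    subject: str
    value: str
    confidence: float

PROMOTION_THRESHOLD = 0.9   # made-up cutoff

candidates = [
    Belief("playsInstrument", "George_Harrison", "guitar", 0.97),
    Belief("playsInstrument", "George_Harrison", "sitar", 0.93),
    Belief("cityLocatedInCountry", "Paris", "Texas", 0.41),
]

promoted = [b for b in candidates if b.confidence >= PROMOTION_THRESHOLD]
for b in promoted:
    print(f"{b.relation}({b.subject}, {b.value})  conf={b.confidence:.2f}")
```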
Small cubes with no exterior moving parts can propel themselves forward,
jump on top of each other, and snap together to form arbitrary shapes.
A prototype of a new modular robot, with its innards exposed and its flywheel — which gives it the ability to move independently — pulled out. (Photo: M. Scott Brauer)
In 2011, when an MIT senior named John Romanishin proposed a new design
for modular robots to his robotics professor, Daniela Rus, she said,
“That can’t be done.”
Two years later, Rus showed her colleague
Hod Lipson, a robotics researcher at Cornell University, a video of
prototype robots, based on Romanishin’s design, in action. “That can’t
be done,” Lipson said.
In November, Romanishin — now a research
scientist in MIT’s Computer Science and Artificial Intelligence
Laboratory (CSAIL) — Rus, and postdoc Kyle Gilpin will establish once
and for all that it can be done, when they present a paper describing
their new robots at the IEEE/RSJ International Conference on Intelligent
Robots and Systems.
Known as M-Blocks, the robots are cubes with
no external moving parts. Nonetheless, they’re able to climb over and
around one another, leap through the air, roll across the ground, and
even move while suspended upside down from metallic surfaces.
Inside
each M-Block is a flywheel that can reach speeds of 20,000 revolutions
per minute; when the flywheel is braked, it imparts its angular momentum
to the cube. On each edge of an M-Block, and on every face, are
cleverly arranged permanent magnets that allow any two cubes to attach
to each other.
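A rough back-of-envelope calculation shows why this works: the angular momentum dumped into the cube when the flywheel brakes only has to supply enough rotational energy to lift the cube's center of mass over a bottom edge. The masses and dimensions below are guesses for illustration, not the published M-Blocks specifications.

```python
import math

# Back-of-envelope check with assumed numbers (real M-Block specs may differ).
side = 0.05          # cube edge length, m (assumed)
m_cube = 0.14        # cube mass, kg (assumed)
m_fly = 0.03         # flywheel mass, kg (assumed)
r_fly = 0.02         # flywheel radius, m (assumed)
rpm = 20000.0        # flywheel speed from the article

omega_fly = rpm * 2 * math.pi / 60                 # rad/s
I_fly = 0.5 * m_fly * r_fly**2                     # solid-disc flywheel
L = I_fly * omega_fly                              # angular momentum dumped on braking

# Cube treated as a uniform block pivoting about one bottom edge.
I_edge = (2.0 / 3.0) * m_cube * side**2            # moment of inertia about an edge
omega_cube = L / I_edge                            # cube's spin right after the brake
E_rot = 0.5 * I_edge * omega_cube**2               # kinetic energy available

# Energy needed to lift the centre of mass from side/2 up to the tipping point.
g = 9.81
E_tip = m_cube * g * side * (math.sqrt(2) - 1) / 2

print(f"rotational energy after braking: {E_rot*1000:.1f} mJ")
print(f"energy needed to tip over an edge: {E_tip*1000:.1f} mJ")
print("flips" if E_rot > E_tip else "does not flip")
```

With these guessed numbers the braked flywheel delivers far more energy than the tipping barrier requires, which is why a sudden brake can send the cube pivoting or even leaping.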
“It’s one of these things that the
[modular-robotics] community has been trying to do for a long time,”
says Rus, a professor of electrical engineering and computer science and
director of CSAIL. “We just needed a creative insight and somebody who
was passionate enough to keep coming at it — despite being discouraged.”
Embodied abstraction
As Rus explains,
researchers studying reconfigurable robots have long used an abstraction
called the sliding-cube model. In this model, if two cubes are face to
face, one of them can slide up the side of the other and, without
changing orientation, slide across its top.
The sliding-cube
model simplifies the development of self-assembly algorithms, but the
robots that implement them tend to be much more complex devices. Rus’
group, for instance, previously developed a modular robot called the Molecule,
which consisted of two cubes connected by an angled bar and had 18
separate motors. “We were quite proud of it at the time,” Rus says.
According
to Gilpin, existing modular-robot systems are also “statically stable,”
meaning that “you can pause the motion at any point, and they’ll stay
where they are.” What enabled the MIT researchers to drastically
simplify their robots’ design was giving up on the principle of static
stability.
“There’s a point in time when the cube is essentially
flying through the air,” Gilpin says. “And you are depending on the
magnets to bring it into alignment when it lands. That’s something
that’s totally unique to this system.”
That’s also what made Rus
skeptical about Romanishin’s initial proposal. “I asked him to build a prototype,” Rus says. “Then I said, ‘OK, maybe I was wrong.’”
Sticking the landing
To
compensate for its static instability, the researchers’ robot relies on
some ingenious engineering. On each edge of a cube are two cylindrical
magnets, mounted like rolling pins. When two cubes approach each other,
the magnets naturally rotate, so that north poles align with south, and
vice versa. Any face of any cube can thus attach to any face of any
other.
The cubes’ edges are also beveled, so when two cubes are
face to face, there’s a slight gap between their magnets. When one cube
begins to flip on top of another, the bevels, and thus the magnets,
touch. The connection between the cubes becomes much stronger, anchoring
the pivot. On each face of a cube are four more pairs of smaller
magnets, arranged symmetrically, which help snap a moving cube into
place when it lands on top of another.
As with any modular-robot
system, the hope is that the modules can be miniaturized: the ultimate
aim of most such research is hordes of swarming microbots that can self-assemble, like the “liquid metal” androids in the movie “Terminator 2.” And the simplicity of the cubes’ design makes miniaturization
promising.
But the researchers believe that a more refined
version of their system could prove useful even at something like its
current scale. Armies of mobile cubes could temporarily repair bridges
or buildings during emergencies, or raise and reconfigure scaffolding
for building projects. They could assemble into different types of
furniture or heavy equipment as needed. And they could swarm into
environments hostile or inaccessible to humans, diagnose problems, and
reorganize themselves to provide solutions.
Strength in diversity
The
researchers also imagine that among the mobile cubes could be
special-purpose cubes, containing cameras, or lights, or battery packs,
or other equipment, which the mobile cubes could transport. “In the vast
majority of other modular systems, an individual module cannot move on
its own,” Gilpin says. “If you drop one of these along the way, or
something goes wrong, it can rejoin the group, no problem.”
“It’s
one of those things that you kick yourself for not thinking of,”
Cornell’s Lipson says. “It’s a low-tech solution to a problem that
people have been trying to solve with extraordinarily high-tech
approaches.”
“What they did that was very interesting is they
showed several modes of locomotion,” Lipson adds. “Not just one cube
flipping around, but multiple cubes working together, multiple cubes
moving other cubes — a lot of other modes of motion that really open the
door to many, many applications, much beyond what people usually
consider when they talk about self-assembly. They rarely think about
parts dragging other parts — this kind of cooperative group behavior.”
In
ongoing work, the MIT researchers are building an army of 100 cubes,
each of which can move in any direction, and designing algorithms to
guide them. “We want hundreds of cubes, scattered randomly across the
floor, to be able to identify each other, coalesce, and autonomously
transform into a chair, or a ladder, or a desk, on demand,” Romanishin
says.
Researchers have created software that predicts when and where disease outbreaks might occur based on two decades of New York Times articles and other online data. The research comes from Microsoft and the Technion-Israel Institute of Technology.
The system could someday help aid organizations and others be more
proactive in tackling disease outbreaks or other problems, says Eric Horvitz,
distinguished scientist and codirector at Microsoft Research. “I truly
view this as a foreshadowing of what’s to come,” he says. “Eventually
this kind of work will start to have an influence on how things go for
people.” Horvitz did the research in collaboration with Kira Radinsky, a PhD researcher at the Technion-Israel Institute.
The system provides striking results when tested on historical data.
For example, reports of droughts in Angola in 2006 triggered a warning
about possible cholera outbreaks in the country, because previous events
had taught the system that cholera outbreaks were more likely in years
following droughts. A second warning about cholera in Angola was
triggered by news reports of large storms in Africa in early 2007; less
than a week later, reports appeared that cholera had become established.
In similar tests involving forecasts of disease, violence, and significant numbers of deaths, the system’s warnings were correct between 70 and 90 percent of the time.
Horvitz says the performance is good enough to suggest that a more
refined version could be used in real settings, to assist experts at,
for example, government aid agencies involved in planning humanitarian
response and readiness. “We’ve done some reaching out and plan to do
some follow-up work with such people,” says Horvitz.
The system was built using 22 years of New York Times archives, from 1986 to 2007, but it also draws on data from the Web to learn about what leads up to major news events.
“One source we found useful was DBpedia,
which is a structured form of the information inside Wikipedia
constructed using crowdsourcing,” says Radinsky. “We can understand, or
see, the location of the places in the news articles, how much money
people earn there, and even information about politics.” Other sources
included WordNet, which helps software understand the meaning of words, and OpenCyc, a database of common knowledge.
All this information provides valuable context that’s not available in news articles, and which is necessary to figure out general rules for
what events precede others. For example, the system could infer
connections between events in Rwandan and Angolan cities based on the
fact that they are both in Africa, have similar GDPs, and other factors.
That approach led the software to conclude that, in predicting cholera
outbreaks, it should consider a country or city’s location, proportion
of land covered by water, population density, GDP, and whether there had
been a drought the year before.
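Conceptually, the learned rule resembles a small classifier over those context features. The toy sketch below uses invented numbers and a generic logistic regression — nothing here comes from the Microsoft/Technion system or its data — purely to show the shape of the inference.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a made-up country-year:
# [proportion of land covered by water, population density (scaled),
#  GDP per capita (scaled), drought in the previous year?]
X = np.array([
    [0.02, 0.20, 0.05, 1],
    [0.03, 0.15, 0.04, 1],
    [0.02, 0.10, 0.06, 0],
    [0.10, 0.40, 0.80, 0],
    [0.08, 0.55, 0.90, 1],
    [0.01, 0.12, 0.03, 1],
])
y = np.array([1, 1, 0, 0, 0, 1])   # cholera outbreak followed within a year?

model = LogisticRegression().fit(X, y)

# Query: a low-GDP country that had a drought last year.
query = np.array([[0.02, 0.18, 0.05, 1]])
print("outbreak probability:", model.predict_proba(query)[0, 1])
```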
Horvitz and Radinsky are not the first to consider using online news
and other data to forecast future events, but they say they make use of
more data sources—over 90 in total—which allows their system to be more
general-purpose.
There’s already a small market for predictive tools. For example, a startup called Recorded Future
makes predictions about future events harvested from forward-looking
statements online and other sources, and it includes government
intelligence agencies among its customers (see “See the Future With a Search”).
Christopher Ahlberg, the company’s CEO and cofounder, says that the new
research is “good work” that shows how predictions can be made using
hard data, but also notes that turning the prototype system into a
product would require further development.
Microsoft doesn’t have plans to commercialize Horvitz and Radinsky’s
research as yet, but the project will continue, says Horvitz, who wants
to mine more newspaper archives as well as digitized books.
Many things about the world have changed in recent decades, but human
nature and many aspects of the environment have stayed the same,
Horvitz says, so software may be able to learn patterns from even very
old data that can suggest what’s ahead. “I’m personally interested in
getting data further back in time,” he says.
If you’re terrified of the
possibility that humanity will be dismembered by an insectoid master
race, equipped with robotic exoskeletons (or would that be
exo-exoskeletons?), look away now. Researchers at the University of
Tokyo have strapped a moth into a robotic exoskeleton, with the moth
successfully controlling the robot to reach a specific location inside a
wind tunnel.
In all, fourteen male silkmoths were tested, and
they all showed a scary aptitude for steering a robot. In the tests, the
moths had to guide the robot towards a source of female sex pheromone.
The researchers even introduced a turning bias — where one of the
robot’s motors is stronger than the other, causing it to veer to one
side — and yet the moths still reached the target.
As you can see
in the photo above, the actual moth-robot setup is one of the most
disturbing and/or awesome things you’ll ever see. In essence, the
polystyrene (styrofoam) ball acts like a trackball mouse. As the
silkmoth walks towards the female pheromone, the ball rolls around.
Sensors detect these movements and fire off signals to the robot’s drive
motors. At this point you should watch the video below — and also not
think too much about what happens to the moth when it’s time to remove
the glued-on stick from its back.
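The control loop itself is simple to picture: the trackball's two displacement components become differential-drive commands for the robot's motors. The gains, sign conventions, and bias factor in this sketch are invented for illustration; it is not the Tokyo group's code.

```python
# Illustrative mapping from trackball readings to the two drive motors.
FORWARD_GAIN = 1.0
TURN_GAIN = 0.6
LEFT_MOTOR_BIAS = 0.8    # deliberately weakened motor, as in the bias experiment

def ball_to_motors(dx, dy):
    """dx: sideways ball motion (positive = moth steering right),
    dy: forward ball motion (the moth walking toward the pheromone)."""
    forward = FORWARD_GAIN * dy
    turn = TURN_GAIN * dx
    left = (forward + turn) * LEFT_MOTOR_BIAS   # bias makes the robot veer
    right = forward - turn
    return left, right

# The moth steps forward while steering slightly right toward the plume.
print(ball_to_motors(dx=0.2, dy=1.0))
```

Because the moth closes the loop with its own antennae, it simply keeps steering until the odor plume is centered again — which is how it compensates for the imposed motor bias.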
Fortunately, the Japanese
researchers aren’t actually trying to construct a moth master race: In
reality, it’s all about the moth’s antennae and sensory-motor system.
The researchers are trying to improve the performance of autonomous
robots that are tasked with tracking the source of chemical leaks and
spills. “Most chemical sensors, such as semiconductor sensors, have a
slow recovery time and are not able to detect the temporal dynamics of
odours as insects do,” says Noriyasu Ando, the lead author of the
research. “Our results will be an important indication for the selection
of sensors and models when we apply the insect sensory-motor system to
artificial systems.”
Of course, another possibility is that we simply keep the moths. After all,
why should we spend time and money on an artificial system when mother
nature, as always, has already done the hard work for us? In much the
same way that miners used canaries and border police use sniffer dogs,
why shouldn’t robots be controlled by insects? The silkmoth is graced
with perhaps the most sensitive olfactory system in the world. For now
it might only be sensitive to not-so-useful scents like the female sex
pheromone, but who’s to say that genetic engineering won’t allow for silkmoths that can sniff out bombs or drugs or chemical spills?
Who
nose: Maybe genetically modified insects with robotic exoskeletons are
merely an intermediary step towards real nanobots that fly around,
fixing, cleaning, and constructing our environment.
(Washington, DC) – Governments should pre-emptively ban fully autonomous
weapons because of the danger they pose to civilians in armed conflict,
Human Rights Watch said in a report released today. These future
weapons, sometimes called “killer robots,” would be able to choose and
fire on targets without human intervention.
The United Kingdom’s Taranis
combat aircraft, whose prototype was unveiled in 2010, is designed to
strike distant targets, “even in another continent.” While the Ministry
of Defence has stated that humans will remain in the loop, the Taranis
exemplifies the move toward increased autonomy.
The 50-page report, “Losing Humanity: The Case Against Killer Robots,”
outlines concerns about these fully autonomous weapons, which would
inherently lack human qualities that provide legal and non-legal checks
on the killing of civilians. In addition, the obstacles to holding
anyone accountable for harm caused by the weapons would weaken the law’s
power to deter future violations.
“Giving machines the power to decide who lives and dies on the battlefield would take technology too far,” said Steve Goose,
Arms Division director at Human Rights Watch. “Human control of robotic
warfare is essential to minimizing civilian deaths and injuries.”
The South Korean SGR-1 sentry
robot, a precursor to a fully autonomous weapon, can detect people in
the Demilitarized Zone and, if a human grants the command, fire its
weapons. The robot is shown here during a test with a surrendering enemy
soldier.
“Losing Humanity” is the first major publication about fully
autonomous weapons by a nongovernmental organization and is based on
extensive research into the law, technology, and ethics of these
proposed weapons. It is jointly published by Human Rights Watch and the
Harvard Law School International Human Rights Clinic.
Human Rights Watch and the International Human Rights Clinic called for
an international treaty that would absolutely prohibit the development,
production, and use of fully autonomous weapons. They also called on
individual nations to pass laws and adopt policies as important measures
to prevent development, production, and use of such weapons at the
domestic level.
Fully autonomous weapons do not yet exist, and major powers, including
the United States, have not made a decision to deploy them. But
high-tech militaries are developing or have already deployed precursors
that illustrate the push toward greater autonomy for machines on the
battlefield. The United States is a leader in this technological
development. Several other countries – including China, Germany, Israel,
South Korea, Russia, and the United Kingdom – have also been involved.
Many experts predict that full autonomy for weapons could be achieved in
20 to 30 years, and some think even sooner.
“It is essential to stop the development of killer robots before they
show up in national arsenals,” Goose said. “As countries become more
invested in this technology, it will become harder to persuade them to
give it up.”
Fully autonomous weapons could not meet the requirements of
international humanitarian law, Human Rights Watch and the Harvard
clinic said. They would be unable to distinguish adequately between
soldiers and civilians on the battlefield or apply the human judgment
necessary to evaluate the proportionality of an attack – whether
civilian harm outweighs military advantage.
These robots would also undermine non-legal checks on the killing of
civilians. Fully autonomous weapons could not show human compassion for
their victims, and autocrats could abuse them by directing them against
their own people. While replacing human troops with machines could save
military lives, it could also make going to war easier, which would
shift the burden of armed conflict onto civilians.
Finally, the use of fully autonomous weapons would create an
accountability gap. Trying to hold the commander, programmer, or
manufacturer legally responsible for a robot’s actions presents
significant challenges. The lack of accountability would undercut the
ability to deter violations of international law and to provide victims
meaningful retributive justice.
While most militaries maintain that for the immediate future humans
will retain some oversight over the actions of weaponized robots, the
effectiveness of that oversight is questionable, Human Rights Watch and
the Harvard clinic said. Moreover, military statements have left the
door open to full autonomy in the future.
“Action is needed now, before killer robots cross the line from science fiction to feasibility,” Goose said.
The techno-wizards over at Google X, the company's R&D laboratory working on its self-driving cars and Project Glass,
linked 16,000 processors together to form a neural network and then had
it go forth and try to learn on its own. Turns out, massive digital
networks are a lot like bored humans poking at iPads.
The pretty amazing takeaway here is that this 16,000-processor neural network, spread out over 1,000 linked computers, was not told to look for any one thing, but instead discovered on its own that a pattern revolved around cat pictures.
This happened after Google presented the network with image stills
from 10 million random YouTube videos. The images were small thumbnails,
and Google's network was sorting through them to try and learn
something about them. What it found — and we have ourselves to blame for
this — was that there were a hell of a lot of cat faces.
"We never told it during the training, 'This is a cat,'" Jeff Dean, a Google fellow working on the project, told the New York Times. "It basically invented the concept of a cat. We probably have other ones that are side views of cats."
The network itself does not know what a cat is like you and I do. (It
wouldn't, for instance, feel embarrassed being caught watching
something like this
in the presence of other neural networks.) What it does realize,
however, is that there is something that it can recognize as being the
same thing, and if we gave it the word, it would very well refer to it
as "cat."
So, what's the big deal? Your computer at home is more than powerful
enough to sort images. Where Google's neural network differs is that it
looked at these 10 million images, recognized a pattern of cat faces,
and then grafted together the idea that it was looking at something
specific and distinct. It had a digital thought.
Andrew Ng, a computer scientist at Stanford University who is
co-leading the study with Dean, spoke to the benefit of something like a
self-teaching neural network: "The idea is that instead of having teams
of researchers trying to find out how to find edges, you instead throw a
ton of data at the algorithm and you let the data speak and have the
software automatically learn from the data." The size of the network is
important, too, and the human brain is "a million times larger in terms
of the number of neurons and synapses" than Google X's simulated mind,
according to the researchers.
"It'd be fantastic if it turns out that all we need to do is take
current algorithms and run them bigger," Ng added, "but my gut feeling
is that we still don't quite have the right algorithm yet."
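The underlying technique — unsupervised feature learning by reconstructing the input — can be illustrated with a toy single-layer autoencoder. Google's actual system was a far larger multilayer network trained across those 1,000 machines; the sketch below uses random stand-in "thumbnails" and plain numpy gradient descent purely to show the principle.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 200 random 8x8 "thumbnails" flattened to 64-dim vectors.
X = rng.random((200, 64))

n_in, n_hidden = 64, 16
W1 = rng.normal(0, 0.1, (n_hidden, n_in))   # encoder weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_in, n_hidden))   # decoder weights
b2 = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for epoch in range(200):
    # Forward pass: encode to hidden features, then reconstruct the input.
    H = sigmoid(X @ W1.T + b1)          # (200, 16) learned features
    X_hat = H @ W2.T + b2               # (200, 64) reconstruction
    err = X_hat - X                     # reconstruction error

    # Backward pass: gradients of the (halved) mean squared reconstruction error.
    dW2 = err.T @ H / len(X)
    db2 = err.mean(axis=0)
    dH = err @ W2 * H * (1 - H)
    dW1 = dH.T @ X / len(X)
    db1 = dH.mean(axis=0)

    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= lr * g

print("final reconstruction MSE:", float((err ** 2).mean()))
```

No labels are involved at any point: whatever structure the hidden units settle on comes entirely from the statistics of the images — which, for YouTube thumbnails, turned out to include a lot of cat faces.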
Soon you can get your hands on the Mobot modular
robot for a very reasonable $270 a module (pre-orders
available now). A number of connection plates and
attachments will also be available, and I
guess you can 3D print your own stuff.
Mobot by Barobo.com
I like the gripper that is powered and controlled by the rotating faceplate. I am sure the same concept can be used to 3D print some cool things in the future. A connector would be an awesome thing and definitely worth paying for.
In general, it seems to be a very competent modular
robotics system. It uses a snap together connector,
making it simple and fast to use, but maybe not as
strong as a system that screws together.
There is a graphical user interface, RobotController, and you can program the robot with the C/C++ interpreter Ch, so everyone from beginner to hard-core hacker should be able to do some really cool stuff.
If you’ve ever been inside a dormitory full of
computer science undergraduates, you know what horrors come of young men
free of responsibility. To help combat the lack of homemaking skills in
nerds everywhere, a group of them banded together to create MOTHER,
a combination of home automation, basic artificial intelligence and
gentle nagging designed to keep a domicile running at peak efficiency.
And also possibly kill an entire crew of space truckers if they should
come in contact with a xenomorphic alien – but that code module hasn’t
been installed yet.
The project comes from the LVL1 Hackerspace, a group of like-minded
programmers and engineers. The aim is to create an AI suited for a home environment that detects issues and gets its users (i.e. the people living in the home) to fix them. Through an array of digital sensors, MOTHER knows when the trash needs to be taken out, when the door is left unlocked, et cetera. If something isn’t done soon enough, it can even disable the Internet connection for individual computers. MOTHER can notify users of tasks that need to be completed through a standard computer, phone, or email, or via stock ticker-like displays. In addition,
MOTHER can use video and audio tools to recognize individual users,
adjust the lighting, video or audio to their tastes, and generally keep
users informed and creeped out at the same time.
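At its heart, that behavior is a rule engine: sensor readings checked against house rules, with nags that escalate if they're ignored. The sketch below invents its own sensor names, rules, and escalation step for illustration — it is not LVL1's actual code.

```python
import time

# Toy rule engine: (sensor key, broken-condition, nag message, grace period in s).
RULES = [
    ("trash_weight_kg",   lambda v: v > 5.0, "Take out the trash.",          3600),
    ("front_door_locked", lambda v: not v,   "The front door is unlocked.",   600),
]

def check(sensors, outstanding, now=None):
    """Compare current sensor readings against the house rules, nag about new
    problems, and escalate if a problem has been ignored past its grace period."""
    now = now or time.time()
    for key, broken, message, grace in RULES:
        if broken(sensors.get(key)):
            first_seen = outstanding.setdefault(key, now)
            if now - first_seen > grace:
                print(f"ESCALATE: {message} (cutting someone's Internet)")
            else:
                print(f"NAG: {message}")
        else:
            outstanding.pop(key, None)   # problem resolved, forget it

readings = {"trash_weight_kg": 6.2, "front_door_locked": True}
check(readings, outstanding={})
```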
MOTHER’s abilities are technically limitless – since it’s all based
on open source software, those with the skill, inclination and hardware
components can add functions at any time. Some of the more humorous
additions already in the project include an instant dubstep command. You
can build your own MOTHER (boy, there’s a sentence I never thought I’d
be writing) by reading through the official Wiki
and assembling the right software, sensors, servers and the like. Or
you could just invite your mom over and take your lumps. Your choice.