The tech-nerd legion bent on saving humanity from asteroids, contagions, and robot revolutions

Illustration by Asaf Hanuka

Rick Schwall retired seven years ago after a successful career in 
Silicon Valley. He says he’s a millionaire but declines to reveal where 
he worked or how he made his money. “I consider all of that stuff to be 
absolutely pointless,” he says. “What is important is that in 2006 I 
stumbled upon existential risk.”
 
For the uninitiated, existential risk is a broad term covering 
catastrophic events that could wipe out the human species. Some 
existential risk devotees agonize over nuclear wars, climate change, and
 virus outbreaks. Others, such as Schwall, put more energy into worrying
 about the potential downside of information technology. They fret about
 a super-powerful artificial intelligence run amok and hordes of killer 
nanobots. “There are a number of people who have knowledge in this field
 that estimate humanity’s chance at making it through this century at 
about 50 percent,” Schwall says. “Even if that number is way off and 
it’s one in a billion, that’s too high for me.”
 
In August, Schwall started an organization called Saving Humanity 
from Homo Sapiens. The nonprofit, which boasts an eye-catching logo of a
 man holding a gun to his throat, looks to fund researchers who have 
plans for taming artificial intelligence and developing safeguards that 
protect man from machines. So far, Schwall has doled out a few thousand 
dollars to a handful of researchers, but it’s early days for SHFHS. 
Schwall, after all, is thinking big and answering the grandest of 
callings. “There are so many people who cannot wrap their minds around 
all of humanity,” he says. “I don’t know why I rose above that. I have 
no clue.”
 
Religious groups have long dominated talk of the apocalypse. Most 
often the world ends at the hands of a god who transfers people to a 
better place. These days, though, you’ll find plenty of atheistic types 
in Silicon Valley meditating on man’s potential for self-inflicted 
destruction, though the brooding rarely leads anywhere. These people 
design the most sophisticated technology on the planet but bemoan its 
dark potential. They’re adherents of the Singularity, a sort of nerd 
rapture that will occur when machines become smarter than people and 
begin advancing technological change on their own, eventually outpacing 
and—in a worst-case scenario—enslaving people before getting bored and 
grinding us up into fleshy pulp. This, as it happens, resembles the 
prospect that had the Unabomber, Ted Kaczynski, all worked up.
 
One of the gripes among existential risk adherents is that people have not taken these warnings seriously enough. Sure, governments, research organizations, and philanthropists fund work to curb global warming, contain nuclear weapons arsenals, and prevent viral outbreaks. But where’s the money for a much-needed artificial intelligence force field or an asteroid blocker? With some people
predicting the Singularity’s arrival as early as the next decade, the 
race is on for man to defend himself from his own creations. 
 
To properly address such threats before it’s too late, a booming 
subculture of tech-minded thinkers, entrepreneurs, and nongovernmental 
organizations has stepped into the existential risk realm. Many of the 
groups, like SHFHS, focus on worries about artificial intelligence (AI).
 Others have secured some serious cash to fund a broader set of projects
 to protect us from annihilation in whatever form it might take. 
 
Consider, for example, the Lifeboat Foundation. It’s an organization 
run out of the Minden (Nev.) home of Eric Klien, a technologist who has 
dabbled in the fields of cryonics and online dating. This group frets 
about science fiction scenarios such as computers gone bad, alien 
attacks, and the arrival of nasty man-made synthetic creatures. To date,
 the Lifeboat Foundation has raised more than $500,000 from corporations
 such as Google (GOOG), Oracle (ORCL), Hewlett-Packard (HPQ), and Fannie Mae (FNMA)
 and from hundreds of individuals. Asked to comment, a spokesman for 
Fannie Mae was surprised to learn of the donations, which were part of 
an employer match program.
 
The Lifeboat Foundation’s flashiest project is the A-Prize, a contest
 to create an artificial life form “with an emphasis on the safety of 
the researchers, public, and environment.” Thus far, donors have pledged
 $29,000 to the winner. The real down-and-dirty work, however, revolves 
around shields, with projects under way to build Asteroid, Brain, Alien,
 Internet, Black Hole, and Antimatter shields. Other work includes the 
creation of space habitats and personality preservers. 
 
It’s unclear how far along any of these projects is. Most of the 
Lifeboat Foundation’s money seems to go toward supporting conferences 
and publishing papers. The idea, for now, is to lay the theoretical groundwork so that such projects stand a chance if those threats ever materialize. Still, Lifeboat remains one of the only places where people think about the panoply of nontraditional risks to mankind.
 
Klien would like to see some bigger donors step up and allow the 
Lifeboat Foundation to tackle truly massive endeavors. Part of the 
problem is that people have yet to experience the kind of near-miss that would awaken their existential risk spirit. “There will be a
9/11 with dirty bombs or nuclear bombs,” he says. “It will make it a lot
 easier for us at that point.”
 
The major success story of the existential risk movement is the 
Singularity Institute for Artificial Intelligence, which focuses on 
making sure we end up with “friendly” AI. Every year it holds an event 
called the Singularity Summit where some speakers dazzle the crowd with 
cutting-edge technology, while others reinforce the existential risk 
cause. 
 
The Singularity Institute prides itself on examining existential risk
 with a rational eye. One of its thought leaders and board members is 
Eliezer Yudkowsky, a prolific blogger who spends a great deal of time 
laying out the logical reasons people should be concerned about 
existential risk and developing a mathematical framework for friendly 
AI. Yudkowsky has a knack for walking people through the logical 
constraints that a computer scientist might want to consider when 
building an artificial intelligence to help make sure it doesn’t light 
up and take over the world. “He is a good candidate for being the most 
important person on the planet,” Schwall says of Yudkowsky. Backers of 
the Singularity Institute and this type of work include Peter Thiel, the first investor in Facebook; Jaan Tallinn, one of the programmers who helped build Skype Technologies; and companies such as Microsoft (MSFT), Motorola (MSI), and Fidelity Investments.
 
Tallinn attended this year’s summit and delivered an impassioned 
speech about the need to direct more money toward the prevention of 
existential risk. Estimates bandied about at the conference placed 
worldwide spending on existential risk at about $59 million per year. 
With this in mind, Tallinn made a $100,000 donation to the Singularity 
Institute on the spot and then called on other philanthropists to stop 
thinking about boosting their “social status” by donating to the usual 
do-gooder causes. Instead, he argued, the rich should support longer-term efforts.
“Future societies will look back on us and feel depressed because of the
 actions we did not do,” he said. 
 
This kind of talk isn’t limited to technophiles suffering from 
midlife crises; there is, in fact, a youthful existential risk 
contingent, too. Thomas Eliot, 23, bounded around the Singularity Summit
 in a uniform consisting of red Converse All-Stars, jeans, a bow tie, 
and rosy, fresh-faced cheeks. Eliot, who had just obtained a math degree
 from Willamette University, plans to spend the next year or two living 
off his savings while he studies machine learning and AI. He’s also been
tapped by Schwall as the executive director of SHFHS. “An unfriendly 
artificial intelligence could cause a negative Singularity and turn the 
entire planet into paper clips,” Eliot warns. “Even if the chances of 
something like this happening are low, it would be the worst thing 
ever.”