Tuesday, June 30. 2015
The ancient Library of Alexandria may have been the largest collection of human knowledge in its time, and scholars still mourn its destruction. The risk of so devastating a loss diminished somewhat with the advent of the printing press and further still with the rise of the Internet. Yet centralized repositories of specialized information remain, as does the threat of a catastrophic loss.
Take GitHub, for example.
GitHub has in recent years become the world’s biggest collection of open source software. That’s made it an invaluable education and business resource. Beyond providing installers for countless applications, GitHub hosts the source code for millions of projects, meaning anyone can read the code used to create those applications. And because GitHub also archives past versions of source code, it’s possible to follow the development of a particular piece of software and see how it all came together. That’s made it an irreplaceable teaching tool.
The odds of GitHub meeting a fate similar to that of the Library of Alexandria are slim. Indeed, rumor has it that GitHub soon will see a new round of funding that will place the company’s value at $2 billion. That should ensure, financially at least, that GitHub will stay standing.
But GitHub’s pending emergence as Silicon Valley’s latest unicorn holds a certain irony. The ideals of open source software center on freedom, sharing, and collective benefit—the polar opposite of venture capitalists seeking a multibillion-dollar exit. Whatever its stated principles, GitHub is under immense pressure to be more than just a sustainable business. When profit motives and community ideals clash, especially in the software world, the end result isn’t always pretty.
Sourceforge: A Cautionary Tale
Sourceforge is another popular hub for open source software, one that predates GitHub by nearly a decade and was once the place to find open source code.
There are many reasons for GitHub’s ascendance, but Sourceforge hasn’t helped its own cause. In the years since career services outfit DHI Holdings acquired it in 2012, users have lamented the spread of third-party ads that masquerade as download buttons, tricking visitors into downloading malicious software. Sourceforge has tools that enable users to report misleading ads, but the problem has persisted. That’s part of why the team behind GIMP, a popular open source alternative to Adobe Photoshop, quit hosting its software on Sourceforge in 2013.
Instead of trying to make nice, Sourceforge stirred up more hostility earlier this month when it declared the GIMP project “abandoned” and began hosting “mirrors” of its installer files without permission. Compounding the problem, Sourceforge bundled installers with third-party software some have called adware or malware. That prompted other projects, including the popular media player VLC, the code editor Notepad++, and WINE, a tool for running Windows apps on Linux and OS X, to abandon ship.
It’s hard to say how many projects have truly fled Sourceforge because of the site’s tendency to “mirror” certain projects. If you don’t count “forks” in GitHub—copies of projects developers use to make their own tweaks to the code before submitting them to the main project—Sourceforge may still host nearly as many projects as GitHub, says Bill Weinberg of Black Duck Software, which tracks and analyzes open source software.
But the damage to Sourceforge’s reputation may already have been done. Gaurav Kuchhal, managing director of the division of DHI Holdings that handles Sourceforge, says the company stopped its mirroring program and will only bundle installers with projects whose originators explicitly opt in for such add-ons. But misleading “download” ads likely will continue to be a game of whack-a-mole as long as Sourceforge keeps running third-party ads. In its hunt for revenue, Sourceforge is looking less like an important collection of human knowledge and more like a plundered museum full of dangerous traps.
No Ads (For Now)
GitHub has a natural defense against ending up like this: it’s never been an ad-supported business. If you post your code publicly on GitHub, the service is free. This incentivizes code-sharing and collaboration. You pay only to keep your code private. GitHub also makes money offering tech companies private versions of GitHub, which has worked out well: Facebook, Google and Microsoft all do this.
Still, it’s hard to tell how much money the company makes from this model. (It’s certainly not saying.) Yes, it has some of the world’s largest software companies as customers. But it also hosts millions of open source projects free of charge, without ads to offset the costs of storage, bandwidth, and the services layered on top of all those repositories. Investors will want a return eventually, through an acquisition or IPO. Once that happens, there’s no guarantee new owners or shareholders will be as keen on offering an ad-free loss leader for the company’s enterprise services.
Other freemium services that have raised large rounds of funding, like Box and Dropbox, face similar pressures. (Box even more so since going public earlier this year.) But GitHub is more than a convenient place to store files on the web. It’s a cornerstone of software development—a key repository of open-source code and a crucial body of knowledge. Amassing so much knowledge in one place raises the specter of a catastrophic crash and burn or disastrous decay at the hands of greedy owners loading the site with malware.
Yet GitHub has a defense mechanism the librarians of ancient Alexandria did not. Their library also was a hub. But it didn’t have Git.
The “Git” part of GitHub is an open source technology that helps programmers manage changes in their code. Basically, a team will place a master copy of the code in a central location, and programmers make copies on their own computers. These programmers then periodically merge their changes with the master copy, the “repository” that remains the canonical version of the project.
Git’s “versioning” makes managing projects much easier when multiple people must make changes to the original code. But it also has an interesting side effect: everyone who works on a GitHub project ends up with a copy on their own computer. It’s as if everyone who borrowed a book from the library could keep a copy forever, even after returning it. If GitHub vanished entirely, it could be rebuilt using individual users’ own copies of all the projects. It would take ages to accomplish, but it could be done.
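The mechanics are easy to demonstrate with Git itself. The repository names below are made up for illustration; the point is that a Git clone is a complete copy of the project’s history, not just a snapshot of the latest files:

```shell
# Create a tiny "upstream" repository with two commits.
git init --quiet upstream
git -C upstream -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty --quiet -m "first commit"
git -C upstream -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty --quiet -m "second commit"

# Cloning is like borrowing the book and keeping every page:
# the clone contains the entire change history, locally.
git clone --quiet upstream copy
git -C copy log --oneline

# If "upstream" vanished, the clone could re-seed a new host, e.g.:
#   git -C copy push --mirror https://new-host.example/project.git
```

The commented `push --mirror` line is the key to the argument above: any clone can repopulate a fresh server with the full history, which is exactly why the code itself would survive GitHub’s disappearance.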
Still, such work would be painful. In addition to the source code itself, GitHub is also home to countless comments, bug reports and feature requests, not to mention the rich history of changes. But the decentralized nature of Git does make it far easier to migrate projects to other hosts, such as GitLab, an open source alternative to GitHub that you can run on your own server.
In short, if GitHub as we know it went away, or under future financial pressures became an inferior version of itself, the world’s code would survive. Libraries didn’t end with Alexandria. The question is ultimately whether GitHub will find ways to stay true to its ideals while generating returns—or wind up the stuff of legend.
Friday, June 19. 2015
Google, Microsoft, Mozilla And Others Team Up To Launch WebAssembly, A New Binary Format For The Web
Via Tech Crunch
The idea is that WebAssembly will provide developers with a single compilation target for the web that will, eventually, become a web standard that’s implemented in all browsers.
Mozilla’s asm.js has long aimed to bring near-native speeds to the web. Google’s Native Client project for running native code in the browser had similar aims, but got relatively little traction. It looks like WebAssembly may be able to bring the best of these projects to the browser now.
As a first step, the WebAssembly team aims to offer about the same functionality as asm.js (and developers will be able to use the same Emscripten tool for WebAssembly as they use for compiling asm.js code today).
It’s not often that we see all the major browser vendors work together on a project like this, so this is definitely something worth watching in the months and years ahead.
Thursday, June 18. 2015
Via Tech Crunch
While companies like Facebook have been relatively open about their data center networking infrastructure, Google has generally kept pretty quiet about how it connects the thousands of servers inside its data centers to each other (with a few exceptions). Today, however, the company revealed a bit more about the technology that lets its servers talk to each other.
It’s no secret that Google often builds its own custom hardware for its data centers, but what’s probably less known is that Google uses custom networking protocols that have been tweaked for use in its data centers instead of relying on standard Internet protocols to power its networks.
Google says its current ‘Jupiter’ networking setup — which represents the fifth generation of the company’s efforts in this area — offers 100x the capacity of its first in-house data center network. The current generation delivers 1 Petabit per second of bisection bandwidth (that is, the bandwidth between two parts of the network). That’s enough to allow 100,000 servers to talk to each other at 10Gb/s each.
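The arithmetic behind those numbers is easy to verify as a quick sanity check (this is just unit conversion, not an official Google breakdown):

```shell
# 1 petabit = 10^6 gigabits, so 1 Pb/s of bisection bandwidth
# divides evenly across 100,000 servers.
petabit_in_gigabits=1000000
servers=100000
per_server_gbps=$(( petabit_in_gigabits / servers ))
echo "${per_server_gbps} Gb/s per server"   # prints "10 Gb/s per server"
```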
Google’s technical lead for networking, Amin Vahdat, notes that the overall network control stack “has more in common with Google’s distributed computing architectures than traditional router-centric Internet protocols.”
Here is how he describes the three key principles behind the design of Google’s data center networks:
Monday, June 08. 2015
According to the Labor Department, the U.S. economy is in its strongest stretch in corporate hiring since 1997. Given the rapidly escalating competition for talent, it is important for employers, job seekers, and policy leaders to understand the dynamics behind some of the fastest growing professional roles in the job market.
For adults with a bachelor’s degree or above, the unemployment rate stood at just 2.7 percent in May 2015. The national narrative about “skills gaps” often focuses on middle-skill jobs that rely on shorter-term or vocational training – but the more interesting pressure point is arguably at the professional level, which has accounted for much of the wage and hiring growth in the U.S. economy in recent years. Here, the reach and impact of technology into a range of professional occupations and industry sectors is impressive.
Software is eating the world
In 2011, Netscape and Andreessen Horowitz co-founder Marc Andreessen coined the phrase “software is eating the world” in an article outlining his hypothesis that economic value was increasingly being captured by software-focused businesses disrupting a wide range of industry sectors. Nearly four years later, it is fascinating that around 1 in every 20 open job postings in the U.S. job market relates to software development/engineering.
Although most of these positions exist at the experienced level, it is no surprise that computer science and engineering are among the top three most-demanded college majors in this spring’s undergraduate employer recruiting season, according to the National Association of Colleges and Employers.
Discussion about the robust demand and competition for software developers in the job market is very often focused around high-growth technology firms such as Uber, Facebook, and the like. But from the “software is eating the world” perspective, it is notable that organizations of all types are competing for this same talent – from financial firms and hospitals to government agencies. The demand for software skills is remarkably broad.
For example, the top employers with the greatest number of developer job openings over the last year include JP Morgan Chase, UnitedHealth, Northrop Grumman, and General Motors, according to job market database firm Burning Glass Technologies.
Data science is just the tip of the iceberg
Another surging area of technology-related skills demand is analytics: the ability to work with, process, and interpret insights from big data. Analytics is far more than a fad or buzzword: references to analytical and data-oriented skills appeared in 4 million postings over the last year, and data analysis is one of the skills most demanded by U.S. employers, according to Burning Glass data.
The Harvard Business Review famously labeled data scientist roles “the sexiest job of the 21st century” – but while this is a compelling new profession by any measure, data scientists sit at the top of the analytics food chain and likely only account for tens of thousands of positions in a job market of 140 million.
What often goes unrecognized is that similar to and even more so than software development, the demand for analytical skills cuts across all levels and functions in an organization, from financial analysts and web developers to risk managers. Further, a wide range of industries is hungry for analytics skills – ranging from the nursing field and public health to criminal justice and even the arts and cultural sector.
As suggested by analytics experts such as Tom Davenport, organizations that leverage analytics in their strategy not only employ world-class data scientists but also support “analytical amateurs,” embedding analytics throughout all levels of their organization and culture. For this reason, the need for analytics skills is exploding within a variety of employers, and analytics and data-related themes top many corporate strategy agendas.
Analytics: Digital marketing demands experienced talent
Change is also afoot as digital and mobile channels disrupt the marketing landscape. According to the CMO Council, spending on mobile marketing is doubling each year, and two-thirds of the growth in consumer advertising is in digital. In an economic expansion cycle, awareness-building and customer acquisition are where many companies are investing. For these reasons, marketing managers are perhaps surprisingly hard to find.
For example, at high-growth tech companies such as Amazon and Facebook, the highest volume job opening after software developer/engineer is marketing manager. These individuals are navigating new channels, as well as approaches to customer acquisition, and they are increasingly utilizing analytics. The marketing manager is an especially critical station in the marketing and sales career ladder and corporate talent bench – with junior creative types aspiring to it and senior product and marketing leadership coming from it.
The challenge is that marketing management requires experience: Those with a record of results in the still nascent field of digital marketing will be especially in demand.
Social media: not just a marketing and communications skill
Traditionally thought of in a marketing context, social media skills represent a final “softer” area that is highly in demand and spans a range of functional silos and levels in the job market — as social media becomes tightly woven into the fabric of how we live, work, consume and play.
While many organizations are, of course, hiring for social media-focused marketing roles, a quick search of job listings at an aggregator site such as Indeed.com reveals 50,000 job openings referencing social media. These range from privacy officers in legal departments that need to account for social media in policy and practice, to technologists who need to integrate social media APIs with products, and project managers and chiefs of staff to CEOs who will manage and communicate with internal and external audiences through social media.
Just as skills in Microsoft Office have become a universal foundation for most professional roles, it will be important to monitor how the use of social media platforms, including optimization and analytics, permeates the job market.
The aforementioned in-demand skills areas represent more of a structural shift than an issue du jour or passing trend. It is precisely the rapid, near daily change in software- and technology-related skills needs that necessitates new approaches to human capital development. While traditional long-term programs such as college degrees remain meaningful, new software platforms, languages, apps and tools rise annually. Who in the mainstream a few years ago had heard of Hadoop or Ruby?
Each month, new partnerships and business models are being formed between major employers, educational institutions and startups – all beginning to tackle novel approaches to skills development in these areas. Certificate programs, boot camps, new forms of executive education, and credentialing are all targeting the problem of producing more individuals with acumen in these areas.
As technology continues to extend its reach and reshape the workforce, it will be important to monitor these issues and explore new solutions to talent development.
Sunday, May 17. 2015
Via The Register
Mozilla has released the first version of its Firefox browser to include support for Encrypted Media Extensions, a controversial World Wide Web Consortium (W3C) spec that brings digital rights management (DRM) to HTML5's video tag.
The Free Software Foundation (FSF) has objected to the spec from the start. "Nearly everyone who implements DRM says they are forced to do it," the FSF said, "and this lack of accountability is how the practice sustains itself."
Nonetheless, Mozilla promoted Firefox 38 to the Release channel on Tuesday, complete with EME enabled – although it said it's still doing so reluctantly.
"We don't believe DRM is a desirable market solution, but it's currently the only way to watch a sought-after segment of content," Mozilla senior veep of legal affairs Denelle Dixon-Thayer said in a blog post.
The first firm to leap at the chance to shovel its DRM into Firefox was Adobe, whose Primetime Content Delivery Module for decoding encrypted content shipped with Firefox 38 on Tuesday. Dixon-Thayer said various companies, including Netflix, are already evaluating Adobe's tech to see if it meets their requirements.
Mozilla says that because Adobe's CDM is proprietary "black box" software, it has made certain to wrap it in a sandbox within Firefox so that its code can't interfere with the rest of the browser. (Maybe that's why it took a year to get it integrated.)
The CDM will issue an alert when it's on a site that uses DRM-wrapped content, so people who don't want to use it will have the option of bowing out.
Restricted content ahead ...
If you don't want your browser tainted by DRM at all, you still have options. You can disable the Adobe Primetime CDM so it never activates. If that's not good enough, there's a menu option in Firefox that lets you opt out of DRM altogether, after which you can delete the Primetime CDM (or any future CDMs from other vendors) from your hard drive.
Finally, if you don't want DRM in your browser and you don't want to bother with any of the above, Mozilla has made available a separate download that doesn't include the Primetime CDM and has DRM disabled by default.

As it happens, however, many users won't have to deal with the issue at all – at least not for now. The first version of Adobe's CDM for Firefox is only available on Windows Vista and later, and then only for 32-bit versions of the browser. Windows XP, OS X, Linux, and 64-bit versions of Firefox are not yet supported, and there's no word yet on when they might be.
Monday, April 27. 2015
Via The Verge
After months of rumors, Microsoft is revealing its plans to get mobile apps on Windows 10 today. While the company has been investigating emulating Android apps, it has settled on a different solution, or set of solutions, that will allow developers to bring their existing code to Windows 10.
iOS and Android developers will be able to port their apps and games directly to Windows universal apps, and Microsoft is enabling this with two new software development kits. On the Android side, Microsoft is enabling developers to use Java and C++ code on Windows 10, while iOS developers will be able to take advantage of their existing Objective-C code. "We want to enable developers to leverage their current code and current skills to start building those Windows applications in the Store, and to be able to extend those applications," explained Microsoft’s Terry Myerson during an interview with The Verge this morning.
The idea is simple: get apps on Windows 10 without the need for developers to rebuild them fully for Windows. While it sounds simple, the actual process will be a little more complicated than just pushing a few buttons to recompile apps. "Initially it will be analogous to what Amazon offers," notes Myerson, referring to the Android work Microsoft is doing. "If they’re using some Google API… we have created Microsoft replacements for those APIs." Microsoft’s pitch to developers is to bring their code across without many changes, and then eventually leverage the capabilities of Windows like Cortana, Xbox Live, Holograms, Live Tiles, and more. Microsoft has been testing its new tools with some key developers like King, the maker of Candy Crush Saga, to get games ported across to Windows. Candy Crush Saga as it exists today on Windows Phone has been converted from iOS code using Microsoft’s tools without many modifications.
During Microsoft’s planning for bringing iOS and Android apps to Windows, Myerson admits it wasn’t always an obvious choice to have both. "At times we’ve thought, let's just do iOS," Myerson explains. "But when we think of Windows we really think of everyone on the planet. There’s countries where iOS devices aren’t available." Supporting both Android and iOS developers allows Microsoft to capture everyone who is developing for mobile platforms right now, even if most companies still continue to target iOS first and port their apps to Android at the same time or shortly afterward. By supporting iOS developers, Microsoft wants to be third in line for these ported apps, and that’s a better situation than it faces today.
Alongside the iOS and Android SDKs, Microsoft is also revealing ways for websites and Windows desktop apps to make their way over to Windows universal apps. Microsoft has created a way for websites to run inside a Windows universal app, and use system services like notifications and in-app purchases. This should allow website owners to easily create web apps without much effort, and list those apps in the Windows Store. It’s not the best alternative to a native app for a lot of scenarios, but for simple websites it offers up a new way to create an app without its developers having to learn new code languages. Microsoft is also looking toward existing Windows desktop app developers with Windows 10. Developers will be able to leverage their .NET and Win32 work and bring this to Windows universal apps. "Sixteen million .NET and Win32 apps are still being used every month on Windows 7 and Windows 8," explains Myerson, so it’s clear Microsoft needs to get these into Windows 10.
Microsoft is using some of its HyperV work to virtualize these existing desktop apps on Windows 10. Adobe is one particular test case where Microsoft has been working closely with the firm to package its apps ready for Windows 10. Adobe Photoshop Elements is coming to the Windows Store as a universal app, using this virtualization technology. Performance is key for many desktop apps, so it will be interesting to see if Microsoft has managed to maintain a fluid app experience with this virtualization.
Collectively, Microsoft is referring to these four new SDKs as bridges or ramps to get developers interested in Windows 10. It’s a key moment for the company to really win back developers and prove that Windows is still relevant in a world that continues to be dominated by Android and iOS. The aim, as Myerson puts it, is to get Windows 10 on 1 billion devices within the next two to three years. That’s a big goal, and the company will need the support of developers and apps to help it get there.
These SDKs will generate questions among Microsoft’s core development community, especially those who invested heavily in the company’s Metro-style design and the unique features of Windows apps in the past. The end result for consumers is, hopefully, more apps, but for developers it’s a question of whether to simply port their existing iOS and Android work across and leave it at that, or extend those apps to use Windows features or even some design elements. "We want to structure the platform so it’s not an all or nothing," says Myerson. "If you use everything together it’s beautiful, but that’s not required to get started."
Microsoft still has the tricky mix of ported apps to contend with, and that could result in an app store similar to Amazon's, or even one where developers still aren't interested in porting. This is just the beginning, and Windows universal apps, while promising, still face a rocky and uncertain future.
Friday, April 24. 2015
Via SD Times
I recently attended Facebook’s F8 developer conference in San Francisco, where I had a revelation on why it is going to be impossible to succeed as a technology vendor in the long run without deeply embracing open source. Of the many great presentations I listened to, I was most captivated by the ones that explained how Facebook internally developed software. I was impressed by how quickly the company is turning such important IP back over to the community.
To be sure, many major Web companies like Google and Yahoo have been leveraging open-source dynamics aggressively and contribute back to the community. My aim is not to single out Facebook, except that it was during the F8 conference I had the opportunity to reflect on the drivers behind Facebook’s actions and why other technology providers may be wise to learn from them.
Here are my 10 reasons why open-source software is effectively becoming inevitable for infrastructure and application platform companies:
Monday, April 13. 2015
Back in December, we reported on the alpha for BitTorrent’s Maelstrom, a browser that uses BitTorrent’s P2P technology in order to place some control of the Web back in users’ hands by eliminating the need for centralized servers.
Along with the beta comes the first set of developer tools for the browser, helping publishers and programmers to build their websites around Maelstrom’s P2P technology. And they need to – Maelstrom can’t decentralize the Internet if there isn’t any native content for the platform.
It’s only available on Windows at the moment but if you’re interested and on Microsoft’s OS, you can download the beta from BitTorrent now.
Thursday, February 26. 2015
Let's face it: we're always at risk, and I mean humankind as a whole, not just the personal risks we take each time we leave our homes. Some of these potential terrors are unavoidable -- we can't control the asteroid we find hurtling towards us or the next super volcano that may erupt as the Siberian Traps once did.
Some risks, however, are well within our control, yet we continue down paths that are both exciting and potentially dangerous. In his book The Demon-Haunted World, the great astronomer, teacher and TV personality Carl Sagan wrote, "Avoidable human misery is more often caused not so much by stupidity as by ignorance, particularly our ignorance about ourselves".
Now researchers have published a list of the risks we face, and several of them are self-created. Perhaps the most prominent is artificial intelligence, or AI as it is generally referred to. The technology has been fairly prominent in the news recently, as both Elon Musk and Bill Gates have warned of its dangers. Musk went as far as to invest in some of the companies so that he could keep an eye on things.
The new report states "extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime), and would probably act to boost their own intelligence and acquire maximal resources for almost all initial AI motivations".
Stephen Hawking, perhaps the world's most famous scientist, told the BBC "The development of full artificial intelligence could spell the end of the human race".
That's three obviously intelligent men telling us it's a bad idea. Of course, that will not deter those who wish to develop it, and if the technology is controlled correctly, it may not be the huge danger we worry about.
What else is on the list of doom and gloom? Several more man-made problems, including nuclear war, global system collapse, synthetic biology, and nanotechnology. There is also the usual array of asteroids, super volcanoes and global pandemics. For good measure, the scientists even added in bad global governance.
If you would like to read the report for yourself it can be found at the Global Challenges Foundation website. It may keep you awake at night -- even better than a good horror movie could.
Tuesday, February 24. 2015
Via ars technica
Software that perfectly models human and animal brains was long the domain of science fiction, but researchers are now working to create it. The idea behind the approach, known as whole brain emulation (WBE), is that if we can perfectly copy the functional structure of the brain, we will create software perfectly analogous to one. The upshot here is simple yet mind-boggling. Scientists hope to create software that could theoretically experience everything we experience: emotion, addiction, ambition, consciousness, and suffering.
“Right now in computer science, we make computer simulations of neural networks to figure out how the brain works," Anders Sandberg, a computational neuroscientist and research fellow at the Future of Humanity Institute at Oxford University, told Ars. "It seems possible that in a few decades we will take entire brains, scan them, turn them into computer code, and make simulations of everything going on in our brain.”
Everything. Of course, a perfect copy does not necessarily mean equivalent. Software is so… different. It's a tool that performs because we tell it to perform. It's difficult to imagine that we could imbue it with those same abilities that we believe make us human. To imagine our computers loving, hungering, and suffering probably feels a bit ridiculous. And some scientists would agree.
But there are others—scientists, futurists, the director of engineering at Google—who are working very seriously to make this happen.
For now, let’s set aside all the questions of if or when. Pretend that our understanding of the brain has expanded so much and our technology has become so great that this is our new reality: we, humans, have created conscious software. The question then becomes how to deal with it.
And while success in this endeavor of fantasy turning fact is by no means guaranteed, there has been quite a bit of debate among those who think about these things over whether WBEs will mean immortality for humans or the end of us. There is far less discussion about how, exactly, we should react to this kind of artificial intelligence should it appear. Will we show a WBE human kindness or human cruelty—and does that even matter?
The ethics of pulling the plug on an AI
In a recent article in the Journal of Experimental and Theoretical Artificial Intelligence, Sandberg dives into some of the ethical questions that would (or at least should) arise from successful whole brain emulation. The focus of his paper, he explained, is “What are we allowed to do to these simulated brains?” If we create a WBE that perfectly models a brain, can it suffer? Should we care?
Again, discounting if and when, it's likely that an early successful software brain will mirror an animal’s. Animal brains are simply much smaller, less complex, and more available. So would a computer program that perfectly models an animal receive the same consideration an actual animal would? In practice, this might not be an issue. If a software animal brain emulates a worm or insect, for instance, there will be little worry about the software’s legal and moral status. After all, even the strictest laboratory standards today place few restrictions on what researchers do with invertebrates. When wrapping our minds around the ethics of how to treat an AI, the real question is what happens when we program a mammal?
“If you imagine that I am in a lab, I reach into a cage and pinch the tail of a little lab rat, the rat is going to squeal, it is going to run off in pain, and it’s not going to be a very happy rat. And actually, the regulations for animal research take a very stern view of that kind of behavior," Sandberg says. "Then what if I go into the computer lab, put on virtual reality gloves, and reach into my simulated cage where I have a little rat simulation and pinch its tail? Is this as bad as doing this to a real rat?”
As Sandberg alluded to, there are ethical codes for the treatment of mammals, and animals are protected by laws designed to reduce suffering. Would digital lab animals be protected under the same rules? Well, according to Sandberg, one of the purposes of developing this software is to avoid the many ethical problems with using carbon-based animals.
To get at these issues, Sandberg’s article takes the reader on a tour of how philosophers define animal morality and our relationships with animals as sentient beings. These are not easy ideas to summarize. “Philosophers have been bickering about these issues for decades," Sandberg says. "I think they will continue to bicker until we upload a philosopher into a computer and ask him how he feels.”
While many people might choose to respond, “Oh, it's just software,” this seems much too simplistic for Sandberg. “We have no experience with not being flesh and blood, so the fact that we have no experience of software suffering, that might just be that we haven’t had a chance to experience it. Maybe there is something like suffering, or something even worse than suffering software could experience,” he says.
Ultimately, Sandberg argues that it's better to be safe than sorry. He concludes that a cautious approach would be best, that WBEs "should be treated as the corresponding animal system absent countervailing evidence.” When asked what this evidence would look like—that is, software designed to model an animal brain without the consciousness of one—he considered that, too. “A simple case would be when the internal electrical activity did not look like what happens in the real animal. That would suggest the simulation is not close at all. If there is the counterpart of an epileptic seizure, then we might also conclude there is likely no consciousness, but now we are getting closer to something that might be worrisome,” he says.
So the evidence that the software animal’s brain is not conscious looks…exactly like evidence that a biological animal’s brain is not conscious.
Despite his pleas for caution, Sandberg doesn’t advocate eliminating emulation experimentation entirely. He thinks that if we stop and think about it, compassion for digital test animals could arise relatively easily. After all, if we know enough to create a digital brain capable of suffering, we should know enough to bypass its pain centers. “It might be possible to run virtual painkillers which are way better than real painkillers," he says. "You literally leave out the signals that would correspond to pain. And while I’m not worried about any simulation right now… in a few years I think that is going to change.”
This, of course, assumes that animals' only source of suffering is pain. In that regard, to worry whether a software animal may suffer in the future probably seems pointless when we accept so much suffering in biological animals today. If you find a rat in your house, you are free to dispose of it how you see fit. We kill animals for food and fashion. Why worry about a software rat?
One answer—beyond basic compassion—is that we'll need the practice. If we can successfully emulate the brains of other mammals, then emulating a human is inevitable. And the ethics of hosting human-like consciousness becomes much more complicated.
Beyond pain and suffering, Sandberg considers a long list of possible ethical issues in this scenario: a blindingly monotonous environment, damaged or disabled emulations, perpetual hibernation, the tricky subject of copies, communications between beings who think at vastly different speeds (software brains could easily run a million times faster than ours), privacy, and matters of self-ownership and intellectual property.
All of these may be sticky issues, Sandberg predicts, but if we can resolve them, human brain emulations could achieve some remarkable feats. They are ideally suited for extreme tasks like space exploration, where we could potentially beam them through the cosmos. And if it came down to it, the digital versions of ourselves might be the only survivors in a biological die-off.