On Friday, Microsoft released its 3D Builder app, which allows Windows 8.1 users to print 3D objects, but not much else.
The simple, free app from Microsoft provides a basic way to
print common 3D objects, as well as to import other files from SkyDrive
or elsewhere. But the degree of customization that the app allows is
small, so 3D Builder basically serves as an introduction to the world of
3D printing.
In fact, that’s Microsoft’s intention, with demonstrations of the MakerBot Replicator 2 slated for Microsoft’s retail stores this weekend. Customers can buy a new Windows 8.1 PC, as well as the $2199 MakerBot Replicator 2, both online and in the brick-and-mortar stores.
One of the selling points of Windows 8.1 was its ability to print 3D objects,
a complement to traditional paper printing. Although Microsoft is
pitching 3D Builder as a consumer app, the bulk of spending on 3D
printing will come from businesses, which will account for $325 million
out of the $415 million that will be spent this year on 3D printing,
according to an October report from Gartner. However, 3D printers have made their way into Staples,
and MakerBot latched onto an endorsement of the technology from
President Obama during his State of the Union address, recently
encouraging U.S. citizens to crowd-fund an effort to put 3D printers in
every high school in America. (MakerBot also announced a Windows 8.1
software driver on Thursday.)
Microsoft’s 3D Builder includes some basic modification options.
Microsoft’s 3D Builder app could certainly be a part of that effort.
Frankly, there’s little to the app itself besides a library of
pre-selected objects, most of which seem to be built around small,
unpowered model trains of the “Thomas the Tank Engine” variety. After
selecting one, the user has the option of moving it around a 3D space,
increasing or decreasing the size to a particular width or height—and
not much else.
Users can also import models made elsewhere. Again, however, 3D Builder
isn’t really designed to modify the designs. It’s also not clear which
3D formats are supported.
On the other hand, some might be turned off by the perceived complexity
of 3D printing. If you have two grand to spend on a 3D printer but
aren’t really sure how to use it, 3D Builder might be a good place to
start.
In June 1977 Apple Computer shipped their first mass-market computer: the Apple II.
Unlike the Apple I, the Apple II was fully assembled and ready to use
with any display monitor. The version with 4K of memory cost $1298. It
had color, graphics, sound, expansion slots, game paddles, and a
built-in BASIC programming language.
What it didn’t have was a disk drive. Programs and data had to be
saved and loaded from cassette tape recorders, which were slow and
unreliable. The problem was that disks – even floppy disks – needed both
expensive hardware controllers and complex software.
Steve Wozniak solved the first problem. He
designed an incredibly clever floppy disk controller using only 8
integrated circuits, by doing in programmed logic what other controllers
did with hardware. With some
rudimentary software written by Woz and Randy Wigginton, it was
demonstrated at the Consumer Electronics Show in January 1978.
But where were they going to get the higher-level software to
organize and access programs and data on the disk? Apple only had about
15 employees, and none of them had both the skills and the time to work
on it.
The magician who pulled that rabbit out of the hat was Paul Laughton,
a contract programmer for Shepardson Microsystems, which was located in
the same Cupertino office park as Apple.
On April 10, 1978 Bob Shepardson and Steve Jobs signed a $13,000
one-page contract for a file manager, a BASIC interface, and utilities.
It specified that “Delivery will be May 15,” which was incredibly
aggressive. But, amazingly, “Apple II DOS version 3.1” was released in
June 1978.
Now that the extent of the U.S. National Security Agency’s surveillance programs has been exposed
by former NSA contractor Edward Snowden, it is incumbent on the public to
fight back or else find themselves “complicit” in the activities,
according to Massachusetts Institute of Technology linguistics professor
and philosopher Noam Chomsky.
The freedoms U.S. citizens have “weren’t granted by gifts from above,”
Chomsky said during a panel discussion Friday at MIT. “They were won by
popular struggle.”
While U.S. officials have long cited national security as a rationale
for domestic surveillance programs, that same argument has been used by
the “most monstrous systems” in history, such as the Stasi secret
police in the former East Germany, Chomsky said.
“The difference with the totalitarian states is the citizens couldn’t do
a lot about it,” in contrast to the U.S., he added. “If we do not
expose the plea of security and separate the parts that are valid from
the parts that are not valid, then we are complicit.”
He cited the still-in-development Trans-Pacific Partnership trade
agreement, which critics say could have far-reaching implications for
Internet use and intellectual property. Wikileaks recently posted a draft of the treaty’s chapter on intellectual property.
Now that the information is out there, “we can do something about [the proposed TPP],” Chomsky said.
What’s needed for sure “is a serious debate about what the lines should
be” when it comes to government surveillance, said investigative
reporter Barton Gellman, who has received NSA document leaks from
Snowden, leading to a series of stories this year in the Washington
Post. “Knowledge is power and it’s much easier to win if the other side
doesn’t know there’s a game.”
“We can be confident that any system of power is going to try to use the
best available technology to control and dominate and maximize their
power,” Chomsky said. “We can also be confident ... that they want to do
it in secret.”
But there’s a crucial difference between the U.S. activities and that of
the Stasi, Gellman said. “The Stasi was knowingly, deliberately, and
cautiously squashing dissent,” he said. “I don’t think that’s what we’re
seeing here at all.”
A smartphone is an excellent tracking device “from my location, to who I
communicate with, to what I search for,” he said while holding up his
personal device. “I am paying Verizon Wireless on the order of $1000 a
year for this.”
Meanwhile, although telcos are making money by selling phone users’
personal information to third parties, at the same time “the NSA could
not do part of its job as efficiently if the companies weren’t selling
and retaining [customer] data,” Gellman said.
Company disclosures and terms of service have limited benefit as well.
“Generally the terms of service are written to say we can do whatever we
want, in a lot of words,” he said. Even if a customer reads through
carefully and notes what pledges are being made, “you have no way of
monitoring what they do,” Gellman added.
Since publishing stories on the NSA surveillance programs,
Gellman has stepped up his personal privacy efforts significantly,
through “layered defenses” including “locked rooms, safes, and
air-gapped computers that never have and never will touch the ‘Net,” he
said. The extra steps are “a giant tax on my time,” Gellman added.
It’s not clear how many more revelations will come to light from the
materials Snowden gave Gellman and other journalists. Snowden reportedly
gave reporters up to 200,000 documents.
“The [NSA] documents are far from complete,” often providing clues to
things that end up being wrong after further investigation, Gellman
said.
Probably the single most troubling thing about the inescapable
advance of technological obsolescence is the rate at which old devices
are being thrown out. It’s not just the landfills full of last year’s
superphone, nor the rare-earth elements we’re mining at incredible
speeds, but the sheer, simple waste of it, as well.
But what if electronics were designed on the molecular level to be
biodegradable? What if recycling a phone was as simple as buying a small
bottle of solvent and leaving the phone for several hours? What if you
could pour out your old iPhone and return the insoluble metals left over for a discount on your next handset?
That’s one of the many possible uses for a new technology that could
see integrated circuits built on soluble chips, along with many of the
other pieces they require. Professor John A. Rogers and his team of
researchers at the University of Illinois have made significant progress
in the field, and they have a lot of ideas about how it might change the world.
Beyond the applications for recycling, the team sees biomedical
science as a major application. Currently, inserting foreign technology
is difficult not just in the implantation, but in the extraction as
well; many pieces of technology are simply left inside a patient, since
that often ends up being less dangerous than an additional surgery. This
research could lead to a future for implanted technology in which
implants simply melt away into the bloodstream, either as a slow,
natural reaction beginning at the moment of insertion or as a catalyzed
reaction begun by an injected agent.
In either case, the ability to break down electronics
in a biologically safe way has huge benefits. Environmental monitors
could be peppered throughout an area without the need to worry about
collecting them again later. Whether it’s tracking bird populations on a
grassy tundra or measuring the chemistry of an oceanographic oil spill,
the ability to use technology with a built-in timer will open up all new
applications, or make feasible old ideas that could never succeed
practically.
Though
the team doesn’t mention it in the video, the US military has taken an
interest in the concept. It’s not hard to imagine why, as covert
technology advances along lines ranging from miniaturization to autonomous
artificial intelligence. In surveillance, the problem of extraction is
just as profound as it is for surgery. The ability to insert a drone
designed to die after a prescribed amount of time, to liquify or break
down beyond the point of recognition, is extremely enticing to DARPA,
the military’s advanced research arm. DARPA has thrown significant
funding behind this effort, and no doubt has a wide array of applications
in mind.
Of course, not every component of modern circuit boards can be so
easily replaced with a degradable polymer. The team’s primary material of
interest is actually a purified form of the silk produced by silk worms
for cocoons. That works for the plastic boards and other simple
substrates, but efficient conducting materials are much harder. They
found that ribbons of magnesium work well as conductors, since they
naturally break down to the molecular level when immersed in water.
Super-thin sheets of silicon, used for semiconductors, will break down in
much the same way.
It’s also one thing to say that magnesium and silicon are safe to
release into the body, but quite another to get government approval to
test that idea. There’s really no telling how the body will react to
such a release of chemicals into the bloodstream without directly
testing it. That’s a hard row to hoe, however, one studded with
predictable and unavoidable delays. Couple this with public fears (well
founded and otherwise) about putting wireless technology in their
bodies, and about implants of all types, and you have an issue these
researchers will likely have to battle for several years to come.
In the end, the military and industrial applications for this
technology will almost certainly beat the medical to market. Still, it’s
an exciting idea that has the potential to change the way a whole sector
of computer technology is both made and used.
NSA spying, as
revealed by the whistleblower Edward Snowden, may cause countries to
create separate networks and break up the internet, according to experts.
The vast scale of online surveillance revealed by Edward Snowden is leading to the breakup of the internet
as countries scramble to protect private or commercially sensitive
emails and phone records from UK and US security services, according to
experts and academics.
They say moves by countries, such as Brazil and Germany,
to encourage regional online traffic to be routed locally rather than
through the US are likely to be the first steps in a fundamental shift
in the way the internet works. The change could potentially hinder
economic growth.
"States may have few other options than to follow
in Brazil's path," said Ian Brown, from the Oxford Internet Institute.
"This would be expensive, and likely to reduce the rapid rate of
innovation that has driven the development of the internet to date … But
if states cannot trust that their citizens' personal data – as well as
sensitive commercial and government information – will not otherwise be
swept up in giant surveillance operations, this may be a price they are
willing to pay."
Since the Guardian's revelations about the scale
of state surveillance, Brazil's government has published ambitious plans
to promote Brazilian networking technology, encourage regional internet
traffic to be routed locally, and is moving to set up a secure national
email service.
In India, it has been reported that government employees are being advised not to use Gmail,
and last month Indian diplomatic staff in London were told to use
typewriters rather than computers when writing up sensitive documents.
In Germany, privacy commissioners have called for a review of whether Europe's internet traffic can be kept within the EU – and by implication out of the reach of British and US spies.
Surveillance dominated last week's Internet Governance Forum 2013,
held in Bali. The forum is a UN body that brings together more than
1,000 representatives of governments and leading experts from 111
countries to discuss the "sustainability, robustness, security,
stability and development of the internet".
Debates on child
protection, education and infrastructure were overshadowed by widespread
concerns from delegates who said the public's trust in the internet was
being undermined by reports of US and British government surveillance.
Lynn
St Amour, the Internet Society's chief executive, condemned government
surveillance as "interfering with the privacy of citizens".
Johan
Hallenborg, Sweden's foreign ministry representative, proposed that
countries introduce a new constitutional framework to protect digital
privacy, human rights and to reinforce the rule of law.
Meanwhile,
the Internet Corporation for Assigned Names and Numbers – which is
partly responsible for the infrastructure of the internet – last week
voiced "strong concern over the undermining of the trust and confidence
of internet users globally due to recent revelations of pervasive
monitoring and surveillance".
Daniel Castro, a senior analyst at
the Information Technology & Innovation Foundation in Washington,
said the Snowden revelations were pushing the internet towards a tipping
point with huge ramifications for the way online communications worked.
"We
are certainly getting pushed towards this cliff and it is a cliff we do
not want to go over because if we go over it, I don't see how we stop.
It is like a run on the bank – the system we have now works unless
everyone decides it doesn't work then the whole thing collapses."
Castro
said that as the scale of the UK and US surveillance operations became
apparent, countries around the globe were considering laws that would
attempt to keep data in-country, threatening the cloud system – where
data stored by US internet firms is accessible from anywhere in the
world.
He said this would have huge implications for the way large companies operated.
"What
this would mean is that any multinational company suddenly has lots of
extra costs. The benefits of cloud computing that have given us
flexibility, scalability and reduced costs – especially for large
amounts of data – would suddenly disappear."
Large internet-based firms, such as Facebook and Yahoo, have already raised concerns about the impact of the NSA
revelations on their ability to operate around the world. "The
government response was, 'Oh don't worry, we're not spying on any
Americans'," said Facebook founder Mark Zuckerberg. "Oh, wonderful:
that's really helpful to companies trying to serve people around the
world, and that's really going to inspire confidence in American
internet companies."
Castro wrote a report for Itif in August
predicting as much as $35bn could be lost from the US cloud computing
market by 2016 if foreign clients pull out their businesses. And he said
the full economic impact of the potential breakup of the internet was
only just beginning to be recognised by the global business community.
"This
is changing how companies are thinking about data. It used to be that
the US government was the leader in helping make the world more secure
but the trust in that leadership has certainly taken a hit … This is
hugely problematic for the general trust in the internet and e-commerce
and digital transactions."
Brown said that although a localised
internet would be unlikely to prevent people in one country accessing
information in another area, it may not be as quick and would probably
trigger an automatic message telling the user that they were entering a
section of the internet that was subject to surveillance by US or UK
intelligence.
"They might see warnings when information is about
to be sent to servers vulnerable to the exercise of US legal powers – as
some of the Made in Germany email services that have sprung up over the summer are."
He
said despite the impact on communications and economic development, a
localised internet might be the only way to protect privacy even if, as
some argue, a set of new international privacy laws could be agreed.
"How
could such rules be verified and enforced? Unlike nuclear tests,
internet surveillance cannot be detected halfway around the world."
Computational knowledge. Symbolic programming. Algorithm
automation. Dynamic interactivity. Natural language. Computable
documents. The cloud. Connected devices. Symbolic ontology. Algorithm
discovery. These are all things we’ve been energetically working
on—mostly for years—in the context of Wolfram|Alpha, Mathematica, CDF and so on.
But recently something amazing has happened. We’ve figured out how to
take all these threads, and all the technology we’ve built, to create
something at a whole different level. The power of what is emerging
continues to surprise me. But already I think it’s clear that it’s going
to be profoundly important in the technological world, and beyond.
At some level it’s a vast unified web of technology that builds on
what we’ve created over the past quarter century. At some level it’s an
intellectual structure that actualizes a new computational view of the
world. And at some level it’s a practical system and framework that’s
going to be a fount of incredibly useful new services and products.
I have to admit I didn’t entirely see it coming. For years I have
gradually understood more and more about what the paradigms we’ve
created make possible. But what snuck up on me is a breathtaking new
level of unification—that lets one begin to see that all the things
we’ve achieved in the past 25+ years are just steps on a path to
something much bigger and more important.
I’m not going to be able to explain everything in this blog post (let’s hope it doesn’t ultimately take something as long as A New Kind of Science
to do so!). But I’m excited to begin to share some of what’s been
happening. And over the months to come I look forward to describing some
of the spectacular things we’re creating—and making them widely
available.
It’s hard to foresee the ultimate consequences of what we’re
doing. But the beginning is to provide a way to inject sophisticated
computation and knowledge into everything—and to make it universally
accessible to humans, programs and machines, in a way that lets all of
them interact at a vastly richer and higher level than ever before.
In a sense, the Wolfram Language has been incubating inside Mathematica for more than 25 years. It’s the language of Mathematica,
and CDF—and the language used to implement Wolfram|Alpha. But
now—considerably extended, and unified with the knowledgebase of
Wolfram|Alpha—it’s about to emerge on its own, ready to be at the center
of a remarkable constellation of new developments.
We call it the Wolfram Language because it is a language. But it’s a
new and different kind of language. It’s a general-purpose
knowledge-based language. That covers all forms of computing, in a new
way.
There are plenty of existing general-purpose computer languages. But
their vision is very different—and in a sense much more modest—than the
Wolfram Language. They concentrate on managing the structure of
programs, keeping the language itself small in scope, and relying on a
web of external libraries for additional functionality. In the Wolfram
Language my concept from the very beginning has been to create a single
tightly integrated system in which as much as possible is included right
in the language itself.
And so in the Wolfram Language, built right into the language, are
capabilities for laying out graphs or doing image processing or creating
user interfaces or whatever. Inside there’s a giant web of
algorithms—by far the largest ever assembled, and many invented by us.
And there are then thousands of carefully designed functions set up to
use these algorithms to perform operations as automatically as possible.
Over the years, I’ve put immense effort into the design of the
language. Making sure that all the different pieces fit together as
smoothly as possible. So that it becomes easy to integrate data analysis
here with document generation there, with mathematical optimization
somewhere else. I’m very proud of the results—and I know the language
has been spectacularly productive over the course of a great many years
for a great many people.
But now there’s even more. Because we’re also integrating right into
the language all the knowledge and data and algorithms that are built
into Wolfram|Alpha. So in a sense inside the Wolfram Language we have a
whole computable model of the world. And it becomes trivial to write a
program that makes use of the latest stock price, computes the next high
tide, generates a street map, shows an image of a type of airplane, or a
zillion other things.
We’re also getting the free-form natural language of
Wolfram|Alpha. So when we want to specify a date, or a place, or a song,
we can do it just using natural language. And we can even start to
build up programs with nothing more than natural language.
There are so many pieces. It’s quite an array of different things.
But what’s truly remarkable is how they assemble into a unified whole.
Partly that’s the result of an immense amount of work—and
discipline—in the design process over the past 25+ years. But there’s
something else too. There’s a fundamental idea that’s at the foundation
of the Wolfram Language: the idea of symbolic programming, and the idea
of representing everything as a symbolic expression. It’s been an
embarrassingly gradual process over the course of decades for me to
understand just how powerful this idea is. That there’s a completely
general and uniform way to represent things, and that at every level
that representation is immediately and fluidly accessible to
computation.
It can be an array of data. Or a piece of graphics. Or an algebraic
formula. Or a network. Or a time series. Or a geographic location. Or a
user interface. Or a document. Or a piece of code. All of these are just
symbolic expressions which can be combined or manipulated in a very
uniform way.
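As a rough analogy only (this is Python, not the Wolfram Language, and the `Expr` class and its heads are invented for this illustration), the idea can be sketched as a single head-plus-arguments structure that represents data, formulas, and documents alike:

```python
# Illustrative sketch only: a "head" plus nested arguments, in the spirit
# of symbolic expressions. Expr and these heads are invented here; they
# are not Wolfram Language constructs.

class Expr:
    def __init__(self, head, *args):
        self.head = head
        self.args = args

    def __repr__(self):
        return f"{self.head}[{', '.join(map(repr, self.args))}]"

# One uniform shape represents very different kinds of things:
data    = Expr("List", 1, 2, 3)
formula = Expr("Plus", Expr("Times", 2, "x"), 1)          # 2*x + 1
doc     = Expr("Document", Expr("Title", "Report"), data)

# Because the representation is uniform, one generic function can walk
# any of them, whether data, formula, or document, with no special cases:
def count_leaves(e):
    if not isinstance(e, Expr):
        return 1
    return sum(count_leaves(a) for a in e.args)

print(count_leaves(formula))  # 3 leaves: 2, "x", 1
print(count_leaves(doc))      # 4 leaves: "Report", 1, 2, 3
```

The point of the sketch is the uniformity: the same traversal works on the "code-like" formula and the "data-like" list, which is the property the post attributes to symbolic expressions.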
But in the Wolfram Language, there’s not just a framework for setting
up these different kinds of things. There’s immense built-in curated
content and knowledge in each case, right in the language. Whether it’s
different types of visualizations. Or different geometries. Or actual
historical socioeconomic time series. Or different forms of user
interface.
I don’t think any description like this can do the concept of
symbolic programming justice. One just has to start experiencing
it. Seeing how incredibly powerful it is to be able to treat code like
data, interspersing little programs inside a piece of graphics, or a
document, or an array of data. Or being able to put an image, or a user
interface element, directly into the code of a program. Or having any
fragment of any program immediately be runnable and meaningful.
In most languages there’s a sharp distinction between programs, and
data, and the output of programs. Not so in the Wolfram Language. It’s
all completely fluid. Data becomes algorithmic. Algorithms become data.
There’s no distinction needed between code and data. And everything
becomes both intrinsically scriptable, and intrinsically
interactive. And there’s both a new level of interoperability, and a new
level of modularity.
So what does all this mean? The idea of universal computation implies
that in principle any computer language can do the same as any
other. But not in practice. And indeed any serious experience of using
the Wolfram Language is dramatically different than any other
language. Because there’s just so much already there, and the language
is immediately able to express so much about the world. Which means that
it’s immeasurably easier to actually achieve some piece of
functionality.
I’ve put a big emphasis over the years on automation. So that the
Wolfram Language does things automatically whenever you want it
to. Whether it’s selecting an optimal algorithm for something. Or
picking the most aesthetic layout. Or parallelizing a computation
efficiently. Or figuring out the semantic meaning of a piece of
data. Or, for that matter, predicting what you might want to do next. Or
understanding input you’ve given in natural language.
Fairly recently I realized there’s another whole level to this. Which
has to do with the actual deployment of programs, and connectivity
between programs and devices and so on. You see, like everything else,
you can describe the infrastructure for deploying programs
symbolically—so that, for example, the very structure and operation of
the cloud becomes data that your programs can manipulate.
And this is not just a theoretical idea. Thanks to endless layers of
software engineering that we’ve done over the years—and lots of
automation—it’s absolutely practical, and spectacular. The Wolfram
Language can immediately describe its own deployment. Whether it’s
creating an instant API, or putting up an interactive web page, or
creating a mobile app, or collecting data from a network of embedded
programs.
And what’s more, it can do it transparently across desktop, cloud, mobile, enterprise and embedded systems.
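The flavor of the "instant API" idea (wrapping an ordinary function as a web endpoint with almost no ceremony) can be sketched outside the Wolfram system in plain Python, using only the standard library. The `deploy_api` helper and its one-parameter `/?x=...` convention are invented for this illustration and are not part of any Wolfram product:

```python
# Minimal sketch, not a real deployment system: turn a one-argument
# function into a tiny HTTP GET endpoint of the form /?x=<number>.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def deploy_api(fn, port=8000):
    """Return an HTTP server that answers GET /?x=<value> with fn(value)."""
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            query = parse_qs(urlparse(self.path).query)
            result = fn(float(query["x"][0]))
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(str(result).encode())

        def log_message(self, *args):  # keep the demo quiet
            pass

    return HTTPServer(("127.0.0.1", port), Handler)

# Usage (blocking): deploy_api(lambda x: x * x, 8000).serve_forever()
# A GET request to http://127.0.0.1:8000/?x=3 then returns "9.0".
```

Even this toy version shows the appeal of collapsing the gap between "a function" and "a deployed service" into a single call, which is what the post claims the Wolfram Language does at scale.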
It’s been quite an amazing thing seeing this all start to work. And
being able to create tiny programs that deploy computation across
different systems in ways one had never imagined before.
This is an incredibly fertile time for us. In a sense we’ve got a new
paradigm for computation, and every day we’re inventing new ways to use
it. It’s satisfying, but more than a little disorienting. Because
there’s just so much that is possible. That’s the result of the unique
convergence of the different threads of technology that we’ve been
developing for so long.
Between the Wolfram Language—with all its built-in computation and
knowledge, and ways to represent things—and our Universal Deployment
System, we have a new kind of universal platform of incredible power.
And part of the challenge now is to find the best ways to harness it.
Over the months to come, we’ll be releasing a series of products that
support particular ways of using the Wolfram Engine and the Universal
Platform that our language and deployment system make possible.
There’ll be the Wolfram Programming Cloud, that allows one to create
Wolfram Language programs, then instantly deploy them in the cloud
through an instant API, or a form-based app, or whatever. Or deploy them
in a private cloud, or, for example, through a Function Call Interface,
deploy them standalone in desktop programs and embedded systems. And
have a way to go from an idea to a fully deployed realization in an
absurdly short time.
There’ll be the Wolfram Data Science Platform, that allows one to
connect to all sorts of data sources, then use the kind of automation
seen in Wolfram|Alpha Pro, then pick out and modify Wolfram Language
programs to do data science—and then use CDF to set up reports to
generate automatically, on a schedule, through an API, or whatever.
There’ll be the Wolfram Publishing Platform that lets you create
documents, then insert interactive elements using the Wolfram Language
and its free-form linguistics—and then deploy the documents, on the web
using technologies like CloudCDF, that instantly support interactivity
in any web browser, or on mobile using the Wolfram Cloud App.
And we’ll be able to advance Mathematica a lot too. Like there’ll be Mathematica Online, in which a whole Mathematica
session runs on the cloud through a web browser. And on the desktop,
there’ll be seamless integration with the Wolfram Cloud, letting one
have things like persistent symbolic storage, and instant large-scale
parallelism.
And there’s still much more; the list is dauntingly long.
Here’s another example. Just as we curate all sorts of data and
algorithms, so also we’re curating devices and device connections. So
that built into the Wolfram Language, there’ll be mechanisms for
communicating with a very wide range of devices. And with our Wolfram
Embedded Computation Platform, we’ll have the Wolfram Language running
on all sorts of embedded systems, communicating with devices, as well as
with the cloud and so on.
At the center of everything is the Wolfram Language, and we intend to make this as widely accessible to everyone as possible.
The Wolfram Language is a wonderful first language to learn (and
we’ve done some very successful experiments on this). And we’re planning
to create a Programming Playground that lets anyone start to use the
language—and through the Programming Cloud even step up to make some
APIs and so on for free.
We’ve also been building the Wolfram Course Authoring Platform, that
does major automation of the process of going from a script to all the
elements of an online course—then lets one deploy the course in the
cloud, so that students can have immediate access to a Wolfram Language
sandbox, to be able to explore the material in the course, do exercises,
and so on. And of course, since it’s all based on our unified system,
it’s for example immediate that data from the running of the course can
go into the Wolfram Data Science Platform for analysis.
I’m very excited about all the things that are becoming possible. As
the Wolfram Language gets deployed in all these different places, we’re
increasingly going to be able to have a uniform symbolic representation
for everything. Computation. Knowledge. Content. Interfaces.
Infrastructure. And every component of our systems will be able to
communicate with full semantic fidelity, exchanging Wolfram Language
symbolic expressions.
Just as the lines between data, content and code blur, so too will
the lines between programming and mere input. Everything will become
instantly programmable—by a very wide range of people, either by using
the Wolfram Language directly, or by using free-form natural language.
There was a time when every computer was in a sense naked—with just
its basic CPU. But then came things like operating systems. And then
various built-in languages and application programs. What we have now is
a dramatic additional step in this progression. Because with the
Wolfram Language, we can in effect build into our computers a vast swath
of existing knowledge about computation and about the world.
If we’re forming a kind of global brain with all our interconnected
computers and devices, then the Wolfram Language is the natural language
for it. Symbolically representing both the world and what can be
created computationally. And, conveniently enough, being efficient and
understandable for both computers and humans.
The foundations of all of this come from decades spent on Mathematica, and Wolfram|Alpha, and A New Kind of Science. But
what’s happening now is something new and unexpected. The emergence, in
effect, of a new level of computation, supported by the Wolfram
Language and the things around it.
So far I can see only the early stages of what this will lead to. But
already I can tell that what’s happening is our most important
technology project yet. It’s a lot of hard work, but it’s incredibly
exciting to see it all unfold. And I can’t wait to go from “Coming Soon” to actual systems that people everywhere can start to use…
If your car were
powered by thorium, you would never need to refuel it. The vehicle would
wear out long before the fuel did. The thorium would last so long,
in fact, that it would probably outlive you.
That's why a company called Laser Power Systems has created a concept
for a thorium-powered car engine. The element is radioactive, and the
team uses small amounts of it to power a laser that heats water, produces
steam, and drives an energy-producing turbine.
Thorium is one of the most energy-dense fuels on the planet. A small
sample of it packs 20 million times more energy than a similarly sized
sample of coal, making it an ideal energy source.
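Multipliers like this depend heavily on what exactly is being compared. A back-of-the-envelope estimate from complete fission (the physical figures below are standard values assumed for illustration, not numbers from the article) already puts the thorium-to-coal energy ratio in the millions:

```python
# Rough fission-energy estimate for 1 kg of thorium vs. 1 kg of coal.
# Assumptions (not from the article): complete fission of Th-232
# (after breeding to U-233) releases ~200 MeV per atom; coal holds
# about 24 MJ/kg.
MEV_TO_J = 1.602e-13        # joules per MeV
AVOGADRO = 6.022e23         # atoms per mole
TH_MOLAR_MASS_KG = 0.232    # kg per mole of Th-232

energy_per_atom = 200 * MEV_TO_J                   # J released per fission
atoms_per_kg = AVOGADRO / TH_MOLAR_MASS_KG         # atoms in 1 kg of thorium
thorium_j_per_kg = energy_per_atom * atoms_per_kg  # ~8.3e13 J/kg
coal_j_per_kg = 24e6

ratio = thorium_j_per_kg / coal_j_per_kg
print(f"thorium releases ~{ratio:,.0f}x more energy per kg than coal")
```

The multiplier quoted for thorium varies by source and by what fraction of the fuel is assumed to actually fission; the point of the sketch is only the order of magnitude.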
The thing is, Dr. Charles Stevens, the CEO of Laser Power Systems, told Mashable that thorium engines won't be in cars anytime soon.
"Cars are not our primary interest," Stevens said. "The automakers don't want to buy them."
He said too much of the automobile industry is focused on making
money off of gas engines, and it will take at least a couple decades for
thorium technology to be used enough in other industries that vehicle
manufacturers will begin to consider revamping the way they think about
engines.
"We're building this to power the rest of the world," Stevens said.
He believes a thorium turbine about the size of an air conditioning unit
could provide cheap power for whole restaurants, hotels, office
buildings, even small towns in areas of the world without electricity.
At some point, thorium could power individual homes.
Stevens understands that people may be wary of thorium because it is radioactive — but any such worry would be unfounded.
"The radiation that we develop off of one of these things can be
shielded by a single sheet of aluminum foil," Stevens said. "You
will get more radiation from one of those dental X-rays than this."
3D printing may have an image problem. It’s sometimes seen as a
hobbyist pursuit—a fun way to build knickknacks from your living room
desktop—but a growing number of companies are giving serious thought to
the technology to help get new ideas off the ground.
That’s literally off the ground in aircraft maker Boeing’s case.
Thirty thousand feet in the air, some planes made by Boeing are
outfitted with air duct components, wiring covers, and other small,
general parts that have been made via 3D printing, or, as the process is
known in industrial applications, additive manufacturing. The company
also uses additive manufacturing with metal to produce prototype parts
for form, fit and function tests.
Whether it’s the living room or a corporate factory, the underlying
principle of 3D printing—additive manufacturing—is the same. It’s
different from traditional techniques such as subtractive
manufacturing, which removes material by drilling, grinding or
machining, and formative manufacturing, which shapes material in
molds. Additive manufacturing instead
starts from scratch and binds layers of material sequentially in
extremely thin sheets, into a shape designed with 3D modeling software.
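The layer-by-layer principle can be sketched in a few lines of Python; the 0.1 mm layer height below is a typical hobbyist-printer setting and an assumption here, not a figure from the article:

```python
# Build a part as a stack of thin slices: compute the z-height at
# which each successive layer of material is deposited.
def slice_heights(part_height_mm: float, layer_mm: float = 0.1) -> list[float]:
    n_layers = round(part_height_mm / layer_mm)
    return [round((i + 1) * layer_mm, 4) for i in range(n_layers)]

layers = slice_heights(5.0)          # a 5 mm tall part
print(len(layers), layers[:3])       # 50 layers, starting 0.1, 0.2, 0.3 mm
```

Real slicing software does far more, of course—it intersects each of these planes with the 3D model's mesh to produce toolpaths—but the stack of thin cross-sections is the core idea.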
Please, we call it "additive manufacturing"
Boeing has been conducting research and development in the area of
additive manufacturing since 1997, but the company wants to scale up its
processes in the years ahead so it can use the technology to build
larger, structural components that can be widely incorporated into
military and commercial aircraft.
For these larger titanium structures that constitute the backbone of
aircraft, “they generally fall outside of the capacity of additive
manufacturing in its current state because they’re larger than the
equipment that can make them,” said David Dietrich, lead engineer for
additive manufacturing in metals at Boeing.
“That’s our goal through aggressive new machine designs—to scale to larger applications,” he said.
Boeing’s use of 3D printing may seem unconventional because of the
growing attention on the technology’s consumer applications for things
like toys, figurines and sculptures. But it’s not.
In industry, “we don’t like to refer to it as ‘3D printing’ because
the term additive manufacturing has been around longer and is more
accepted,” Dietrich said.
For consumers, some of the more prominent 3D printer makers include
MakerBot, MakieLab and RepRap; industrial-grade makers include 3D
Systems, which also makes lower-cost models, Stratasys, ExOne and EOS.
The cost of a 3D printer varies widely. 3D Systems’ Cube, which is
designed for home users and hobbyists, starts at around $1,300. But
machines built for industrial-grade manufacturing in industries like
aerospace, automotive and medical, such as those made by ExOne, can
fetch prices as high as $1 million.
The average selling price for an industrial-grade 3D printer is about
$75,000, according to market research compiled by Terry Wohlers, an
analyst who studies trends in 3D printing. Most consumer printers go for
between $1,500 and $3,000, he said.
3D printing or additive manufacturing offers several advantages over
traditional subtractive processes. The biggest benefit, some businesses
say, is that the technology allows for speedier, one-off production of
products in-house.
At Boeing, the team handling additive manufacturing in plastics has
cut down its processing time dramatically. While it might take up to a
year to make some small parts using conventional tools, 3D printing can
lessen the processing time to a week, said Michael Hayes, lead engineer
for additive manufacturing in plastics at the company.
The company can also more easily tweak its products using the
technology, he said. “You can fail early,” Hayes said. “You can make the
first part very quickly, make changes, and get to a high-quality part
faster.”
Far beyond the hobbyists
NASA is another organization that is using 3D printers to experiment.
The space agency has been looking at the technology for years, but over
the past six months, NASA’s Jet Propulsion Laboratory has been using
the technology more frequently to test new concepts for parts that may
soon find their way into spacecraft.
Located in Pasadena, California, the lab has a dozen 3D printers
including consumer models made by companies such as MakerBot, Stratasys
and 3D Systems.
Previously, 3D printers were prohibitively expensive; the revolution now is
their affordability, said Tom Soderstrom, chief technology officer at
the lab. JPL uses the printers as a brainstorming tool as part of what
Soderstrom calls their “IT petting zoo.”
So far, the program’s results have been good. This past summer,
mechanical engineers used the printers to create concepts for simple
items like table trays. But an actual stand for a webcam was produced
too, to be used for conference calls. And engineers realized, using the
3D printers, they could incorporate the same swivel mechanism that was
used for the stand into their design for a new spacecraft part for
deploying parachutes.
“That was the ‘aha’ moment,” Soderstrom said, that the printers could
be used to conceive and print parts for actual spacecraft. The swivel
part, which has been designed but not manufactured yet, would provide
wiggle room to the parachute to reduce the torque or rotational impact
when it deploys.
Another advantage of having a 3D printer in-house is that it can give
a company an easier way to fine-tune designs for new products,
Soderstrom said. “It can take you 20 times to get an idea right,” he
said.
Soderstrom hopes that eventually entire spacecraft could be printed
using the technology. The spacecraft would be unmanned, and small,
perhaps a flat panel the size of an art book. “Not all spacecraft need
to look like the Voyager,” Soderstrom said.
For consumer-level 3D printers, the technology is still developing.
Depending on the machine, the printed objects are not always polished,
and the software to make the designs can be buggy and difficult to
learn, Soderstrom said. Software for generating designs for 3D printing
can be supplied by the printer vendor, take the form of computer-aided
design programs such as those from Autodesk, or come from large
engineering companies like Siemens.
Still, Soderstrom recommends that CIOs make the investment in 3D
printing and purchase or otherwise obtain several machines on loan. They
don’t have to be the most expensive models, he said, but companies
should try to identify which business units might see the most benefit
from the machines. Companies should try to find somebody who can act as
the “IT concierge”—a person with knowledge of the technology who can
advise the company how best to use it.
“Producing a high-fidelity part on some of the cheaper 3D printers
can be hard,” Soderstrom said. “This concierge could help with that.”
Certain skills this person may need could include knowing how to work
with multiple different materials within a single object, he said.
Companies don’t have to be as large as Boeing or NASA to get some use
out of 3D printers. The technology is also an option for small-business
owners and entrepreneurs looking to make customized designs for
prototypes and then print them in small-scale runs.
A new take on 3D printing
One company making strategic use of 3D printing is shipping and
logistics giant UPS. The company, which also makes its services
available to smaller customers via storefront operations, has responded
to the growing interest in the technology with a program designed to
help small businesses and startups that may not have the funds to
purchase their own 3D printer.
A poll of small-business owners conducted by UPS showed high interest
in trying out the technology, particularly among those wanting to
create prototypes, artistic renderings or promotional materials. So, in
July the company announced the start of a program that UPS said makes it the first nationwide retailer to test 3D printing services in-store.
Staples claims to be the first retailer to stock 3D printers for
consumers, but UPS says its program makes it the first to offer 3D
printing services like computer-aided design consultations in addition
to the printing itself.
Currently, there are six independently owned UPS store locations
offering Stratasys’ uPrint SE Plus printer, an industrial-grade machine.
A store in San Diego was the first to get it, followed by locations in
Washington, D.C.; Chicago; New York; and outside Dallas. In September,
the printer was installed at a location in Menlo Park, California, just
off Sand Hill Road in Silicon Valley, a street known for its
concentration of venture capital companies backing tech startups.
3D printed fashion via Shapeways.
The UPS Store will gather feedback from store owners and customers
over the next 12 months and then will decide whether to add printers in
additional stores if the test is successful.
So far at the San Diego store, costs to the customer have ranged from
$10, for lifelike knuckles printed by a medical device developer, to
$500 for a prototype printed by a prosthetics company. The biggest
factor in determining price is the complexity of the design.
The customer brings a digital file in the STL format to the store.
The store then checks to make sure the file is print-ready by running
it through a software program. If it is, the customer gets a quote for
the printing and labor costs.
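The article doesn't say which program the store uses, but a basic "print-ready" check on a binary STL file can be sketched from the format's fixed layout: an 80-byte header, a 4-byte little-endian triangle count, then 50 bytes per triangle. The function name and the check itself are illustrative, not UPS's actual software:

```python
import struct

def stl_looks_printable(path: str) -> bool:
    """Cheap structural sanity check for a binary STL upload."""
    with open(path, "rb") as f:
        data = f.read()
    if len(data) < 84:                 # header (80) + triangle count (4)
        return False
    (n_triangles,) = struct.unpack_from("<I", data, 80)
    # File size must be exactly header + count + 50 bytes per triangle.
    return n_triangles > 0 and len(data) == 84 + 50 * n_triangles
```

A real pre-flight check goes much further—verifying that the mesh is watertight and fits the printer's build volume, for instance—but even a structural check like this catches truncated or mislabeled uploads.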
Sometimes the digital file needs to be reworked or created from
scratch. In such cases, the customer can work with a contracted 3D
printing designer to iron out the design. Depending on how this meeting
goes, it can be a several-step process before a file is ready for
printing, said Daniel Remba, the UPS Store’s small-business technology
leader, who leads the company’s 3D printing project.
So far at the San Diego store, there have been several different
types of customers coming in to use the printer, said store owner Burke
Jones. They have ranged from small startups to engineers from larger
companies, government contractors and other people who just have an
interesting idea, he said.
One customer wanted a physical 3D replica of his own head, Jones
said. There was also a scuba diver who printed a light filter for an
underwater lamp and a mountain biker who printed a mount for a camera.
For early stage companies, Jones estimates that the store has printed
roughly a couple dozen product prototypes. In total, the store has done
probably as many as 50 printing jobs for various types of customers, he
said, producing 200 different parts.
In Menlo Park, the store has completed about 10 jobs with the printer, with at least 25 other inquiries pending.
A virtual physical enterprise
There are other online companies that offer 3D printing services. Two sites are Shapeways and Quickparts,
which take files uploaded by the customer and then print the object for
them. But the UPS Store project is different because it’s more
personal, Jones said.
“We get to know the people, and their vision,” he said.
3D Hubs is another company
betting that there are people who are interested in 3D printers but
don’t own one. The site operates like an Airbnb for 3D printers, by
helping people find 3D printers that are owned by other people or
businesses nearby.
3D printing is already a crucial element in some large companies’
manufacturing processes. But for smaller companies, the technology’s
biggest obstacle may be a lack of awareness about when it’s right to use
it, said Pete Basiliere, an industry analyst with Gartner.
Though the desktop machines may not be as advanced, their popularity
within the “maker” culture could provide that knowledge to the business
world. “The hype around the consumer market has made senior management
aware,” Brasiliere said.
Motorola has unveiled Project Ara,
an open-source initiative for modular smartphones with the goal to "do
for hardware what the Android platform has done for software." The
company plans to create an ecosystem that can support third-party
hardware development for individual phone components — in other words,
you could upgrade your phone's processor, display, and more by shopping
at different vendors.
Motorola will be working with Phonebloks,
which recently showed off a similarly ambitious concept for modular
smartphones; the Google-owned hardware manufacturer says that it plans
to engage with the Phonebloks community throughout the development
process and help realize the same idea with its technical expertise.
Project Ara's design comprises
an "endo" — the phone's endoskeleton, or basic structure — and
various modules. The modules "can be anything," says Motorola, giving
examples ranging from a new keyboard or battery to more unusual
components such as a pulse oximeter.
The company will be reaching
out to developers to start creating Ara modules, and expects the
developer's kit to be released in alpha this winter; interested parties
can sign up to be an "Ara Scout" now.
Humanity just made a large, DIY step towards a time when everyone can
upgrade themselves towards being a cyborg. Of all places, it happened
somewhere in the post-industrial tristesse of the German town of Essen.
It's there that I met up with biohacker Tim Cannon, and
followed along as he got what is likely the first-ever computer chip
implant that can record and transmit his biometric data. Combined in a
sealed box with a battery that can be wirelessly charged, it's not a
small package. And as we saw, Cannon had it implanted directly under his
skin by a fellow biohacking enthusiast, not a doctor, and without
anesthesia.
Called the Circadia 1.0, the implant can record data from
Cannon's body and transfer it to any Android-powered mobile
device. Unlike wearable computers and biometric-recording devices like
Fitbit, the subcutaneous device is open-source, and allows the user
full control over how data is collected and used.
The Circadia device before being implanted in Cannon's arm.
Because a regular surgeon wouldn't be allowed to implant a device
unapproved by medical authorities, Tim relied on the expertise of body
modification enthusiasts, who had all met in Essen for the BMXnet conference.
The procedure itself was so delicate that not only were we not allowed
to film the thing, but we were not even able to share where exactly it
took place.
One of the pioneers in body modification is Steve Haworth, who
conducted the surgery. Despite a family background in medical device
engineering, Haworth turned to the more experimental side of altering
the human body in the early 90s, first with piercing and tattoo studios
in Phoenix and later by developing modifications like 3D tattoos and the
metal mohawk.
Haworth used his own tools for the surgery, and as he's not a
board-certified surgeon, was not able to use anesthetics. He did
assure me that "there are pretty amazing things we can do with ice." It
sounded convincing at the time.
In its first version, the chip can record Cannon's body temperature
and transfer it in real time via Bluetooth. Three LEDs built into the
package serve as status lights, and can be controlled to light up the
tattoo in Cannon's forearm.
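The article doesn't document the Circadia's wire format, so purely as an illustration: if a reading arrived as a two-byte little-endian value in hundredths of a degree Celsius — a common convention for low-power sensors, and a hypothetical format here — decoding it would look like this:

```python
import struct

def decode_temperature(payload: bytes) -> float:
    """Decode a hypothetical 2-byte little-endian reading in 0.01 °C units."""
    (raw,) = struct.unpack("<h", payload)   # signed 16-bit, little-endian
    return raw / 100.0

print(decode_temperature(b"\x64\x0e"))  # 0x0E64 = 3684 -> 36.84 °C
```

Because the device is open-source, the real packet layout would be whatever the published firmware defines; the sketch only shows the kind of decoding an Android companion app would do on the receiving end.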
It's
a humble beginning, but updates are on the way. Grindhouse Wetware
has already completed the development of a pulse-monitoring device, and
Cannon has also been able to shrink the size of the Circadia system,
which will make the procedure quite a bit more user-friendly. He's also
working to automate communication between the chip and the internet of
things.
"I think that our environment should listen more accurately and
more intuitively to what's happening in our body," Cannon explained. "So
if, for example, I've had a stressful day, the Circadia will
communicate that to my house and will prepare a nice relaxing atmosphere
for when I get home: dim the lights, draw a hot bath."
So Cannon is essentially trying to integrate the body into the
growing quantified and connected universe. But unlike the life-loggers
and step-counter-users, biohackers take the concept of self-improvement
to the next level. Why would someone literally hack their own body?
According to Cannon, the developments are not about simply trying to
insert gadgets into one's body for a performance enhancement. The end
goal is to transcend the boundaries of biology, and try to hack
evolution itself.
As hacking, both for good and bad, has become pervasive in society,
and as the appropriation of communication networks has become a common
battleground, biohackers like Cannon are trying to take the fight
for self-determination into the realm of the technologized body.
It's as if a new dimension has been added to Michel Foucault's terms biopolitics and biopower,
which he developed in the 1970s. Through biopower, Foucault described
the emergence of a new form of power and politics as an extension to
traditional state power in the 20th century, which takes the body as an
object of quantification and the reproduction of society's power
structures. His point seems valid today, as the political fights of our
time not only take place in legal discourses, but are also being staged
over what's legal to do with one's own body.
In the future, hackers' and activists' disputes with restrictive
governments may not only be about communication, information, and
digital infrastructure, but may also shift into debates about our own
technologically-improved physical beings. In the face of companies and
governmental agencies developing implants that are protected by patents
and secret test procedures, the question of how to remain in control of
our bodies may turn out to be a very real and pressing social issue,
something Cannon is preemptively trying to push against.
Cannon does hold the human body as imperfect and failing in many
ways, and refuses to obey the "established medical industry's artificial
ideas about what 100 percent is." He has also decided to pause
theoretical and academic discussions on immortality, and is focusing
instead on what he terms practical transhumanism. In essence, he aims to
use open-source and networked research approaches to define the
capabilities of his body and to find out how far he can upgrade himself.
The security risks are real, as we've learned with Dick Cheney's heart.
At the same time, complex medical products tend to be mostly restricted
to those who can afford them. Even though the former vice president may
dislike the thought, a bunch of biohackers aim to prove that a
well-connected and enthusiastic underground culture of self-taught
garage tinkerers may be able to increase the safety and accessibility of
medical devices. The Linux community stands as proof enough for
Cannon that open-source implants can be developed safely and securely.
Building on the much lower development costs of an open
coding network, Cannon wants to realize his goal of offering cheap
artificial organs to everyone in the near future. "We have been working on the Circadia
Chip for 18 months, needing only a fraction of the costs that big
companies would use for this," he said. "The same will go for our next
projects and an artificial heart is a goal for us for the next decade."
Tim Cannon demonstrates the prototype of the Circadia and the control commands on his tablet device.
In a few months, the first production series of the Circadia chip
should be ready. With an expected price of around $500, the chip should
be relatively accessible for just about any enthusiast, and will mainly
be distributed through the networks of the body modification community.
That means that if you find the right local establishment, you could
have your own chip installed. Haworth told me that he would charge
roughly $200 for the procedure itself.
The question of whether or not we'll blur the line between man and
machine, of whether or not we'll enhance the human body, has long been
answered in the affirmative. Now it's a question of who will break new
ground, and when. As to whether the roboticization of humans is
in better hands with DIY enthusiasts or the medical
establishment, and who will make farther strides, only time will tell.
In the meantime, there are just slight battery difficulties to be resolved.