The iPad’s light, sleek, simple construction belies its complex
origins. There’s a lot of stuff in the iPad: aluminum and glass, of
course, but also other heavy metals and toxic chemicals. And
manufacturing each 1.44-pound iPad results in over 285 times
its own weight in greenhouse gas emissions. The manufacturing processes and
materials used in the iPad are two reasons why the iPad must be made in
China—and not just in the ways you’d expect.
Yes, labor is dirt cheap in China. Minimum wage was just $138/month at Hongkai Electronics in October 2010, compared to $1160/month in the US (based on a $7.25/hour federal minimum wage and a 40-hour work week).
And yes, environmental regulations in China are pretty minimal (though improving). China ranks 116th out of 132 countries on Yale’s 2012 Environmental Performance Index
rankings. Even with all its illegally run coltan mines, the
Democratic Republic of Congo ranks many points higher than China.
But there’s another important reason why Apple and other
manufacturers have their heels stuck in Chinese mud. iPad manufacturing,
like the manufacturing of other electronics, requires a significant
amount of rare earth elements, the 17 difficult-to-mine elements used in
all kinds of green technology. It’s hard to say exactly what rare
earths are in an iPad, since Apple is really tight-lipped about its
materials—no one can even get the company to confirm which manufacturer makes
its impact-resistant glass, though I suspect Asahi.
Cambridge engineering professor Dr. Tim Coombs
guesses that there may be lanthanum in the iPad’s lithium-ion polymer
battery, as well as “a range of rare earths to produce the different
colours” in the display. The magnets along the side of the iPad and in its cover (pictured above) are possibly a neodymium alloy. Electronics glass is often polished with cerium oxide. According to a Congressional Research Service report, worldwide demand for rare earths was 136,100 tons in 2010, 45 percent of which was for magnets, glass, and polishing.
All Our Rare Earths Come from a Pit Mine in China
Why is all this rare earth consumption a problem? China currently controls 95-97%
of the world’s supply of rare earths and has repeatedly cut export
quotas, sending already-high prices skyrocketing. Fearing dependence on
China for rare earths, two companies—Molycorp in California and Lynas Corp in Australia—plan
to begin mining rare earths this year. As green industry continues to
grow, however, it’s unclear if current mining operations will be able to
keep up with increasing demand.
Facing growing concern about the possibility of a rare earth shortage, President Obama recently lodged a complaint with the World Trade Organization against China over its rare earth policy. Some specialists think the complaint may be “too little, too late”—by the time China changes its policy, more manufacturers will have moved plants to China.
Recycling is Not a Rare Earth Solution
It might seem that the mountains of electronic waste would be a
perfect source of rare earths. But recycling isn’t the answer to the
rare earth shortage—at least not yet. Some Japanese recyclers are successfully recovering rare earths from compressors. But neither SIMS Recycling Solutions nor Electronics Recyclers International (ERI),
the two biggest electronics recyclers in the US, are currently
recovering any rare earths in their recycling process, according to SIMS
president Steve Skurnac and ERI CEO John Shegerian.
For now, Skurnac says, “Rare earths come in very minute
concentrations in electronic scrap,” which means that recyclers need
high volume and super-efficient processes to recover any reasonable
amount of rare earths from electronics. The technology just isn’t there
to make it economically feasible for most recyclers.
Today, an American electronics company can only be exempt from
China’s rare earth export quotas by manufacturing within China. So
that’s what most companies, including Apple, are doing. The only other
solution is for us to stop consuming so much—an option that people
rarely find appealing. Not as appealing as a retina display, at least.
While most camera innovations are aimed at higher megapixel counts or new image capturing techniques, Matt Richardson is taking an entirely different route with the Descriptive Camera:
creating a device that turns your captured imagery into words. Designed
as part of a class for New York University's Interactive
Telecommunications Program, the camera consists of a USB webcam, a
shutter button, a small thermal printer, and an Ethernet connection.
When a picture is "snapped," it's sent off to humans for analysis via
Amazon's Mechanical Turk API. The human on the other end then creates a
written description of the image, which is sent back to the camera. The
resulting text is printed with the thermal printer, framed by a
Polaroid-style photo outline (an example Richardson provides reads "It's
a dark room with a window. The image is quite pixelated.").
According to Richardson's post about the project,
the Amazon Human Intelligence Task — or HIT — cost is about $1.25 for
each image, with results usually taking between three and six minutes to
return. An "accomplice mode" actually lets the camera send out links to
the image via instant messenger, providing a cheaper option for human
interpretation. While the device currently requires external power from a
5-volt source, Richardson does hope to make a version at some point
that runs off self-contained batteries and can use wireless data. It's
certainly an interesting project, and we won't deny that we're smitten
with the idea of taking images out and about in the world, and seeing
them perceived through someone else's eyes.
Don’t you just hate it when you need to
solve a captcha whenever you want to log in to certain websites? You
know, those irritating slanted and jumbled groups of letters and numbers,
where sometimes you cannot even tell whether it is the letter ‘o’ or
the number ‘0’, or whether a particular letter is uppercase or not.
Captchas have been employed for some years already in order to verify
that the person behind the computer is made out of flesh and bone, and
is not an automated robot or program of any kind. Detroit-based tech
company Are You A Human
(interesting name) has come up with a different way of verifying the
authenticity of a user – not through captchas, but rather, the idea of a
simple game known as PlayThru.
PlayThru claims to prevent bots from spamming sites, as the game can
only be completed by an actual human being. Definitely sounds far more
fun in theory to “solve”, and if your less-than-informed boss walks by
your desk and sees you playing the latest game, just tell him or her that you
are solving a captcha replacement before you start work.
To get a better idea on how PlayThru works, here is an example of
just one of the games. You will be presented with your fair share of
items, including a shoe, a football jersey, an olive and a piece of
bacon, where all of them will float right beside a pizza. Should you
drag the right ingredients over the pizza, then you would have “won”,
and so far, I do not think that anyone would like a topping of shoes on
their pizza.
Soon you can get your hands on the Mobot modular
robot for a very reasonable $270 a module (pre-orders
available now). A number of connection plates and
attachments will also be available, and I
guess you can 3D print your own stuff.
Mobot by Barobo.com
I like the gripper that is powered and controlled by the
rotating faceplate. I am sure the same concept can be
used to 3D print some cool things in the future.
A printable connector would be an awesome thing, and definitely
worth paying a price of some sort for.
In general, it seems to be a very competent modular
robotics system. It uses a snap together connector,
making it simple and fast to use, but maybe not as
strong as a system that screws together.
There is a Graphical User Interface RobotController,
and you can program it with the C/C++ interpreter Ch,
so everyone from beginner to hard-core hacker should
be able to do some really cool stuff.
Here (@ American Scientist) is an interesting and rather complete article on programming language [evolution,war], with some infographics on programming history and methods. It is not that technical, making it accessible to anyone.
Develop looks at the platform with ambitions to challenge Adobe's ubiquitous Flash web player
Initially heralded as the future of
browser gaming and the next step beyond the monopolised world of Flash,
HTML5 has since faced criticism for being tough to code with and
possessing a string of broken features.
The coding platform, the fifth iteration of the HTML standard, was
supposed to be a one-stop shop for developers looking to create and
distribute their games across a multitude of platforms and browsers, but
things haven’t been plain sailing.
Including not just the new HTML mark-up language, but also
other features and APIs such as CSS3, SVG and JavaScript,
the platform was supposed to allow the easy insertion of features
for the modern browser, such as video and audio, and to provide them without
the need for users to install numerous plug-ins.
And whilst this has worked to a certain degree, and a number of
companies such as Microsoft, Apple, Google and Mozilla under the W3C
have collaborated to bring together a single open standard, the problems
it possesses cannot be ignored.
It doesn't get much more futuristic than "universal quantum network,"
but we're going to have to find something else to pine over, since a
UQN now exists. A group from the Max Planck Institute of Quantum Optics
has tied the quantum states of two atoms together using photons, creating the first network of qubits.
A quantum network is just like a regular network, the one that you're
almost certainly connected to at this very moment. The only difference
is that each node in the network is just a single atom (rubidium atoms,
as it happens), and those atoms are connected by photons. For the first
time ever, scientists have managed to get these individual atoms to read
a qubit off of a photon, store that qubit, and then write it out onto
another photon and send it off to another atom, creating a fully
functional quantum network that has the potential to be expanded to
however many atoms we want.
How Quantum Networking Works
You remember the deal with the quantum states of atoms, right? You
know, how you can use quantum spin to represent the binary states of
zero or one or both or neither all at the same time? Yeah, don't worry,
when it comes down to it it's not something that anyone really
understands. You just sort of have to accept that that's the way it is,
and that quantum bits (qubits) are rather weird.
So, okay, this quantum weirdness comes in handy when you want to create a very specific sort of computer,
but what's the point of a quantum network? Well, if you're the paranoid
sort, you're probably aware that when you send data from one place to
another in a traditional network, those data can be intercepted en route and read by some nefarious person with nothing better to do with their time.
The cool bit about a quantum network is that it offers a way
to keep a data transmission perfectly secure. To explain why this is
the case, let's first go over how the network functions. Basically,
you've got one single atom on one end, and another single atom on the
other end, and these two atoms are connected with a length of optical
fiber through which single photons can travel. If you get a bunch of
very clever people with a bunch of very expensive equipment together in a
room with one of those atoms, you can get that atom to emit a photon
that travels down the optical fiber containing the quantum signature of
the atom that it was emitted from. And when that photon runs smack into
the second atom, the photon imprints that atom with the quantum information from the first atom, entangling the two.
When two atoms are entangled like this, it means that you can measure
the quantum state of one of them, and even though the result of your
measurement will be random, you can be 100% certain that the quantum
state of the other one will match it. Why and how does this work? Nobody
has any idea. Seriously. But it definitely does, because we can do it.
Quantum Lockdown
Now, let's get back to this whole secure network thing. You've got a
pair of entangled atoms that you can measure, and you'll get back a
random state (a one or a zero) that you know will be the same for both
atoms. You can measure them over and over, getting a new random state
each time you do, and gradually you and the person measuring the other
atom will be able to build up a long string of totally random (but
totally identical) ones and zeros. This is your quantum key.
There are three things that make a quantum key so secure. Thing one
is that the single photon that transmits the entanglement itself cannot
be messed with, since messing with it screws up the quantum signature of
the atom that it originally came from. Thing two is that while you're
measuring your random ones and zeroes, if anyone tries to peek in and
measure your atom at the same time (to figure out your key), you'll be
able to tell. And thing three is that you don't have to send the key
itself back and forth, since you're relying on entangled atoms that
totally ignore conventional rules of space and time.*
Hooray, you've got a super-secure quantum key! To use it, you turn it
into what's called a one-time pad, which is a very old-fashioned and
very simple but theoretically 100% secure way to encrypt something. A
one-time pad is just a completely random string of ones and zeros.
That's it, and you've got one of those in the form of your quantum key.
Using binary arithmetic, you add that perfectly random string of data to
the data that make up your decidedly non-random message, ending up with
a new batch of data that looks completely random. You can send
that message through any non-secure network you like, and nobody will
ever be able to break it. Ever.
When your recipient (the dude with the other entangled atom and an
identical quantum key) gets your message, all they have to do is do that
binary arithmetic backwards, subtracting the quantum key from the
encrypted message, and that's it. Message decoded!
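That "binary arithmetic" is just bitwise XOR, which is its own inverse: adding the pad encrypts, and adding it again decrypts. A minimal sketch in Python, with ordinary pseudo-random bytes standing in for the shared quantum key:

```python
import secrets

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    """Add (XOR) a one-time pad to data, bit by bit."""
    assert len(pad) >= len(data), "pad must be at least as long as the message"
    return bytes(d ^ k for d, k in zip(data, pad))

message = b"attack at dawn"
# Stand-in for the random string built up from entangled measurements.
pad = secrets.token_bytes(len(message))

ciphertext = xor_bytes(message, pad)    # looks completely random
recovered = xor_bytes(ciphertext, pad)  # XOR is its own inverse
assert recovered == message
```

Because the pad is as long as the message and perfectly random, every possible plaintext is equally likely given the ciphertext; that is why the scheme is information-theoretically secure, provided the pad is never reused.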
The reason this system is so appealing is that theoretically, there are zero
weak points in the information chain. Theoretically (and we really do
have to stress that "theoretically"), an entangled quantum network
offers a way to send information back and forth with 100% confidence
that nobody will be able to spy on you. We don't have this capability
yet, but with this first operational entangled quantum network, we're
getting closer, in that all of the pieces of the puzzle do seem to
exist.
*If you're wondering why we can't use entanglement to transmit
information faster than the speed of light, it's because entangled atoms
only share their randomness. You can be sure that measuring one of them
will result in the same measurement on the other one no matter how far
away it is, but we have no control over what that measurement will be.
Paranoid Shelter is a recent installation / architectural device that fabric | ch finalized late in 2011 after a six-month residency at the EPFL-ECAL Lab in Renens (Switzerland). It was realized with the support of Pro Helvetia, the OFC, the City of Lausanne and the State of Vaud.
It was initiated and first presented as sketches back in 2008 (!), in
the context of a colloquium about surveillance at the Palais de Tokyo in
Paris.
Created in the context of a theatrical collaboration with French writer and essayist Eric Sadin around his books about contemporary surveillance (Surveillance globale and Globale paranoïa --both published back in 2009--), Paranoid Shelter
revisits the old figure/myth of the architectural shelter, articulated
by the use of surveillance technologies as building blocks.
Additional information on the overall project can be found through the two following links:
A compressed preview and short of the play by NOhista.
-----
On the first
technical drawings and sketches of the Paranoid Shelter project, the
entire system looked like a (big) mess of wires, sensors
and video cameras, all concentrated in a pretty tiny space in which humans
would have difficulty moving. The entire space is consciously
organised around tracking methods/systems, the space being delimited
by 3 [augmented] posts which host a set of sensors, video cameras and
microphones. It includes networked [power over ethernet] video cameras,
microphones and a set of wireless ambient sensors (giving the ability
to measure temperature, O2 and CO2 gas concentration, current
atmospheric pressure, light, etc.).
Based on a real-time analysis of the
main sensors' data, the system is able to control DMX lights, a set of
two displays (one LCD screen and one projector) and to produce sound
through a dynamically generated text-to-speech process.
All programs were developed using
openFrameworks, enhanced by a set of dedicated in-house C++ libraries
in order to be able to capture networked camera video flows, control
any DMX-compatible piece of hardware and collect the wireless Libelium sensors'
data. The sound analysis programs, the LCD display program and the main
program are all connected to each other via a local network. The main
program is in charge of collecting the other programs' data, performing
the global analysis of the system's activity, recording the system's raw
information to a database and controlling the system's [re]actions
(lights, displays).
The overall system can act in an
[autonomous] way by controlling the entire installation's behavior,
while it can also be remotely controlled when used on stage,
in the context of a theater play.
Collecting all the sensor flows is one of
the basic tasks. Cameras are used to track movements, microphones
measure sound activity and sensors collect a set of ambient
parameters. Even if data capture consists of some basic network-based
tasks, it is easily raised to an upper complexity level when each data
collection should occur simultaneously, in real time, [without,with]
a [limited,acceptable] delay. The major raw data analysis has to occur
directly after data acquisition in order to minimize the time-shift
in the system's space awareness. This first level of data analysis
mainly brings out frequency information, quantity of activity and
2D location tracking (from the point of view of each camera). Every
single piece of raw information is systematically recorded in a
dedicated database: this reduces the system's memory footprint (by keeping
it almost constant) without losing any activity information. From
time to time the system can access this recorded information in its
post-analysis process, when required, mainly to add a time-scale
dimension to the global activity that occurred in the monitored
space. Time-isolated information can be interpreted in a rough and
basic way, while the time composition of the same information, or of a set of
information, may bring additional meaning by verifying information
consistency over time (of course, this could work in a negative or a
positive way, by confirming or refuting a first-level deduced
activity). Another level of analysis can be reached by
taking into account the spatial distribution of sensors in the overall
installation. The system is then able to compute 3D information,
gaining an awareness of activities within the space it is monitoring.
This generates a second level of data analysis, spatialised, that
increases the system's global understanding of the captured data.
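The record-everything-immediately strategy described above can be sketched roughly as follows (the schema and names here are my own illustration, not fabric | ch's actual code, which is C++/openFrameworks): each raw reading goes straight to a database, keeping the in-memory state constant, while time-based post-analysis queries the history later.

```python
import sqlite3
import time

# Hypothetical schema: one row per raw sensor reading.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE readings (
    ts      REAL,   -- acquisition time
    sensor  TEXT,   -- e.g. 'camera1', 'mic2', 'ambient3'
    channel TEXT,   -- e.g. 'activity', 'frequency', 'co2'
    value   REAL)""")

def record(sensor: str, channel: str, value: float) -> None:
    """Persist a first-level analysis result immediately; keep nothing in memory."""
    db.execute("INSERT INTO readings VALUES (?, ?, ?, ?)",
               (time.time(), sensor, channel, value))

# First-level analysis results go straight to the database...
record("camera1", "activity", 0.42)
record("mic2", "frequency", 318.0)

# ...and post-analysis adds the time dimension later, e.g. average activity
# over the recorded history, to confirm or refute a first-level deduction.
(mean_activity,) = db.execute(
    "SELECT AVG(value) FROM readings WHERE channel = 'activity'").fetchone()
```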
Recorded activities are made available
to the [audience,visitors] through a wifi access point. Networked
cameras can be accessed in real time, giving humans the ability to
see some of the system's [inputs]. Network activity is also
monitored as another sign of human presence, so the system can
[detect] activity elsewhere than in its dedicated space.
However numerous the collected
data are, the system faces a real problem when it comes to the
interpretation of these data without the benefit of a human
brain. Events that are quite obvious to humans do not mean anything
to computers and software. In order to avoid the use of some
artificial neural network simulation (which may still be a good
option to explore), I have decided to compute a limited set of
parameters, all based on previously analysed data, computed
only at the last moment, when the system may decide to react to perceived activities.
This defines a kind of global [mood] of the system, based on which it
will [decide] whether to be aggressive (from a human point of view),
by making the global tracking activity [noticeable] to humans
evolving in the installation's space, by focusing tracking sensors
on a given area or by trying to enhance some sensor's information
analysis, or whether to settle into a kind of silent mode.
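A toy version of such a [mood] parameter might look like the following sketch. The weights, thresholds and mode names are entirely my own invention for illustration; the installation's actual parameters are not documented here.

```python
def mood(activity: float, sound_level: float, network_hits: int) -> str:
    """Collapse a few analysed parameters into a single global [mood].

    activity and sound_level are assumed normalised to 0..1;
    network_hits counts wifi accesses, another sign of human presence.
    """
    score = (0.5 * activity
             + 0.3 * sound_level
             + 0.2 * min(network_hits / 10, 1.0))
    if score > 0.6:
        return "aggressive"  # make the tracking noticeable: lights, sound
    elif score > 0.3:
        return "focused"     # concentrate tracking sensors on one area
    return "silent"          # settle into silent mode

assert mood(0.9, 0.8, 20) == "aggressive"
```

Even a scheme this simple, once layered on top of time-based and spatial analysis, drifts away from anything the programmer can fully predict, which is the point made below.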
Moreover, the evolution of these
parameters is also studied over time, making the [mood] evolve in
a human way, increasing and decreasing [analogically]. The system's
[mood] may be wrong or [unjustified,weird] from a human point of
view, but that's where [multi-dimensional] software becomes
interesting. Beyond a certain complexity, with computation
layers added on top of each other, having written every single line of code
does not allow the programmer to predict precisely what the system's
next [re]action will be.
Here we reach the limitation of monitoring systems,
which is obviously [interpretation,comprehension]. As long as an automatic
system cannot correctly [understand] data, humans will need to be in
the loop, making all these monitoring systems quite useless [as
expert systems], except for producing an enormous quantity of data
that still needs to be post-analysed by a human brain. As the system
produces an important set of heterogeneous data, a set of rules
may suggest to the system some sort of data correlation. These rules
should not be too [tight,precise], in order to avoid producing
obvious interpretations, while keeping them slightly [out of
focus] may allow [smart,astonishing] conclusions to be produced. So
there's room here for additional implementation of the data analysis
processes that can still completely change the way the entire
installation [can,may] behave.
Extreme Tech has just published an interesting article on the history of super-comput[er,ing] that is worth a read. It is a bit spec-oriented but still gives a good overview of the super-computer [r]evolution.
"25 million laptops later," Mashable announced today, "One Laptop Per Child doesn't increase test scores." "Error Message," reads the headline from The Economist: "A disappointing return from an investment in computing."
The tenor of these stories feels like a grand "Gotcha!"
for ed-tech: It's shiny stuff, sure, but it offers no measurable gains
in "student achievement." So while the OLPC project might have been a
good idea, so the story goes, it is not a good investment.
One Laptop Per Child was a good idea, a noble and ambitious
one at that. Originally proposed in 2006, OLPC aimed to build an
inexpensive laptop that would be sold to governments in the developing
world and made available in turn to the children in those countries via
their respective ministries of education. Easier said than done. Over
the course of the past six years, the OLPC has fussed with hardware and
software specs, finally building a laptop (and now, a tablet) that costs $200 (twice the originally promised price).
In the meantime, much of the developing world has experienced its own
mobile computing revolution. There are now a number of manufacturers
working on low-cost devices for that market. There's the Intel Classmates PC, for example (with similar hardware, but more expensive software than its OLPC cousin); there's the Worldreader project (it delivers villages a library full of e-books via Kindles); and there's the now-infamous Aakash tablet (which was sold in India for $35 but with its reliability and functionality very much in question).
Arguably more significant than the competition OLPC faces from these low-cost tablets and netbooks: 95% of the world's population now owns a cellphone, by some estimates (See Wikipedia's list
of mobile phone penetration, broken down by country). Of course, a
clamshell phone is hardly the same as a laptop. One has SMS; the other, a
command line. Nonetheless, the ubiquity of the cellphone makes it
clear that the value proposition of the OLPC device needs to be more
than just "access" and "connectivity."
The mission of the non-profit organization always stressed something
broader, bigger -- One Laptop per Child meant empowerment, engagement,
and education:
We aim to provide each child with a rugged, low-cost,
low-power, connected laptop. To this end, we have designed hardware,
content and software for collaborative, joyful, and self-empowered
learning. With access to this type of tool, children are engaged in
their own education, and learn, share, and create together. They become
connected to each other, to the world and to a brighter future.
No mention of improving standardized test scores in there, you'll
notice. No talk of "student achievement." "The best preparation for
children," according to the OLPC website isn't test prep. It is "to develop the passion for learning and the ability to learn how to learn."
Standardized test scores in math and in language do not reflect "the
ability to learn how to learn" -- they don't even purport to. But we
fixate on test scores nevertheless. It is worth noting here that the
study that prompted today's headlines about OLPC's "disappointing" test
results -- one conducted by the Inter-American Development Bank
using data collected from some 300 primary schools in rural Peru -- did
find some improvement in students' cognitive skills (as in, "the
ability to learn how to learn").
The study links that boost in cognitive skills to "increased
interaction with technology." Make of that what you will. The study
also found that having access to computers increases your access to
computers. To quote Keanu Reeves here, "Whoa."
The study points out other things too, and it asks "Could stricter
adherence to the OLPC principles have brought about better academic
outcomes?" Many students were not allowed to take their laptops home.
Internet access was "practically non-existent." Just 70% of teachers
had 40 hours of professional development before their students were
given the devices.
That last (missing) piece -- training for teachers -- has long been
something that gets overlooked when it comes to ed-tech initiatives no
matter the location, Peru or the U.S. It is almost as if we believe we
can simply parachute technology into a classroom and expect everyone to
just pick it up, understand it, use it, hack it, and prosper.
Oh right. OLPC has done just that, a la The Gods Must Be Crazy,
whereby tablets were quite literally dropped into villages from
helicopters. Okay, not everyone receives their devices this way, but
OLPC has always been fairly hands-off in its training implementation
efforts. It's one of the major criticisms that the organization has
faced (along with criticisms about price, hardware, software, and
environmental sustainability).
For his part, Nicholas Negroponte, the head of the OLPC foundation,
frequently points to the work of Sugata Mitra and the "Hole in the Wall
Project" as inspiration
-- the belief that children can learn (and teach each other) on their
own. Children are naturally inquisitive; they are ingenious. Access to
an Internet-enabled computing device is sufficient. They will "figure
it out."
It's part of what Mitra and Negroponte call a "minimally invasive education."
Considering the colonial legacy of education systems in the developing
world, avoiding "invasion" seems profoundly important.
But there remains a strange tension between dropping in a Western
technological "solution" and insisting doing so is "non-invasive." At
its best, the OLPC represents a desire to support literacy,
connectivity and learning through technology. But it does those things
in a world of ubiquitous cellphones, which on their own have not
transformed education either. In an effort to be "non-invasive" then,
OLPC ends up often being unsupportive -- unsupportive of the tech, the
teachers and the learners.
But is that failure? It doesn't feel like pointing to standardized
test scores in math and language is the right measure at all to gauge
this. It goes against the core of the OLPC mission. But then again,
these measurements are political, not necessarily pedagogical. And
these scores reveal less about the global reach or potential of
technology, and more about the dominant narratives of the U.S. education
system: "what counts" as learning, and "what counts" in terms of
ed-tech's role in delivering or enabling it -- why, standardized test
scores, of course.