Workers assemble and perform quality control checks on MacBook Pro display enclosures at an Apple supplier facility in Shanghai.
(Credit: Apple)
A new report on Apple offers up an interesting detail about the evolution of the
iPhone and gives a fascinating--and unsettling--look at the practice of overseas manufacturing.
The article, an in-depth report
by Charles Duhigg and Keith Bradsher of The New York Times, is based on
interviews with, among others, "more than three dozen current and
former Apple employees and contractors--many of whom requested anonymity
to protect their jobs."
The piece uses Apple and its recent history to look at why the
success of some U.S. firms hasn't led to more U.S. jobs--and to examine
issues regarding the relationship between corporate America and
Americans (as well as people overseas). One of the questions it asks is:
Why isn't more manufacturing taking place in the U.S.? And Apple's
answer--and the answer one might get from many U.S. companies--appears
to be that it's simply no longer possible to compete by relying on
domestic factories and the ecosystem that surrounds them.
The iPhone detail crops up relatively early in the story, in an
anecdote about then-Apple CEO Steve Jobs. And it leads directly into
questions about offshore labor practices:
In 2007, a little over a month before the iPhone was
scheduled to appear in stores, Mr. Jobs beckoned a handful of
lieutenants into an office. For weeks, he had been carrying a prototype
of the device in his pocket.
Mr. Jobs angrily held up his iPhone, angling it so everyone could see
the dozens of tiny scratches marring its plastic screen, according to
someone who attended the meeting. He then pulled his keys from his
jeans.
People will carry this phone in their pocket, he said. People also carry
their keys in their pocket. "I won't sell a product that gets
scratched," he said tensely. The only solution was using unscratchable
glass instead. "I want a glass screen, and I want it perfect in six
weeks."
A tall order. And another anecdote suggests that Jobs' staff went
overseas to fill it--along with other requirements for the top-secret
phone project (code-named, the Times says, "Purple 2"):
One former executive described how the company relied
upon a Chinese factory to revamp iPhone manufacturing just weeks before
the device was due on shelves. Apple had redesigned the iPhone's screen
at the last minute, forcing an assembly line overhaul. New screens began
arriving at the plant near midnight.
A foreman immediately
roused 8,000 workers inside the company's dormitories, according to the
executive. Each employee was given a biscuit and a cup of tea, guided to
a workstation and within half an hour started a 12-hour shift fitting
glass screens into beveled frames. Within 96 hours, the plant was
producing more than 10,000 iPhones a day.
"The speed and flexibility is breathtaking," the executive said. "There's no American plant that can match that."
That last quote, like several others in the story, leaves one
feeling almost impressed by the no-holds-barred capabilities of these
manufacturing plants--impressed and queasy at the same time. Here's
another quote, from Jennifer Rigoni, Apple's worldwide supply demand
manager until 2010: "They could hire 3,000 people overnight," she says,
speaking of Foxconn City, Foxconn Technology's complex of factories in China. "What U.S. plant can find 3,000 people overnight and convince them to live in dorms?"
The article says that cheap and willing labor was indeed a factor in
Apple's decision, in the early 2000s, to follow most other electronics
companies in moving manufacturing overseas. But, it says, supply chain
management, production speed, and flexibility were bigger incentives.
"The entire supply chain is in China now," the article quotes a
former high-ranking Apple executive as saying. "You need a thousand
rubber gaskets? That's the factory next door. You need a million screws?
That factory is a block away. You need that screw made a little bit
different? It will take three hours."
It also makes the point that other factors come into play. Apple
analysts, the Times piece reports, had estimated that in the U.S., it
would take the company as long as nine months to find the 8,700
industrial engineers it would need to oversee workers assembling the
iPhone. In China it wound up taking 15 days.
The article and its sources paint a vivid picture of how much easier
it is for companies to get things made overseas (which is why so many
U.S. firms go that route--Apple is by no means alone in this). But the
underlying humanitarian issues nag at the reader.
Perhaps there's hope--at least for overseas workers--in last week's news that Apple has joined the Fair Labor Association, and that it will be providing more transparency when it comes to the making of its products.
As for manufacturing returning to the U.S.? The Times piece cites an unnamed guest at President Obama's 2011 dinner with Silicon Valley bigwigs.
Obama had asked Steve Jobs what it would take to produce the iPhone in
the states, why that work couldn't return. The Times' source quotes Jobs
as having said, in no uncertain terms, "Those jobs aren't coming back."
Apple, by the way, would not provide a comment to the Times about
the article. And Foxconn disputed the story about employees being
awakened at midnight to work on the iPhone, saying strict regulations
about working hours would have made such a thing impossible.
Researchers have succeeded in combining the power of quantum
computing with the security of quantum cryptography and have shown that
perfectly secure cloud computing can be achieved using the principles of
quantum mechanics. They have performed an experimental demonstration of
quantum computation in which the input, the data processing, and the
output remain unknown to the quantum computer. The international team of
scientists will publish the results of the experiment, carried out at
the Vienna Center for Quantum Science and Technology (VCQ) at the
University of Vienna and the Institute for Quantum Optics and Quantum
Information (IQOQI), in the forthcoming issue of Science.
Quantum computers are expected to play an important role in future
information processing since they can outperform classical computers at
many tasks. Considering the challenges inherent in building quantum
devices, it is conceivable that future quantum computing capabilities
will exist only in a few specialized facilities around the world – much
like today's supercomputers. Users would then interact with those
specialized facilities in order to outsource their quantum computations.
The scenario follows the current trend of cloud computing: central
remote servers are used to store and process data – everything is done
in the "cloud." The obvious challenge is to make globalized computing
safe and ensure that users' data stays private.
The latest research, to appear in Science, reveals that
quantum computers can provide an answer to that challenge. "Quantum
physics solves one of the key challenges in distributed computing. It
can preserve data privacy when users interact with remote computing
centers," says Stefanie Barz, lead author of the study. This newly
established fundamental advantage of quantum computers enables the
delegation of a quantum computation from a user who does not hold any
quantum computational power to a quantum server, while guaranteeing that
the user's data remain perfectly private. The quantum server performs
calculations, but has no means to find out what it is doing – a
functionality not known to be achievable in the classical world.
The scientists in the Vienna research group have demonstrated the
concept of "blind quantum computing" in an experiment: they performed
the first known quantum computation during which the user's data stayed
perfectly encrypted. The experimental demonstration uses photons, or
"light particles," to encode the data. Photonic systems are well suited
to the task because quantum computation operations can be performed on
them, and they can be transmitted over long distances.
The process works in the following manner. The user prepares qubits –
the fundamental units of quantum computers – in a state known only to
himself and sends these qubits to the quantum computer. The quantum
computer entangles the qubits according to a standard scheme. The actual
computation is measurement-based: the processing of quantum information
is implemented by simple measurements on qubits. The user tailors
measurement instructions to the particular state of each qubit and sends
them to the quantum server. Finally, the results of the computation are
sent back to the user, who can interpret and use them. Even if the
quantum computer or an eavesdropper tries to read the qubits, without
knowing the initial state they gain no useful information; they are "blind."
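For readers who want to see the blinding idea in miniature, here is a rough single-qubit sketch in Python (using NumPy). It illustrates the principle only, not the photonic experiment or the full protocol; the allowed angles, the random bit, and all names below are my own choices. The client hides its qubit behind a random phase and folds the real measurement angle into the instruction it sends, so what the server sees is uniformly random while the client still recovers the correct statistics.

    # Toy illustration of "blind" measurement-based computing on one qubit.
    # Not the experimental protocol; an assumption-laden sketch for intuition.
    import numpy as np

    rng = np.random.default_rng(1)

    def plus_state(angle):
        """(|0> + e^{i*angle}|1>) / sqrt(2)."""
        return np.array([1.0, np.exp(1j * angle)]) / np.sqrt(2)

    def server_measure(state, delta):
        """Measure in the {|+_delta>, |-_delta>} basis; return 0 or 1."""
        p0 = abs(np.vdot(plus_state(delta), state)) ** 2
        return 0 if rng.random() < p0 else 1

    phi = np.pi / 4                        # the client's secret computation angle
    allowed = np.arange(8) * np.pi / 4     # blinding phases used in BQC-style schemes

    results, instructions = [], []
    for _ in range(4000):
        theta = rng.choice(allowed)        # random blinding phase, known only to the client
        r = rng.integers(2)                # random bit that flips the reported outcome
        qubit = plus_state(theta)          # state sent to the server
        delta = (phi + theta + r * np.pi) % (2 * np.pi)   # blinded instruction
        m = server_measure(qubit, delta)   # what the server actually sees
        results.append(m ^ r)              # client undoes its own flip
        instructions.append(delta)

    print("P(outcome=0) recovered by client:", 1 - np.mean(results))
    print("expected cos^2(phi/2):           ", np.cos(phi / 2) ** 2)
    print("distinct instruction angles seen by server:", len(set(np.round(instructions, 6))))

Run over many trials, the client's recovered statistics match its secret angle, while the instructions the server receives range uniformly over all allowed angles and carry no information about it.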
###
The research at the Vienna Center for
Quantum Science and Technology (VCQ) at the University of Vienna and at
the Institute for Quantum Optics and Quantum Information (IQOQI) of the
Austrian Academy of Sciences was undertaken in collaboration with the
scientists who originally invented the protocol, based at the University
of Edinburgh, the Institute for Quantum Computing (University of
Waterloo), the Centre for Quantum Technologies (National University of
Singapore), and University College Dublin.
Publication: "Demonstration of Blind Quantum Computing"
Stefanie Barz, Elham Kashefi, Anne Broadbent, Joseph Fitzsimons, Anton Zeilinger, Philip Walther.
DOI: 10.1126/science.1214707
Many of us have been waiting for the moment when 3D printers
would not only be offered ready-to-use, without the need for DIY
assembly, but at a price comparable to a common computer. Well, get
excited, because that day has arrived.
Created by 3D Systems, the Cube
will retail for just $1,299 and is connected to a community of 3D
designers where you can find inspiration, or upload your own designs and
sell them in the Cubify marketplace. Admittedly, the MakerBot Replicator
is only a tad more expensive at $1,749, but just like the early
versions of the home Windows PC versus the Mac, the Cube wins on style
points for those who prefer a less industrial look and feel to their 3D
printer.
You can order the Cube 3D printer here and check out the design-to-fabrication process in the video below.
What if you could feel what's on your television screen? Tech company Senseg is working on a way for you to someday be able to do just that, and recently demonstrated a prototype tablet that is already able to make that magic happen.
The tech is made possible using an electrostatic-field-based system that allows different parts of the screen to produce varying degrees of friction. So, while you're touching a flat screen, it feels like you're touching something textured instead. Your traditional screen is turned into what Senseg is calling a "Feel Screen," allowing you to feel textures, contours and edges of things that are displayed in front of you.
Feel Screens don't rely on moving parts in the screen itself, and could be integrated into devices we use today such as smartphones, tablets, and televisions. Senseg's technology is still very much in prototype form, but could be headed our way in the next 24 months.
Through this signage at Promenade Temecula, the mall is notifying shoppers that their phones may be tracked as they move throughout the premises.
NEW YORK (CNNMoney) -- Attention holiday shoppers: your cell phone may be tracked this year.
Starting on Black Friday and running through New Year's Day, two U.S. malls -- Promenade Temecula in southern California and Short Pump Town Center in Richmond, Va. -- will track guests' movements by monitoring the signals from their cell phones.
While the data that's collected is anonymous, it can follow shoppers' paths from store to store.
The goal is for stores to answer questions like: How many Nordstrom shoppers also stop at Starbucks? How long do most customers linger in Victoria's Secret? Are there unpopular spots in the mall that aren't being visited?
While U.S. malls have long tracked how crowds move throughout their stores, this is the first time they've used cell phones.
But obtaining that information comes with privacy concerns.
The management company of both malls, Forest City Commercial Management, says personal data is not being tracked.
"We won't be looking at singular shoppers," said Stephanie Shriver-Engdahl, vice president of digital strategy for Forest City. "The system monitors patterns of movement. We can see, like migrating birds, where people are going to."
Still, the company is preemptively notifying customers by hanging small signs around the shopping centers. Consumers can opt out by turning off their phones.
The tracking system, called FootPath Technology, works through a series of antennas positioned throughout the shopping center that capture the unique identification number assigned to each phone (similar to a computer's IP address), and tracks its movement throughout the stores.
The system can't take photos or collect data on what shoppers have purchased. And it doesn't collect any personal details associated with the ID, like the user's name or phone number. That information is fiercely protected by mobile carriers, and often can be legally obtained only through a court order.
"We don't need to know who it is and we don't need to know anyone's cell phone number, nor do we want that," Shriver-Engdahl said.
Manufactured by a British company, Path Intelligence, this technology has already been used in shopping centers in Europe and Australia. And according to Path Intelligence CEO Sharon Biggar, hardly any shoppers decide to opt out.
"It's just not invasive of privacy," she said. "There are no risks to privacy, so I don't see why anyone would opt out."
Now, U.S. retailers including JCPenney (JCP, Fortune 500) and Home Depot (HD, Fortune 500) are also working with Path Intelligence to use their technology, Biggar said.
Home Depot has considered implementing the technology but is not currently using it in any stores, a company spokesman said. JCPenney declined to comment on its relationship with the vendor.
Some retail analysts say the new technology is nothing to be worried about. Malls have been tracking shoppers for years through people counters, security cameras, heat maps and even undercover researchers who follow shoppers around.
And some even say websites that track online shoppers are more invasive, recording not only a user's name and purchases but also targeting them with ads even after they've left a site.
"It's important for shoppers to realize this sort of data is being collected anyway," Biggar said.
Whereas a website can track a customer who doesn't make a purchase, physical stores have been struggling to perfect this kind of research, Biggar said. By combining the data from FootPath with their own sales figures, stores will have better measurements to help them improve the shopping experience.
"We can now say, you had 100 people come to this product, but no one purchased it," Biggar said. "From there, we can help a retailer narrow down what's going wrong."
But some industry analysts worry about the broader implications of this kind of technology.
"Most of this information is harmless and nobody ever does anything nefarious with it," said Sucharita Mulpuru, retail analyst at Forrester Research. "But the reality is, what happens when you start having hackers potentially having access to this information and being able to track your movements?"
Last year, hackers hit AT&T, exposing the unique ID numbers and e-mail addresses of more than 100,000 iPad 3G owners. To make it harder for hackers to get at this information, Path Intelligence scrambles those numbers twice.
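The article doesn't say what that scrambling actually looks like. As a purely illustrative sketch, one common approach would be to hash the identifier twice with separate secret salts; everything below, from the salts to the sample ID, is hypothetical and not a description of Path Intelligence's scheme.

    # Hypothetical double-scrambling of a device identifier with two secret salts.
    import hashlib

    def scramble(device_id: str, salt1: bytes, salt2: bytes) -> str:
        first = hashlib.sha256(salt1 + device_id.encode()).digest()   # first pass
        return hashlib.sha256(salt2 + first).hexdigest()              # second pass

    print(scramble("310410123456789", b"mall-secret-1", b"mall-secret-2"))

Without both salts, the stored value cannot be mapped back to the phone.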
"I'm sure as more people get more cell phones, it's probably inevitable that it will continue as a resource," Mulpuru said. "But I think the future is going to have to be opt in, not opt out."
Personal comment:
One step further. I guess we have to be thankful to be given the ability to opt out of the system by 'just' switching off our cell phones!!!
The Kilobots are an inexpensive system for testing synchronized and collaborative behavior in a very large swarm of robots. Photo courtesy of Michael Rubenstein
The Kilobots are coming. Computer scientists and engineers at Harvard University have developed and licensed technology that will make it easy to test collective algorithms on hundreds, or even thousands, of tiny robots.
Called Kilobots, the quarter-sized bug-like devices scuttle around on three toothpick-like legs, interacting and coordinating their own behavior as a team. A June 2011 Harvard Technical Report demonstrated a collective of 25 machines implementing swarming behaviors such as foraging, formation control, and synchronization.
Once up and running, the machines are fully autonomous, meaning there is no need for a human to control their actions.
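For a rough feel of one of the behaviors mentioned above, here is a toy firefly-style synchronization loop in Python. It is a simplified pulse-coupled oscillator model of my own, not the Kilobot firmware or the algorithms from the technical report: every simulated robot nudges its clock forward whenever it sees a flash, and over time the flashes bunch together without any central controller.

    # Toy pulse-coupled synchronization: robots flash when their clock hits 1.0,
    # and each flash pulls the other clocks forward a little.
    import random

    random.seed(0)
    N = 100            # simulated robots; everyone sees everyone, for simplicity
    COUPLING = 0.2     # how strongly a flash nudges the other clocks forward
    TICK = 0.01        # how far each clock advances per step
    STEPS = 5000

    phases = [random.random() for _ in range(N)]   # each robot's clock in [0, 1)
    largest_burst = 0

    for step in range(STEPS):
        # clocks advance; any robot whose clock reaches 1.0 flashes
        flashed = set()
        for i in range(N):
            phases[i] += TICK
            if phases[i] >= 1.0:
                flashed.add(i)
        # each wave of flashes nudges the remaining clocks forward, which can
        # push more robots over the threshold in the same step (absorption)
        wave = set(flashed)
        while wave:
            wave = set()
            for i in range(N):
                if i not in flashed:
                    phases[i] *= 1.0 + COUPLING
                    if phases[i] >= 1.0:
                        wave.add(i)
            flashed |= wave
        for i in flashed:
            phases[i] = 0.0            # flashed robots restart their cycle
        if step >= STEPS - 200:        # near the end, see how bunched the flashes are
            largest_burst = max(largest_burst, len(flashed))

    print(f"largest simultaneous flash near the end: {largest_burst} of {N} robots")

Each robot only reacts to what it can locally observe, which is the same design constraint the Kilobots are built to explore at much larger scale.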
The communicative critters were created by members of the Self-Organizing Systems Research Group led by Radhika Nagpal, the Thomas D. Cabot Associate Professor of Computer Science at the Harvard School of Engineering and Applied Sciences (SEAS) and a Core Faculty Member at the Wyss Institute for Biologically Inspired Engineering at Harvard. Her team also includes Michael Rubenstein, a postdoctoral fellow at SEAS; and Christian Ahler, a fellow of SEAS and the Wyss Institute.
Thanks to a technology licensing deal with the K-Team Corporation, a Swiss manufacturer of high-quality mobile robots, researchers and robotics enthusiasts alike can now take command of their own swarm.
One key to achieving high-value applications for multi-robot systems in the future is the development of sophisticated algorithms that can coordinate the actions of tens to thousands of robots.
"The Kilobot will provide researchers with an important new tool for understanding how to design and build large, distributed, functional systems," says Michael Mitzenmacher, Area Dean for Computer Science at SEAS.
The name "Kilobot" does not refer to anything nefarious; rather, it describes the researchers' goal of quickly and inexpensively creating a collective of a thousand bots.
Inspired by nature, such swarms resemble social insects, such as ants and bees, that can efficiently search for and find food sources in large, complex environments, collectively transport large objects, and coordinate the building of nests and other structures.
For reasons of time, cost, and simplicity, the algorithms being developed today in research labs are validated only in computer simulation or on a few dozen robots at most.
In contrast, the design by Nagpal's team allows a single user to easily oversee the operation of a large Kilobot collective, including programming, powering on, and charging all robots, all of which would be difficult (if not impossible) using existing robotic systems.
So, what can you do with a thousand tiny little bots?
Robot swarms might one day tunnel through rubble to find survivors, monitor the environment and remove contaminants, and self-assemble to form support structures in collapsed buildings.
They could also be deployed to autonomously perform construction in dangerous environments, to assist with pollination of crops, or to conduct search and rescue operations.
For now, the Kilobots are designed to provide scientists with a physical testbed for advancing the understanding of collective behavior and realizing its potential to deliver solutions for a wide range of challenges.
-----
Personal comment:
This reminds me of a project I worked on back in 2007, called "Variable Environment", which involved swarm-based robots called "e-pucks" developed at EPFL. The e-pucks reacted autonomously to human activity around them.
The future of augmented-reality technology is here - as long as you're a rabbit. Bioengineers have placed the first contact lenses containing electronic displays into the eyes of rabbits as a first step on the way to proving they are safe for humans. The bunnies suffered no ill effects, the researchers say.
The first version may only have one pixel, but higher resolution lens displays - like those seen in Terminator - could one day be used as satnav enhancers, showing you directional arrows for example, or to flash up texts and emails - perhaps even video. In the shorter term, the breakthrough also means people suffering from conditions like diabetes and glaucoma may find they have a novel way to monitor their conditions.
In February, New Scientist revealed the litany of research projects underway in the field of contact lens enhancement. While one company has fielded a contact lens technology using a surface-mounted strain gauge to assess glaucoma risk, none have built in a display, or the lenses needed for focused projection onto the retina - and then tested it in vivo. They have now.
"We have demonstrated the operation of a contact lens display powered by a remote radiofrequency transmitter in free space and on a live rabbit," says a US and Finnish team led by Babak Praviz of the University of Washington in Seattle.
"This verifies that antennas, radio chips, control circuitry, and micrometre-scale light sources can be integrated into a contact lens and operated on live eyes."
The test lens was powered remotely using a 5-millimetre-long antenna printed on the lens to receive gigahertz-range radio-frequency energy from a transmitter placed ten centimetres from the rabbit's eye. To focus the light on the rabbit's retina, the contact lens itself was fabricated as a Fresnel lens - in which a series of concentric annular sections is used to generate the ultrashort focal length needed.
They found their lens LED glowed brightly up to a metre away from the radio source in free space, but needed to be 2 centimetres away when the lens was placed in a rabbit's eye and the wireless reception was affected by body fluids. All the 40-minute-long tests on live rabbits were performed under general anaesthetic and showed that the display worked well - and fluorescence tests showed no damage or abrasions to the rabbit's eyes after the lenses were removed.
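Some back-of-the-envelope numbers help put that wireless link in perspective. The article gives the 5-millimetre antenna and the 10-centimetre transmitter distance but only says "gigahertz-range", so the 2 GHz figure in this sketch is my own assumption; it mainly shows that the printed antenna is a tiny fraction of a wavelength, which fits with the short usable range the team reports.

    # Rough numbers for the lens's RF link; the 2 GHz frequency is an assumption.
    c = 3.0e8                       # speed of light, m/s
    f = 2.0e9                       # assumed operating frequency, Hz
    wavelength = c / f              # about 0.15 m at 2 GHz

    antenna_length = 5e-3           # the printed antenna on the lens, m (from the article)
    tx_distance = 0.10              # transmitter-to-eye distance in the test, m (from the article)

    print(f"wavelength: {wavelength * 100:.0f} cm")
    print(f"antenna length: {antenna_length / wavelength:.3f} of a wavelength")
    print(f"transmitter distance: {tx_distance / wavelength:.2f} wavelengths")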
While making a higher resolution display is next on their agenda, there are uses for this small one, say the researchers: "A display with a single controllable pixel could be used in gaming, training, or giving warnings to the hearing impaired."
"This is clearly way off in the future. But we're aware of the research that is ongoing in this field and we're watching the technology's potential for biosensing and drug delivery applications in particular," says a spokesperson for the British Contact Lens Association in London.
Focus in future will be on HTML5 as the mobile world shifts towards
non-proprietary open standards – and questions will now linger over the use
of Flash on the desktop
Adobe is killing off development of its
mobile Flash plugin, and laying off 750 staff as part of broader
restructuring. Photograph: Paul Sakuma/AP
Mobile Flash is being killed off. The plugin that launched a thousand online forum arguments and a technology standoff between Apple and the format's creator, Adobe,
will no longer be developed for mobile browsers, the company said in a
note that will accompany a financial briefing to analysts.
Instead the company will focus on development around HTML5
technologies, which enable modern browsers to perform essentially the same
functions as Flash did without relying on Adobe's proprietary
technologies, and which can be implemented across platforms.
The existing plugins for the Android and BlackBerry platforms will be given bug fixes and security updates, the company said in a statement first revealed by ZDNet. But further development will end.
The
decision also raises a question mark over the future of Flash on
desktop PCs. Security vulnerabilities in Flash on the desktop have been
repeatedly exploited to infect PCs in the past 18 months, while Microsoft
has also said that the default browser in its forthcoming Windows 8
system, expected at the end of 2012, will not include the Flash plugin
by default. Apple, which in the third quarter captured 5% of the world
market, does not include Flash in its computers by default.
John Nack, a principal product manager at Adobe, commented on his personal blog
(which does not necessarily reflect Adobe views) that: "Adobe saying
that Flash on mobile isn't the best path forward [isn't the same as]
Adobe conceding that Flash on mobile (or elsewhere) is bad technology.
Its quality is irrelevant if it's not allowed to run, and if it's not
allowed to run, then Adobe will have to find different ways to meet
customers' needs."
Around 250m iOS (iPhone, iPod Touches and iPad)
devices have been sold since 2007. There are no clear figures for how
many are now in use. More recently Larry Page, chief executive of
Google, said that a total of 190m Android devices have been activated.
It is not clear how many of those include a Flash plugin in the browser.
"Our
future work with Flash on mobile devices will be focused on enabling
Flash developers to package native apps with Adobe Air for all the major
app stores," Adobe said in the statement. "We will no longer adapt
Flash Player for mobile devices to new browser, OS version or device
configurations.
"Some of our source code licensees may opt to
continue working on and releasing their own implementations. We will
continue to support the current Android and PlayBook configurations with
critical bug fixes and security updates."
The decision comes as
Adobe plans to cut 750 staff, principally in North America and Europe.
An Adobe spokesperson declined to give any figures for the extent of
layoffs in the UK. The company reiterated its expectation that it will
meet revenue targets for the fourth quarter.
The reversal by Adobe
– and its decision to focus on the open HTML5 platform for mobile –
brings to an end a long and tumultuous row between Apple and Adobe over
the usefulness of Flash on the mobile platform. The iPhone launched in
2007 without Flash capability, as did the iPad in 2010.
Steve
Jobs, then Apple's chief executive, and Apple's engineers insisted that
Flash was a "battery hog" and introduced security and stability flaws;
Adobe countered that it was broadly implemented in desktop PCs and used
widely on the web.
Jobs's antagonism was partly driven, his
biography reveals, by Adobe's reluctance after he rejoined Apple in 1996
to port its movie-editing programs to the Mac and to keep its Photoshop
suite comparable on the Mac platform with the Windows one.
But
Jobs also insisted that mobile Flash failed in the role of providing a
good user experience, and also would restrict Apple's ability to push
forward on the iOS platform. Studies of browser crash reports by Apple's
teams showed that Flash was responsible for a significant proportion of
user problems; Apple was also not satisfied that a Flash plugin could be
ready for the first iPhone in 2007 without consuming more battery power
than would be acceptable.
Jobs managed to persuade
Eric Schmidt, then Google's chief executive and a member of the Apple
board, to get YouTube to make videos available in the H.264 format
without a Flash "wrapper", as was then used for the desktop
implementation.
But the disagreements between Apple and Adobe
intensified, especially when Android devices began appearing which did
use the Flash plugin. Apple refused to use it, and banned apps from its
App Store which tried to use or include Flash.
In "Thoughts on Flash",
an open letter published by Jobs in April 2010, he asserted that "Flash
was created during the PC era – for PCs and mice. Flash is a successful
business for Adobe, and we can understand why they want to push it
beyond PCs. But the mobile era is about low power devices, touch
interfaces and open web standards – all areas where Flash falls short.
"New
open standards created in the mobile era, such as HTML5, will win on
mobile devices (and PCs too). Perhaps Adobe should focus more on
creating great HTML5 tools for the future, and less on criticizing Apple
for leaving the past behind."
The first human genome cost $3 billion to complete; now we can sequence the entire population of Chicago for the same price
The mythical "$1,000 genome" is almost upon us, said Jonathan Rothberg, CEO of sequencing technology company Ion Torrent, at MIT's Emerging Technology conference. If his prediction comes true, it will represent an astonishing triumph in rapid technological development. The rate at which genome sequencing has become more affordable isfaster than Moore's law. (You can read a Q&ATRdid with Rothberg earlier this yearhere, and a profile of his companyhere).
"By this time next year sequencing human genomes as fast and cheap as bacterial genome," said Rothberg. (Earlier, he'd commented that his company can now do an entire bacterial genome in about two hours.)
I was in the room on October 19 when he said it, and I would have thought it pure hubris were it not for Rothberg's incredible track record in this area, from founding successful previous-generation sequencing company 454 Life Sciences to recent breakthroughs made with the same technology he proposes will get us to the $1,000 genome.
The Personal Genome Machine is already showing up in clinical labs, even doctors' offices
The key to this breakthrough, says Rothberg, is that the PGM does not rely on conventional wet chemistry to sequence DNA. Instead, it works almost entirely through conventional microchip technology, which means Ion Torrent is leveraging decades of investment in conventional transistors and chips.
So what's the age of the $1,000 genome look like? Until we know what more of those genes actually correlate with, for most of us it won't be so different from the present.
"Right now don't have very many correlations between those 3 billion base pairs [of the human genome] and outcomes or medicines," says Rothberg. He predicts it will take at least 10 years of clinical experiments with full genome sequencing to get us to the point where we can begin to unlock its value.
"And it will be 20 years before we understand cancer at same level as HIV and can come up with combinations of medicine [tailored] for each individual," says Rothberg.