Taiwanese firm Polytron Technologies has revealed the world's first
fully transparent smartphone prototype. As you can see in the pictures
above and below, the prototype device is almost fully transparent. The
only components visible on the device are the board, chips, memory card,
and camera.
The rest of the device is a piece of glass that
sports a small touchscreen (also transparent) located in the center of
the device. According to Polytron, its technology may be available by
the end of 2013.
Two EPFL
spin-offs, senseFly and Pix4D, have modeled the Matterhorn in 3D, at a
level of detail never before achieved. It took senseFly’s ultralight
drones just six hours to snap the high altitude photographs that were
needed to build the model.
They weigh less than a kilo each, but they’re as agile as eagles
in the high mountain air. These “eBee” flying robots, developed by
senseFly, a spin-off of EPFL’s Intelligent Systems Laboratory (LIS),
took off in September to photograph the Matterhorn from every
conceivable angle. The drones are completely autonomous, requiring
nothing more than a computer-conceived flight plan before being launched
by hand into the air to complete their mission.
Three
of them were launched from a 3,000m “base camp,” and the fourth made
the final assault from the summit of the iconic Swiss landmark,
at 4,478m above sea level. In their six-hour flights, the completely
autonomous flying machines took more than 2,000 high-resolution
photographs. The only remaining task was for software developed by
Pix4D, another EPFL spin-off from the Computer Vision Lab (CVLab), to
assemble them into an impressive 300-million-point 3D model. The model
was presented last weekend to participants of the Drone and Aerial
Robots Conference (DARC), in New York, by Henri Seydoux, CEO of the
French company Parrot, majority shareholder in senseFly.
All-terrain and even in swarms
“We want above all to demonstrate what our devices are capable of
achieving in the extreme conditions that are found at high altitudes,”
explains Jean-Christophe Zufferey, head of senseFly. In addition to the
challenges of altitude and atmospheric turbulence, the drones also had
to take into consideration, for the first time, the volume of the object
being photographed. Up to this point they had only been used to survey
relatively flat terrain.
Last week the dynamic Swiss company –
which has just moved into new, larger quarters in Cheseaux-sur-Lausanne –
also announced that it had made software improvements enabling drones
to avoid colliding with each other in flight; now a swarm of drones can
be launched simultaneously to undertake even more rapid and precise
mapping missions.
Cards are fast becoming the best design pattern for mobile devices.
We are currently witnessing a re-architecture of the web, away from
pages and destinations, towards completely personalised experiences
built on an aggregation of many individual pieces of content. Content
being broken down into individual components and re-aggregated is the
result of the rise of mobile technologies, billions of screens of all
shapes and sizes, and unprecedented access to data from all kinds of
sources through APIs and SDKs. This is driving the web away from many
pages of content linked together, towards individual pieces of content
aggregated together into one experience.
The aggregation depends on:
The person consuming the content and their interests, preferences, behaviour.
Their location and environmental context.
Their friends’ interests, preferences and behaviour.
The targeted-advertising ecosystem.
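As a rough illustration of what such an aggregation might look like in code, here is a hypothetical TypeScript sketch of an aggregation context and a trivial ranking function. None of this comes from the article; the field names and scoring are invented.

```typescript
// Hypothetical inputs to a personalised card aggregation (illustrative only).
interface AggregationContext {
  user: { interests: string[]; recentBehaviour: string[] };
  location: { lat: number; lng: number; environment: "home" | "work" | "travel" };
  friends: { interests: string[] }[];
  adTargeting: { segments: string[] };
}

interface ContentCard { id: string; topics: string[]; source: string }

// Rank individual pieces of content into one personalised stream.
function aggregate(cards: ContentCard[], ctx: AggregationContext): ContentCard[] {
  const score = (card: ContentCard) =>
    card.topics.filter((t) => ctx.user.interests.includes(t)).length +
    0.5 * card.topics.filter((t) => ctx.friends.some((f) => f.interests.includes(t))).length;
  return [...cards].sort((a, b) => score(b) - score(a));
}
```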
If the predominant medium of our time is set to be the portable
screen (think phones and tablets), then the predominant design pattern
is set to be cards. The signs are already here…
Twitter is moving to cards
Twitter recently launched Cards,
a way to attach multimedia inline with tweets. Now the NYT should
care more about how their story appears on the Twitter card (right-hand
side of the image above) than on their own web properties, because the likelihood
is that the content will be seen more often in card format.
Google is moving to cards
With Google Now,
Google is rethinking information distribution, away from search, to
personalised information pushed to mobile devices. Their design pattern
for this is cards.
Everyone is moving to cards
Pinterest (above left) is built around cards. The new Discover feature on Spotify
(above right) is built around cards. Much of Facebook now represents
cards. Many parts of iOS 7 are now card-based, for example the app
switcher and AirDrop.
The list goes on. The most exciting thing is that despite these many
early card-based designs, I think we’re only getting started. Cards are
an incredible design pattern, and they have been around for a long time.
Cards give bursts of information
Cards as an information dissemination medium have been around for a
very long time. Imperial China used them in the 9th century for
games. Trade cards in 17th century London helped people find businesses.
In 18th century Europe footmen of aristocrats used cards to introduce
the impending arrival of the distinguished guest. For hundreds of years
people have handed around business cards.
We send birthday cards, greeting cards. My wallet is full of debit
cards, credit cards, my driving licence card. During my childhood, I was
surrounded by games with cards. Top Trumps, Pokemon, Panini sticker
albums and swapsies. Monopoly, Cluedo, Trivial Pursuit. Before computer
technology, air traffic controllers used cards to manage the planes in
the sky. Some still do.
Cards are a great medium for communicating quick stories. Indeed the
great (and terrible) films of our time are all storyboarded using a
card-like format. Each card represents a scene. Card, Card, Card. Telling
the story. Think about flipping through printed photos, each photo
telling its own little tale. When we travelled we sent back postcards.
What about commerce? Cards are the predominant pattern for coupons.
Remember cutting out the corner of the breakfast cereal box? Or being
handed coupon cards as you walk through a shopping mall? Circulars, sent
out to hundreds of millions of people every week, are a full-page
aggregation of many individual cards. People cut them out and stick them
to their fridge for later.
Cards can be manipulated.
In addition to their venerable past as an information medium, the
most important thing about cards is that they are almost infinitely
manipulatable. See the simple example above from Samuel Couto.
Think about cards in the physical world. They can be turned over to
reveal more, folded for a summary and expanded for more details, stacked
to save space, sorted, grouped, and spread out to survey more than one.
When designing for screens, we can take advantage of all these
things. In addition, we can take advantage of animation and movement. We
can hint at what is on the reverse, or that the card can be folded out.
We can embed multimedia content, photos, videos, music. There are so
many new things to invent here.
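Here is a small, hypothetical TypeScript sketch of a card model that supports those manipulations on screen. The names and fields are invented, not taken from any of the products mentioned.

```typescript
// Illustrative card model: flip to reveal the reverse, fold/expand, stack, sort.
interface Card {
  id: string;
  summary: string;        // what the folded card shows
  detail?: string;        // revealed when the card is expanded
  reverse?: string;       // "the other side", revealed on flip
  flipped: boolean;
  expanded: boolean;
}

const flip   = (c: Card): Card => ({ ...c, flipped: !c.flipped });
const expand = (c: Card): Card => ({ ...c, expanded: true });
const fold   = (c: Card): Card => ({ ...c, expanded: false });

// Stacking to save space and sorting work on collections of cards:
const stack  = (cards: Card[]): Card[] => cards.map(fold);
const sorted = (cards: Card[]): Card[] =>
  [...cards].sort((a, b) => a.summary.localeCompare(b.summary));
```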
Cards are perfect for mobile devices and varying screen sizes.
Remember, mobile devices are the heart and soul of the future of your
business, no matter who you are and what you do. On mobile devices,
cards can be stacked vertically, like an activity stream on a phone.
They can be stacked horizontally, adding a column as a tablet is turned
90 degrees. They can be a fixed or variable height.
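As a tiny illustration of that idea, here is a hypothetical helper that picks a column count for a card grid from the viewport width; the breakpoints are invented, not from the article.

```typescript
// Choose how many card columns to show for a given viewport width.
function cardColumns(viewportWidthPx: number): number {
  if (viewportWidthPx < 600) return 1;   // phone: vertical activity stream
  if (viewportWidthPx < 800) return 2;   // tablet held upright
  return 3;                              // tablet rotated 90 degrees: one extra column
}

console.log(cardColumns(375));   // 1
console.log(cardColumns(768));   // 2 (tablet, portrait)
console.log(cardColumns(1024));  // 3 (same tablet, landscape)
```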
Cards are the new creative canvas
It’s already clear that product and interaction designers will
heavily use cards. I think the same is true for marketers and creatives
in advertising. As social media continues to rise, and continues to
fragment into many services, taking up more and more of our time,
marketing dollars will inevitably follow. The consistent thread through
these services, the predominant canvas for creativity, will be card
based. Content consumption on Facebook, Twitter, Pinterest, Instagram,
Line, you name it, is all built on the card design metaphor.
I think there is no getting away from it. Cards are the next big
thing in design and the creative arts. To me that’s incredibly exciting.
Samsung and HTC
are flirting with advanced home automation control in future Galaxy and
One smartphones, it’s reported, turning new smartphones into universal
remotes for lighting, entertainment, and more. The two companies are
each separately working on plans for what Pocket-lint‘s source describes as “home smartphones” that blur the line between mobile products and gadgets found around the home.
For Samsung, the proposed solution is to embed ZigBee into its new phones, it’s suggested. The low-power networking system – already found in products like Philips’ Hue
remote-controlled LED lightbulbs, along with Samsung’s own ZigBee bulbs
– creates mesh networks for whole-house coverage, and can be embedded
into power switches, thermostats, and more.
Samsung is already a member of the ZigBee Alliance, and has been
flirting with remote control functionality – albeit using the somewhat
more mundane infrared standard – in its more recent Galaxy phones. The
Galaxy S 4, for instance, has an IR blaster that, with the accompanying
app, can be used to control TVs and other home entertainment kit.
HTC, meanwhile, is also bundling infrared with its recent devices;
the HTC One’s power button is actually also a hidden IR blaster, for
instance, and like Samsung the smartphone comes with a TV remote app
that can pull in real-time listings and control cable boxes and more.
It’s said to be looking to ZigBee RF4CE, a newer iteration which is specifically focused on home entertainment and home automation hardware.
Samsung is apparently considering a standalone ZigBee-compliant
accessory dongle, though exactly what the add-on would do is unclear.
HTC already has a limited range of accessories for wireless home use,
though focused currently on streaming media, such as the Media Link HD.
When we could expect to see the new devices with ZigBee support is
unclear, and of course it will take more than just a handset update to get a
home equipped for automation. Instead, there’ll need to be greater
availability – and understanding – of automation accessories, though
there Samsung could have an edge given its other divisions make TVs,
fridges, air conditioners, and other home tech.
Choosing sides: Google’s new augmented-reality game,
Ingress, makes users pick a faction—Enlightened or Resistance—and run
around town attacking virtual portals in hopes of attaining world
domination
I’m not usually very political, but I recently joined the Resistance,
fighting to protect the world against the encroachment of a strange,
newly discovered form of energy. Just this week, in fact, I spent hours
protecting Resistance territory and attacking the enemy.
Don’t worry, this is just the gloomy sci-fi world depicted in a new smartphone game called Ingress
created by Google. Ingress is far from your normal gaming app,
though—it takes place, to some degree, in the real world; aspects of the
game are revealed only as you reach different real-world locations.
Ingress’s world is one in which the discovery of so-called
“exotic matter” has split the population into two groups: the
Enlightened, who want to learn how to harness the power of this energy,
and the Resistance, who, well, resist this change. Players pick a side,
and then walk around their city, collecting exotic matter to keep
scanners charged and taking control of exotic-matter-exuding portals in
order to capture more land for their team.
I found the game, which
is currently available only to Android smartphone users who have
received an invitation to play, surprisingly addictive—especially
considering my usual apathy for gaming.
What’s most interesting
about Ingress, though, is what it suggests about Google’s future plans,
which seem to revolve around finding new ways to extend its reach from
the browser on your laptop to the devices you carry with you at all
times. The goal makes plenty of sense when you consider that traditional
online advertising—Google’s bread and butter—could eventually be
eclipsed by mobile, location-based advertising.
Ingress was
created by a group within Google called Niantic Labs—the same team
behind another location-based app released recently (see “Should You Go on Google’s Field Trip?”).
Google
is surely gathering a treasure trove of information about where we’re
going and what we’re doing while we play Ingress. It must also see the
game as a way to explore possible applications for Project Glass, the
augmented-reality glasses-based computer that the company will start
sending out to developers next year. Ingress doesn’t require a
head-mounted display; it uses your smartphone’s display to show a map
view rather than a realistic view of your surroundings. Still, it is
addictive, and is likely to get many more folks interested in
location-based augmented reality, or at least in augmented-reality
games.
Despite its futuristic focus, Ingress sports a sort of
pseudo-retro look, with a darkly hued map that dominates the screen and a
simple pulsing blue triangle that indicates your position. I could only
see several blocks in any direction, which meant I had to walk around
and explore in order to advance in the game.
For a while, I didn’t
know what I was doing, and it didn’t help that Ingress doesn’t include
any street names. New users complete a series of training exercises,
learning the basics of the game, which include capturing a portal,
hacking a portal to snag items like resonators (which control said
portals), creating links of exotic matter between portals to build a
triangular control field that enhances the safety of team members in the
area, and firing an XMP (a “non-polarized energy field weapon,”
according to the glossary) at an enemy-controlled portal.
Confused much? I sure was.
But
I forged ahead, hoping that if I kept playing it would make
more sense. I started wandering around looking for portals. Portals are
found in public places—in San Francisco, where I was playing, this
includes city landmarks such as museums, statues, and murals. Resistance
portals are blue, Enlightened ones are green, and there are also some
gray ones out there that remain unclaimed.
I found a link to a larger map
of the Ingress world that I could access through my smartphone browser
and made a list of the best-looking nearby targets. Perhaps this much
planning goes against the exploratory spirit of the game, but it made
Ingress a lot less confusing for me (there’s also a website that doles out clues about the game and its mythology).
Once
I had a plan, I set out toward the portals on my list, all of which
were in the Soma and Downtown neighborhoods of San Francisco. I managed
to capture two new portals at Yerba Buena Gardens—one at a statue of
Martin Luther King, Jr. and another at the top of a waterfall—and link
them together.
Across the street, in front of the Contemporary
Jewish Museum, I hacked an Enlightened portal and fired an XMP at it,
weakening its resonators. I was then promptly attacked. I fled, figuring
I wouldn’t be able to take down the portal by myself.
A few hours
later, much of my progress was undone by a member of the Enlightened
(Ingress helpfully sends e-mail notifications about such things). I was
surprised by how much this pissed me off—I wanted to get those portals
back for the Resistance, but pouring rain and the late hour stopped me.
Playing
Ingress was a lot more fun than I expected, and from the excited
chatter in the game’s built-in chat room, it was clear I wasn’t the only
one getting into it.
On my way back from a meeting, I couldn’t
help but keep an eye out for portals, ducking into an alley to attack
one near my office. Later, I found myself poring over the larger map on
my office computer, looking at the spread of portals and control fields
around the Bay Area.
As it turns out, my parents live in an area
dominated by the Enlightened. So I guess I’ll be busy attacking enemy
portals in my hometown this weekend.
The iPhone 5 is the latest smartphone to hop on-board the LTE (Long Term Evolution)
bandwagon, and for good reason: The mobile broadband standard is fast,
flexible, and designed for the future. Yet LTE is still a young
technology, full of growing pains. Here’s an overview of where it came
from, where it is now, and where it might go from here.
The evolution of ‘Long Term Evolution’
LTE is a mobile broadband standard developed by the 3GPP (3rd Generation Partnership Project),
a group that has developed all GSM standards since 1999. (Though GSM
and CDMA—the network Verizon and Sprint use in the United States—were at
one time close competitors, GSM has emerged as the dominant worldwide
mobile standard.)
Cell networks began as analog, circuit-switched systems nearly identical
in function to the public switched telephone network (PSTN), which
placed a finite limit on calls regardless of how many people were
speaking on a line at one time.
The second generation, GPRS,
added data (at dial-up modem speeds). GPRS led to EDGE, and then 3G,
which treated both voice and data as bits passing simultaneously over
the same network (allowing you to surf the web and talk on the phone at
the same time).
GSM-evolved 3G (which brought faster speeds) started with UMTS, and then
accelerated into faster and faster variants of 3G, 3G+, and “4G”
networks (HSPA, HSDPA, HSUPA, HSPA+, and DC-HSPA).
Until now, the term “evolution” meant that no new standard broke or
failed to work with the older ones. GSM, GPRS, UMTS, and so on all work
simultaneously over the same frequency bands: They’re intercompatible,
which made it easier for carriers to roll them out without losing
customers on older equipment. But these networks were being held back by
compatibility.
That’s where LTE comes in. The “long term” part means: “Hey, it’s time
to make a big, big change that will break things for the better.”
LTE needs its own space, man
LTE has “evolved” beyond 3G networks by incorporating new radio
technology and adopting new spectrum. It allows much higher speeds than
GSM-compatible standards through better encoding and wider channels.
(It’s more “spectrally efficient,” in the jargon.)
LTE is more flexible than earlier GSM-evolved flavors, too: Where GSM’s
3G variants use 5 megahertz (MHz) channels, LTE can use a channel size
from 1.4 MHz to 20 MHz; this lets it work in markets where spectrum is
scarce and sliced into tiny pieces, or broadly when there are wide
swaths of unused or reassigned frequencies. In short, the wider the
channel—everything else being equal—the higher the throughput.
Speeds are also boosted through MIMO (multiple input, multiple output),
just as in 802.11n Wi-Fi. Multiple antennas allow two separate
benefits: better reception, and multiple data streams on the same
spectrum.
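To make the “wider channel, higher throughput” point concrete, here is a rough back-of-the-envelope sketch. The resource-block counts per channel width and the roughly 150 Mbps reference figure for a 20 MHz, two-stream configuration are commonly cited LTE numbers rather than figures from this article, and the linear scaling is a simplification.

```typescript
// Rough, illustrative scaling of LTE peak downlink throughput.
// Assumption: peak rate scales roughly linearly with channel width
// (resource blocks) and with the number of MIMO spatial streams.
const RESOURCE_BLOCKS: Record<number, number> = {
  1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100, // channel width (MHz) -> resource blocks
};

const REFERENCE_MBPS = 150;    // commonly cited peak for 20 MHz, 2 streams
const REFERENCE_RBS = 100;
const REFERENCE_STREAMS = 2;

function approxPeakDownlinkMbps(channelMHz: number, mimoStreams: number): number {
  const rbs = RESOURCE_BLOCKS[channelMHz];
  if (rbs === undefined) throw new Error(`Unsupported channel width: ${channelMHz} MHz`);
  return REFERENCE_MBPS * (rbs / REFERENCE_RBS) * (mimoStreams / REFERENCE_STREAMS);
}

console.log(approxPeakDownlinkMbps(20, 2)); // ~150
console.log(approxPeakDownlinkMbps(10, 2)); // ~75
console.log(approxPeakDownlinkMbps(5, 2));  // ~37.5
```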
LTE complications
This map, courtesy Wikipedia,
shows countries in varying states of LTE readiness. Those in red have
commercial service; dark blue countries have LTE networks planned and
deploying; light blue countries are investigating LTE, and grey
countries have no LTE service at all.
Unfortunately, in practice, LTE implementation gets sticky: There are 33 potential bands for LTE, based on a carrier’s local regulatory domain. In contrast, GSM has just 14 bands,
and only five of those are widely used. (In broad usage, a band is two
sets of paired frequencies, one devoted to upstream traffic and the
other committed to downstream. They can be a few MHz apart or hundreds
of MHz apart.)
And while LTE allows voice, no standard has yet been agreed upon;
different carriers could ultimately choose different approaches, leaving
it to handset makers to build multiple methods into a single phone,
though they’re trying to avoid that. In the meantime, in the U.S.,
Verizon and AT&T use the older CDMA and GSM networks for voice
calls, and LTE for data.
LTE in the United States
Of the four major U.S. carriers, AT&T, Verizon, and Sprint have LTE networks, with T-Mobile set to start supporting LTE
in the next year. But that doesn’t mean they’re set to play nice. We
said earlier that current LTE frequencies are divided up into 33
spectrum bands: With the exception of AT&T and T-Mobile, which share
frequencies in band 4, each of the major U.S. carriers has its own
band. Verizon uses band 13; Sprint has spectrum in band 26; and AT&T
holds band 17 in addition to some crossover in band 4.
In addition, smaller U.S. carriers, like C Spire, U.S. Cellular, and Clearwire, all have their own separate piece of the spectrum pie: C Spire and U.S. Cellular use band 12, while Clearwire uses band 41.
As such, for a manufacturer to support LTE networks in the United States alone,
it would need to build a receiver that could tune into seven different
LTE bands—let alone the various flavors of GSM-evolved or CDMA networks.
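To picture that fragmentation, here is a minimal sketch that encodes the U.S. band assignments listed above. The band numbers come from the preceding paragraphs; the helper function and its name are just illustrative.

```typescript
// U.S. carrier LTE bands as described above (circa 2012).
const US_LTE_BANDS: Record<string, number[]> = {
  "Verizon": [13],
  "AT&T": [17, 4],
  "Sprint": [26],
  "T-Mobile": [4],
  "C Spire": [12],
  "U.S. Cellular": [12],
  "Clearwire": [41],
};

// Which carriers could a device with the given radio support?
function carriersCovered(deviceBands: number[]): string[] {
  return Object.entries(US_LTE_BANDS)
    .filter(([, bands]) => bands.some((b) => deviceBands.includes(b)))
    .map(([carrier]) => carrier);
}

// A hypothetical radio covering every band mentioned above:
console.log(carriersCovered([4, 12, 13, 17, 26, 41])); // all seven carriers listed
```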
With the iPhone, Apple tried to cut through the current Gordian Knot by
releasing two separate models, the A1428 and A1429, which cover a
limited number of different frequencies depending on where they’re
activated. (Apple has kindly released a list of countries
that support its three iPhone 5 models.) Other companies have chosen to
restrict devices to certain frequencies, or to make numerous models of
the same phone.
Banded together
Other solutions are coming. Qualcomm made a regulatory filing in June
regarding a seven-band LTE chip, which could be in shipping devices
before the end of 2012 and could allow a future iPhone to be activated
in different fashions. Within a year or so, we should see
most-of-the-world phones, tablets, and other LTE mobile devices that
work on the majority of large-scale LTE networks.
That will be just in time for the next big thing: LTE-Advanced, the true
fulfillment of what was once called 4G networking, with rates that
could hit 1 Gbps in the best possible cases of wide channels and short
distances. By then, perhaps the chip, handset, and carrier worlds will
have converged to make it all work neatly together.
Interior navigation is only just coming into its own,
but IndoorAtlas has developed a technology that could make it just as
natural as breathing -- or at least, firing up a smartphone's mapping
software. Developed by a team at Finland's University of Oulu,
the method relies on identifying the unique geomagnetic field of every
location on Earth to get positioning through a mobile device. It's not
just accurate, to within 6.6 feet, but can work without help from wireless signals
and at depths that would scare off mere mortal technologies:
IndoorAtlas has already conducted tests in a mine 4,593 feet deep.
Geomagnetic location-finding is already available through an Android
API, with hints of more platforms in the future. It will still need some
tender loving care from app developers before we're using our
smartphones to navigate through the grocery store as well as IndoorAtlas
does in its demo video.
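To give a rough sense of how geomagnetic positioning can work in principle, here is a toy TypeScript sketch of magnetic-fingerprint matching. It is not IndoorAtlas’s algorithm or API; the map data, field names, and matching rule are all hypothetical, and real systems match sequences of readings along a walked path rather than a single sample.

```typescript
// Toy magnetic-fingerprint matching (illustrative only).
type SurveyPoint = { x: number; y: number; magnitudeUT: number }; // field strength in microtesla

// A pre-surveyed fingerprint map of the venue (assumed to exist).
const fingerprintMap: SurveyPoint[] = [
  { x: 0, y: 0, magnitudeUT: 48.2 },
  { x: 1, y: 0, magnitudeUT: 51.7 },
  { x: 2, y: 0, magnitudeUT: 45.9 },
  // ...one entry per surveyed grid cell
];

// Return the surveyed cell whose field strength best matches a live reading.
function bestMatch(liveMagnitudeUT: number): SurveyPoint {
  return fingerprintMap.reduce((best, point) =>
    Math.abs(point.magnitudeUT - liveMagnitudeUT) <
    Math.abs(best.magnitudeUT - liveMagnitudeUT)
      ? point
      : best
  );
}

console.log(bestMatch(46.5)); // -> the cell at (2, 0) in this toy map
```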
Of course, Apple didn’t cut the iPad from whole cloth (which probably
would have been linen). It was built upon decades of ideas, tests,
products and more ideas. Before we explore the iPad’s story, it’s
appropriate to consider the tablets and the pen-driven devices that
preceded it.
So Popular So Quickly
Today the iPad is so popular that it’s easy to overlook that it’s
only three years old. Apple has updated it just twice. Here’s a little
perspective to reinforce the iPad’s tender age:
When President Barack Obama was inaugurated as America’s 44th president, there was no iPad.
In 2004 when the Boston Red Sox broke the Curse of the Bambino
and won the World Series for the first time in 86 years, there was no
iPad. Nor did it exist three years later, when they won the championship
again.
Elisha Gray
was an electrical engineer and inventor who lived in Ohio and
Massachusetts between 1835 and 1901. Elisha was a wonderful little geek,
and became interested in electricity while studying at Oberlin College. He collected nearly 70 patents in his lifetime, including that of the Telautograph [PDF].
The Telautograph let a person use a stylus that was connected to two rheostats,
which managed the current produced by the amount of resistance
generated as the operator wrote with the stylus. That electronic record
was transmitted to a second Telautograph, reproducing the author’s
writing on a scroll of paper. Mostly. Gray noted that, since the scroll
of paper was moving, certain letters were difficult or impossible to
produce. For example, you couldn’t “…dot an i or cross a t or underscore
or erase a word.” Users had to get creative.
Still, the thing was a hit, and was used in hospitals, clinics,
insurance firms, hotels (as communication between the front desk and
housekeeping), banks and train dispatching. Even the US Air Force used the Telautograph to disseminate weather reports.
It’s true that the Telautograph is more akin to a fax machine than a
contemporary tablet, yet it was the first electronic writing device to
receive a patent, which was awarded in 1888.
Of course, ol’ Elisha is better known for arriving at the US patent
office on Valentine’s Day, 1876, with what he described as an apparatus
“for transmitting vocal sounds telegraphically” just two hours after
Mr. Alexander Graham Bell showed up with a description of a device that
accomplished the same feat. After years of litigation, Bell was legally
declared the inventor of what we now call the telephone, even though
the device described in his original patent application wouldn’t have
worked (Gray’s would have). So Gray/Bell have an Edison/Tesla thing going on.
Back to tablets.
Research Continues
Research continued after the turn of the century. The US Patent Office awarded a patent to Mr. Hyman Eli Goldberg of Chicago in 1918,
for his invention of the Controller. This device concerned “a
moveable element, a transmitting sheet, a character on said sheet formed
of conductive ink and electrically controlled operating mechanism for
said moveable element.” It’s considered the first patent awarded for a
handwriting recognition user interface with a stylus.
Photo credit: Computer History Museum
Jumping ahead a bit, we find the Styalator (early 1950s) and the RAND tablet
(1964). Both used a pen and a tablet-like surface for input. The RAND
(above) is the better known of the two and cost an incredible $18,000. Remember,
that’s 18 grand in 1960s money. Both bear little resemblance to
contemporary tablet computers, consisting of a tablet surface and an
electronic pen. Their massive bulk – and price tags – made them a
feasible purchase for few.
Alan Kay and the Dynabook
In 1968, things got real. Almost. Computer scientist Alan Kay1
described his concept for a computer meant for children. His “Dynabook”
would be small, thin, lightweight and shaped like a tablet.
In a paper entitled “A Personal Computer For Children Of All Ages,” [PDF] Kay described his vision for the Dynabook:
“The size should be no larger than a notebook; weigh less
than 4 lbs.; the visual display should be able to present 4,000
printing quality characters with contrast ratios approaching that of a
book; dynamic graphics of reasonable quality should be possible; there
should be removable local file storage of at least one million
characters (about 500 ordinary book pages) traded off against several
hours audio (voice/music) files.”
In the video below, Kay explains his thoughts on the original prototype:
That’s truly amazing vision. Alas, the Dynabook as Kay envisioned it was never produced.
Apple’s First Tablet
The first commercial tablet product from Apple appeared in 1979. The Apple Graphics Tablet was meant to complement the Apple II and use the “Utopia Graphics System” developed by musician Todd Rundgren. 2
That’s right, Todd Rundgren. The FCC soon found that it caused radio
frequency interference, unfortunately, and forced Apple to discontinue
production.
A revised version was released in the early 1980’s, which Apple described like this:
“The Apple Graphics Tablet turns your Apple II system
into an artist’s canvas. The tablet offers an exciting medium with easy
to use tools and techniques for creating and displaying
pictured/pixelated information. When used with the Utopia Graphics
Tablet System, the number of creative alternatives available to you
multiplies before your eyes.
The Utopia Graphics Tablet System includes a wide array of brush
types for producing original shapes and functions, and provides 94 color
options that can generate 40 unique brush shades. The Utopia Graphics
Tablet provides a very easy way to create intricate designs, brilliant
colors, and animated graphics.”
The GRiDpad
This early touchscreen device cost $2,370 in 1989 and reportedly inspired Jeff Hawkins
to create the first Palm Pilot. Samsung manufactured the GRiDpad
PenMaster, which weighed under 5 lbs., was 11.5″ x 9.3″ x 1.48″ and ran
on a 386SL 20MHz processor with an 80387SX coprocessor. It had 20 MB RAM
and the internal hard drive was available at 40 MB, 60 MB, 80 MB or 120
MB. DigiBarn has a nice GRiDpad gallery.
The Newton Message Pad
With Steve Jobs out of the picture, Apple launched its second pen-computing product, the Newton Message Pad.
Released in 1993, the Message Pad was saddled with iffy handwriting
recognition and poor marketing efforts. Plus, the size was odd; too big
to fit comfortably in a pocket yet small enough to suggest that’s where
it ought to go.
The Newton platform evolved and improved in the following years, but was axed in 1998 (I still use one, but I’m a crazy nerd).
Knight-Ridder and the Tablet Newspaper
This one is compelling. Back in 1994, media and Internet publishing company Knight-Ridder3
produced a video demonstrating its faith in the digital newspaper. Its
predictions are eerily accurate, except for this bold statement:
“Many of the technologists…assume that information is
just a commodity and people really don’t care where that information
comes from as long as it matches their set of personal interests. I
disagree with that view. People recognize the newspapers they subscribe
to…and there is a loyalty attached to those.”
Knight-Ridder got a lot right, but I’m afraid the technologists
quoted above were wrong. Just ask any contemporary newspaper publisher.
The Late Pre-iPad Tablet Market
Many other devices appeared at this time, but what I call the “The
Late Pre-iPad Tablet Market” kicked off when Bill Gates introduced the
Compaq tablet PC in 2001. That year, Gates made a bold prediction at COMDEX:
“‘The PC took computing out of the back office and into
everyone’s office,’ said Gates. ‘The Tablet takes cutting-edge PC
technology and makes it available wherever you want it, which is why I’m
already using a Tablet as my everyday computer. It’s a PC that is
virtually without limits – and within five years I predict it will be
the most popular form of PC sold in America.’”
None of these devices, including those I didn’t mention, saw the
success of the iPad. That must be due in large part to iOS. While
the design was changing dramatically — flat, touchscreen, lightweight,
portable — the operating system was stagnant and inappropriate. When
Gates released the Compaq tablet in 2001, it was running Windows XP.
That system was built for a desktop computer and it simply didn’t work
on a touch-based tablet.
Meanwhile, others dreamed of what could be, unhindered by the limitations of hardware and software. Or reality.
Tablets in Pop Culture
The most famous fictional tablet device must be Star Trek’s Personal Access Display Device
or “PADD.” The first PADDs appeared as large, wedge-shaped clipboards
in the original Star Trek series and seemed to operate with a stylus
exclusively. Kirk and other officers were always signing them with a
stylus, as if the yeomen were interstellar UPS drivers and Kirk was
receiving a lot of packages. 4
As new Trek shows were developed, new PADD models appeared. The
devices went multi-touch in The Next Generation, adopting the LCARS
Interface. A stylus was still used from time to time, though there was
less signing. And signing. Aaand signing.
In Stanley Kubrick’s 2001: A Space Odyssey, David Bowman and
Frank Poole use flat, tablet-like devices to send and receive news from
Earth. In his novel, Arthur C. Clarke described the “Newspad” like
this:
“When he tired of official reports and memoranda and
minutes, he would plug his foolscap-sized Newspad into the ship’s
information circuit and scan the latest reports from Earth. One by one
he would conjure up the world’s major electronic papers; he knew the
codes of the more important ones by heart, and had no need to consult
the list on the back of his pad. Switching to the display unit’s
short-term memory, he would hold the front page while he quickly
searched the headlines and noted the items that interested him.
Each had its own two-digit reference; when he punched that, the
postage-stamp-sized rectangle would expand until it neatly filled the
screen and he could read it with comfort. When he had finished, he would
flash back to the complete page and select a new subject for detailed
examination.
Floyd sometimes wondered if the Newspad, and the fantastic technology
behind it, was the last word in man’s quest for perfect communications.
Here he was, far out in space, speeding away from Earth at thousands of
miles an hour, yet in a few milliseconds he could see the headlines of
any newspaper he pleased. (That very word ‘newspaper,’ of course, was an
anachronistic hangover into the age of electronics.) The text was
updated automatically on every hour; even if one read only the English
versions, one could spend an entire lifetime doing nothing but absorbing
the ever-changing flow of information from the news satellites.
It was hard to imagine how the system could be improved or made more
convenient. But sooner or later, Floyd guessed, it would pass away, to
be replaced by something as unimaginable as the Newspad itself would
have been to Caxton or Gutenberg.”
The iPad was released in 2010, so Clarke missed reality by only nine years. Not bad for a book published in 1968.
Next Time: Apple Rumors Begin
In the next article in this series, I’ll pick things up in the early
2000’s when rumors of an Apple-branded tablet gained momentum. For now,
I’ll leave you with this quote from an adamant Steve Jobs, taken from an AllThingsD conference in 2003:
“Walt Mossberg: A lot of people think given the success
you’ve had with portable devices, you should be making a tablet or a
PDA.
Steve Jobs: There are no plans to make a tablet. It turns out people
want keyboards. When Apple first started out, people couldn’t type. We
realized: Death would eventually take care of this. We look at the
tablet and we think it’s going to fail. Tablets appeal to rich guys with
plenty of other PCs and devices already. I get a lot of pressure to do a
PDA. What people really seem to want to do with these is get the data
out. We believe cell phones are going to carry this information. We
didn’t think we’d do well in the cell phone business. What we’ve done
instead is we’ve written what we think is some of the best software in
the world to start syncing information between devices. We believe that
mode is what cell phones need to get to. We chose to do the iPod instead
of a PDA.”
We’ll pick it up from there next time. Until then, go and grab your
iPad and give a quiet thanks to Elisha Gray, Hyman Eli Goldberg, Alan
Kay, the Newton team, Charles Landon Knight and Herman Ridder, Bill
Gates and yes, Stanley Kubrick, Arthur C. Clarke and Gene Roddenberry.
Without them and many others, you might not be holding that wonderful
little device.
1. More recently known as a co-developer of the One Laptop Per Child machine. That computer was inspired, in part, by Kay’s work on the Dynabook.
2. I’m really sorry for all the Flash on Todd’s site. It’s awful.
True, as Tom Henderson, principal researcher for ExtremeLabs and a colleague, told me, there’s a “Schwarzschild
radius surrounding Apple. It’s not just a reality distortion field;
it’s a whole new dimension. Inside, time slows and light never escapes –
as time compresses to an amorphous mass.”
“Coddled, stroked, and massaged,” Henderson continued, “Apple users
start to sincerely believe the distortions regarding the economic life,
the convenience, and the subtle beauties of their myriad products.
Unknowingly, they sacrifice their time, their money, their privacy, and
soon, their very souls. Comparing Apple with Android, the parallels to
Syria and North Korea come to mind, despot-led personality cults.”
I wouldn’t go that far. While I prefer Android, I can enjoy using iOS
devices as well. Besides, Android fans can be blind to its faults just
as much as the most besotted Apple fan.
For example, it’s true that ICS has all the features that iOS 6 will eventually have, but you can only find ICS on 7.1 percent of all currently running Android devices. Talk to any serious Android user, and you’ll soon hear complaints about how they can’t update their systems.
You name an Android vendor – HTC, Motorola, Samsung, etc. – and I can
find you a customer who can’t update their smartphone or tablet to the
latest and greatest version of the operating system. The techie Android
fanboy response to this problem is just “ROOT IT.” It’s not that easy.
First, the vast majority of Android users are about as able to
root their smartphone as I am to run a marathon. Second, alternative
Android device firmwares don’t always work with every device. Even the
best of them, Cyanogen ICS, can have trouble with some devices.
Another issue is consistency. When you buy an iPhone or an iPad, you
know exactly how the interface is going to work and look. With
Android devices, you never know quite what you’re going to get. We talk
about ICS as if it’s one thing – and it is, from a developer’s
viewpoint – but ICS on different phones such as the HTC One X doesn’t look or feel much like, say, ICS on the Samsung Galaxy S III.
A related issue is that the iOS interface is simply cleaner and more
user-friendly than any Android interface I’ve yet to see. One of Apple’s
slogans is “It just works.” Well, actually, sometimes it doesn’t work.
iTunes, for example, has been annoying me for years now. But when it
comes to device interfaces, iOS does just work. Android implementations,
far too often, don’t.
So, yes, Android does more today than Apple’s iOS promises to do
tomorrow, but that’s only part of the story. The full story includes
that iOS is very polished and very closed, while Android is somewhat
messy and very open. To me, it’s that last bit – that Apple is purely
proprietary while Android is largely open-source-based – that ensures that
I’m going to continue to use Android devices.
Now, if only Google can get everyone on the same page with updates and the interface, I’ll be perfectly happy!