Flytrap Robot: The nanomaterial in this flytrap design can mimic muscle function. Bioinspiration & Biomimetics via PhysOrg
It’s alarming enough when robots ingest plant detritus like twigs and grass clippings. It’s another thing entirely when they can start chowing down on members of the animal kingdom.
A pair of prototype robots are designed to catch bugs, a major step on
the path toward robots that can hunt, catch and digest their own meals.
The tiny robots are modeled after the lobes of Venus flytraps, which
snap shut as soon as sensitive hairs inside detect an alighting insect.
One prototype, developed at Seoul National University, is made of
shape-memory materials that switch between two states when subjected to a
current. The other, made at the University of Maine, uses artificial
muscles made of a gold nanomaterial.
The Seoul robot has a pair of carbon fiber leaves connected by a shape-memory metal spring, as explained by New Scientist.
The spring works like your average mousetrap — the weight of an insect
(or something else) causes the spring to contract, which pulls the
leaves together. The robot’s quarry is trapped inside.
The Maine robot, which is reported in the online version of the journal Bioinspiration & Biomimetics,
uses an ionic polymeric metal composite, which bends in an electric
field. Engineer Mohsen Shahinpoor said the manner in which a Venus
flytrap’s lobes contract looks remarkably similar to the way his IPMC
contracts in the presence of a voltage.
He built a prototype using a polymer membrane coated with gold
electrodes, a design he had previously developed in other experiments,
according to PhysOrg.
This material is used to make two leaves, with the IPMC electrodes
serving as the flytrap’s sensor hairs. The two leaves are connected by a
copper electrode, as seen in the image at the top of the page. When an
insect alights on the polymer membrane, the IPMC “bristles” send a
signal, which triggers the lobes to snap toward each other.
Of course, it’s still a pretty big leap to robots that can make use
of whatever they've trapped inside their lobes. An insectivorous robot
would probably have to transport the dead prey to some type of
mechanical-chemical gut for digestion and caloric production, which
would be quite a feat. But then again, we’ve seen it before with the
EATR bot, so it’s certainly possible. Let’s hope no one endeavors to
make an Audrey II-sized flytrap robot.
Venus Flytrap: Dionaea muscipula traps seen up close. Wikimedia Commons
I went back and found every Android phone shipped in the United States [1]
up through the middle of last year. I then tracked down every update
that was released for each device - be it a major OS upgrade or a minor
support patch - as well as prices and release & discontinuation
dates. I compared these dates & versions to the currently shipping
version of Android at the time. The resulting picture isn’t pretty -
well, not for Android users:
Other than the original G1 and MyTouch, virtually all of the millions of phones represented by this chart are still under contract today. If you thought that entitled you to some support, think again:
7 of the 18 Android phones never ran a current version of the OS.
12 of 18 only ran a current version of the OS for a matter of weeks or less.
10 of 18 were at least two major versions behind well within their two year contract period.
11 of 18 stopped getting any support updates less than a year after release.
13 of 18 stopped getting any support updates before they even stopped selling the device or very shortly thereafter.
15 of 18 don’t run Gingerbread, which shipped in December 2010.
In a few weeks, when Ice Cream Sandwich comes out, every device on here will be another major version behind.
At least 16 of 18 will almost certainly never get Ice Cream Sandwich.
Also worth noting that each bar in the chart starts from the
first day of release - so it only gets worse for people who bought their
phone late in its sales period.
Why Is This So Bad?
This may be stating the obvious but there are at least three major reasons.
Consumers Get Screwed
Ever since the iPhone turned every smartphone into a blank slate, the
value of a phone is largely derived from the software it can run and
how well the phone can run it. When you’re making a 2 year commitment to
a device, it’d be nice to have some way to tell if the software was
going to be remotely current in a year or, heck, even a month. Turns out
that’s nearly impossible - here are two examples:
The Samsung Behold II on T-Mobile was the most expensive Android
phone ever and Samsung promoted that it would get a major update to
Eclair at least. But at launch the phone was already two major versions
behind — and then Samsung decided not to do the update after all, and it fell three major OS versions behind. Every one ever sold is still under contract today.
The Motorola Devour on Verizon launched with a Megan Fox Super Bowl ad, while reviews said it was “built to last and it delivers on features.”
As it turned out, the Devour shipped with an OS that was already
outdated. Before the next Super Bowl came around, it was three major
versions behind. Every one ever sold is still under contract until
sometime next year.
Developers Are Constrained
Besides the obvious platform fragmentation problems, consider this comparison: iOS developers, like Instapaper’s Marco Arment,
waited patiently until just this month to raise their apps’ minimum
requirement to the 11 month old iOS 4.2.1. They can do so knowing that
it’s been well over 3 years since anyone bought an iPhone that couldn’t
run that OS. If developers apply that same standard to Android, it will
be at least 2015 before they can start requiring 2010’s Gingerbread OS.
That's because every US carrier is still selling - even just now introducing [2]
- smartphones that will almost certainly never run Gingerbread and
beyond. Further, those are phones still selling for actual upfront money
- I’m not even counting the generally even more outdated &
presumably much more popular free phones.
It seems this is one area the Android/Windows comparison holds up:
most app developers will end up targeting an ancient version of the OS
in order to maximize market reach.
Security Risks Loom
In the chart, the dashed line in the middle of each bar indicates how
long that phone was getting any kind of support updates - not just
major OS upgrades. The significant majority of models have received very
limited support after sales were discontinued. If a security or privacy
problem popped up in old versions of Android or its associated apps
(i.e. the browser), it’s hard to imagine that all of these
no-longer-supported phones would be updated. This is only less likely as
the number of phones that manufacturers would have to go back and deal
with increases: Motorola, Samsung, and HTC all have at least 20 models
each in the field already, each with a range of carriers that seemingly
have to be dealt with individually.
Why Don’t Android Phones Get Updated?
That’s a very good question. Obviously a big part of the problem is
that Android has to go from Google to the phone manufacturers to the
carriers to the devices, whereas iOS just goes from Apple directly to
devices. The hacker community (e.g. CyanogenMod, et cetera) has frequently managed to get these phones to run the newer operating systems, so it isn’t a hardware issue.
It appears to be a widely held viewpoint [3]
that there’s no incentive for smartphone manufacturers to update the
OS: because manufacturers don’t make any money after the hardware sale,
they want you to buy another phone as soon as possible. If that’s really
the case, the phone manufacturers are spectacularly dumb: ignoring the 2
year contract cycle & abandoning your users isn’t going to engender
much loyalty when they do buy a new phone. Further, it’s been fairly
well established that Apple also really only makes money from hardware sales, and yet their long term update support is excellent (see chart).
In other words, Apple’s way of getting you to buy a new phone is to
make you really happy with your current one, whereas apparently Android
phone makers think they can get you to buy a new phone by making you
really unhappy with your current one. Then again, all of this
may be ascribing motives and intent where none exist - it’s entirely
possible that the root cause of the problem is just flat-out bad
management (and/or the aforementioned spectacular dumbness).
A Price Observation
All of the even slightly cheaper phones are much worse than the
iPhone when it comes to OS support, but it’s interesting to note that
most of the phones on this list were actually not cheaper than the
iPhone when they were released. Unlike the iPhone however, the
“full-priced” phones are frequently discounted in subsequent months. So
the “low cost” phones that fueled Android’s generally accepted price
advantage in this period were basically either (a) cheaper from the
outset, and ergo likely outdated & terribly supported or (b)
purchased later in the phone’s lifecycle, and ergo likely outdated &
terribly supported.
Also, at any price point you’d better love your rebates. If you’re
financially constrained enough to be driven by upfront price, you can’t
be that excited about plunking down another $100 cash and waiting weeks
or more to get it back. And sometimes all you’re getting back is a “$100 Promotion Card” for your chosen provider. Needless to say, the iPhone has never had a rebate.
Along similar lines, a very small but perhaps telling point: the
price of every single Android phone I looked at ended with 99 cents -
something Apple has never done (the iPhone is $199, not $199.99). It’s
almost like a warning sign: you’re buying a platform that will
nickel-and-dime you with ads and undeletable bloatware, and it starts
with those 99 cents. And that damn rebate form they’re hoping you don’t
send in.
Notes on the chart and data
Why stop at June 2010?
I’m not going to. I do think that having 15 months or so of history
gives a good perspective on how a phone has been treated, but it’s also
just a labor issue - it takes a while to dredge through the various
sites to determine the history of each device. I plan to continue on and
might also try to publish the underlying table with references. I also
acknowledge that it’s possible I’ve missed something along the way.
Android Release Dates
For the major Android version release dates, I used the date at
which it was actually available on a normal phone you could get via
normal means. I did not use the earlier SDK release date, nor the date
at which ROMs, hacks, source, et cetera were available.
Outside the US
Finally, it’s worth noting that people outside the US have often had
it even worse. For example, the Nexus One didn’t go on sale in Europe
until 5 months after the US, the Droid/Milestone FroYo update happened
over 7 months later there, and the Cliq never got updated at all outside
of the US.
[1] Thanks primarily to CNET & Wikipedia for the list of phones.
[2] Yes, AT&T committed to Gingerbread updates for its 2011 Android phones, but only those that had already been released at the time of the July 25 press release. The Impulse doesn't meet that criterion. Nor does the Sharp FX Plus.
Miles Lightwood, of TeamTeamUSA, is leading Project Shellter as Makerbot's artist in residence.
Where do 3D printing and species protection intersect? Hermit crabs, apparently. Makerbot Industries, which makes do-it-yourself 3D printers, launched Project Shellter
last Tuesday. Project Shellter intends to leverage the Makerbot
community's design talent and network of 5,000 3D printers to design and
produce shells for hermit crabs who face a species-threatening, man-made housing shortage. Hmm, sounds familiar.
Hermit crabs don’t make their own shells. They scavenge
their homes. And now, hermit crabs are facing a housing shortage as the
worldwide shell supply is decreasing. With a shell shortage, hermit
crabs around the world are being forced to stick their butts into
bottles, shotgun shells, and anything else they can find. This is not
acceptable. As a community, we can reach out to this vulnerable species
and offer our digital design skills and 3D printing capabilities and
give hermit crabs another option: 3D printed shells.
One of the challenges is that no one knows yet if hermit crabs will
live in man-made plastic shells. And if they will, what shell designs
would make the best hermit crab homes. Makerbot is setting up a hermit
crab habitat in their factory to test shell designs shared by the
community.
This is an ingenious crowdsourced intervention, and I encourage you to check it out (follow the #SHELLTER tag on Twitter). But, a thought - how about we stop destroying hermit crab
homes in the first place? Isn't putting too much plastic stuff in the
ocean part of the problem?
UPDATE 10/25:
Some clarification from the Makerbot folks brought up from comments below:
The final shell material has yet to be determined; plastic is being used for prototypes
No printed shells have been distributed in the wild
The goal is to create a printable hermit crab shell for domestic (aquariums) use thus reducing harvesting of natural shells
One of the best things about Steve Jobs’ return to Apple
— which will live on and benefit society long after Jobs’ death — is
how he basically spent the last 14 years teaching thousands of Apple
employees to have incredibly high standards and to build amazing
products.
Perhaps more importantly, through Apple’s products,
Steve also taught hundreds of millions of consumers to expect and demand
amazing things.
For now, many of those Apple colleagues —
especially the ones who worked most closely with Steve — still work
there. But over time, more will leave to start their own companies or
launch new projects. And some of those companies will make some really
cool things completely outside the consumer electronics industry,
reflecting both the work of their founders and also a little bit of
Steve Jobs.
Yes, it’s not the first
thermostat on the market that uses software to “learn” how to heat and
cool your house properly, just as the iPod wasn’t the first MP3 player
on the market. But it looks great, seems to have a user interface that
is well ahead of the competition, reflects modern software and
networking capabilities, and has an aspirational brand that I have never
seen in a thermostat before.
Did I mention it looks great? See this video on TechCrunch
where Fadell explains the attention to detail they put into the Nest,
designing the outer metal ring to act as a sort-of mirror, helping the
thermostat take on and blend into the colors of the wall behind it. This
isn’t just a fancy and functional thermostat, it’s a beautiful one. Go
to the “thermostats” page on the Home Depot website –
I’ve sorted the results to put the most expensive ones at the top of
the page — and see a bunch of white plastic boxes with black-and-green
LCD displays. And now you see why the Nest thermostat is exciting people
today.
More
than anything, this reminds me of the slide of 2006-era smartphones —
Motorola Q, BlackBerry Pearl, Palm Treo, Nokia E-something-something –
that Steve Jobs displayed before he introduced the iPhone for the first
time. Night and day.
Anyway, the Nest is, of course, just one example. It might not even work — it could be a flop. Who knows. That’s not the point.
The
idea is that, going forward, integrated hardware, software, and
Internet services — the model that Apple has really gotten good at — are
going to take over more industries than just computers and pocket
gadgets. Everything from your phone (already done, if you have an
iPhone) to your car (in progress) to your thermostat (see above) to
far-flung sectors of industry and commerce. If it hasn’t been
re-imagined yet, it will be eventually.
And while Apple can’t and won’t make all of that stuff
itself — part of what makes Apple so successful is its carefully
limited domain and obsessive focus — people who worked there and learned
from Steve Jobs can. And they increasingly will. And it could be great
for all of us.
In the beginning, there was the web and you accessed it through the
browser and all was good. Stuff didn’t download until you clicked on
something; you expected cookies to be tracking you and you always knew
if HTTPS was being used. In general, the casual observer had a pretty
good idea of what was going on between the client and the server.
Not
so in the mobile app world of today. These days, there’s this great big
fat abstraction layer on top of everything that keeps you pretty well
disconnected from what’s actually going on. Thing is, it’s a trivial
task to see what’s going on underneath, you just fire up an HTTP proxy
like Fiddler, sit back and watch the show.
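If you'd rather script the observation than click around a GUI, the same trick works with any intercepting proxy. Here's a rough sketch of the sort of logging addon you could run with the open source mitmproxy tool instead of Fiddler; the file name and the fields it prints are just illustrative choices, not anything the apps below depend on.

```python
# log_app_traffic.py - run with: mitmdump -s log_app_traffic.py
# then point the phone's wifi proxy settings at the machine running it.
from mitmproxy import http

def response(flow: http.HTTPFlow) -> None:
    # Called once per completed request/response pair.
    size_kb = len(flow.response.content or b"") / 1024
    encoding = flow.response.headers.get("Content-Encoding", "none")
    content_type = flow.response.headers.get("Content-Type", "?")
    print(f"{flow.request.method} {flow.request.pretty_url} "
          f"[{content_type}] {size_kb:.1f} KB, encoding={encoding}")
```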
Let
me introduce you to the seedy underbelly of the iPhone, a world where
not all is as it seems and certainly not all is as it should be.
There’s no such thing as too much information – or is there?
Here’s
a good place to start: conventional wisdom says that network efficiency
is always a good thing. Content downloads faster, content renders more
quickly and site owners minimise their bandwidth footprint. But even
more importantly in the mobile world, consumers are frequently limited
to fairly meagre download limits, at least by today’s broadband
standards. Bottom line: bandwidth optimisation in mobile apps is very important, far more so than in your browser-based web apps of today.
Let me give you an example of where this all starts to go wrong with mobile apps. Take the Triple M app, designed to give you a bunch of info about one of Australia’s premier radio stations and play it over 3G for you. Here’s how it looks:
Where it all starts to go wrong is when you look at the requests being made just to load the app, you’re up to 1.3MB alone:
Why?
Well, part of the problem is that you’ve got no gzip compression.
Actually, that’s not entirely true, some of the small stuff is
compressed, just none of the big stuff. Go figure.
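To put a rough number on what gzip buys you, here's a quick sketch using nothing but Python's standard library; the JSON payload is invented, but it's roughly the shape of a news-feed response like the one above.

```python
import gzip
import json

# A made-up feed of 50 stories standing in for the app's JSON payload.
payload = json.dumps(
    [{"title": f"Story {i}", "body": "lorem ipsum dolor sit amet " * 100}
     for i in range(50)]
).encode("utf-8")

compressed = gzip.compress(payload)
print(f"raw: {len(payload) / 1024:.0f} KB, "
      f"gzipped: {len(compressed) / 1024:.0f} KB "
      f"({100 * len(compressed) / len(payload):.0f}% of original)")
```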
But there’s
also a lot of redundancy. For example, on the app above you can see the
first article titled “Manly Sea Eagles’ 2013 Coach…” and this is
returned in request #2 as is the body of the story. So far, so good. But
jump down to request #19 – that massive 1.2MB one – and you get the
whole thing again. Valuable bandwidth right out the window there.
Now
of course this app is designed to stream audio so it’s never going to
be light on bandwidth (as my wife discovered when she hit her cap “just
by listening to the radio”), and of course some of the upfront load is
also to allow the app to respond instantaneously when you drill down to
stories. But the patterns above are just unnecessary; why send redundant
data in an uncompressed format?
Here's a dirty Foxtel secret:
what do you reckon it costs you in bandwidth to load the app you see
below? A few KB? Maybe a hundred KB to pull down a nice gzipped JSON
feed of all the channels? Maybe nothing because it will pull data on
demand when you actually do something?
Guess again, you’ll be needing a couple of meg just to start the app:
Part
of the problem is shown under the Content-Type heading above; it’s
nearly all PNG files. Actually, it’s 134 PNG files. Why? I mean what on
earth could justify nearly 2 meg of PNGs just to open the app? Take a
look at this fella:
This is just one of the actual images at original size. And why is this humungous PNG required? To generate this screen:
Hmmm, not really a use case for a 425x243, 86KB PNG. Why? Probably because as we’ve seen before,
developers like to take something that’s already in existence and
repurpose it outside its intended context, and just as in that
aforementioned link, this can start causing all sorts of problems.
Unfortunately it makes for an unpleasant user experience as you sit
there waiting for things to load while it does unpleasant things to your
(possibly meagre) data allocation.
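For what it's worth, shrinking the art down to the size it's actually displayed at is a couple of lines with an image library. A minimal sketch using Pillow, with hypothetical file names and the 425x243 display size mentioned above as the target:

```python
from PIL import Image  # pip install Pillow

# Hypothetical file names; the point is resizing source art to roughly the
# dimensions it will actually be displayed at before bundling or serving it.
art = Image.open("channel_art_original.png")
art.thumbnail((425, 243))  # resizes in place, preserving aspect ratio
art.save("channel_art_small.png", optimize=True)
print(art.size)
```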
But we’re only just warming up. Let’s take a look at the very excellent, visually stunning EVO magazine on the iPad. The initial screen is just a list of editions like so:
Let’s
talk in real terms for a moment; the iPad resolution is 1024x768 or
what we used to think of as “high-res” not that long ago. The image
above has been resized down to about 60% of original but on the iPad,
each of those little magazine covers was originally 180x135 which even
saved as a high quality PNG, can be brought down well under 50KB apiece.
However:
Thirteen and a half meg?! Where on earth did all that go?! Here's a clue:
Go on, click it, I’ll wait.
Yep, 1.6MB.
4,267 pixels wide, 3,200 pixels high.
Why?
I have no idea, perhaps the art department just sent over the originals
intended for print and it was “easy” to dump that into the app. All you
can do in the app without purchasing the magazine (for which you expect
a big bandwidth hit), is just look at those thumbnails. So there you
go, 13.5MB chewed out of your 3G plan before you even do anything just
because images are being loaded with 560 times more pixels than they
need.
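That "560 times" figure is just pixel arithmetic, if you want to check it:

```python
# Pixels in the cover art that ships with the app vs the thumbnail it's shown as.
shipped = 4267 * 3200     # the original bundled with the app
displayed = 180 * 135     # the thumbnail size on the edition list
print(round(shipped / displayed))  # ~562
```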
The secret stalker within
Apps you install
directly onto the OS have always been a bit of a black box. I mean it’s
not the same “view source” world that we've become so accustomed to with
the web over the last decade and a half where it’s pretty easy to see
what’s going on under the covers (at least under the browser covers).
With the volume and affordability of iOS apps out there, we’re now well
and truly back in the world of rich clients which perform all sorts of
things out of your immediate view, and some of them are rather
interesting.
Let’s take cooking as an example; the ABC Foodi app is a beautiful piece of work. I mean it really is visually delightful and a great example of what can be done on the iPad:
But
it’s spying on you and phoning home at every opportunity. For example,
you can’t just open the app and start browsing around in the privacy of
your own home, oh no, this activity is immediately reported (I’m
deliberately obfuscating part of the device ID):
Ok, that's mostly innocuous, but it looks like my location – or at least my city
– is also in there so obviously my movements are traceable. Sure, far
greater location fidelity can usually be derived from the IP address
anyway (and they may well be doing this on the back end), but it’s
interesting to see this explicitly captured.
Let’s try something else, say, favouriting a dish:
Not the asparagus!!! Looks like you can’t even create a favourite
without your every move being tracked. But it’s actually even more than
that; I just located a nice chocolate cake and emailed it to myself
using the “share” feature. Here’s what happened next:
The app tracks your every move and sends it back to base in small batches. In this case, that base is at flurry.com, and who are these guys? Well it's quite clear from their website:
Flurry powers acquisition, engagement and monetization for the new
mobile app economy, using powerful data and applying game-changing
insight.
Here’s something I’ve seen before: POST requests to data.flurry.com. It’s perfectly obvious when you use the realestate.com.au iPad app:
Uh, except it isn’t really obvious, it’s another sneaky backdoor to
help power acquisitions and monetise the new app economy with
game-changing insight. Here’s what it’s doing and clearly it has nothing
to do with finding real estate:
Hang on – does that partially obfuscated device ID look a bit
familiar?! Yes it does, so Flurry now knows both what I’m cooking and
which kitchen I’d like to be cooking it in. And in case you missed it,
the first request when the Foxtel app was loaded earlier on was also to
data.flurry.com. Oh, and how about those travel plans with TripIt – the cheerful blue sky looks innocuous enough:
But under the covers:
Suddenly monetisation with powerful data starts to make more sense.
But this is no different to a tracking cookie on a website, right?
Well, yes and no. Firstly, tracking cookies can be disabled. If you
don’t like ‘em, turn ‘em off. Not so the iOS app as everything is hidden
under the covers. Actually, it’s in much the same way as a classic app
that gets installed on any OS although in the desktop world, we’ve
become accustomed to being asked if we’re happy to share our activities “for product improvement purposes”.
These privacy issues simply come down to this: what does the user
expect? Do they expect to be tracked when browsing a cook book installed
on their local device? And do they expect this activity to be
cross-referenceable with the use of other apparently unrelated apps? I
highly doubt it, and therein lies the problem.
Security? We don’t need no stinkin’ security!
Many people are touting mobile apps as the new security frontier, and rightly so IMHO. When I wrote about Westfield
last month I observed how easy it was for the security of services used
by apps to be all but ignored as they don’t have the same direct public
exposure as the apps themselves. A browse through my iPhone collection
supports the theory that mobile app security is taking a back seat to
their browser-based peers.
Let’s take the Facebook app. Now to be honest, this one surprised me a
little. Back in Jan of this year, Facebook allowed opt-in SSL or, in the words of The Register,
this is also known as “Turn it on yourself...bitch”. Harsh, but fair –
this valuable security feature was going to be overlooked by many, many
people. “SS what?”
Unfortunately, the very security that is offered to browser-based
Facebook users is not accessible on the iPhone client. You know, the
device which is most likely to be carried around to wireless hotspots
where insecure communications are most vulnerable. Here’s what we’re
left with:
This is especially surprising as that little bit of packet-sniffing magic that is Firesheep was no doubt the impetus for even having the choice to enable SSL. But here we are, one year on and apparently Facebook is none the wiser.
Let’s change pace a little and take a look inside the Australian Frequent Flyer app.
This is “Australia's leading Frequent Flyer Community” and clearly a
community of such standing would take security very, very seriously.
Let’s login:
As with the Qantas example below, you have absolutely no idea how these credentials are being transported across the wire. Well, unless you have Fiddler, in which case it's perfectly clear they're just being
posted to this HTTP address: http://www.australianfrequentflyer.com.au/mobiquo-withsponsor/mobiquo.php
And of course it's all very simple to see the post body which, in this case, is a little chunk of XML:
Bugger, clearly this is going to take some sort of hacker mastermind to figure out what these credentials are! Except it doesn't, it just
takes a bit of Googling. There are two parameters and they’re both Base64
encoded. How do we know this? Well, firstly it tells us in one of the
XML nodes and secondly, it’s a pretty common practice to encode data in
this fashion before shipping it around the place (refer to the link in this
paragraph for the ins and outs of why this is useful).
So we have a value of “YWJjMTIz” and because Base64 is designed to allow for simple encoding and decoding (unlike a hash which is a one way process), you can simply convert this back to plain text using any old Base64 decoder. Here’s an online one right here and it tells us this value is “abc123”. Well how about that!
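If you want to see just how little "hacking" is involved, the whole thing is a couple of lines against Python's standard library:

```python
import base64

# Base64 is a reversible encoding, not encryption: anyone can undo it.
print(base64.b64decode("YWJjMTIz").decode("utf-8"))   # -> abc123
print(base64.b64encode(b"abc123").decode("ascii"))    # -> YWJjMTIz, and back again
```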
The next value is “NzA1ZWIyOThiNjQ4YWM1MGZiNmUxN2YzNjY0Yjc4ZTI=”
which is obviously going to be the password. Once we Base64 decode this,
we have something a little different:
"705eb298b648ac50fb6e17f3664b78e2". Wow, that's an impressive password!
Except that as we well know, people choosing impressive passwords is a
very rare occurrence indeed so in all likelihood this is a hash. Now
there are all sorts of nifty brute forcing tools out there but when it
comes to finding the plain text of a hash, nothing beats Google. And
what does the first result Google gives us? Take a look:
I told you that was a useless password
dammit! You see the thing is, there’s a smorgasbord of hashes and their
plain text equivalents just sitting out there waiting to be searched
which is why it’s always important to apply a cryptographically random salt to the plain text before hashing. A straight hash of most user-created passwords – which we know are generally crap – can very frequently be resolved to plain text in about 5 seconds via Google.
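For the developers playing along at home, salting is barely any extra code. Here's a minimal sketch of the idea; note that a production system should really use a deliberately slow KDF such as PBKDF2 or bcrypt rather than a single fast hash.

```python
import hashlib
import os

def salted_hash(password: str):
    """Hash a password with a per-user, cryptographically random salt."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode("utf-8")).digest()
    return salt, digest

salt, digest = salted_hash("abc123")
# Even a terrible password now produces a digest that can't simply be
# googled, because the salt makes it unique to this one account.
print(salt.hex(), digest.hex())
```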
The bottom line is that this is almost no better than plain
text credentials and it’s definitely no alternative to transport layer
security. I’ve had numerous conversations with developers before trying
to explain the difference between encoding, hashing and encryption and
if I were to take a guess, someone behind this thinks they've
“encrypted” the password. Not quite.
But is this really any different to logging onto the Australian Frequent Flyer website
which also (sadly) has no HTTPS? Yes, what’s different is that firstly,
on the website it’s clear to see that no HTTPS has been employed, or at
least not properly employed (the login form isn’t loaded over HTTPS). I
can then make a judgement call on the trust and credibility of the
site; I can’t do that with the app. But secondly, this is a mobile app – a mobile travel
app – you know, the kind of thing that’s going to be used while roaming
around wireless hotspots vulnerable to eavesdropping. It’s more
important than ever to protect sensitive communications and the app
totally misses the mark on that one.
While we’re talking travel apps, let’s take another look at Qantas. I’ve written about these guys before, in fact they made my Who’s who of bad password practices list earlier in the year. Although they made a small attempt at implementing security by posting to SSL, as I’ve said before, SSL is not about encryption and loading login forms over HTTP is a big no-no. But at least they made some attempt.
So why does the iPhone app just abandon all hope and do everything
without any transport layer encryption at all? Even worse, it just
whacks all the credentials in a query string. So it’s a GET request.
Over HTTP. Here’s how it goes down:
This is a fairly typical login screen. So far, so good, although of
course there’s no way to know how the credentials are handled. At least,
that is, until you take a look at the underlying request that’s made.
Here’s those query string parameters:
Go ahead, give it a run. What I find most odd about this situation is that clearly a conscious decision was made to apply some degree of transport encryption to the browser app, so why not the mobile app? Why must the mobile client remain the poor cousin when it comes to basic security measures? It's not as if it's going to be used inside any highly vulnerable public wifi hotspots like, oh I don't know, an airport lounge, right?
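To make the contrast concrete, here's a hedged sketch of the two approaches side by side. The endpoint and parameter names are invented for illustration; they're not Qantas's actual API.

```python
import requests  # pip install requests

# What the app effectively does: credentials on the query string, over plain HTTP.
leaky = requests.Request(
    "GET", "http://api.example.com/login",
    params={"username": "abc123", "password": "hunter2"},
).prepare()
print(leaky.url)
# http://api.example.com/login?username=abc123&password=hunter2
# ...and that URL ends up in proxy logs, server logs and caches along the way.

# The bare-minimum alternative: POST the credentials in the body, over HTTPS,
# so they stay out of the URL and are protected in transit.
safer = requests.Request(
    "POST", "https://api.example.com/login",
    data={"username": "abc123", "password": "hunter2"},
).prepare()
print(safer.url)  # https://api.example.com/login - no credentials in sight
```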
Apps continue to get security wrong. Not just iOS apps mind you,
there are plenty of problems on Android and once anyone buys a Windows Phone 7 device we'll see plenty of problems on those too. It's just too
easy to get wrong and when you do get it wrong, it’s further out of
sight than your traditional web app. Fortunately the good folks at OWASP are doing some great work around a set of Top 10 mobile security risks so there's certainly acknowledgement of the issues and work being done to help developers get mobile security right.
Summary
What I discovered above is the result of casually observing only a few dozen apps I have installed on my iOS devices. There are about half a million more out there and if I were a betting man, my money would be on the issues above being only the tip of the iceberg.
You can kind of get how these issues happen; every man and his dog
appears to be building mobile apps these days and a low bar to entry is
always going to introduce some quality issues. But Facebook? And Qantas?
What are their excuses for making security take a back seat?
Developers can get away with more sloppy or sneaky practices in mobile apps as the execution is usually further out of view.
You can smack the user with a massive asynchronous download as their
attention is on other content; but it kills their data plan.
You can track their moves across entirely autonomous apps; but it erodes their privacy.
And most importantly to me, you can jeopardise their security without
their noticing; but the potential ramifications are severe.
When Tim Berners-Lee arrived at CERN,
Geneva's celebrated European Particle Physics Laboratory in 1980, the
enterprise had hired him to upgrade the control systems for several of
the lab's particle accelerators. But almost immediately, the inventor of
the modern webpage noticed a problem: thousands of people were floating
in and out of the famous research institute, many of them temporary
hires.
"The big challenge for contract programmers was to try to
understand the systems, both human and computer, that ran this fantastic
playground," Berners-Lee later wrote. "Much of the crucial information
existed only in people's heads."
So in his spare time, he wrote up some software to address this
shortfall: a little program he named Enquire. It allowed users to create
"nodes"—information-packed index card-style pages that linked to other
pages. Unfortunately, the PASCAL application ran on CERN's proprietary
operating system. "The few people who saw it thought it was a nice idea,
but no one used it. Eventually, the disk was lost, and with it, the
original Enquire."
Some years later Berners-Lee returned to CERN.
This time he relaunched his "World Wide Web" project in a way that
would more likely secure its success. On August 6, 1991, he published an
explanation of WWW on the alt.hypertext newsgroup. He also released a code library, libWWW, which he wrote with his assistant
Jean-François Groff. The library allowed participants to create their own Web browsers.
"Their efforts—over half a dozen browsers within 18 months—saved
the poorly funded Web project and kicked off the Web development
community," notes a commemoration of this project by the Computer History Museum in Mountain View, California. The best known early browser was Mosaic, produced by Marc Andreesen and Eric Bina at the
National Center for Supercomputing Applications (NCSA).
Mosaic was soon spun into Netscape, but it was not the first browser. A map
assembled by the Museum offers a sense of the global scope of the early
project. What's striking about these early applications is that they
had already worked out many of the features we associate with later
browsers. Here is a tour of World Wide Web viewing applications, before
they became famous.
The CERN browsers
Tim Berners-Lee's original 1990 WorldWideWeb browser was both a
browser and an editor. That was the direction he hoped future browser
projects would go. CERN has put together a reproduction
of its formative content. As you can see in the screenshot below, by
1993 it offered many of the characteristics of modern browsers.
Tim Berners-Lee's original WorldWideWeb browser running on a NeXT computer in 1993
The software's biggest limitation was that it ran on the NeXTStep
operating system. But shortly after WorldWideWeb, CERN mathematics
intern Nicola Pellow wrote a line mode browser that could function
elsewhere, including on UNIX and MS-DOS networks. Thus "anyone could
access the web," explains Internet historian Bill Stewart, "at that point consisting primarily of the CERN phone book."
Erwise came next. It was written by four Finnish college students in 1991 and released in 1992. Erwise is credited as the first browser that offered a graphical interface. It could also search for words on pages.
Berners-Lee wrote a review
of Erwise in 1992. He noted its ability to handle various fonts,
underline hyperlinks, let users double-click them to jump to other
pages, and to host multiple windows.
"Erwise looks very smart," he declared, albeit puzzling over a
"strange box which is around one word in the document, a little like a
selection box or a button. It is neither of these—perhaps a handle for
something to come."
So why didn't the application take off? In a later interview, one of
Erwise's creators noted that Finland was mired in a deep recession at
the time. The country was devoid of angel investors.
"We could not have created a business around Erwise in Finland then," he explained.
"The only way we could have made money would have been to continue our
developing it so that Netscape might have finally bought us. Still, the
big thing is, we could have reached the initial Mosaic level with
relatively small extra work. We should have just finalized Erwise and
published it on several platforms."
ViolaWWW was released in April
of 1992. Developer
Pei-Yuan Wei wrote it at the University of California at Berkeley via
his UNIX-based Viola programming/scripting language. No, Pei Wei didn't
play the viola, "it just happened to make a snappy abbreviation" of
Visually Interactive Object-oriented Language and Application, write
James Gillies and Robert Cailliau in their history of the World Wide
Web.
Wei appears to have gotten his inspiration from the early Mac
program HyperCard, which allowed users to build matrices of formatted
hyper-linked documents. "HyperCard was very compelling back then, you
know graphically, this hyperlink thing," he later recalled. But the
program was "not very global and it only worked on Mac. And I didn't
even have a Mac."
But he did have access to UNIX X-terminals at UC Berkeley's
Experimental Computing Facility. "I got a HyperCard manual and looked at
it and just basically took the concepts and implemented them in
X-windows." Except, most impressively, he created them via his Viola
language.
One of the most significant and innovative features of ViolaWWW was
that it allowed a developer to embed scripts and "applets" in the
browser page. This anticipated the huge wave of Java-based applet
features that appeared on websites in the later 1990s.
In his documentation, Wei also noted various "misfeatures" of ViolaWWW, most notably its inaccessibility to PCs.
Not ported to PC platform.
HTML Printing is not supported.
HTTP is not interruptable, and not multi-threaded.
Proxy is still not supported.
Language interpreter is not multi-threaded.
"The author is working on these problems... etc," Wei acknowledged
at the time. Still, "a very neat browser useable by anyone: very
intuitive and straightforward," Berners-Lee concluded in his review
of ViolaWWW. "The extra features are probably more than 90% of 'real'
users will actually use, but just the things which an experienced user
will want."
In September of 1991, Stanford Linear Accelerator physicist Paul
Kunz visited CERN. He returned with the code necessary to set up the
first North American Web server at SLAC. "I've just been to CERN," Kunz
told SLAC's head librarian Louise Addis, "and I found this wonderful
thing that a guy named Tim Berners-Lee is developing. It's just the
ticket for what you guys need for your database."
Addis agreed and put the research center's key database on the Web. Fermilab physicists set up a server shortly
after.
Then over the summer of 1992 SLAC physicist Tony Johnson wrote Midas, a graphical browser for the Stanford physics community. The big draw
for Midas users was that it could display postscript documents, favored
by physicists because of their ability to accurately reproduce
paper-scribbled scientific formulas.
"With these key advances, Web use surged in the high energy physics community," concluded a 2001 Department of Energy assessment of SLAC's progress.
Meanwhile, CERN associates Pellow and Robert Cailliau released the
first Web browser for the Macintosh computer. Gillies and Cailliau
narrate Samba's development.
For Pellow, progress in getting Samba up and running was slow, because after every few links it would crash and nobody could work out why. "The Mac browser was still in a buggy form," lamented Tim [Berners-Lee] in a September '92 newsletter. "A W3 T-shirt to the first one to bring it up and running!" he announced. The T-shirt duly went to Fermilab's John Streets, who tracked down the bug, allowing Nicola Pellow to get on with producing a usable version of Samba.
Samba "was an attempt to port the design of the original WWW
browser, which I wrote on the NeXT machine, onto the Mac platform,"
Berners-Lee adds,
"but was not ready before NCSA [National Center for Supercomputing
Applications] brought out the Mac version of Mosaic, which eclipsed it."
Mosaic was "the spark that lit the Web's explosive growth in 1993,"
historians Gillies and Cailliau explain. But it could not have been
developed without forerunners and the NCSA's University of Illinois
offices, which were equipped with the best UNIX machines. NCSA also had
Dr. Ping Fu, a PhD computer graphics wizard who had worked on morphing
effects for Terminator 2. She had recently hired an assistant named Marc Andreesen.
"How about you write a graphical interface for a browser?" Fu
suggested to her new helper. "What's a browser?" Andreesen asked. But
several days later NCSA staff member Dave Thompson gave a demonstration
of Nicola Pellow's early line browser and Pei Wei's ViolaWWW. And just
before this demo, Tony Johnson posted the first public release of Midas.
The latter software set Andreesen back on his heels. "Superb!
Fantastic! Stunning! Impressive as hell!" he wrote to Johnson. Then
Andreesen got NCSA Unix expert Eric Bina to help him write their own
X-browser.
Mosaic offered many new web features, including support for video
clips, sound, forms, bookmarks, and history files. "The striking thing
about it was that unlike all the earlier X-browsers, it was all
contained in a single file," Gillies and Cailliau explain:
Installing it was as simple as pulling it across the network and
running it. Later on Mosaic would rise to fame because of the
<IMG> tag that allowed you to put images inline for the first
time, rather than having them pop up in a different window like Tim's
original NeXT browser did. That made it easier for people to make Web
pages look more like the familiar print media they were used to; not
everyone's idea of a brave new world, but it certainly got Mosaic
noticed.
"What I think Marc did really well," Tim Berners-Lee later wrote,
"is make it very easy to install, and he supported it by fixing bugs via
e-mail any time night or day. You'd send him a bug report and then two
hours later he'd mail you a fix."
Perhaps Mosaic's biggest breakthrough, in retrospect, was that it
was a cross-platform browser. "By the power vested in me by nobody in
particular, X-Mosaic is hereby released," Andreesen proudly declared on
the www-talk group on January 23, 1993. Aleks Totic unveiled his Mac
version a few months later. A PC version came from the hands of Chris
Wilson and Jon Mittelhauser.
The Mosaic browser was based on Viola and Midas, the Computer History Museum's exhibit notes.
And it used the CERN code library. "But unlike others, it was reliable,
could be installed by amateurs, and soon added colorful graphics within
Web pages instead of as separate windows."
The Mosaic browser was available for X Windows, the Mac, and Microsoft Windows
But Mosaic wasn't the only innovation to show up on the scene around that same time. University of Kansas student Lou Montulli
adapted a campus information hypertext browser for the Internet and
Web. It launched in March, 1993. "Lynx quickly became the preferred web
browser for character mode terminals without graphics, and remains in
use today," historian Stewart explains.
And at Cornell University's Law School, Tom Bruce was writing a Web
application for PCs, "since those were the computers that lawyers tended
to use," Gillies and Cailliau observe. Bruce unveiled his browser Cello on June 8, 1993, "which was soon being downloaded at a rate of 500 copies a day."
Cello!
Six months later, Andreesen was in Mountain View, California, his
team poised to release Mosaic Netscape on October 13, 1994. He, Totic,
and Mittelhauser nervously put the application up on an FTP server. The
latter developer later recalled the moment. "And it was five minutes and
we're sitting there. Nothing has happened. And all of a sudden the
first download happened. It was a guy from Japan. We swore we'd send him
a T shirt!"
But what this complex story reminds us is that no innovation is
created by one person. The Web browser was propelled into our lives by
visionaries around the world, people who often didn't quite understand
what they were doing, but were motivated by curiosity, practical
concerns, or even playfulness. Their separate sparks of genius kept the
process going. So did Tim Berners-Lee's insistence that the project stay
collaborative and, most importantly, open.
"The early days of the web were very hand-to-mouth," he writes. "So many things to do, such a delicate flame to keep alive."
Further reading
Tim Berners-Lee, Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web
James Gillies and R. Cailliau, How the web was born
Outside of its remarkable sales, the real star of the iPhone 4S show
has been Siri, Apple’s new voice recognition software. The intuitive
voice recognition software is the closest to A.I. we’ve seen on a
smartphone to date.
Over the weekend I noted
that Siri has some resemblance to the IBM supercomputer, Watson, and
speculated that someday Watson would be in our pockets while the
supercomputers of the future might look a lot more like the Artificial
Intelligence we’ve read about in science fiction novels today, such as
the mysterious Wintermute from William Gibson’s Neuromancer.
Over at Wired, John Stokes explains how Siri and the Apple cloud could lead to the advent of a real Artificial Intelligence:
In
the traditional world of canned, chatterbot-style “AI,” users had to
wait for a software update to get access to new input/output pairs. But
since Siri is a cloud application, Apple’s engineers can continuously
keep adding these hard-coded input/output pairs to it. Every time an
Apple engineer thinks of a clever response for Siri to give to a
particular bit of input, that engineer can insert the new pair into
Siri’s repertoire instantaneously, so that the very next instant every
one of the service’s millions of users will have access to it. Apple
engineers can also take a look at the kinds of queries that are popular
with Siri users at any given moment, and add canned responses based on
what’s trending.
In this way, we can expect Siri’s repertoire of clever
comebacks to grow in real-time through the collective effort of hundreds
of Apple employees and tens or hundreds of millions of users, until it
reaches the point where an adult user will be able to carry out a
multipart exchange with the bot that, for all intents and purposes,
looks like an intelligent conversation.
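A toy sketch of the mechanism Stokes is describing: because the lookup table lives server-side, an engineer adding one more pair makes it instantly available to every user, with no app update. The pairs below are invented for illustration; they're not anything Siri actually says.

```python
# Server-side table of hard-coded input/output pairs (all invented here).
canned_responses = {
    "what is the meaning of life": "42, obviously.",
    "will you marry me": "My end user licence agreement does not cover marriage.",
}

def reply(query: str) -> str:
    normalised = query.strip().lower().rstrip("?!.")
    return canned_responses.get(normalised, "Let me search the web for that...")

# An engineer "deploys" a new pair; the very next query from any user picks it up.
canned_responses["who is your favourite robot"] = "I'm rather partial to myself."
print(reply("Who is your favourite robot?"))
```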
Meanwhile, the technology undergirding the software and iPhone
hardware will continue to improve. Now, this may not be the AI we had in
mind, but it also probably won’t be the final word in Artificial
Intelligence either. Other companies, such as IBM, are working to
develop other 'cognitive computers' as well.
And while the Singularity may indeed be far, far away, it’s still exciting to see how some forms of A.I. may emerge at least in part through cloud-sourcing.
The sports world has already taken several bold leaps into its tech
future with on-the-field innovations, but a new invention by a clever
European tinkerer could elevate the art of sports photography to another level.
Jonas Pfeil, a recent graduate of the Technical University of Berlin, created the Throwable Panoramic Ball Camera
as a way to instantly create panoramic images without the often tedious
digital stitching process common today. Equipped with an accelerometer, the camera, when thrown, creates full spherical panoramas that display three-dimensional images of a moment captured in time. Although the
device is not yet for sale, it will be on display at the upcoming Siggraph Asia conference. You can see the ball's technology at work in the video below.
As new platform versions get released more and more quickly,
are users keeping up? Zeh Fernando, a senior developer at Firstborn,
looks at current adoption rates and points to some intriguing trends
There's a quiet revolution happening on the web, and it's related to one aspect of the rich web that is rarely discussed: the adoption rate of new platform versions.
First,
to put things into perspective, we can look at the most popular browser
plug-in out there: Adobe's Flash Player. It's pretty well known that
the adoption of new versions of the Flash plug-in happens pretty
quickly: users update Flash Player quickly and often after a new version
of the plug-in is released [see Adobe's sponsored NPD, United
States, Mar 2000-Jun 2006, and Millward Brown's "Mature markets", Sep
2006-Jun 2011 research (here and here) collected by way of the Internet Archive and saved over time; here’s the complete spreadsheet with version release date help from Wikipedia].
To
simplify it: give it around eight months, and 90 per cent of the
desktops out there will have the newest version of the plug-in
installed. And as the numbers represented in the charts above show, this
update rate is only improving. That's partly due to the fact that
Chrome, now a powerful force in the browser battles, installs new
versions of the Flash Player automatically (sometimes even before it is
formally released by Adobe), and that Firefox frequently detects the
user's version and insists on them installing an updated, more secure
version.
Gone are the days where the Flash platform needed an
event such as the Olympics or a major website like MySpace or YouTube
making use of a new version of Flash to make it propagate faster; this
now happens naturally. Version 10.3 only needed one month to get to a
40.5 per cent install base, and given the trends set by the previous
releases, it's likely that the plug-in's new version 11 will break new speed records.
Any technology that can allow developers and publishers to take advantage of it in a real world scenario
so fast has to be considered a breakthrough. Any new platform feature
can be proposed, developed, and made available with cross-platform
consistency in record time; such is the advantage of a proprietary
platform like Flash. To mention one telling example of the opposite effect, features added to the HTML platform (in any of its
flavours or versions) can take many years of proposal and beta support
until they're officially accepted, and when that happens, it takes many
more years until it becomes available on most of the computers out
there. A plug-in is usually easier and quicker to update than a browser
too.
That has been the story so far. But that's changing.
Google Chrome adoption rate
Looking
at the statistics for the adoption rate of the Flash plug-in, it's easy
to see it's accelerating constantly, meaning the last versions of the
player were finding their way to the user's desktops quicker and quicker
with every new version. But when you have a look at similar adoption
rate for browsers, a somewhat similar but more complex story unfolds.
Let's
have a look at Google Chrome's adoption rates in the same manner I've
done the Flash player comparisons, to see how many people had each of
its version installed (but notice that, given that Chrome is not used by
100 per cent of the people on the internet, it is normalised for the
comparison to make sense).
The striking thing here is that the adoption rate of Google Chrome manages to be faster than Flash Player itself [see StatOwl's web browser usage statistics, browser version release dates from Google Chrome on
Wikipedia]. This is helped, of course, by the fact that updates happen
automatically (without user approval necessary) and easily (using its smart diff-based update engine to
provide small update files). As a result, Chrome can get to the same 90
per cent of user penetration rate in around two months only; but what
it really means is that Google manages to put out updates to their HTML engine much faster than Flash Player.
Of
course, there's a catch here if we're to compare that to Flash Player
adoption rate: as mentioned, Google does the same auto-update for the
Flash Player itself. So the point is not that there's a race and
Chrome's HTML engine is leading it; instead, Chrome is changing the
rules of the game to not only make everybody win, but to make them win faster.
Opera adoption rate
The
fast update rate employed by Chrome is not news. In fact, one smaller
player on the browser front, Opera, tells a similar story.
Opera also manages to have updates reach a larger audience very quickly [see browser version release dates from History of the Opera web browser, Opera 10 and Opera 11
on Wikipedia]. This is probably due to its automatic update feature.
The mass updates seem to take a little bit longer than Chrome, around
three months for a 90 per cent reach, but it's important to notice that
its update workflow is not entirely automatic; last time I tested, it
still required user approval (and Admin rights) to work its magic.
Firefox adoption rate
The
results of this browser update analysis start deviating when we take a
similar look at the adoption rates of the other browsers. Take Firefox,
for example:
It's also clear that Firefox's update rate is accelerating (browser version release dates from Firefox on
Wikipedia), and the time-to-90-per-cent is shrinking: it should take
around 12 months to get to that point. And given Mozilla's decision to
adopt release cycles that mimic Chrome's,
with its quick release schedule, and automatic updates, we're likely to
see a big boost in those numbers, potentially making the update
adoption rates as good as Chrome's.
One interesting point here is
that a few users seem to have been stuck with Firefox 3.6, which is the
last version that employs the old updating method (where the user has
to manually check for new versions), causing Firefox updates to spread
quickly but stall around the 60 per cent mark. Some users still need to
realise there's an update waiting for them; and similarly to the problem the Mozilla team had to face with Firefox 3.5, it's likely that we'll see the update being automatically imposed upon users soon, although they'll still be able to disable it. It's gonna be interesting to see how this develops over the next few months.
What does Apple's Safari look like?
Safari adoption rate
Right now, adoption rates seem on par with Firefox (browser version release dates from Safari on
Wikipedia), maybe a bit better, since it takes users around 10 months
to get a 90 per cent adoption rate of the newest versions of the
browser. The interesting thing is that this seems to happen in a pretty solid
fashion, probably helped by Apple's OS X frequent update schedule,
since the browser update is bundled with system updates. Overall, update
rates are not improving – but they're keeping at a good pace.
Notice
that the small bump on the above charts for Safari 4.0.x is due to the
public beta release of that version of the browser, and the odd area for
Safari 4.1.x is due to its release in pair with Safari 5.0.x, but for a
different version of OSX.
IE adoption rate
All in all it seems to me that browser vendors, as well as the users, are starting to get it. Overall, updates are happening faster and the cycle from interesting idea to a feature that can be used on a real-world scenario is getting shorter and shorter.
There's just one big asterisk in this prognosis: the adoption rate for the most popular browser out there.
The adoption rate of updates to Internet Explorer is not improving at all (browser version release dates from Internet Explorer on Wikipedia). In fact, it seems to be getting worse.
As
is historically known, the adoption rate of new versions of Internet
Explorer is, well, painstakingly slow. IE6, a browser that was released
10 years ago, is still used by four per cent of the IE users, and newer
versions don't fare much better. IE7, released five years ago, is still
used by 19 per cent of all the IE users. The renewing cycle here is so
slow that it's impossible to even try to guess how long it would take
for new versions to reach a 90 per cent adoption rate, given the lack of
reliable data. But considering update rates haven't improved at all for
new versions of the browser (in fact, IE9 is doing worse than IE8 in
terms of adoption), one can assume a cycle of four years until any released version of Internet Explorer reaches 90 per cent user adoption. Microsoft itself is trying to make users abandon IE6,
and whatever the reason for the version lag – system administrators
hung up on old versions, proliferation of pirated XP installations that
can't update – it's just not getting there.
And the adoption rate of new HTML features is, unfortunately, only as quick as the adoption
rate of the slowest browser, especially when it's someone that still
powers such a large number of desktops out there.
Internet Explorer will probably continue to be the most popular browser for a very long time.
The long road for HTML-based tech
The
story, so far, has been that browser plug-ins are usually easier and
quicker to update. Developers can rely on new features of a proprietary
platform such as Flash earlier than someone who uses the native HTML
platform could. This is changing, however, one browser at a time.
A merged
chart, using the weighted distribution of each browser's penetration
and the time it takes for its users to update, shows that the overall story for HTML developers still has a long way to go.
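For the curious, the merged view boils down to a weighted sum: each browser's market share multiplied by the fraction of its users already on a version that supports the feature you care about. A back-of-the-envelope sketch, with invented placeholder numbers rather than the article's data:

```python
# Share of overall traffic, and fraction of that browser's users on a
# version supporting some hypothetical new feature (all numbers invented).
browsers = {
    "Chrome":  (0.20, 0.95),
    "Firefox": (0.25, 0.60),
    "Safari":  (0.07, 0.70),
    "IE":      (0.45, 0.25),
    "Opera":   (0.03, 0.90),
}

availability = sum(share * updated for share, updated in browsers.values())
print(f"Feature reachable by roughly {availability:.0%} of users")
```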
Of course, this shouldn't be taken into account blindly. You should always have your audience in mind when developing a website,
and come up with your own numbers when deciding what kind of features
to make use of. But it works as a general rule of thumb to be taken into
account before falling in love with whatever feature has just been
added to a browser's rendering engine (or a new plug-in version, for
that matter). In that sense, be sure to use websites like caniuse.com to
check on what's supported (check the “global user stats” box!), and
look at whatever browser share data you have for your audience (and
desktop/mobile share too, although it's out of the scope of this
article).
Conclusion
Updating the browser has always been a
certain roadblock to most users. In the past, even maintaining
bookmarks and preferences when doing an update was a problem.
That
has already changed. With the exception of Internet Explorer, browser vendors are realising how important it is to provide easy and timely
updates for users; similarly, users themselves are, I like to believe,
starting to realise how an update can be easy and painless too.
Personally,
I used to look at the penetration numbers of Flash and HTML in
comparison to each other and it always baffled me how anyone would
compare them in a realistic fashion when it came down to the speed that
any new feature would take to be adopted globally. Looking at them this
time, however, gave me a different view on things; our rich web
platforms are not only getting better, but they're getting better at
getting better, by doing so faster.
In retrospect, it
seems obvious now, but I can only see all other browser vendors adopting quick-release, auto-update features similar to what Chrome introduced. Safari probably needs to make the updates install
without user intervention, and we can only hope that Microsoft will
consider something similar for Internet Explorer. And when that happens,
the web, both HTML-based and plug-ins-based but especially on the HTML
side, will be moving at a pace that we haven't seen before. And
everybody wins with that.
While voice control has been part of Android since the dawn of time, Siri
came along and ruined the fun with its superior search and
understanding capabilities. However, an industrious team of folks from Dexetra.com, led by Narayan Babu, built a Siri-alike in just 8 hours during a hackathon.
Iris allows you to search on various subjects including conversions,
art, literature, history, and biology. You can ask it “What is a fish?”
and it will reply with a paragraph from Wikipedia focusing on our finned
friends.
The app will soon be available from the Android Market but I tried it recently and found it a bit sparse but quite cool. It uses
Android’s speech-to-text functions to understand basic questions and
Narayan and his buddies are improving the app all the time.
The coolest thing? They finished the app in eight hours.
When we started seeing results, everyone got excited and
started a high speed coding race. In no time, we added Voice input,
Text-to-speech, also a lot of heuristic humor into Iris. Not until late
evening we decided on the name “iris.”, which would be Siri in reverse.
And we also reverse engineered a crazy expansion – Intelligent Rival
Imitator of Siri. We were still in the fun mode, but when we started
using it the results were actually good, really good.
You can grab the early, early beta APK here
but I recommend waiting for the official version to arrive this week.
It just goes to show you that amazing things can pop up everywhere.