Just a Boy and His Cell-Snooping, Password-Cracking, Hacktastic Homemade Spy Drone
Last year at the Black Hat and Defcon security conferences in Las Vegas, a former Air Force cyber security contractor and a former Air Force engineering systems consultant displayed their 14-pound, six-foot-long unmanned aerial vehicle, WASP (Wireless Aerial Surveillance Platform). Last year it was a work in progress, but next week, when they unveil an updated WASP, they'll be showing off a functioning homemade spy drone that can sniff out Wi-Fi networks, autonomously crack passwords, and even eavesdrop on your cell phone calls by posing as a cell tower.
WASP is built from a retired Army target drone, and creators Mike Tassey and Richard Perkins have crammed all kinds of technology aboard, including an HD camera, a small Linux computer packed with a 340-million-word dictionary for brute-forcing passwords as well as other network hacking implements, and eleven different antennae. Oh, and it’s autonomous; it requires human guidance for takeoff and landing, but once airborne WASP can fly a pre-set route, looping around an area looking for poorly defended data.
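For context, "brute-forcing passwords" with an onboard dictionary boils down to something like the toy sketch below: hash each candidate word and compare it against a captured password hash until one matches or the wordlist runs out. The wordlist path and the use of SHA-256 are illustrative assumptions, not details of WASP's actual toolchain.

```python
import hashlib
from typing import Optional

def dictionary_attack(target_hash: str, wordlist_path: str) -> Optional[str]:
    """Return the candidate word whose SHA-256 digest matches the captured
    hash, or None if the wordlist is exhausted (a purely educational toy)."""
    with open(wordlist_path, encoding="utf-8", errors="ignore") as wordlist:
        for line in wordlist:
            candidate = line.strip()
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None

# Hypothetical usage; the 340-million-word dictionary lives on the drone's
# small Linux computer, and the hash would come from captured traffic.
# recovered = dictionary_attack(captured_hash, "/opt/wasp/wordlist.txt")
```

Real network attacks hash each candidate with the target protocol's own key-derivation scheme rather than plain SHA-256, which is exactly why a big dictionary and onboard compute matter.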
And on top of that, the duo has taught their WASP a new way to surreptitiously gather intel from the ground: pose as a GSM cell phone tower to trick phones into connecting through WASP rather than their carriers--a trick Tassey and Perkins learned from another security hacker at Defcon last year.
Tassey and Perkins say they built WASP to show just how easy it is, and just how vulnerable you are. “We wanted to bring to light how far the consumer industry has progressed, to the point where the public has access to technologies that put companies, and even governments, at risk from this new threat vector that they’re not aware of,” Perkins told Forbes.
Consider yourself warned. For details on the WASP design--including pointers on building your own--check out Tassey and Perkins' site here.
English engineers have produced what is believed to be the world's
first printed plane. I'm not talking a nice artsy lithograph of the
Wright Bros. first flight. This is a complete, flyable aircraft spit out
of a 3D printer.
The SULSA began life in something like an inkjet and wound up in the air. (Credit: University of Southampton)
The SULSA (Southampton University Laser Sintered Aircraft) is an
unmanned air vehicle that emerged, layer by layer, from a nylon laser
sintering machine that can fabricate plastic or metal objects. In the
case of the SULSA, the wings, access hatches, and the rest of the
structure of the plane were all printed.
As if that weren't awesome enough, the entire thing snaps together in
minutes, no tools or fasteners required. The electric plane has a
wingspan of just under 7 feet and a top speed of 100 mph.
Jim Scanlon, one of the project leads at the University of Southampton, explains in a statement that the technology allows products to go from conception to reality much more quickly and cheaply.
"The flexibility of the laser-sintering process allows the design
team to revisit historical techniques and ideas that would have been
prohibitively expensive using conventional manufacturing," Scanlon says.
"One of these ideas involves the use of a Geodetic structure... This
form of structure is very stiff and lightweight, but very complex. If it
was manufactured conventionally it would require a large number of
individually tailored parts that would have to be bonded or fastened at
great expense."
So apparently when it comes to 3D printing, the sky is no longer the
limit. Let's just make sure someone double-checks the toner levels
before we start printing the next international space station.
The inventor Jonathan Rothberg with a semiconductor chip used in the Ion Torrent machine.
The inventor of a new machine that decodes DNA with semiconductors has
used it to sequence the genome of Gordon Moore, co-founder of Intel, a
leading chip maker.
The inventor, Jonathan Rothberg of Ion Torrent Systems in Guilford,
Conn., is one of several pursuing the goal of a $1,000 human genome,
which he said he could reach by 2013 because his machine is rapidly
being improved.
“Gordon Moore worked out all the tricks that gave us modern
semiconductors, so he should be the first person to be sequenced on a
semiconductor,” Dr. Rothberg said.
At $49,000, the new DNA decoding device is cheaper than its several
rivals. Its promise rests on the potential of its novel technology to be
improved faster than those of machines based on existing techniques.
Manufacturers are racing to bring DNA sequencing costs down to the point
where a human genome can be decoded for $1,000, the sum at which
enthusiasts say genome sequencing could become a routine part of medical
practice.
But the sequencing of Dr. Moore’s genome also emphasizes how far
technology has run ahead of the ability to interpret the information it
generates.
Dr. Moore’s genome has a genetic variant that denotes a “56 percent
chance of brown eyes,” one that indicates a “typical amount of
freckling” and another that confers “moderately higher odds of smelling
asparagus in one’s urine,” Dr. Rothberg and his colleagues reported Wednesday in the journal Nature. There are also two genetic variants in Dr. Moore’s genome said to be associated with “increased risk of mental retardation” — a risk evidently never realized. The clinical value of this genomic information would seem to be close to nil.
Dr. Rothberg said he agreed that few genes right now yield useful
genetic information and that it will be a 10- to 15-year quest to really
understand the human genome. For the moment his machine is specialized
for analyzing much smaller amounts of information, like the handful of
genes highly active in cancer.
The Ion Torrent machine requires only two hours to sequence DNA,
although sample preparation takes longer. The first two genomes of the
deadly E. coli bacteria that swept Europe in the spring were decoded on
the company’s machines.
The earliest DNA sequencing method depended on radioactivity to mark the
four different units that make up genetic material, but as the system
was mechanized, engineers switched to fluorescent chemicals. The
new device is the first commercial system to decode DNA directly on a
semiconductor chip and to work by detecting a voltage change, rather
than light.
About 1.2 million miniature wells are etched into the surface of the
chip and filled with beads holding the DNA strands to be sequenced. A
detector in the floor of the well senses the acidity of the solution in
each well, which rises each time a new unit is added to the DNA strands
on the bead. The cycle is repeated every few seconds until each unit in
the DNA strand has been identified.
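As a rough illustration of that cycle (a toy model, not Ion Torrent's actual chemistry or signal processing), the sketch below flows one nucleotide species per cycle over a template strand and records an incorporation event, i.e. the proton release the sensor would detect, whenever the flowed base complements the next unread base. Real chips do this in parallel across roughly 1.2 million wells and must also cope with noise.

```python
# Toy model of sequencing-by-synthesis on a pH/voltage-sensing chip.
# Each cycle flows one nucleotide across the well; if it pairs with the
# next base of the template, protons are released and a signal whose
# height tracks the run length is recorded.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def sequence_template(template: str, flow_order: str = "TACG") -> str:
    read = []
    position = 0
    while position < len(template):
        for base in flow_order:                      # one flow per cycle
            incorporated = 0
            # A homopolymer stretch incorporates within a single flow.
            while (position < len(template)
                   and COMPLEMENT[template[position]] == base):
                incorporated += 1
                position += 1
            if incorporated:                         # the detected "voltage" event
                read.append(base * incorporated)
    return "".join(read)

print(sequence_template("ATTGC"))   # -> "TAACG", the complementary strand
```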
Several years ago, Dr. Rothberg invented another DNA sequencing machine,
called the 454, which was used to sequence the genome of James Watson,
the co-discoverer of the structure of DNA. Dr. Rothberg said he was
describing how the machine had “read” Dr. Watson’s DNA to his young son
Noah, who asked why he did not invent a machine to read minds.
Dr. Rothberg said he began his research with the idea of making a
semiconductor chip that could detect an electrical signal moving across a
slice of neural tissue. He then realized the device he had developed
was more suited to sequencing DNA.
George Church, a genome technologist at the Harvard Medical School, said
he estimated the cost to sequence Dr. Moore’s genome at $2 million.
This is an improvement on the $5.7 million it cost in 2008 to sequence
Dr. Watson’s genome on the 454 machine, but not nearly as good as the
$3,700 spent by Complete Genomics to sequence Dr. Church’s genome and
others in 2009.
Dr. Rothberg said he had already reduced the price of his chips to $99
from $250, and today could sequence Dr. Moore’s genome for around
$200,000. Because of Moore’s Law — that the number of transistors
placeable on a chip doubles about every two years — further reductions
in the cost of the DNA sequencing chip are inevitable, Dr. Rothberg
said.
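As a back-of-the-envelope sketch (not the company's roadmap), here is what a strict halving of cost every two years implies for the roughly $200,000 genome quoted above; reaching the $1,000 goal by 2013 would require improving considerably faster than that pace, which is presumably Dr. Rothberg's point about the chip being rapidly improved.

```python
import math

def years_until_target(current_cost: float, target_cost: float,
                       halving_period_years: float = 2.0) -> float:
    """Years until the cost reaches the target if it halves every
    halving_period_years (a strict Moore's-Law-style pace)."""
    halvings_needed = math.log2(current_cost / target_cost)
    return halvings_needed * halving_period_years

# ~$200,000 genome today, $1,000 goal: log2(200) ~= 7.6 halvings needed.
print(round(years_until_target(200_000, 1_000), 1))  # -> 15.3 years
```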
Stephan Schuster, a genome biologist at Penn State, said his two Ion
Torrent machines were “outstanding,” and enabled a project that would
usually have taken two months to be completed in five days.
There is now “a race to the death as to who can sequence faster and
cheaper, always with the goal of human resequencing in mind,” Dr.
Schuster said.
Ahh, Watson. Your performance on Jeopardy
let the world know that computers were about more than just storing and
processing data the way computers always have. Watson showed us all
that computers were capable of thinking in very human ways, which is
both an extremely exciting and mildly frightening prospect.
A research group at MIT
has been working on a project along the same lines — a computer that
can process information in a human-like manner and then apply what it’s
learned to a specific situation. In this case, the information was the
instruction manual for the classic PC game Civilization. After
reading the manual, the computer was ready to do battle with the game’s
AI. The result: 79% of the time, the computer was victorious.
This is an undeniably impressive development, but we’re clearly not
in any real danger until the computer decides to man up and play without reading the instructions like any real gamer
would. MIT tried that as well, and while a 46% success rate doesn’t
look all that good percentage-wise, it’s pretty darn amazing when you
remember this is a computer playing Civilization with no
orientation of any kind. I’ve got plenty of friends that couldn’t
compete with that, though they all insist it’s because the game was
boring and they hated it.
The ultimate goal of the project was to prove that computers were
capable of processing natural language the way we do — and actually
learning from it, not merely spitting out responses the way an
intelligent voice response (IVR) system does, for example. A system like
this could one day power something like a tricorder, diagnosing
symptoms based on a cavernous cache of medical data. Don’t worry,
doctors, it’s going to be a while before computers actually replace you.
Both Apple and Microsoft's new desktop operating systems borrow elements from mobile devices, in sometimes confusing ways.
Apple is widely expected to unveil OS X Lion, a major update to its operating system for desktop and laptop computers, this week. Microsoft, meanwhile, is working on an even bigger overhaul of Windows, with a version called Windows 8.
Both new operating systems reflect a tectonic shift in personal computing. They incorporate elements from mobile operating systems alongside more conventional desktop features. But demos of both operating systems suggest that users could face a confusing mishmash of design ideas and interaction methods.
Windows 8 and OS X Lion include elements such as touch interaction and full-screen apps that will facilitate the kind of "unitasking" (as opposed to multitasking) that users have become accustomed to on mobile devices and tablets.
"The rise of the tablets, or at least the iPad, has suggested that there is a latent, unmet need for a new form of computing," says Peter Merholz, president of the user-experience and design firmAdaptive Path. However, he adds, "moving PCs in a tablet direction isn't necessarily sensible."
Cathy Shive, an independent software developer, would agree. She developed software for Mac desktop applications for six years before she switched and began developing for iOS (Apple's operating system for the iPhone and iPad). "When I first saw Steve Jobs's demo of Lion, I was really surprised—I was appalled, actually," she says.
Shive is surprised by the direction both Apple and Microsoft are taking. One fundamental dictate of usability design is that an interface should be tailored to the specific context—and hardware—in which it lives. A desktop PC is not the same thing as a tablet or a mobile device, yet in that initial demo, "It seemed like what [Jobs] was showing us was a giant iPad," says Shive.
A subsequent demonstration of Windows 8 by Microsoft vice president Julie Larson-Green confirmed that Redmond was also moving toward touch as a dominant interaction mechanism. One of the devices used in that demonstration, a "media tablet" from Taiwan-based ASUS, resembled an LCD monitor with no keyboard.
Not everyone is so skeptical about Apple and Microsoft's plans. Lukas Mathis, a programmer and usability expert, thinks that, on balance, this shift is a good thing. "If you watch casual PC users interact with their computers, you'll quickly notice that the mouse is a lot harder to use than we think," he says. "I'm glad to see finger-friendly, large user interface elements from phones and tablets make their way into desktop operating systems. This change was desperately needed, and I was very happy to see it."
Mathis argues that experienced PC users don't realize how crowded with "small buttons, unclear icons, and tiny text labels" typical desktop operating systems are.
Lion and Windows 8 solve these problems in slightly different ways. In Lion, file management is moving toward an iPhone/iPad-style model, where users launch applications from a "Launchpad," and their files are accessible from within those applications. In Windows 8, files, along with applications, bookmarks, and just about anything else, can be made accessible from a customizable start screen.
Some have criticized Mission Control, Apple's new centralized app and window management interface, saying that it adds complexity rather than introducing the simplicity of a mobile interface. At the other extreme, Lion allows any app to be rendered full-screen, which blocks out distractions but also forces users to switch applications more often than necessary.
"The problem [with a desktop OS] is that it's hard to manage windows," says Mathis. "The solution isn't to just remove windows altogether; the solution is to fix window management so it's easier to use, but still allows you to, say, write an essay in one window, but at the same time look at a source for your essay in a different window."
Windows 8, meanwhile, attempts to solve this problem in a more elegant way, with a "Windows Snap," which allows apps to be viewed side-by-side while eliminating the need to manage their dimensions by dragging them from the corner.
A problem with moving toward a touch-centric interface is that the mouse is absolutely necessary for certain professional applications. "I can't imagine touch in Microsoft Excel," says Shive. "That's going to be terrible," she says.
The most significant difference between Apple's approach and Microsoft's is that Windows 8 will be the same OS no matter what device it's on, from a mobile phone to a desktop PC. To accommodate a range of devices, Microsoft has left intact the original Windows interface, which users can switch to from the full-screen start screen and full-screen apps option.
Merholz believes Microsoft's attempt to make its interface consistent across all devices may be a mistake. "Microsoft has a history of overemphasizing the value of 'Windows everywhere.' There's a fear they haven't learned appropriateness, given the device and its context," he says.
Shive believes the same could be said of Apple. "Apple has been seduced by their own success, and they're jumping to translate that over to the desktop ... They think there's some kind of shortcut, where everyone is loving this interface on the mobile device, so they will love it on their desktop as well," she says.
In a sense, both Apple and Microsoft are about to embark on a beta test of what the PC should be like in an era when consumers are increasingly accustomed to post-PC modes of interaction. But it could be a bumpy process. "I think we can get there, but we've been using the desktop interface for 30 years now, and it's not going to happen overnight," says Shive.
-----
Personal Comments:
From my personal point of view, and based on 30 years of IT/development experience, I do not see this change in desktop look and feel as a crisis, but rather as a simple and efficient aesthetic evolution.
Why? Because what was done first for mobile phones, and then for newer mobile devices like tablets, is exactly what some people had been trying to do with laptop/desktop GUIs for years: make the desktop experience simple enough that computers become accessible to anyone, even to those most resistant to technology (see the evolution of the Windows and Linux GUIs). That goal was reached on mobile phones and devices in a very short time, pushing ordinary users to change devices every two years and letting them enjoy new functionality and technology without having to read a single page of an instruction manual (mobile phones are, by the way, delivered without one!).
It seems that technological constraints and restrictions were needed in order to invent this kind of interface. Touch-screen-only mobile phones had been available for years before Apple produced its first iPhone (2007); remember the Sony Ericsson P800 (2002) and its successor, the P900 (2003). Technically, everything was already there (those devices are close to the "classic" smartphone we carry in our pockets today), but an efficient GUI, and more generally an efficient OS, was sorely missing. What Apple did with iOS, Google with Android, and HTC with its Sense UI on top of Android brought out and demonstrated the obvious potential of these mobile devices.
The adaptation of these GUIs and OSes to tablets (iOS, Android 3.0), still under the touch-only constraint, produced new GUI solutions while extending what can be done with a few basic finger gestures. It is not surprising that classic desktop/laptop computers are now trying to integrate the best of all this into their own environment, since they had not managed to do so on their own before. I would even say this is an obvious step forward, as many of these ideas are adaptable to the desktop world. For example, bringing the App Store concept to desktop computers makes installing applications easier: one no longer has to wonder whether an application is compatible with the local hardware, etc.; the store simply presents compatible applications, seamlessly.
So rather than entering a crisis or a revolution, I would say the desktop world will simply take from the mobile world whatever can be adapted, in order to make the desktop experience as seamless for the end user as it is on mobile devices... but only for some basic tasks. You can wrap a desktop OS in a very nice, simple box that looks a lot like a mobile device's simplicity, but that is just gift wrapping, and it does not hold up for everything you do on a desktop computer. This step forward therefore ends up looking like a set of cosmetic changes, and nothing more... because it simply cannot be more!
Today we are used to glorious declarations each time a new OS is offered to end users; many of the so-called "new" features are nothing more than existing ones, redesigned and pushed onto the stage to give the impression of a revolutionary OS. Who can seriously consider full-screen apps or automatic saving to be key new features of a 21st-century OS?
Let's go through some of the key new features announced by Apple in Mac OS X Lion:
- Multi-touch: this is not a new feature; it "just" maps some new functionality onto already available multi-touch gestures.
- Full-screen management: it basically attaches a virtual desktop to any application running in full screen. Thus you can switch to and from full-screen applications the same way you could already switch from one virtual desktop to another.
- Launchpad: this is basically a graphical shortcut to the "Applications" folder in the Finder. Yes, it looks like the app grid on a tablet or a mobile phone... but since that folder was already presented as a list, the other option was... guess what... a grid!
- Mission Control: this too is an evolution of something that already existed: the ability to see all your windows in addition to all your virtual desktops.
I am fairly convinced that these new features will be genuinely useful and pleasant to use, making the MacBook touchpad even more essential, but I do not see here a real revolution, nor a crisis, in the way we will work on desktop and laptop computers.
Now
that Chromebooks--portable computers based on Google's Chrome OS--are
out in the wild and hands-on reviews are appearing about them, it's
easier to gauge the prospects for this new breed of operating system.
As Jon Buys discussed here on OStatic,
these portables are not without their strong points. However, there
are criticisms appearing about the devices, and the criticisms echo
ones that we made early on. In short, Chromebooks force a cloud-only
compute model that leaves people used to working with local files and
data in the cold.
In his OStatic post on Chromebooks, Jon Buys noted that the devices
are "built to be the fastest way to get to the web." Indeed, Chromebooks
boot in seconds--partly thanks to a Linux core--and deposit the user
into an environment that looks and works like the Chrome browser.
However, Cade Metz, writing for The Register, notes this in a review of the Samsung Chromebook, which is available for under $500:
"Running Google's Chrome OS operating system, Chromebooks
seek to move everything you do onto the interwebs. Chrome OS is
essentially a modified Linux kernel that runs only one native
application: Google's Chrome browser…. This also means that most of your
data sits on the web, inside services like Google Docs… The rub is that
the world isn't quite ready for a machine that doesn't give you access
to local files."
That's exactly what I felt would be the shortcoming of devices based on Chrome OS, as seen in this post:
"With Chrome OS, Google is betting heavily on the idea
that consumers and business users will have no problem storing data and
using applications in the cloud, without working on the locally stored
data/applications model that they're used to. Here at OStatic, we always
questioned the aggressively cloud-centric stance that Chrome OS is
designed to take. Don't users want local applications too? Why don't I
just run Ubuntu and have my optimized mix of cloud and local apps? After
all, the team at Canonical, which has a few years more experience than
Google does in the operating system business, helped create Chrome OS."
The interesting thing is, though, that Google may have an opportunity
to easily address this perceived shortcoming. For example, Chrome OS
now has a file manager, and it's not much of a leap to get from a file
manager to a reasonable set of ways to store local data and manage it.
Odds are that Google will change the way Chrome OS works going forward,
making it less cloud-centric. That would seem to be a practical course
of action.
HTML5 is a hot topic, which is a good thing. The problem is that
99% of what’s been written has been about HTML5 replacing Flash. Why is
that a problem? Because not only is it irrelevant, it also prevents you from seeing the big picture about interoperability.
But first things first. A few facts:
You do not build a website in Flash. The only way to build a website is to use HTML pages, and then to embed Flash elements in them.
Flash has been around for more than 12 years. It is a de facto standard for the publishing industry. (No Flash = no advanced features in banners.)
HTML5 does not officially exist (yet). Rather, it’s a specification in working draft, scheduled for publication in 2014.
The video element in HTML5 is perfect for basic video players, but Flash and Silverlight are much more suitable for advanced video features (streaming, captions, interactive features, and miscellaneous video effects).
These are not interpretations or opinions. These are facts. The truth is that writing about the death throes of Flash is an easy way to draw readers, much easier than adopting a nuanced stance. And this is why we read so much garbage about HTML5 vs. Flash. (For an accurate description, please read HTML5 fundamentals.)
All this said, HTML5 will indeed replace Flash in certain circumstances, specifically light interface enhancements. To explain this, we must go back in time: HTML's specification barely evolved for over 10 years, so web developers wishing to offer an enhanced experience had no choice but Flash. In recent years, we began to see Flash used for custom fonts and transitions. But HTML has at last evolved into HTML5 (and CSS3), which allow web designers to use custom fonts, gradients, rounded corners, and transitions, among other things. So in this particular case (light interface enhancements), Flash is rapidly losing ground to a much more legitimate HTML5.
So if HTML5 is more suitable for light interface enhancements, this leaves room for Flash to do what it does best: heavy interface enhancements, vector-based animations, advanced video and audio features, and immersive environments.
To make a long story short: Flash has a 10-year head start over HTML. The technology isn't better per se, but because it is owned by a single company, that company has full control over its pace of innovation. I have no doubt that one day HTML will have the same capabilities Flash has today, but in how many years? Don't get me wrong: not every site needs Flash or an equivalent RIA technology. Amazon, eBay, and Wikipedia built their audiences with classic HTML, as did millions of other websites.
So, for the sake of precision: I am not an Adobe ambassador, nor am I a web standards ayatollah. I am just a web enthusiast enjoying the best of what the web has to offer, whether it is powered by standard or proprietary technologies. Moreover, standardization is not a simple process, because what we refer to as standards (from MP3 and JPEG to H.264) are in fact technologies owned by private companies or consortiums.
Then there is the mobile argument. iOS and Android provide users with an HTML5-compliant browser, but what about BlackBerry? Symbian? WebOS? Feature phones? Low-cost tablets? If interoperability and wider reach are mandatory, then perhaps the better way to achieve them is to focus on APIs exploited by multiple interfaces, rather than on a miraculously adaptive HTML5 front-end.
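As a minimal sketch of that API-first idea (the endpoint path and payload are invented for illustration, and any production service would need more than this), a single JSON endpoint can serve an HTML5 page, a Flash client, or a native mobile app alike:

```python
# One back-end API, many front-ends: the thinnest possible JSON endpoint,
# built only on the Python standard library.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CATALOG = [{"id": 1, "title": "Example item"}]        # stand-in data

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/items":
            body = json.dumps(CATALOG).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Any client that speaks HTTP and JSON, whatever its rendering
    # technology, gets the same data from this one interface.
    HTTPServer(("localhost", 8080), ApiHandler).serve_forever()
```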
Of course, there are many technical arguments for one technology over the other. But the best and most important part is that you don’t have to choose between HTML5 and Flash because you can use both.
Maybe the best answer is to acknowledge that HTML5 and Flash have their
pros and cons and that you can use one or the other or both depending
on the experience you wish to provide, your ROI and SEO constraints, and
the human resources you access.
In short, it’s not a zero sum game. Rather, it’s a process of natural
evolution, where HTML is catching up while Flash is focusing on
advanced features (and narrowing, even as it consolidates, its market
share). Both are complementary. So please, stop comparing.
Over
the past few months, there have been a number of notable service
quality incidents and security breaches of online services,
including Sony’s PlayStation network, Amazon’s cloud service,
Dropbox’s storage in the cloud, and countless others. The bar talk
around “cloud” computing and online services would have you think
that businesses and consumers are shying away from using hosted
services and Software as a Service (SaaS) applications, from
storing their data “in the cloud,” or from migrating some or all of
their computing infrastructure to virtual machines hosted by
cloud service providers. However, there’s actually an uptick in the
uptake of cloud computing in all of its various incarnations.
We (consumers and businesses) are using “cloud” services for all of the following kinds of activities:
1. Accessing and downloading media.
2. Accessing and downloading mobile apps.
3. Accessing and running business applications (CRM, hiring, ecommerce, logistics, provisioning, etc.).
4. Collaborating with colleagues, clients, and customers (project management, online communities, email, meeting scheduling).
5. Analyzing large amounts of data.
6. Storing large amounts of data (much of it unstructured, like video, images, text files, etc.).
7. Developing and testing new applications and online services.
8. Running distributed applications that need high performance around the globe. (All of the social media apps we use are essentially "cloud" applications; they run on virtual machines hosted in mostly third-party data centers all over the world.)
9. Scaling our operations to handle seasonal and other peak requirements, where we can buy computing capacity by the hour rather than pre-paying for capacity we rarely need (a rough cost sketch follows this list).
10. Backup and disaster recovery: keeping copies of our systems and data in remote locations, ready to run if a natural disaster impacts our normal operations.
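To make item 9 concrete, here is a rough cost sketch comparing by-the-hour capacity with owning enough hardware to cover the peak; every number is an invented assumption for illustration, not real pricing.

```python
# "Pay by the hour" vs. provisioning for peak load, with made-up figures.
HOURS_PER_YEAR = 24 * 365

def on_demand_cost(baseline_servers: int, peak_extra_servers: int,
                   peak_hours: int, hourly_rate: float) -> float:
    """Run a small baseline all year; rent extra servers only for the peak."""
    total_hours = baseline_servers * HOURS_PER_YEAR + peak_extra_servers * peak_hours
    return total_hours * hourly_rate

def owned_peak_cost(peak_servers: int, yearly_cost_per_server: float) -> float:
    """Buy enough capacity to cover the peak and keep it running all year."""
    return peak_servers * yearly_cost_per_server

# Hypothetical scenario: 10 servers normally, 40 more for a six-week season.
cloud = on_demand_cost(baseline_servers=10, peak_extra_servers=40,
                       peak_hours=6 * 7 * 24, hourly_rate=0.50)
owned = owned_peak_cost(peak_servers=50, yearly_cost_per_server=3_000)
print(f"on-demand: ${cloud:,.0f}   owned-for-peak: ${owned:,.0f}")
```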
In short,
“cloud computing” in all of its instantiations—Software as a
Service, Platform as a Service, Infrastructure as a Service, Cloud
Storage, Cloud Computing, etc.—is here to stay. Taking advantage of
the cloud (virtual computers running software in data centers
distributed around the globe) is the most scalable and the most
cost-effective way to provide computing resources and
services to anyone who has reliable access to high bandwidth
networking via the Internet.
What About Security and Backup?
Most of us now realize that we’re responsible for the security and
integrity of our information no matter where it sits on the planet.
And we are better off if we have more than one copy of anything
that’s really important.
SaaS and cloud providers have had a lot of experience helping IT organizations migrate some or all of their computing and/or storage to the cloud. Most of them report that most IT organizations' data security practices leave quite a bit to be desired before they migrate to the cloud. Their customers' data security and integrity typically improve dramatically as a result of rethinking their requirements and implementing better policies and practices as they migrate some or all of their computing. (Just because data is in your own physical data center doesn't mean it's safe!)
It’s Time to Run Around in Front of the Cloud Parade
We’re now committed to living in the mobile Internet era. We treasure
our mobility and our unfettered access to information,
applications, media, and services. Cloud computing, in all its
forms, is here to stay. Small businesses and innovative service
providers have embraced cloud computing and services wholeheartedly
and are already reaping the benefits of “pay as you consume” for
software and computing and storage services. Medium-sized
businesses are the next to embrace cloud computing, because they
typically don’t have the inertia and overhead that comes with a
huge centralized IT organization. Large enterprises’ IT
organizations are the last to officially accept cloud computing as a
safe and compliant alternative for corporate IT. Yet many
departments in those same large enterprise organizations have been
the early adopters of cloud computing for the development and
testing of new software products and for the departmental (or even
corporate) adoption of SaaS for many of their companies’ most
critical applications.
Flash memory is the dominant nonvolatile (retaining information when
unpowered) memory thanks to its appearance in solid-state drives (SSDs)
and USB flash drives. Despite its popularity, it has issues when feature
sizes are scaled down to 30nm and below. In addition, flash has a
finite number of write-erase cycles and slow write speeds (on the order
of ms). Because of these shortcomings, researchers have been searching
for a successor even as consumers snap up flash-based SSDs.
There are currently a variety of alternative technologies competing to replace silicon-based flash memory, such as phase-change RAM (PRAM),
ferroelectric RAM (FERAM), magnetoresistive RAM (MRAM), and
resistance-change RAM (RRAM). So far, though, these approaches fail to
scale down to current process technologies well—either the switching
mechanism or switching current perform poorly at the nanoscale. All of
them, at least in their current state of development, also lack some
commercially-important properties such as write-cycle endurance,
long-term data retention, and fast switching speed. Fixing these issues
will be a basic requirement for next-gen non-volatile memory.
Or,
as an alternative, we might end up replacing this tech entirely.
Researchers from Samsung and Sejong University in Korea have published a
paper in Nature Materials that describes tantalum oxide-based (TaOx) resistance RAM (RRAM), which shows large improvements over current technology in nearly every respect.
RRAM devices work by applying a large enough voltage to switch material
that normally acts as an insulator (high-resistance state) into a
low-resistance state. In this case, the device is a sandwich structure
with a TaO2-x base layer and a thinner Ta2O5-x
insulating layer, surrounded by platinum (Pt) electrodes. This
configuration, known as metal-insulator-base-metal (MIMB), starts as an
insulator, but it can be switched to a low resistance, metal-metal
(filament)-base-metal (MMBM) state.
The nature of the switching process is not well understood in this case,
but the authors describe it as relying on the creation of conducting
filaments that extend through the Ta2O5-x layer.
These paths are created by applying sufficiently large voltages, which
drive the movement of oxygen ions through a redox (reduction-oxidaton)
process.
When in the MIMB state, the interface between the Pt electrode and the Ta2O5-x forms a metal-semiconductor junction known as a Schottky barrier, while the MMBM state forms an ohmic contact.
The main difference between these two is that the current-voltage
profile is linear and symmetric for ohmic but nonlinear and asymmetric
for Schottky. The presence of Schottky barriers is a benefit, as it
prevents stray current leakage through an array of multiple devices
(important for high-density storage).
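To make the distinction concrete, the sketch below evaluates idealized textbook current-voltage models (not parameters fitted to the Samsung devices): a linear, symmetric ohmic law and the exponential, asymmetric Schottky diode equation.

```python
import math

def ohmic_current(voltage: float, resistance: float = 1e3) -> float:
    """Ohmic contact: linear and symmetric, I = V / R."""
    return voltage / resistance

def schottky_current(voltage: float, saturation_current: float = 1e-9,
                     ideality: float = 1.5,
                     thermal_voltage: float = 0.0259) -> float:
    """Idealized Schottky junction: nonlinear and asymmetric,
    I = I_s * (exp(V / (n * V_T)) - 1)."""
    return saturation_current * (math.exp(voltage / (ideality * thermal_voltage)) - 1)

# Forward bias passes orders of magnitude more current than reverse bias
# through the Schottky junction, while the ohmic contact is mirror-symmetric.
for v in (-0.4, -0.2, 0.2, 0.4):
    print(f"V={v:+.1f} V  ohmic={ohmic_current(v):+.2e} A  "
          f"Schottky={schottky_current(v):+.2e} A")
```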
The results presented by the authors appear to blow other memory
technologies out of the water, in pretty much every way we care about.
The devices presented here are 30nm thick, and the switching current is 50 μA, an order of magnitude smaller than that of PRAM. They also demonstrated an endurance of greater than 10^12 switching cycles (higher than the previous best of 10^10, and six orders of magnitude higher than flash memory's 10^4 to 10^6).
The device has a switching time of 10ns, and a data retention time
that's estimated to be 10 years operating at 85°C. This type of RRAM
also appears to work without problems in a vacuum, unlike
previously-demonstrated devices.
This may all seem too good to be true—it should be emphasized that this
was only a laboratory-scale demonstration, with 64 devices in an array
(therefore capable of storing only 64 bits). There will still be a few
years of development needed before we see gigabyte-size drives based on
this RRAM memory.
As with any semiconductor device, advances in nanoscale lithography techniques will be needed for large-scale manufacturing, and, in this particular case, a better understanding of the basic switching mechanism is also needed. However, based on the results shown
here, this new memory technology shows promise for use as a universal
memory storage: the same type could be used for storage and working
memory.
The Real Veterans of Technology
To survive for more than 100 years, tech companies such as Siemens, Nintendo, and IBM have had to innovate.
By Antoine Gara
When IBM turned 100 on June 16, it joined a surprising number of tech companies that have been evolving in order to survive since long before there was a Silicon Valley. Here's a sampling of tech centenarians, and some of the breakthroughs that helped fuel their longevity.