Ahh, Watson. Your performance on Jeopardy
let the world know that computers were about more than just storing and
processing data the way computers always have. Watson showed us all
that computers were capable of thinking in very human ways, which is
both an extremely exciting and mildly frightening prospect.
A research group at MIT
has been working on a project along the same lines — a computer that
can process information in a human-like manner and then apply what it’s
learned to a specific situation. In this case, the information was the
instruction manual for the classic PC game Civilization. After
reading the manual, the computer was ready to do battle with the game’s
AI. The result: 79% of the time, the computer was victorious.
This is an undeniably impressive development, but we’re clearly not
in any real danger until the computer decides to man up and play without reading the instructions like any real gamer
would. MIT tried that as well, and while a 46% success rate doesn’t
look all that good percentage-wise, it’s pretty darn amazing when you
remember this is a computer playing Civilization with no
orientation of any kind. I've got plenty of friends who couldn't
compete with that, though they all insist it’s because the game was
boring and they hated it.
The ultimate goal of the project was to prove that computers were
capable of processing natural language the way we do — and actually
learning from it, not merely spitting out responses the way an
interactive voice response (IVR) system does, for example. A system like
this could one day power something like a tricorder, diagnosing
symptoms based on a cavernous cache of medical data. Don’t worry,
doctors, it’s going to be a while before computers actually replace you.
Both Apple and Microsoft's new desktop operating systems borrow elements from mobile devices, in sometimes confusing ways.
Apple is widely expected to unveil a major update this week to OS X Lion, its operating system for desktop and laptop computers. Microsoft, meanwhile, is working on an even bigger overhaul of Windows, with a version called Windows 8.
Both new operating systems reflect a tectonic shift in personal computing. They incorporate elements from mobile operating systems alongside more conventional desktop features. But demos of both operating systems suggest that users could face a confusing mishmash of design ideas and interaction methods.
Windows 8 and OS X Lion include elements such as touch interaction and full-screen apps that will facilitate the kind of "unitasking" (as opposed to multitasking) that users have become accustomed to on mobile devices and tablets.
"The rise of the tablets, or at least the iPad, has suggested that there is a latent, unmet need for a new form of computing," says Peter Merholz, president of the user-experience and design firm Adaptive Path. However, he adds, "moving PCs in a tablet direction isn't necessarily sensible."
Cathy Shive, an independent software developer, would agree. She developed software for Mac desktop applications for six years before she switched and began developing for iOS (Apple's operating system for the iPhone and iPad). "When I first saw Steve Jobs's demo of Lion, I was really surprised—I was appalled, actually," she says.
Shive is surprised by the direction both Apple and Microsoft are taking. One fundamental dictate of usability design is that an interface should be tailored to the specific context—and hardware—in which it lives. A desktop PC is not the same thing as a tablet or a mobile device, yet in that initial demo, "It seemed like what [Jobs] was showing us was a giant iPad," says Shive.
A subsequent demonstration of Windows 8 by Microsoft vice president Julie Larson-Green confirmed that Redmond was also moving toward touch as a dominant interaction mechanism. One of the devices used in that demonstration, a "media tablet" from Taiwan-based ASUS, resembled an LCD monitor with no keyboard.
Not everyone is so skeptical about Apple and Microsoft's plans. Lukas Mathis, a programmer and usability expert, thinks that, on balance, this shift is a good thing. "If you watch casual PC users interact with their computers, you'll quickly notice that the mouse is a lot harder to use than we think," he says. "I'm glad to see finger-friendly, large user interface elements from phones and tablets make their way into desktop operating systems. This change was desperately needed, and I was very happy to see it."
Mathis argues that experienced PC users don't realize how crowded with "small buttons, unclear icons, and tiny text labels" typical desktop operating systems are.
Lion and Windows 8 solve these problems in slightly different ways. In Lion, file management is moving toward an iPhone/iPad-style model, where users launch applications from a "Launchpad," and their files are accessible from within those applications. In Windows 8, files, along with applications, bookmarks, and just about anything else, can be made accessible from a customizable start screen.
Some have criticized Mission Control, Apple's new centralized app and window management interface, saying that it adds complexity rather than introducing the simplicity of a mobile interface. At the other extreme, Lion allows any app to be rendered full-screen, which blocks out distractions but also forces users to switch applications more often than necessary.
"The problem [with a desktop OS] is that it's hard to manage windows," says Mathis. "The solution isn't to just remove windows altogether; the solution is to fix window management so it's easier to use, but still allows you to, say, write an essay in one window, but at the same time look at a source for your essay in a different window."
Windows 8, meanwhile, attempts to solve this problem in a more elegant way, with a "Windows Snap," which allows apps to be viewed side-by-side while eliminating the need to manage their dimensions by dragging them from the corner.
A problem with moving toward a touch-centric interface is that the mouse is absolutely necessary for certain professional applications. "I can't imagine touch in Microsoft Excel," says Shive. "That's going to be terrible," she says.
The most significant difference between Apple's approach and Microsoft's is that Windows 8 will be the same OS no matter what device it's on, from a mobile phone to a desktop PC. To accommodate a range of devices, Microsoft has left intact the original Windows interface, which users can switch to from the full-screen start screen and full-screen apps option.
Merholz believes Microsoft's attempt to make its interface consistent across all devices may be a mistake. "Microsoft has a history of overemphasizing the value of 'Windows everywhere.' There's a fear they haven't learned appropriateness, given the device and its context," he says.
Shive believes the same could be said of Apple. "Apple has been seduced by their own success, and they're jumping to translate that over to the desktop ... They think there's some kind of shortcut, where everyone is loving this interface on the mobile device, so they will love it on their desktop as well," she says.
In a sense, both Apple and Microsoft are about to embark on a beta test of what the PC should be like in an era when consumers are increasingly accustomed to post-PC modes of interaction. But it could be a bumpy process. "I think we can get there, but we've been using the desktop interface for 30 years now, and it's not going to happen overnight," says Shive.
-----
Personal Comments:
From my personal point of view, and based on my 30 years of IT/development experience, I do not see the change in desktop look and feel as a crisis, but rather as a simple and efficient aesthetic evolution.
Why? Because what was made first for mobile phones, and then for newer mobile devices like tablets, is what some people had been trying to do with laptop/desktop GUIs for years: make the GUI/desktop experience simple enough that computers become accessible to any of us, even those most resistant to technology (see the evolution of the Windows and Linux GUIs). That specific goal was reached on mobile phones and devices in a very short time, pushing ordinary people to change devices every two years and letting them enjoy new functionality and technology without reading a single page of an instruction manual (by the way, mobile phones are delivered without one!).
It looks like technological constraints and restrictions were needed in order to invent this kind of interface. Touch-screen-only mobile phones had been available for years before Apple produced its first iPhone (2007); remember the Sony Ericsson P800 (2002) and its successor, the P900 (2003). Technically, everything was already there (they are close to the "classic" smartphone we carry in our pockets nowadays), but an efficient GUI, and more generally an efficient OS, was sorely missing. What was done by Apple with iOS, Google with Android, and HTC with its Sense UI on top of Android brought out and demonstrated the obvious potential of these mobile devices.
The adaptation of these GUIs/OSes to tablets (iOS, Android 3.0), still under the touch-only constraint, gave rise to new GUI solutions while extending what can be done with a few basic finger gestures. It is not surprising that classic desktop/laptop computers are now trying to integrate the best of all this into their own environment, since they did not succeed in doing so on their own. I would even say this is an obvious step forward, as many of these ideas are adaptable to the desktop computer world. For example, making applications easier to install by bringing the App Store concept to desktop computers is an obvious step: one no longer has to wonder whether an application is compatible with the local hardware; the store simply presents compatible applications, seamlessly.
So rather than entering a crisis or a revolution, I would say that the desktop computer world will simply take from the mobile devices world whatever can be adapted, in order to make the desktop experience as seamless for the end user as it is on mobile devices... but only for some basic tasks.
You can wrap a desktop OS in a very nice, simple box that makes things look very similar to a mobile device's simplicity, but this is just gift packaging, and it does not hold up for every usage you can face on a desktop computer... which makes this step forward look like a set of cosmetic changes, and no more... because it simply cannot be more!
Today, we are used to glorious declarations each time a new OS is offered to end users. Many of the so-called "new" features mentioned are no more than existing ones, redesigned and pushed onto the stage to create the impression of a revolutionary OS: who can seriously consider full-screen apps or automatic saving to be key new features of a 21st-century OS?
Let's go through some of the key new features announced by Apple in Mac OS X Lion:
- Multi-touch: this is not a new feature; it "just" maps some new functionality onto already available multi-touch gestures.
- Full-screen management: it basically attaches a virtual desktop to any application running in full screen. You can thus switch to and from full-screen applications... the same way you could already switch from one virtual desktop to another.
- Launchpad: this is basically a graphical interface of shortcuts to the 'Applications' folder in the Finder. Granted, it looks like the app grid on a tablet or a mobile phone... but since the folder was already presented as a list, the other option was... guess what... a grid!
- Mission Control: this too is an evolution of something that already existed: the ability to see all your windows in addition to all your virtual desktops.
I'm pretty convinced that these new features are going to be really useful and pleasant to use, making the MacBook trackpad even more central, but I do not see here a real revolution, nor a crisis, in the way we are going to work on desktop/laptop computers.
The rise of Internet search engines like Google has changed the way our
brain remembers information, according to research by Columbia
University psychologist Betsy Sparrow published July 14 in Science.
“Since the advent of search engines, we are reorganizing the way we
remember things,” said Sparrow. “Our brains rely on the Internet for
memory in much the same way they rely on the memory of a friend, family
member or co-worker. We remember less through knowing information itself
than by knowing where the information can be found.”
Sparrow’s research reveals that we forget things we are confident we
can find on the Internet. We are more likely to remember things we think
are not available online. And we are better able to remember where to
find something on the Internet than we are at remembering the
information itself. This is believed to be the first research of its
kind into the impact of search engines on human memory organization.
Sparrow’s paper in Science is titled, “Google Effects on
Memory: Cognitive Consequences of Having Information at Our Fingertips.”
With colleagues Jenny Liu of the University of Wisconsin-Madison and
Daniel M. Wegner of Harvard University, Sparrow explains that the
Internet has become a primary form of what psychologists call
transactive memory—recollections that are external to us but that we
know when and how to access.
The research was carried out in four studies.
First, participants were asked to answer a series of difficult trivia
questions. Then they were immediately tested to see if they had
increased difficulty with a basic color naming task, which showed
participants words in either blue or red. Their reaction time to search
engine-related words, like Google and Yahoo, indicated that, after the
difficult trivia questions, participants were thinking of Internet
search engines as the way to find information.
Second, the trivia questions were turned into statements.
Participants read the statements and were tested for their recall of
them when they believed the statements had been saved—meaning accessible
to them later as is the case with the Internet—or erased. Participants
did not learn the information as well when they believed the information
would be accessible, and performed worse on the memory test than
participants who believed the information was erased.
Third, the same trivia statements were used to test memory of both
the information itself and where the information could be found.
Participants again believed that information either would be saved in
general, saved in a specific spot, or erased. They recognized the
statements which were erased more than the two categories which were
saved.
Fourth, participants believed all trivia statements that they typed
would be saved into one of five generic folders. When asked to recall
the folder names, they did so at greater rates than they recalled the
trivia statements themselves. A deeper analysis revealed that people do
not necessarily remember where to find certain information when they remember what it was, and that they particularly tend to remember where to find information when they can’t remember the information itself.
According to Sparrow, a greater understanding of how our memory works
in a world with search engines has the potential to change teaching and
learning in all fields.
“Perhaps those who teach in any context, be they college professors,
doctors or business leaders, will become increasingly focused on
imparting greater understanding of ideas and ways of thinking, and less
focused on memorization,” said Sparrow. “And perhaps those who learn
will become less occupied with facts and more engaged in larger
questions of understanding.”
The research was funded by the National Institutes of Health and Columbia’s department of psychology.
Back in April, the news of Apple’s somewhat-secret iPhone location-data-tracking broke to much disapproval. The Internet was abuzz with outraged iPhone
users concerned about their privacy. With a few months’ time, the dust
has settled a bit, and a few people have even figured out ways to put
this technology to good use.
Take Crowdflow’s Michael Kreil, for example. Kreil took location data from 880 iPhones all across Europe, aggregated a month’s worth of it from April 2011, and then visualized it by creating an amazing time-lapse video. We’re able to see how iPhone customers move across the
different countries in Europe. The video definitely has a psychedelic
feel to it with its bright undulating lights flying around the
eye-catching-colored maps of Europe.
The map style resembles a bunch of fireflies buzzing about Europe. The lights fade in and out to show when data collection stops, presumably at night, when we tend to turn off our phones before going to sleep. Kreil said that most iPhones don’t collect data at night, since the owner is typically not moving. Because of this, the image becomes blurry at night, and the lights dissolve.
Kreil said that he couldn’t decide on a color scheme, so he made
three videos of the same data, but in different colors. We chose our
favorite below, but make sure to watch the others if you have a
different color preference. We also recommend watching the videos in
full HD and in full-screen mode.
He also noted that he’d like to see the same project applied to the entire globe, which makes our little time-lapse-loving geek hearts flutter with excitement.
Now
that Chromebooks--portable computers based on Google's Chrome OS--are
out in the wild and hands-on reviews are appearing about them, it's
easier to gauge the prospects for this new breed of operating system.
As Jon Buys discussed here on OStatic,
these portables are not without their strong points. However, there
are criticisms appearing about the devices, and the criticisms echo
ones that we made early on. In short, Chromebooks force a cloud-only
compute model that leaves people used to working with local files and
data in the cold.
In his OStatic post on Chromebooks, Jon Buys noted that the devices
are "built to be the fastest way to get to the web." Indeed, Chromebooks
boot in seconds--partly thanks to a Linux core--and deposit the user
into an environment that looks and works like the Chrome browser.
However, Cade Metz, writing for The Register, notes this in a review of the Samsung Chromebook, which is available for under $500:
"Running Google's Chrome OS operating system, Chromebooks
seek to move everything you do onto the interwebs. Chrome OS is
essentially a modified Linux kernel that runs only one native
application: Google's Chrome browser….This also means that most of your
data sits on the web, inside services like Google Docs…The rub is that
the world isn't quite ready for a machine that doesn't give you access
to local files."
That's exactly what I felt would be the shortcoming of devices based on Chrome OS, as seen in this post:
"With Chrome OS, Google is betting heavily on the idea
that consumers and business users will have no problem storing data and
using applications in the cloud, without working on the locally stored
data/applications model that they're used to. Here at OStatic, we always
questioned the aggressively cloud-centric stance that Chrome OS is
designed to take. Don't users want local applications too? Why don't I
just run Ubuntu and have my optimized mix of cloud and local apps? After
all, the team at Canonical, which has a few years more experience than
Google does in the operating system business, helped create Chrome OS. "
The interesting thing is, though, that Google may have an opportunity
to easily address this perceived shortcoming. For example, Chrome OS
now has a file manager, and it's not much of a leap to get from a file
manager to a reasonable set of ways to store local data and manage it.
Odds are that Google will change the way Chrome OS works going forward,
making it less cloud-centric. That would seem to be a practical course
of action.
Android 3.2 isn't unheard of; the OS was first introduced with the MediaPad tablet a few weeks back, and a few days ago, the Motorola Xoom started receiving the update. But it's been a mostly quiet release. Google
has just released new details on what exactly v3.2 adds to the
Honeycomb mix, and more importantly, they've released the updated SDK
tools for developers to start taking full advantage.
Google calls this an incremental update, which adds "several new
capabilities for users and developers," and the API is now up to level
13. Included in the new release? Optimizations for a wider range of tablets, compatibility zoom for fixed-size apps, media sync from SD cards, and an extended screen support API.
With this out in the open, we suspect more and more tablets will see the
v3.2 update slide down shortly, and hopefully new app updates will be
taking advantage of it. Any devs out there given this a whirl?
HTML5 is a hot topic, which is a good thing. The problem is that
99% of what’s been written has been about HTML5 replacing Flash. Why is
that a problem? Because not only is it irrelevant, but also it prevents
you from seeing the big picture about interoperability.
But first things first. A few facts:
You do not build a web site in Flash. The only way to build a website is to use HTML pages, and then to embed Flash elements in them.
Flash has been around for more than 12 years. It is a de facto standard for the publishing industry. (No Flash = no advanced features in banners.)
HTML5 does not officially exist (yet). Rather, it’s a specification in working draft, scheduled for publication in 2014.
The video element in HTML5 is perfect for basic video players, but Flash and Silverlight are much more suitable for advanced video features (streaming, captions, interactive features, and miscellaneous video effects).
These are not interpretations or opinions. These are facts. The truth is that writing about the agony of Flash is an easy way to draw readers, a much easier one than adopting a nuanced stance. And this is why we read so much garbage about HTML5 vs. Flash. (For an accurate description, please read HTML5 fundamentals.)
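The video-element point in the facts above is usually handled in practice with a fallback pattern: browsers that understand HTML5 video use it, and older browsers fall back to a Flash player. A minimal sketch, where the media file names and the player.swf path are placeholders:

```html
<!-- Sketch of an HTML5 video player with a Flash fallback.
     movie.mp4, movie.webm, and player.swf are placeholder names. -->
<video controls width="640" height="360">
  <source src="movie.mp4" type="video/mp4">
  <source src="movie.webm" type="video/webm">
  <!-- Browsers without <video> support ignore the element's API
       and render this fallback content instead -->
  <object type="application/x-shockwave-flash" data="player.swf"
          width="640" height="360">
    <param name="movie" value="player.swf">
    <param name="flashvars" value="file=movie.mp4">
    This browser supports neither HTML5 video nor Flash.
  </object>
</video>
```

This is exactly the "use both" approach discussed later: the same page serves both audiences without choosing a side.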
All this said, HTML5 will indeed replace Flash in certain circumstances, specifically light interface enhancements.
To explain this, we must go back in time: HTML's specifications barely evolved for some 10 years, so web developers wishing to offer an enhanced experience had no choice but Flash. In recent years, we began to see Flash used for custom fonts and transitions. But HTML has at last
evolved into HTML5 (and CSS3), which allow web designers to use custom
fonts, gradients, rounded corners and transitions, among other uses. So
in this particular case (light interface enhancements), Flash is rapidly
losing ground to a much more legitimate HTML5.
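Those "light interface enhancements" map directly to a handful of CSS3 features. A minimal sketch of the kind of styling that previously required Flash (MyFont.woff is a placeholder font file):

```html
<!-- Sketch of "light interface enhancements" in plain HTML/CSS3.
     MyFont.woff is a placeholder for a licensed web font file. -->
<style>
  @font-face {                                      /* custom font */
    font-family: "MyFont";
    src: url("MyFont.woff");
  }
  .button {
    font-family: "MyFont", sans-serif;
    border-radius: 8px;                             /* rounded corners */
    background: linear-gradient(#fafafa, #d0d0d0);  /* gradient */
    transition: background 0.3s ease;               /* animated transition */
  }
  .button:hover {
    background: linear-gradient(#ffffff, #e0e0e0);
  }
</style>
<a class="button" href="#">Hover me</a>
```

Note that at the time of writing, most browsers required vendor prefixes (-webkit-, -moz-, etc.) for gradients and transitions; the unprefixed forms shown here are the standard syntax.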
So if HTML5 is more suitable for light interface enhancements, this
leaves rooms for Flash to do what it does best: heavy interface
enhancements, vector-based animations, advanced video and audio
features, and immersive environments.
To make a long story short: Flash has a 10-year head start over HTML. The technology isn't inherently better, but because it's owned by a single company, that company has full control over its rate of innovation. I have no doubt that one day HTML will have the same capabilities Flash has today, but in how many years? Don't get me wrong: not every site needs Flash or an equivalent RIA technology. Amazon, eBay, and Wikipedia built their audiences with classic HTML, as did millions of web sites.
So for the sake of precision: I am not an Adobe ambassador, nor am I a web-standards ayatollah. I am just a web enthusiast enjoying the best the web has to offer, whether powered by standard or proprietary technologies. Moreover, standardization is not a simple process, because what we refer to as standards (from MP3 and JPEG to H.264) are in fact technologies owned by private companies or consortiums.
Then there is the mobile argument. If iOS and Android provide users with an HTML5-compliant browser, what about BlackBerry? Symbian? webOS? Feature phones? Low-cost tablets? If interoperability and wider reach are mandatory, then maybe the better way to achieve them is to focus on APIs exploited by multiple interfaces, rather than on a miraculously adaptive HTML5 front-end.
Of course, there are many technical arguments for one technology over the other. But the best and most important part is that you don’t have to choose between HTML5 and Flash because you can use both.
Maybe the best answer is to acknowledge that HTML5 and Flash have their pros and cons, and that you can use one or the other or both, depending on the experience you wish to provide, your ROI and SEO constraints, and the human resources you have access to.
In short, it’s not a zero sum game. Rather, it’s a process of natural
evolution, where HTML is catching up while Flash is focusing on
advanced features (and narrowing, even as it consolidates, its market
share). Both are complementary. So please, stop comparing.
Over
the past few months, there have been a number of notable service
quality incidents and security breaches of online services,
including Sony’s PlayStation network, Amazon’s cloud service,
Dropbox’s storage in the cloud, and countless others. The bar talk around “cloud” computing and online services would have you think that businesses and consumers are shying away from using hosted services, from using Software as a Service (SaaS) applications, from storing their data “in the cloud,” or from migrating some or all of their computing infrastructure to virtual machines hosted by cloud service providers. However, there’s actually an uptick in the uptake of cloud computing in all of its various incarnations.
We (consumers and businesses) are using “cloud” services for all of the following kinds of activities:
1. Accessing and downloading media.
2. Accessing and downloading mobile apps.
3. Accessing and running business applications (CRM, hiring, ecommerce, logistics, provisioning, etc.).
4. Collaborating with colleagues, clients, and customers (project management, online communities, email, meeting scheduling).
5. Analyzing large amounts of data.
6. Storing large amounts of data (much of it unstructured, like video, images, text files, etc.).
7. Developing and testing new applications and online services.
8. Running distributed applications that need high performance around
the globe. (All of the social media apps we use are essentially
“cloud” applications—they run on virtual machines hosted in mostly
3rd-party data centers all over the world.)
9. Scaling our operations to handle seasonal and other peaking requirements—where we can take advantage of buying computing capabilities by the hour, rather than pre-paying for capacity we rarely need.
10. Backup and disaster recovery—keeping copies of our systems and data in remote locations, ready to run if a natural disaster impacts our normal operations.
In short,
“cloud computing” in all of its instantiations—Software as a
Service, Platform as a Service, Infrastructure as a Service, Cloud
Storage, Cloud Computing, etc.—is here to stay. Taking advantage of
the cloud (virtual computers running software in data centers
distributed around the globe) is the most scalable and the most
cost-effective way to provide computing resources and
services to anyone who has reliable access to high bandwidth
networking via the Internet.
What About Security and Backup?
Most of us now realize that we’re responsible for the security and
integrity of our information no matter where it sits on the planet.
And we are better off if we have more than one copy of anything
that’s really important.
SaaS and
cloud providers have had a lot of experience helping IT organizations
migrate some or all of their computing and/or storage to the
cloud. And most of them report that many IT organizations’ data security practices leave quite a bit to be desired before the migration. Their customers’ data security and integrity typically improve dramatically as a result of re-thinking their requirements and implementing better policies and practices as they migrate some or all of their computing.
(Just because data is in your own physical data center doesn’t mean
it’s safe!)
It’s Time to Run Around in Front of the Cloud Parade
We’re now committed to living in the mobile Internet era. We treasure
our mobility and our unfettered access to information,
applications, media, and services. Cloud computing, in all its
forms, is here to stay. Small businesses and innovative service
providers have embraced cloud computing and services wholeheartedly
and are already reaping the benefits of “pay as you consume” for
software and computing and storage services. Medium-sized
businesses are the next to embrace cloud computing, because they
typically don’t have the inertia and overhead that comes with a
huge centralized IT organization. Large enterprises’ IT
organizations are the last to officially accept cloud computing as a
safe and compliant alternative for corporate IT. Yet many
departments in those same large enterprise organizations have been
the early adopters of cloud computing for the development and
testing of new software products and for the departmental (or even
corporate) adoption of SaaS for many of their companies’ most
critical applications.
Flash memory is the dominant nonvolatile (retaining information when
unpowered) memory thanks to its appearance in solid-state drives (SSDs)
and USB flash drives. Despite its popularity, it has issues when feature
sizes are scaled down to 30nm and below. In addition, flash has a
finite number of write-erase cycles and slow write speeds (on the order
of ms). Because of these shortcomings, researchers have been searching
for a successor even as consumers snap up flash-based SSDs.
There are currently a variety of alternative technologies competing to replace silicon-based flash memory, such as phase-change RAM (PRAM),
ferroelectric RAM (FERAM), magnetoresistive RAM (MRAM), and
resistance-change RAM (RRAM). So far, though, these approaches fail to scale down well to current process technologies—either the switching mechanism or the switching current performs poorly at the nanoscale. All of
them, at least in their current state of development, also lack some
commercially-important properties such as write-cycle endurance,
long-term data retention, and fast switching speed. Fixing these issues
will be a basic requirement for next-gen non-volatile memory.
Or,
as an alternative, we might end up replacing this tech entirely.
Researchers from Samsung and Sejong University in Korea have published a
paper in Nature Materials that describes tantalum oxide-based (TaOx) resistance-RAM (RRAM), which shows large improvements over current technology in nearly every respect.
RRAM devices work by applying a large enough voltage to switch material
that normally acts as an insulator (high-resistance state) into a
low-resistance state. In this case, the device is a sandwich structure
with a TaO2-x base layer and a thinner Ta2O5-x
insulating layer, surrounded by platinum (Pt) electrodes. This
configuration, known as metal-insulator-base-metal (MIMB), starts as an
insulator, but it can be switched to a low resistance, metal-metal
(filament)-base-metal (MMBM) state.
The nature of the switching process is not well understood in this case,
but the authors describe it as relying on the creation of conducting
filaments that extend through the Ta2O5-x layer.
These paths are created by applying sufficiently large voltages, which
drive the movement of oxygen ions through a redox (reduction-oxidaton)
process.
When in the MIMB state, the interface between the Pt electrode and the Ta2O5-x forms a metal-semiconductor junction known as a Schottky barrier, while the MMBM state forms an ohmic contact.
The main difference between these two is that the current-voltage
profile is linear and symmetric for ohmic but nonlinear and asymmetric
for Schottky. The presence of Schottky barriers is a benefit, as it
prevents stray current leakage through an array of multiple devices
(important for high-density storage).
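The contrast between the two contact types can be made concrete with a small sketch. The snippet below models the ohmic contact as a plain resistor and the Schottky barrier with the ideal-diode (thermionic emission) law; all parameter values here are illustrative assumptions for demonstration, not numbers from the paper.

```python
import math

def ohmic_current(v, resistance=1e3):
    """Ohmic contact (MMBM state): I-V is linear and symmetric in V."""
    return v / resistance

def schottky_current(v, i_sat=1e-9, n=1.5, vt=0.02585):
    """Schottky barrier (MIMB state): ideal-diode law, so the I-V curve
    is nonlinear and strongly asymmetric between forward and reverse bias.
    i_sat, ideality factor n, and thermal voltage vt are assumed values."""
    return i_sat * (math.exp(v / (n * vt)) - 1.0)

for v in (-0.3, 0.3):
    print(f"V = {v:+.1f} V  ohmic: {ohmic_current(v):+.2e} A  "
          f"Schottky: {schottky_current(v):+.2e} A")
```

Flipping the sign of the voltage flips the ohmic current exactly, while the Schottky current collapses to roughly the tiny saturation current under reverse bias. That asymmetry is what blocks the stray "sneak" currents through unselected cells in a dense array.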
The results presented by the authors appear to blow other memory
technologies out of the water, in pretty much every way we care about.
The devices presented here are 30nm thick, and the switching current is
50 µA—an order of magnitude smaller than that of PRAM. They also
demonstrated an endurance of greater than 10^12 switching cycles (higher than the previous best of 10^10, and six orders of magnitude higher than flash memory's 10^4-10^6).
The device has a switching time of 10ns, and a data retention time
estimated at 10 years when operating at 85°C. This type of RRAM
also appears to work without problems in a vacuum, unlike
previously demonstrated devices.
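A quick back-of-the-envelope check puts those endurance figures in perspective. The cycle counts below are the ones quoted above (taking the upper end of the flash range); the comparison itself is just base-10 arithmetic.

```python
import math

# Endurance figures as quoted in the article (write cycles before failure).
endurance = {
    "flash (upper bound)": 10**6,
    "previous RRAM best": 10**10,
    "TaOx RRAM (this work)": 10**12,
}

new_device = endurance["TaOx RRAM (this work)"]
for name, cycles in endurance.items():
    gap = round(math.log10(new_device / cycles))
    print(f"{name}: 10^{round(math.log10(cycles))} cycles, "
          f"{gap} orders of magnitude below the new device")
```

This reproduces the article's comparison: the new device sits six orders of magnitude above flash's best case and two above the previous RRAM record.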
This may all seem too good to be true—it should be emphasized that this
was only a laboratory-scale demonstration, with 64 devices in an array
(therefore capable of storing only 64 bits). There will still be a few
years of development needed before we see gigabyte-size drives based on
this RRAM memory.
As with all semiconductor device fabrication, advances in nanoscale
lithography techniques will be needed for large-scale manufacturing
and, in this particular case, a better understanding of the basic
switching mechanism is also needed. However, based on the results shown
here, this new memory technology shows promise as a universal
memory: the same devices could serve as both storage and working
memory.
In Milwaukee, the fourth-poorest city in America, educators have launched a "guerrilla classroom" initiative
that transforms urban locations into impromptu classrooms for parents
and children. Across Milwaukee, playgrounds, bus stops and parks now
feature interactive displays and game-like learning environments that
encourage interactions between parents and children and teach real-world
applications of classroom subjects. It's all part of a pro bono
effort by the marketing agency Cramer-Krasselt,
Milwaukee Public Schools, and COA Youth & Family Centers to raise
awareness about the positive impact of involving parents in the
education process. If successful, it's easy to see how similar groups of
determined "guerrilla educators" could extend this approach to other
cities.
The notion of the guerrilla classroom, of course, is an offshoot of guerrilla marketing, which in turn, is a capitalist expropriation of old school guerrilla warfare
tactics. The basic idea of any guerrilla marketing campaign
is to hit people with low-cost marketing and advertising messages in
the places where they least expect them. Realizing that we're saturated
with advertising messages across TV, billboards, and print publications,
guerrilla marketers get around the problem by taking to the streets.
Literally. It could involve anything from elaborate PR stunts in public
places to street teams of young teens handing out cold drink samples on
a hot summer day. The best guerrilla marketing campaigns, of course,
are those in which we are so totally ambushed that we don't even realize
we're being marketed to.
That's the brilliance of the Guerrilla Classroom initiative
- if it goes according to plan, these impromptu classrooms across the
city will draw in parents and children naturally. Waiting for that bus
for 15 minutes? Why not learn a little about mathematics while you wait?
Hanging out at the park? Why not take a minute to play a few little
word games that will help you read better? It's a well-known fact that
the greater the involvement of parents in the education process, the
better the results. The involvement doesn't have to occur at home or in a
formal classroom environment - it can now happen anywhere in the city,
at any time.
The Guerrilla Classroom initiative goes beyond the standard PSA message
that you might find around a city. These are not messages on
billboards telling parents and kids what to do -- these are
mini-classroom learning elements, distributed around the city. As a
result, the Guerrilla Classroom concept hits at one of the central
problems facing the urban parent: too little time to get involved in
the educational process. This is a depressing truth at both ends of the
socio-economic spectrum: minimum-wage workers are working too many
hours and multiple jobs to make ends meet, while urban elite parents
are typically too busy pursuing their career and social-climbing goals
to spend quality one-on-one time with their kids.
Innovators now have a host of new tools - from mobile devices to the
latest thinking about game mechanics - to extend this concept of the
Guerrilla Classroom and reach an even-wider audience. Based on the early
success of the Skillshare model,
for example, it's not hard to imagine a day when small "flash mobs" of
kids decide to meet up all over the city in "cool" places like skate
parks to teach each other foreign languages, music, painting, or even
more fundamental things - like how to cope with parents who suddenly
have shown a much greater interest in their educational development.