Entries tagged as technology
Friday, July 22. 2011
Via geek.com
-----

Ahh, Watson. Your performance on Jeopardy
let the world know that computers were about more than just storing and
processing data the way computers always have. Watson showed us all
that computers were capable of thinking in very human ways, which is
both an extremely exciting and mildly frightening prospect.
A research group at MIT
has been working on a project along the same lines — a computer that
can process information in a human-like manner and then apply what it’s
learned to a specific situation. In this case, the information was the
instruction manual for the classic PC game Civilization. After
reading the manual, the computer was ready to do battle with the game’s
AI. The result: 79% of the time, the computer was victorious.
This is an undeniably impressive development, but we’re clearly not
in any real danger until the computer decides to man up and play without reading the instructions like any real gamer
would. MIT tried that as well, and while a 46% success rate doesn’t
look all that good percentage-wise, it’s pretty darn amazing when you
remember this is a computer playing Civilization with no
orientation of any kind. I've got plenty of friends who couldn't
compete with that, though they all insist it's because the game was
boring and they hated it.
The ultimate goal of the project was to prove that computers were
capable of processing natural language the way we do — and actually
learning from it, not merely spitting out responses the way an
interactive voice response (IVR) system does, for example. A system like
this could one day power something like a tricorder, diagnosing
symptoms based on a cavernous cache of medical data. Don’t worry,
doctors, it’s going to be a while before computers actually replace you.
News@MIT
The research paper
Tuesday, July 19. 2011
Both Apple's and Microsoft's new desktop operating systems borrow elements from mobile devices, in sometimes confusing ways.
Apple is widely expected to release OS X Lion, a major update to its operating system for desktop and laptop computers, this week. Microsoft, meanwhile, is working on an even bigger overhaul of Windows, with a version called Windows 8.
Both new operating systems reflect a tectonic shift in personal computing. They incorporate elements from mobile operating systems alongside more conventional desktop features. But demos of both operating systems suggest that users could face a confusing mishmash of design ideas and interaction methods.
Windows 8 and OS X Lion include elements such as touch interaction and full-screen apps that will facilitate the kind of "unitasking" (as opposed to multitasking) that users have become accustomed to on mobile devices and tablets.
"The rise of the tablets, or at least the iPad, has suggested that there is a latent, unmet need for a new form of computing," says Peter Merholz, president of the user-experience and design firm Adaptive Path. However, he adds, "moving PCs in a tablet direction isn't necessarily sensible."
Cathy Shive, an independent software developer, would agree. She developed software for Mac desktop applications for six years before she switched and began developing for iOS (Apple's operating system for the iPhone and iPad). "When I first saw Steve Jobs's demo of Lion, I was really surprised—I was appalled, actually," she says.
Shive is surprised by the direction both Apple and Microsoft are taking. One fundamental dictate of usability design is that an interface should be tailored to the specific context—and hardware—in which it lives. A desktop PC is not the same thing as a tablet or a mobile device, yet in that initial demo, "It seemed like what [Jobs] was showing us was a giant iPad," says Shive.
A subsequent demonstration of Windows 8 by Microsoft vice president Julie Larson-Green confirmed that Redmond was also moving toward touch as a dominant interaction mechanism. One of the devices used in that demonstration, a "media tablet" from Taiwan-based ASUS, resembled an LCD monitor with no keyboard.
Not everyone is so skeptical about Apple and Microsoft's plans. Lukas Mathis, a programmer and usability expert, thinks that, on balance, this shift is a good thing. "If you watch casual PC users interact with their computers, you'll quickly notice that the mouse is a lot harder to use than we think," he says. "I'm glad to see finger-friendly, large user interface elements from phones and tablets make their way into desktop operating systems. This change was desperately needed, and I was very happy to see it."
Mathis argues that experienced PC users don't realize how crowded with "small buttons, unclear icons, and tiny text labels" typical desktop operating systems are.
Lion and Windows 8 solve these problems in slightly different ways. In Lion, file management is moving toward an iPhone/iPad-style model, where users launch applications from a "Launchpad," and their files are accessible from within those applications. In Windows 8, files, along with applications, bookmarks, and just about anything else, can be made accessible from a customizable start screen.
Some have criticized Mission Control, Apple's new centralized app and window management interface, saying that it adds complexity rather than introducing the simplicity of a mobile interface. At the other extreme, Lion allows any app to be rendered full-screen, which blocks out distractions but also forces users to switch applications more often than necessary.
"The problem [with a desktop OS] is that it's hard to manage windows," says Mathis. "The solution isn't to just remove windows altogether; the solution is to fix window management so it's easier to use, but still allows you to, say, write an essay in one window, but at the same time look at a source for your essay in a different window."
Windows 8, meanwhile, attempts to solve this problem in a more elegant way, with a "Windows Snap," which allows apps to be viewed side-by-side while eliminating the need to manage their dimensions by dragging them from the corner.
A problem with moving toward a touch-centric interface is that the mouse is absolutely necessary for certain professional applications. "I can't imagine touch in Microsoft Excel," says Shive. "That's going to be terrible."
The most significant difference between Apple's approach and Microsoft's is that Windows 8 will be the same OS no matter what device it's on, from a mobile phone to a desktop PC. To accommodate a range of devices, Microsoft has left intact the original Windows interface, which users can switch to from the full-screen start screen and full-screen apps option.
Merholz believes Microsoft's attempt to make its interface consistent across all devices may be a mistake. "Microsoft has a history of overemphasizing the value of 'Windows everywhere.' There's a fear they haven't learned appropriateness, given the device and its context," he says.
Shive believes the same could be said of Apple. "Apple has been seduced by their own success, and they're jumping to translate that over to the desktop ... They think there's some kind of shortcut, where everyone is loving this interface on the mobile device, so they will love it on their desktop as well," she says.
In a sense, both Apple and Microsoft are about to embark on a beta test of what the PC should be like in an era when consumers are increasingly accustomed to post-PC modes of interaction. But it could be a bumpy process. "I think we can get there, but we've been using the desktop interface for 30 years now, and it's not going to happen overnight," says Shive.
-----
Personal Comments:
From my personal point of view, and based on my 30 years of IT/development experience, I do not see the change in desktop look-and-feel as a crisis, but rather as a simple and efficient aesthetic evolution.
Why? Because what was built first for mobile phones, and then for newer mobile devices like tablets, is exactly what some people had been trying to do with laptop/desktop GUIs for years: make the GUI/desktop experience simple enough that computers become accessible to anyone, even the most reluctant to technology (see the evolution of the Windows and Linux GUIs). That goal was reached on mobile phones and devices in a very short time, pushing ordinary people to change devices every two years and letting them enjoy new functionality and technology without reading a single page of an instruction manual (by the way, mobile phones ship without one!).
It looks like technological constraints and restrictions were needed in order to invent this kind of interface. Touch-screen-only mobile phones were available for years before Apple produced its first iPhone (2007); remember the Sony Ericsson P800 (2002) and its successor, the P900 (2003). Technically everything was already there (they are close to the "classic" smartphone we carry in our pockets nowadays), but an efficient GUI, and more generally an efficient OS, was dramatically missing. What Apple did with iOS, Google with Android, and HTC with its Sense GUI on top of Android brought out and demonstrated the obvious potential of these mobile devices.
The adaptation of these GUIs/OSes to tablets (iOS, Android 3.0), still under the touch-only constraint, produced new GUI solutions while extending what can be done with a few basic finger gestures. It is not surprising that classic desktop/laptop computers are now trying to integrate the best of all this into their own environment, having failed to do so on their own before. I would even say this is an obvious step forward, as many of the ideas translate to the desktop world. For example, bringing the app-store concept to desktop computers makes installing applications easier: one no longer has to wonder whether an application is compatible with the local hardware; the store simply presents compatible applications, seamlessly.
So rather than entering a crisis or a revolution, I would say the desktop computer world will simply borrow from the mobile world whatever can be adapted, making the desktop experience as seamless for the end user as it is on mobile devices... but for some basic tasks only.
You can wrap a desktop OS in a very nice, simple box that makes it look as simple as a mobile device, but that is just gift wrapping, and it does not hold up for every use a desktop computer faces... which makes this step forward look like a set of cosmetic changes, and no more, because it simply cannot be more!
Today we are used to glorious declarations every time a new OS is offered to end users; many of the so-called "new" features are no more than existing ones, redesigned and pushed onto the stage to create the impression of a revolutionary OS: who can seriously consider full-screen apps or automatic saving to be key features of a 21st-century OS?
Let's go through some of the key new features announced by Apple in Mac OS X Lion:
- Multi-touch: this is not a new feature; it "just" maps some new functionality onto already-available multi-touch gestures.
- Full-screen management: it essentially attaches a virtual desktop to any application running full screen. You can thus switch to and from full-screen applications... the same way you could already switch from one virtual desktop to another.
- Launchpad: this is basically a graphical interface of shortcuts to the 'Applications' folder in the Finder. Granted, it looks like the app grid on a tablet or a mobile phone... but since the folder was already presented as a list, the other option was... guess what... a grid!
- Mission Control: this is also an evolution of something that already existed: the ability to see all your windows along with all your virtual desktops.
I'm pretty convinced these new features will be genuinely useful and pleasant to use, making the MacBook touchpad even more central, but I do not see here a real revolution, nor a crisis, in the way we will work on desktop/laptop computers.
Monday, July 18. 2011
Via OStatic
-----
Now
that Chromebooks--portable computers based on Google's Chrome OS--are
out in the wild and hands-on reviews are appearing about them, it's
easier to gauge the prospects for this new breed of operating system.
As Jon Buys discussed here on OStatic,
these portables are not without their strong points. However, there
are criticisms appearing about the devices, and the criticisms echo
ones that we made early on. In short, Chromebooks force a cloud-only
compute model that leaves people used to working with local files and
data in the cold.
In his OStatic post on Chromebooks, Jon Buys noted that the devices
are "built to be the fastest way to get to the web." Indeed, Chromebooks
boot in seconds--partly thanks to a Linux core--and deposit the user
into an environment that looks and works like the Chrome browser.
However, Cade Metz, writing for The Register, notes this in a review of the Samsung Chromebook, which is available for under $500:
"Running Google's Chrome OS operating system, Chromebooks
seek to move everything you do onto the interwebs. Chrome OS is
essentially a modified Linux kernel that runs only one native
application: Google's Chrome browser….This also means that most of your
data sits on the web, inside services like Google Docs…The rub is that
the world isn't quite ready for a machine that doesn't give you access
to local files."
That's exactly what I felt would be the shortcoming of devices based on Chrome OS, as seen in this post:
"With Chrome OS, Google is betting heavily on the idea
that consumers and business users will have no problem storing data and
using applications in the cloud, without working on the locally stored
data/applications model that they're used to. Here at OStatic, we always
questioned the aggressively cloud-centric stance that Chrome OS is
designed to take. Don't users want local applications too? Why don't I
just run Ubuntu and have my optimized mix of cloud and local apps? After
all, the team at Canonical, which has a few years more experience than
Google does in the operating system business, helped create Chrome OS. "
The interesting thing is, though, that Google may have an opportunity
to easily address this perceived shortcoming. For example, Chrome OS
now has a file manager, and it's not much of a leap to get from a file
manager to a reasonable set of ways to store local data and manage it.
Odds are that Google will change the way Chrome OS works going forward,
making it less cloud-centric. That would seem to be a practical course
of action.
Sunday, July 17. 2011
Via Forbes
-----
HTML5 is a hot topic, which is a good thing. The problem is that
99% of what’s been written has been about HTML5 replacing Flash. Why is
that a problem? Because not only is it irrelevant, it also prevents
you from seeing the big picture about interoperability.
But first things first. A few facts:
- You do not build a web site in Flash. The only way to build a
website is to use HTML pages, and then to embed Flash elements in them.
- Flash has been around for more than 12 years. It is a de facto
standard for the publishing industry. (No Flash = no advanced features
in banners).
- HTML5 does not officially exist (yet). Rather, it’s a specification in working draft, scheduled for publication in 2014.
- Less than half of installed browsers are HTML5 compliant, with different levels of compliance.
- The video element in HTML5 is perfect for basic video
players, but Flash and Silverlight are much more suitable for advanced
video features (streaming, captions, interactive features and
miscellaneous video effects).
These are not interpretations or opinions. These are facts. The truth is that writing about the agony of Flash is an easy way to draw readers, a much easier way than adopting a nuanced stance. And this is why we read so much garbage about HTML5 vs. Flash. (For an accurate description, please read HTML5 fundamentals.)
All this said, HTML5 will indeed replace Flash in certain circumstances, specifically light interface enhancements.
To explain this, we must go back in time: HTML’s specifications evolved
over 10 years, thus web developers wishing to offer an enhanced
experience had no choice but Flash. In recent years, we began to see
Flash used for custom fonts and transitions. But HTML has at last
evolved into HTML5 (and CSS3), which allow web designers to use custom
fonts, gradients, rounded corners and transitions, among other uses. So
in this particular case (light interface enhancements), Flash is rapidly
losing ground to a much more legitimate HTML5.
So if HTML5 is more suitable for light interface enhancements, this leaves room for Flash to do what it does best: heavy interface enhancements, vector-based animations, advanced video and audio features, and immersive environments.
To make a long story short: Flash has a ten-year head start over HTML. The technology isn't inherently better, but because it is owned by a single company, that company has full control over its pace of innovation. I have no doubt that one day HTML will have the same capabilities Flash has today, but in how many years? Don't get me wrong: not every site needs Flash or an equivalent RIA technology. Amazon, eBay, and Wikipedia built their audiences with classic HTML, as did millions of web sites.
So for the sake of precision: I am not an Adobe ambassador, nor am I a web-standards ayatollah. I am just a web enthusiast enjoying the best the web has to offer, whether powered by standard or proprietary technologies. Moreover, standardization is not a simple process, because what we refer to as standards (from MP3 and JPEG to H.264) are in fact technologies owned by private companies or consortiums.
Then there is the mobile argument. If iOS and Android provide users with an HTML5-compliant browser, what about BlackBerry? Symbian? WebOS? Feature phones? Low-cost tablets? If interoperability and wider reach are mandatory, then maybe the better way to achieve them is to focus on APIs consumed by multiple front-ends, rather than on a miraculously adaptive HTML5 front-end.
Of course, there are many technical arguments for one technology over the other. But the best and most important part is that you don’t have to choose between HTML5 and Flash because you can use both.
Maybe the best answer is to acknowledge that HTML5 and Flash have their
pros and cons and that you can use one or the other or both depending
on the experience you wish to provide, your ROI and SEO constraints, and
the human resources you access.
In short, it’s not a zero sum game. Rather, it’s a process of natural
evolution, where HTML is catching up while Flash is focusing on
advanced features (and narrowing, even as it consolidates, its market
share). Both are complementary. So please, stop comparing.
Thursday, July 14. 2011
Via Outside Innovation
-----
Over the past few months, there have been a number of notable service quality incidents and security breaches of online services, including Sony's PlayStation Network, Amazon's cloud service, Dropbox's storage in the cloud, and countless others. The bar talk around "cloud" computing and online services would have you think that businesses and consumers are shying away from hosted services, from Software as a Service (SaaS) applications, from storing their data "in the cloud," or from migrating some or all of their computing infrastructure to virtual machines hosted by cloud service providers. However, there's actually an uptick in the uptake of cloud computing in all of its various incarnations.
We (consumers and businesses) are using “cloud” services for all of the following kinds of activities:
1. Accessing and downloading media.
2. Accessing and downloading mobile apps.
3. Accessing and running business applications (CRM, hiring, ecommerce, logistics, provisioning, etc.).
4. Collaborating with colleagues, clients, and customers (project management, online communities, email, meeting scheduling).
5. Analyzing large amounts of data.
6. Storing large amounts of data (much of it unstructured, like video, images, text files, etc.).
7. Developing and testing new applications and online services.
8. Running distributed applications that need high performance around
the globe. (All of the social media apps we use are essentially
“cloud” applications—they run on virtual machines hosted in mostly
3rd-party data centers all over the world.)
9. Scaling our operations to handle seasonal and other peak requirements, taking advantage of buying computing capacity by the hour rather than pre-paying for capacity we rarely need.
10. Backup and disaster recovery: keeping copies of our systems and data in remote locations, ready to run if a natural disaster impacts our normal operations.
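The economics behind point 9 are easy to sketch. A back-of-the-envelope comparison, with entirely hypothetical prices:

```python
# Back-of-the-envelope comparison for point 9: renting peak capacity
# by the hour versus owning it year-round. All prices are hypothetical.

HOURS_PER_YEAR = 365 * 24  # 8760

def owned_cost(peak_servers, yearly_cost_per_server=2000):
    """Own enough servers for the peak and pay for them all year."""
    return peak_servers * yearly_cost_per_server

def cloud_cost(base_servers, peak_servers, peak_hours, hourly_rate=0.50):
    """Pay hourly: run the base load all year and rent the extra
    servers only for the hours the peak actually lasts."""
    base = base_servers * HOURS_PER_YEAR * hourly_rate
    burst = (peak_servers - base_servers) * peak_hours * hourly_rate
    return base + burst

# Example: 10 servers normally, 100 during a two-week seasonal peak.
peak_hours = 14 * 24
ownership = owned_cost(100)            # 200,000: peak capacity all year
pay_as_you_go = cloud_cost(10, 100, peak_hours)  # far less
```

With these made-up numbers, renting the burst capacity costs well under a third of owning it; the real ratio depends entirely on the provider's rates and how spiky the workload is.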
In short,
“cloud computing” in all of its instantiations—Software as a
Service, Platform as a Service, Infrastructure as a Service, Cloud
Storage, Cloud Computing, etc.—is here to stay. Taking advantage of
the cloud (virtual computers running software in data centers
distributed around the globe) is the most scalable and the most
cost-effective way to provide computing resources and
services to anyone who has reliable access to high bandwidth
networking via the Internet.
What About Security and Backup?
Most of us now realize that we’re responsible for the security and
integrity of our information no matter where it sits on the planet.
And we are better off if we have more than one copy of anything
that’s really important.
SaaS and cloud providers have had a lot of experience helping IT organizations migrate some or all of their computing and/or storage to the cloud. Most of them report that, before the migration, many IT organizations' data security practices leave quite a bit to be desired. Their customers' data security and integrity typically improve dramatically as a result of re-thinking requirements and implementing better policies and practices during the migration. (Just because data is in your own physical data center doesn't mean it's safe!)
It’s Time to Run Around in Front of the Cloud Parade
We’re now committed to living in the mobile Internet era. We treasure
our mobility and our unfettered access to information,
applications, media, and services. Cloud computing, in all its
forms, is here to stay. Small businesses and innovative service
providers have embraced cloud computing and services wholeheartedly
and are already reaping the benefits of “pay as you consume” for
software and computing and storage services. Medium-sized
businesses are the next to embrace cloud computing, because they
typically don’t have the inertia and overhead that comes with a
huge centralized IT organization. Large enterprises’ IT
organizations are the last to officially accept cloud computing as a
safe and compliant alternative for corporate IT. Yet many
departments in those same large enterprise organizations have been
the early adopters of cloud computing for the development and
testing of new software products and for the departmental (or even
corporate) adoption of SaaS for many of their companies’ most
critical applications.
Via ars technica
-----
Flash memory is the dominant nonvolatile (retaining information when
unpowered) memory thanks to its appearance in solid-state drives (SSDs)
and USB flash drives. Despite its popularity, it has issues when feature
sizes are scaled down to 30nm and below. In addition, flash has a
finite number of write-erase cycles and slow write speeds (on the order
of ms). Because of these shortcomings, researchers have been searching
for a successor even as consumers snap up flash-based SSDs.
There are currently a variety of alternative technologies competing to replace silicon-based flash memory, such as phase-change RAM (PRAM),
ferroelectric RAM (FERAM), magnetoresistive RAM (MRAM), and
resistance-change RAM (RRAM). So far, though, these approaches fail to
scale down to current process technologies well—either the switching
mechanism or switching current perform poorly at the nanoscale. All of
them, at least in their current state of development, also lack some
commercially-important properties such as write-cycle endurance,
long-term data retention, and fast switching speed. Fixing these issues
will be a basic requirement for next-gen non-volatile memory.
Or, as an alternative, we might end up replacing this tech entirely. Researchers from Samsung and Sejong University in Korea have published a paper in Nature Materials that describes tantalum oxide-based (TaOx) resistance RAM (RRAM), which shows large improvements over current technology in nearly every respect.
RRAM devices work by applying a large enough voltage to switch material
that normally acts as an insulator (high-resistance state) into a
low-resistance state. In this case, the device is a sandwich structure
with a TaO2-x base layer and a thinner Ta2O5-x
insulating layer, surrounded by platinum (Pt) electrodes. This
configuration, known as metal-insulator-base-metal (MIMB), starts as an
insulator, but it can be switched to a low resistance, metal-metal
(filament)-base-metal (MMBM) state.
The nature of the switching process is not well understood in this case,
but the authors describe it as relying on the creation of conducting
filaments that extend through the Ta2O5-x layer.
These paths are created by applying sufficiently large voltages, which
drive the movement of oxygen ions through a redox (reduction-oxidation)
process.
When in the MIMB state, the interface between the Pt electrode and the Ta2O5-x forms a metal-semiconductor junction known as a Schottky barrier, while the MMBM state forms an ohmic contact.
The main difference between these two is that the current-voltage
profile is linear and symmetric for ohmic but nonlinear and asymmetric
for Schottky. The presence of Schottky barriers is a benefit, as it
prevents stray current leakage through an array of multiple devices
(important for high-density storage).
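To make the distinction concrete, here is a small numeric sketch of the two contact types, using the textbook ideal-diode form for the Schottky barrier. The parameter values are arbitrary and purely illustrative, not taken from the paper:

```python
import math

# Illustrative I-V behavior of the two contact types described above.
# Parameter values are arbitrary; they only demonstrate the shapes.

def ohmic_current(v, conductance=1e-3):
    """Ohmic contact: current is linear and symmetric in voltage."""
    return conductance * v

def schottky_current(v, i_sat=1e-9, v_thermal=0.0257):
    """Schottky barrier, ideal-diode form: nonlinear and asymmetric.
    Conducts under forward bias; only tiny leakage under reverse."""
    return i_sat * (math.exp(v / v_thermal) - 1.0)

# Ohmic current simply flips sign with voltage (symmetric)...
assert math.isclose(ohmic_current(0.3), -ohmic_current(-0.3))
# ...while the Schottky contact passes orders of magnitude more
# forward than reverse current, which is what suppresses sneak-path
# leakage through neighboring cells in a dense array.
assert schottky_current(0.3) > 1e4 * abs(schottky_current(-0.3))
```

The second assertion is the array-friendly property: a cell read at forward bias conducts, while half-selected cells seeing reverse bias contribute almost nothing.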
The results presented by the authors appear to blow other memory
technologies out of the water, in pretty much every way we care about.
The devices presented here are 30nm in size, and the switching current is
50 µA, an order of magnitude smaller than that of PRAM. They also
demonstrated an endurance of greater than 10^12 switching cycles (higher than the previous best of 10^10 and six orders of magnitude higher than that of flash memory at 10^4-10^6).
The device has a switching time of 10ns, and a data retention time
that's estimated to be 10 years operating at 85°C. This type of RRAM
also appears to work without problems in a vacuum, unlike
previously-demonstrated devices.
This may all seem too good to be true—it should be emphasized that this
was only a laboratory-scale demonstration, with 64 devices in an array
(therefore capable of storing only 64 bits). There will still be a few
years of development needed before we see gigabyte-size drives based on
this RRAM memory.
As with all semiconductor device fabrication, advances will be needed to
improve nanoscale lithography techniques for large-scale manufacturing
and, in this particular case, a better understanding of the basic
switching mechanism is also needed. However, based on the results shown
here, this new memory technology shows promise for use as a universal
memory storage: the same type could be used for storage and working
memory.
Saturday, July 09. 2011
Via Businessweek
-----
The Real Veterans of Technology
To survive for more than 100 years, tech companies such as Siemens, Nintendo, and IBM have had to innovate
By Antoine Gara
When IBM turned 100 on June 16, it joined a surprising number of tech companies that have been evolving in order to survive since long before there was a Silicon Valley. Here's a sampling of tech centenarians, and some of the breakthroughs that helped fuel their longevity.
Siemens: Topical Press Agency/Getty Images; Western Union: Hulton Archive/Getty Images; Diebold: Diebold Inc; Ericsson: SSPL/Getty Images; Nintendo: Tim Whitby/Alamy; Kodak: George Eastman House
Monday, May 16. 2011
Via ubergizmo
-----

With the GreenChip, each bulb becomes a small networked device
NXP has just announced its GreenChip, which gives every light bulb
the potential of being connected to a TCP/IP network to provide
real-time information and receive commands, wirelessly. This feels a bit
like science-fiction talk, but NXP has managed to build a chip that is
low-cost enough to be embedded into regular light bulbs (and more in the
future) with an increase of about $1 in manufacturing cost. Obviously,
$1 is not small relative to the price of a bulb but, in absolute terms,
it’s not bad at all — and the cost is bound to fall steadily, thanks to
Moore’s law.
But what can you do with wirelessly connected bulbs? For one, you can dim them, or turn them on and off, using digital commands from any computer, phone, or tablet.
You can also do it remotely: these chips have the potential to make home automation much easier and more standard than anything that came before. Better home automation can also mean smarter (and automated) energy and money savings. The bulbs are also smart enough to know how much energy they have consumed.
Although the bulbs use internet addresses, they are not connected directly to the web. They don't use Wi-Fi either, because that protocol is too expensive and not energy-efficient enough for this usage. Instead, the bulbs are linked through a 2.4-GHz IEEE 802.15.4 network, and in standby mode the GreenChip consumes about 50mW.
The network itself is a mesh network connected to a "box" that is itself connected to your home network. Computers and mobile devices send commands to the box, which relays them to the bulbs. Because it is a mesh network, every bulb acts as a "network extender", so as long as no more than 30 meters separate two bulbs, the network can be extended across very large areas. In a typical house, that would mean no "dead spots".
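That "network extender" behavior can be sketched as a simple flood-fill over bulbs within radio range. The coordinates and the model below are illustrative only, not NXP's actual routing:

```python
# Toy model of the mesh described above: a bulb is reachable if a chain
# of bulbs, each within 30 m of the next, links it back to the gateway
# box. Real 802.15.4 meshes route packets; this only checks reach.
import math
from collections import deque

RANGE_M = 30.0

def reachable(box, bulbs):
    """Return the indices of bulbs connected to the box via hops
    of at most RANGE_M (each bulb extends the network)."""
    def near(a, b):
        return math.dist(a, b) <= RANGE_M
    seen = {i for i, p in enumerate(bulbs) if near(box, p)}
    queue = deque(seen)
    while queue:
        i = queue.popleft()
        for j, p in enumerate(bulbs):
            if j not in seen and near(bulbs[i], p):
                seen.add(j)
                queue.append(j)
    return seen

# A bulb 50 m from the box is out of direct range, but adding one
# bulb at 25 m extends the mesh all the way out to it.
assert reachable((0, 0), [(50, 0)]) == set()
assert reachable((0, 0), [(50, 0), (25, 0)]) == {0, 1}
```

This is why a house full of bulbs tends to have no dead spots: every additional bulb enlarges the set of positions the mesh can cover.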
The first products will be manufactured by TCP, which makes about one million efficient light bulbs (of all sorts) per day and supplies other brands like Philips and GE. The prices of the final products have yet to be determined, but NXP expects them to be attractive to consumers.
Of course, we need to see what the applications will look like too.
This is an interesting first step in embedding low-cost smart chips
in low-cost goods. Yet, this is a critical step in creating a smarter
local energy grid in our homes.
Thursday, May 05. 2011
Via Intel
-----

Intel’s WiDi technology is an interesting feature on some notebooks that allows the user to wirelessly shoot video from the computer over to another screen in range of the wireless transmission. WiDi stands for wireless display, and it supports 1080p video when used with machines that are running the new second-generation Core processors. Intel has announced that it has now updated WiDi to a later version, and the new version has some cool features that weren’t available before.
The release notes for the new 2.1 update tell what the added features are. Version 2.1 now has a unified 32-bit/64-bit installer using a single file. The new version will stream video at up to 1080p resolution with hardware-based H.264 encoding. The service now supports 802.11n PAN at 2.4GHz and 5GHz. Intel HD Graphics 3000-based hardware encoding is supported with updated graphics drivers. Version 2.1 also brings the ability to view HDCP 2.0 content, with support for DVD, Blu-ray, and some online protected content.
Other new features include 6-channel 16-bit/48 kHz LPCM sound output
with playback application support. The latency in the new version has
dropped to under 300ms as well. WiDi will also now detect ISDB-T and
ISDB-S TV Tuners. You can download the update directly from Intel for
WiDi right now and enjoy shooting your wireless content over to a TV or
bigger computer screen.