Hackers from around the world have pooled more than $15,000 that they
hope will be enough to entice the smartest of their peers to break into
the new iPhone’s much-lauded fingerprint scanner.
The iPhone 5S, which was announced last week, features a fingerprint
scanner to unlock the device and make purchases. Apple has said that an
image of the fingerprint is not stored on the device, but only the data
to recognize the fingerprint when it is pressed on the sensor.
Security experts quickly grew suspicious after the product was
announced, though. It is, after all, far easier to change a compromised
password than a compromised fingerprint if the data were to get into the
wrong hands.
To test Apple’s security claims, hackers are taking the challenge global.
The campaign is being run through IsTouchIdHackedYet.com, where individuals can put up their own money to reward the winner of the challenge.
The amount currently being offered is in excess of $15,000.
To win the prize, someone must demonstrate that they can:
Lift a fingerprint from the iPhone 5S
Reproduce the fingerprint
Use the reproduced fingerprint to unlock the iPhone 5S in fewer than five attempts
It’s no small challenge.
Apple says the information gathered by the phone is not an image of
the fingerprint but an encrypted pile of data points that describe the
fingerprint. They’ve also said that the information is stored deep
within the phone and will be extremely difficult for anyone, including
Apple, to access, and that the fingerprint data will not be transmitted
from the phone in any capacity.
This effectively means that winners of the prize will most likely
dust the phone itself for fingerprints and replicate the finger by
creating some kind of physical cast of the print to use on the scanner.
I am pleased to announce that we released the Kinect for Windows
software development kit (SDK) 1.8 today. This is the fourth update to
the SDK since we first released it commercially one and a half years
ago. Since then, we’ve seen numerous companies using Kinect for Windows
worldwide, and more than 700,000 downloads of our SDK.
We build each version of the SDK with our customers in mind—listening
to what the developer community and business leaders tell us they want
and traveling around the globe to see what these dedicated teams do, how
they do it, and what they most need out of our software development
kit.
Kinect for Windows SDK 1.8 includes some key features and samples that the community has been asking for, including:
New background removal. An API removes the
background behind the active user so that it can be replaced with an
artificial background. This green-screening effect was one of the top
requests we’ve heard in recent months. It is especially useful for
advertising, augmented reality gaming, training and simulation, and
other immersive experiences that place the user in a different virtual
environment.
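To picture the green-screening effect, here is a minimal sketch in Python (not the SDK's actual API) of the compositing step, assuming you already have a per-pixel mask marking the active user; the array names and synthetic data are purely illustrative:

```python
import numpy as np

def composite_over_background(color_frame, user_mask, background):
    """Replace everything outside the user mask with an artificial backdrop.

    color_frame: HxWx3 uint8 camera image
    user_mask:   HxW boolean array, True where the active user was detected
                 (assumed to come from a background-removal step)
    background:  HxWx3 uint8 replacement backdrop
    """
    out = background.copy()
    out[user_mask] = color_frame[user_mask]   # keep only the user's pixels
    return out

# Illustrative usage with synthetic data
h, w = 480, 640
frame = np.zeros((h, w, 3), dtype=np.uint8)
mask = np.zeros((h, w), dtype=bool)
mask[100:400, 200:440] = True                 # pretend this region is the user
backdrop = np.zeros((h, w, 3), dtype=np.uint8)
backdrop[:] = (0, 120, 200)                   # a flat colored backdrop
composited = composite_over_background(frame, mask, backdrop)
```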
Realistic color capture with Kinect Fusion. A new
Kinect Fusion API scans the color of the scene along with the depth
information so that it can capture the color of the object along with
its three-dimensional (3D) model. The API also produces a texture map
for the mesh created from the scan. This feature provides a
full-fidelity 3D model of a scan, including color, which can be used for
full-color 3D printing or to create accurate 3D assets for games, CAD, and
other applications.
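Conceptually, coloring the scanned geometry amounts to projecting each vertex into the color image and sampling a pixel. The Python sketch below illustrates that idea with made-up pinhole intrinsics; the real Kinect Fusion pipeline additionally builds a texture map and handles occlusion and depth-to-color calibration:

```python
import numpy as np

def color_vertices(vertices, color_image, fx, fy, cx, cy):
    """Assign each 3D vertex a color by projecting it into the color camera.

    vertices:       Nx3 points in the color camera's frame (z > 0)
    color_image:    HxWx3 uint8 image from the color camera
    fx, fy, cx, cy: pinhole intrinsics (illustrative values, not a real
                    Kinect calibration)
    """
    h, w, _ = color_image.shape
    z = vertices[:, 2]
    # Project to pixel coordinates and clamp to the image bounds
    u = np.clip((fx * vertices[:, 0] / z + cx).astype(int), 0, w - 1)
    v = np.clip((fy * vertices[:, 1] / z + cy).astype(int), 0, h - 1)
    return color_image[v, u]                  # Nx3 per-vertex colors

# Illustrative call with synthetic data
verts = np.array([[0.0, 0.0, 1.0], [0.1, -0.05, 1.2]])
img = np.zeros((480, 640, 3), dtype=np.uint8)
colors = color_vertices(verts, img, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
```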
Improved tracking robustness with Kinect Fusion.
This algorithm makes it easier to scan a scene. With this update, Kinect
Fusion is better able to maintain its lock on the scene as the camera
position moves, yielding more reliable and consistent scanning.
HTML interaction sample. This sample demonstrates
implementing Kinect-enabled buttons, simple user engagement, and the use
of a background removal stream in HTML5. It allows developers to use
HTML5 and JavaScript to implement Kinect-enabled user interfaces, which
was not possible previously—making it easier for developers to work in
whatever programming languages they prefer and integrate Kinect for
Windows into their existing solutions.
Multiple-sensor Kinect Fusion sample. This sample
shows developers how to use two sensors simultaneously to scan a person
or object from both sides—making it possible to construct a 3D model
without having to move the sensor or the object! It demonstrates the
calibration between two Kinect for Windows sensors, and how to use
Kinect Fusion APIs with multiple depth snapshots. It is ideal for retail
experiences and other public kiosks that do not have an attendant
available to scan by hand.
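At its core, the calibration this sample demonstrates comes down to a rigid transform that maps the second sensor's coordinate frame into the first's; once that transform is known, the two depth snapshots can be fused into one cloud. A minimal Python sketch, with an invented example transform:

```python
import numpy as np

def merge_point_clouds(cloud_a, cloud_b, R, t):
    """Fuse two depth snapshots from two sensors facing the same object.

    cloud_a: Nx3 points in sensor A's coordinate frame
    cloud_b: Mx3 points in sensor B's coordinate frame
    R, t:    3x3 rotation and 3-vector translation mapping B's frame into
             A's, found once during calibration
    """
    cloud_b_in_a = cloud_b @ R.T + t          # express B's points in A's frame
    return np.vstack([cloud_a, cloud_b_in_a])

# Illustrative setup: sensor B faces sensor A from 2 m away, rotated 180 degrees
R = np.array([[-1.0, 0.0,  0.0],
              [ 0.0, 1.0,  0.0],
              [ 0.0, 0.0, -1.0]])
t = np.array([0.0, 0.0, 2.0])
merged = merge_point_clouds(np.random.rand(100, 3), np.random.rand(80, 3), R, t)
```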
Adaptive UI sample. This sample demonstrates how to
build an application that adapts itself depending on the distance
between the user and the screen—from gesturing at a distance to touching
a touchscreen. The algorithm in this sample uses the physical
dimensions and positions of the screen and sensor to determine the best
ergonomic position on the screen for touch controls as well as ways the
UI can adapt as the user approaches the screen or moves further away
from it. As a result, the touch interface and visual display adapt to
the user’s position and height, which enables users to interact with
large touch screen displays comfortably. The display can also be adapted
for more than one user.
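The distance-zoning idea can be pictured as a simple mapping from the user's distance to an interaction style. The thresholds in this Python sketch are invented for illustration; the actual sample derives its zones from the physical dimensions and positions of the screen and sensor:

```python
def interaction_mode(user_distance_m):
    """Pick an interaction style from the user's distance to the screen.

    Thresholds are illustrative, not taken from the SDK sample.
    """
    if user_distance_m < 0.8:
        return "touch"      # close enough to reach the touchscreen
    elif user_distance_m < 3.0:
        return "gesture"    # mid-range: hand cursor and gesture controls
    else:
        return "attract"    # far away: large visuals inviting approach

for d in (0.5, 1.5, 4.0):
    print(f"{d} m -> {interaction_mode(d)}")
```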
We also have updated our Human Interface Guidelines (HIG) with
guidance to complement the new Adaptive UI sample, including the
following:
Design a transition that reveals or hides additional information without obscuring the anchor points in the overall UI.
Design UI where users can accomplish all tasks for each goal within a single range.
My team and I believe that communicating naturally with computers
means being able to gesture and speak, just like you do when
communicating with people. We believe this is important to the evolution
of computing, and are committed to helping this future come faster by
giving our customers the tools they need to build truly innovative
solutions. There are many exciting applications being created with
Kinect for Windows, and we hope these new features will make those
applications better and easier to build. Keep up the great work, and
keep us posted!
MakerBot
is best known for its 3D printers, turning virtual products into real
ones, but the company’s latest hardware to go on sale, the MakerBot
Digitizer, takes things in the opposite direction. Announced back in March,
and on sale from today for $1,400, the Digitizer takes a real-world
object and, by spinning it on a rotating platform in front of a camera,
maps out a digital model that can then be saved, shared, modified, and
even 3D printed itself.
Although the process itself involves some complicated technology and
data-crunching, MakerBot claims that users themselves should be able to
scan in an object in just a couple of clicks. The company includes its
own MakerWare for Digitizer software, which creates files suitable for
the firm’s own 3D printers as well as generic 3D files for other hardware.
Calibration is a matter of dropping the included glyph block on the
rotating platter and having the camera run through some preconfigured
tests. After that, you center the object you’re hoping to scan,
select whether it’s light, medium, or dark in color, and then
wait until the process is done.
That takes approximately twelve minutes per
object, MakerBot says, so don’t think of this as the 3D scanner
equivalent of a photocopier. The camera itself is a 1.3-megapixel unit
paired with two Class 1 lasers for mapping out objects, and the
overall resolution is good for 0.5mm of detail and ±2.0mm of
dimensional accuracy. Maximum object size is 20.3cm in
diameter and the same in height.
Once you’ve actually run something through the scanner, the core grid
file can be shared directly from the app to Thingiverse.com, or edited
and combined with other 3D files to make a new object altogether.
The MakerBot Digitizer Desktop Scanner is available for order now, priced at $1,400.
GAMES CONSOLE AND PHONE MAKER Microsoft has hinted that it will bring its Xbox Kinect camera technology to its Windows Phone 8 mobile operating system following its acquisition of Nokia.
During a conference call on Tuesday, Microsoft operating systems
group VP Terry Myerson hinted that the firm will work with Nokia to
integrate Kinect camera technology into future Windows Phone devices.
Speaking about the Nokia buyout, Myerson said, "In the area of imaging, the Nokia Lumia 1020 has no equal. We are excited to bring this together with our Kinect camera technology to delight our customers."
Myerson didn't elaborate further, so it's unclear when Microsoft will
introduce this technology and what it will be capable of doing.
However, this isn't the first time we've heard about Kinect technology
possibly coming to Windows Phone devices. Previous rumours had
speculated that the integration would allow users of Windows Phone
devices to control their handsets using both voice and gestures.
It's possible that the Kinect integration could also enable more
immersive gaming on Windows Phone devices and facilitate deeper
integration with Microsoft's Xbox One games console.
While all this sounds promising, Kinect integration could also be a
bad thing for the Windows Phone ecosystem, with Microsoft having been
accused of spying on its users with Kinect in the wake of recent NSA
surveillance revelations.
Between
1981 and 1982, renowned photographer Ira Nowinski hiked all over the
Bay Area, taking hundreds of photos of arcades. In all, he snapped
around 700 images, and, in awesome news for retro gaming fans, many of
them are now available for viewing, courtesy of their acquisition by
Stanford University's library.
Once you're
done looking at the games, and in particular that cruisey arcade that's
nearly all cocktail units, get a load of the fashion. While
arcades still exist today, they sure don't have the same diversity of
clientèle you see here, like Mr. Texas on the Pac-Man cabinet up top.
A growing number of smartphones are shipping with NFC, or Near Field
Communication technology. This lets you send information between devices
by tapping them together. For example, you can share a photo with a
friend or make a mobile payment from a digital wallet app.
But a team of researchers is showing off a way you can transmit more than just data — you can also transmit power.
For instance, you could pair a low-power E Ink display with your
smartphone and send across pictures and enough power to flip through a
few of those images.
This lets you use the E Ink screen as a secondary, low-power display
for your smartphone. E Ink only uses power when you refresh the screen,
so you only need a tiny bit of power to display an image, and it can
then be displayed indefinitely without any additional power.
So if you have directions, a map, phone number, or a photo that you
want to be able to look at continuously without running down your
smartphone battery, you can tap the phone against the E Ink screen to
quickly charge the secondary display and then transfer a screenshot.
Then you can slide your phone back in your pocket while the phone
number, address, or other data stays on the screen.
You can’t transmit a lot of energy over an NFC connection
this way, so you’re not exactly going to wirelessly charge your iPod
touch using this kind of setup. But it’s an interesting demo of how NFC,
E Ink, and smartphones can work together.
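As a rough back-of-envelope illustration of why a brief tap can suffice, suppose the NFC link harvests on the order of 10 mW and a small E Ink refresh costs on the order of 10 mJ; both numbers are assumptions for illustration, not measurements from this demo:

```python
# Back-of-envelope energy budget for an NFC-powered E Ink display.
# Both figures below are illustrative assumptions, not measured values.
harvested_power_w = 0.010    # ~10 mW harvested over the NFC link (assumed)
refresh_energy_j = 0.010     # ~10 mJ per refresh of a small panel (assumed)

seconds_per_refresh = refresh_energy_j / harvested_power_w
print(f"~{seconds_per_refresh:.0f} s of contact per screen refresh")
# Once refreshed, the image persists with no further power draw,
# which is why a quick tap can be enough.
```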
The demo is courtesy of a team at Intel, the University of Massachusetts, and the University of Washington.
Lately there’s been a spate of articles about breakthroughs in
battery technology. Better batteries are important for any number
of reasons: electric cars, smoothing out variations in the power grid,
cell phones, and laptops that don’t need to be recharged daily.
All of these nascent technologies are important, but some of them
leave me cold, and in a way that seems important. It’s relatively easy
to invent new technology, but a lot harder to bring it to market. I’m
starting to understand why. The problem isn’t just commercializing a new
technology — it’s everything that surrounds that new technology.
Take an article like Battery Breakthrough Offers 30 Times More Power, Charges 1,000 Times Faster.
For the purposes of argument, let’s assume that the technology works;
I’m not an expert on the chemistry of batteries, so I have no reason to
believe that it doesn’t. But then let’s take a step back and think about
what a battery does. When you discharge a battery, you’re using a
chemical reaction to create electrical current (which is moving
electrical charge). When you charge a battery, you’re reversing that
reaction: you’re essentially taking the current and putting that back in
the battery.
So, if a battery is going to store 30 times as much energy and charge
1,000 times faster, that means that the wires that connect to it need to
carry 30,000 times more current at the same voltage. (Let’s set aside
questions like “faster than what?”; most batteries I’ve seen take
between two and eight hours to charge.) It’s reasonable to assume that a new battery
technology might be able to store electrical charge more efficiently,
but the charging process is already surprisingly efficient: on the order of 50% to 80%, but possibly much higher for a lithium battery.
So improved charging efficiency isn’t going to help much — if charging a
battery is already 50% efficient, making it 100% efficient only
improves things by a factor of two. How big are the wires for an
automobile battery charger? Can you imagine wires big enough to handle
thousands of times as much current? I don’t think Apple is going to make
any thin, sexy laptops if the charging cable is made from 0000 gauge
wire (roughly 1/2 inch thick, capacity of 195 amps at 60 degrees C). And
I certainly don’t think, as the article claims, that I’ll be able to
jump-start my car with the battery in my cell phone — I don’t have any
idea how I’d connect a wire with the current-handling capacity of a
jumper cable to any cell phone I’d be willing to carry, nor do I want a
phone that turns into an incendiary firebrick when it’s charged, even if
I only need to charge it once a year.
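For what it's worth, the 30,000 figure is just the product of the two claims: 30 times the energy delivered in one one-thousandth of the time. Here is the arithmetic with assumed baseline numbers (a hypothetical 10 Wh phone battery charged over two hours at 5 volts):

```python
# Worked version of the scaling argument, with assumed baseline numbers.
baseline_energy_wh = 10.0    # hypothetical phone battery capacity
baseline_hours = 2.0         # a typical charge time per the article
volts = 5.0                  # assume the charging voltage stays the same

baseline_amps = baseline_energy_wh / baseline_hours / volts   # 1 A

# Claimed battery: 30x the energy, charged 1,000x faster.
new_amps = (30 * baseline_energy_wh) / (baseline_hours / 1000) / volts

print(new_amps / baseline_amps)   # 30,000x the current
print(new_amps)                   # ~30,000 A at 5 V
```

At that scale, even the 0000-gauge cable mentioned above, rated at 195 amps, falls short by more than two orders of magnitude.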
Here’s an older article that’s much more in touch with reality: Battery breakthrough could bring electric cars to all.
The claims are much more limited: these new batteries deliver 2.5
times as much energy with roughly the same weight as current batteries.
But more than that, look at the picture. You don’t get a sense of the
scale, but notice that the tabs extending from the batteries (no doubt
the electrical contacts) are relatively large in relation to the
battery’s body, certainly larger in relation to the battery’s size than
the terminal posts on a typical auto battery. And even more, the
terminals are flat, which maximizes surface area, improving both
heat dissipation (a big issue at high current) and contact area (to
transfer power more efficiently). That’s what I like to see, and that’s
what makes me think that this is a breakthrough that, while less
dramatic, isn’t being over-hyped by irresponsible reporting.
I’m not saying that the problems presented by ultra-high capacity
batteries aren’t solvable. I’m sure that the researchers are well aware
of the issues. Sadly, I’m not so surprised that the reporters who wrote
about the research didn’t understand the issues, resulting in some
rather naive claims about what the technology could accomplish. I can
imagine that there are ways to distribute current within the batteries
that might solve some of the current carrying issues. (For example, high
terminal voltages with an internal voltage divider network that
distributes current to a huge number of cells). As we used to say in
college, “That’s an engineering problem” — but it’s an engineering
problem that’s certainly not trivial.
This argument isn’t intended to throw cold water on battery research,
nor is it really to complain about the press coverage (though it was
relatively uninformed, to put it politely, about the realities of moving
electrical charge around). There’s a bigger point here: innovation is
hard. It’s not just about the conceptual breakthrough that lets you put
30 times as much charge in one battery, or 64 times as many power-hungry
CPUs on one Google Glass frame. It’s about everything else that
surrounds the breakthrough: supplying power, dissipating heat,
connecting wires, and more. The cost of innovation has plummeted in
recent years, and that will allow innovators to focus more on the hard
problems and less on peripheral issues like design and manufacturing.
But the hard problems remain hard.
With the wearable device known as MYO,
there’s no need for the computer to see you to understand your
commands. Instead, this armband connects to your device – Mac and
Windows for now, Android and iOS soon – with Bluetooth and reads
gestures you make with your hand and arm through muscle fluctuations.
This armband is already out in the wild – the full “second wave” for the
public comes in early 2014.
While devices like Leap Motion
require that you stay within a pre-determined space that their sensors
can “see”, MYO works anywhere within Bluetooth range. This means
you’ll be kicking it at either 100 meters (330 ft) or 50 meters,
depending on whether you’re working with Bluetooth LE (Low Energy) –
LE works at the shorter range.
The armband is a “one size fits all” sort of situation, and the final
form hasn’t exactly taken shape as yet. The mock-ups you see here are
about as close as the team has gotten to a real final form, while demo
videos still show a slightly less finessed iteration.
The consumer edition of this device will have a
pre-set selection of gestures easily recognized by the sensors in the
device. As the developers at Thalmic Labs suggest, this is only done to make things simple right out of the box:
“We want you to have the best experience with your MYO!
In order to do this, at the general consumer level we are going to be
providing you with set gestures that are easily recognized. This is not
due to a technical limitation but instead will allow you to have a more
seamless user experience out of the box. You will then be able to decide
what each gesture represents.”
Thalmic Labs is a North America-based company founded in 2012, made up
of 21 creative minds and headed by Canadian
entrepreneur Stephen Lake. MYO is currently up for pre-order for $149
and again, ships in “early 2014.”
In the latest 3D-printed hardware news, NASA
has completed a series of test firings of the agency’s first rocket
engine part made entirely by 3D printing. The component in question is
the rocket engine’s injector, and it went through several hot-fire
tests using a mix of liquid oxygen and gaseous hydrogen.
However, NASA didn’t use the ABS plastic that most 3D printers use.
Instead, the agency used custom 3D printers that build up layers of
metallic powder and fuse them with lasers. The lasers trace a specific
pattern in each layer to produce the desired shape for an object. In
this case: a rocket engine injector.
The testing was done at NASA’s Glenn Research Center in Cleveland and
the project is in partnership with Aerojet Rocketdyne. The company
designed the injector and used 3D printing to make the component a
reality. Making the injector using traditional manufacturing processes
would take over a year.
With 3D printing now an option, NASA and Aerojet Rocketdyne are able
to make the same component in four months or less.
Costs are a huge factor too, and the 3D-printed version reduces costs by up to
70% compared to traditional methods and materials. This could lead to
more efficient and cost-effective manufacturing of rocket engines.
NASA didn’t say what was next for the 3D-printed injector as far as
testing goes, nor do they have a timeline for when they expect to
officially implement the new technology in future rocket engines. We can
only expect them to implement it sooner rather than later, but it could
take several more years until it can be fully operational and on its
way into space.
3D printing is awesome, yet it still has a lot of untapped potential -- you can use it to create terrifying spiderbots and even tiny drones,
but you can't make electronic components out of pools of plastic.
Thankfully, a team of North Carolina State University researchers have
discovered a mixture of liquid metal that can retain shapes, which could
eventually be used for 3D printing. Liquid metals naturally have the
tendency to merge, but an alloy of gallium and indium forms a thin
skin around the material. This allows researchers to create
structures by piling drops on top of each other using a syringe, as well
as to create specific shapes by using templates. The team is looking
for a way to use the mixture with existing 3D printing technologies, but
it might take some time before it’s widely used, as it currently costs 100
times more than plastic. We hope they address both issues in the near
future, so we can conjure up futuristic tech like bendy electronics, or
maybe even build a body to go with that artificial skin.