Through this signage at Promenade Temecula, the mall is notifying shoppers that their phones may be tracked as they move throughout the premises.
NEW YORK (CNNMoney) -- Attention holiday shoppers: your cell phone may be tracked this year.
Starting on Black Friday and running through New Year's Day, two U.S. malls -- Promenade Temecula in southern California and Short Pump Town Center in Richmond, Va. -- will track guests' movements by monitoring the signals from their cell phones.
While the data that's collected is anonymous, it can follow shoppers' paths from store to store.
The goal is for stores to answer questions like: How many Nordstrom shoppers also stop at Starbucks? How long do most customers linger in Victoria's Secret? Are there unpopular spots in the mall that aren't being visited?
While U.S. malls have long tracked how crowds move throughout their stores, this is the first time they've used cell phones.
But obtaining that information comes with privacy concerns.
The management company of both malls, Forest City Commercial Management, says personal data is not being tracked.
"We won't be looking at singular shoppers," said Stephanie Shriver-Engdahl, vice president of digital strategy for Forest City. "The system monitors patterns of movement. We can see, like migrating birds, where people are going to."
Still, the company is preemptively notifying customers by hanging small signs around the shopping centers. Consumers can opt out by turning off their phones.
The tracking system, called FootPath Technology, works through a series of antennas positioned throughout the shopping center that capture the unique identification number assigned to each phone (similar to a computer's IP address), and tracks its movement throughout the stores.
The system can't take photos or collect data on what shoppers have purchased. And it doesn't collect any personal details associated with the ID, like the user's name or phone number. That information is fiercely protected by mobile carriers, and often can be legally obtained only through a court order.
"We don't need to know who it is and we don't need to know anyone's cell phone number, nor do we want that," Shriver-Engdahl said.
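As a rough illustration of the kind of aggregate, path-level analysis described above, here is a minimal sketch; the hashed IDs, zone names, and data layout are invented for the example and are not details of FootPath itself:

```python
from collections import Counter

def transition_counts(sightings):
    """Count store-to-store transitions from anonymized phone sightings.

    `sightings` maps a hashed device ID to the time-ordered list of
    zones where its signal was picked up. The IDs are never inspected
    as identities; they only serve to stitch one path together.
    """
    transitions = Counter()
    for path in sightings.values():
        for a, b in zip(path, path[1:]):
            if a != b:  # ignore repeated pings from the same zone
                transitions[(a, b)] += 1
    return transitions

# Example: two anonymous shoppers moving through the mall
sightings = {
    "a3f9": ["Nordstrom", "Nordstrom", "Starbucks", "Food Court"],
    "77bc": ["Nordstrom", "Starbucks"],
}
print(transition_counts(sightings)[("Nordstrom", "Starbucks")])  # 2
```

This is the sense in which the operator can see "migrating birds": which routes are common, without knowing who took them.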
Manufactured by a British company, Path Intelligence, this technology has already been used in shopping centers in Europe and Australia. And according to Path Intelligence CEO Sharon Biggar, hardly any shoppers decide to opt out.
"It's just not invasive of privacy," she said. "There are no risks to privacy, so I don't see why anyone would opt out."
Now, U.S. retailers including JCPenney (JCP, Fortune 500) and Home Depot (HD, Fortune 500) are also working with Path Intelligence to use its technology, Biggar said.
Home Depot has considered implementing the technology but is not currently using it in any stores, a company spokesman said. JCPenney declined to comment on its relationship with the vendor.
Some retail analysts say the new technology is nothing to be worried about. Malls have been tracking shoppers for years through people counters, security cameras, heat maps and even undercover researchers who follow shoppers around.
And some even say websites that track online shoppers are more invasive, recording not only a user's name and purchases, but then targeting them with ads even after they've left a site.
"It's important for shoppers to realize this sort of data is being collected anyway," Biggar said.
Whereas a website can track a customer who doesn't make a purchase, physical stores have been struggling to perfect this kind of research, Biggar said. By combining the data from FootPath with their own sales figures, stores will have better measurements to help them improve the shopping experience.
"We can now say, you had 100 people come to this product, but no one purchased it," Biggar said. "From there, we can help a retailer narrow down what's going wrong."
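Biggar's example boils down to a simple visits-versus-sales ratio per zone. A minimal sketch, with invented zone names and counts:

```python
def conversion_report(visits, sales):
    """Combine FootPath-style visit counts with a store's own sales
    figures. Both arguments map zone name -> count; the result maps
    zone name -> conversion rate, flagging displays that draw lots of
    shoppers but produce no purchases."""
    return {zone: (sales.get(zone, 0) / n if n else 0.0)
            for zone, n in visits.items()}

report = conversion_report({"new display": 100, "checkout aisle": 80},
                           {"new display": 0, "checkout aisle": 24})
print(report["new display"])     # 0.0 – 100 visits, nothing purchased
print(report["checkout aisle"])  # 0.3
```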
But some industry analysts worry about the broader implications of this kind of technology.
"Most of this information is harmless and nobody ever does anything nefarious with it," said Sucharita Mulpuru, retail analyst at Forrester Research. "But the reality is, what happens when you start having hackers potentially having access to this information and being able to track your movements?"
Last year, hackers hit AT&T, exposing the unique ID numbers and e-mail addresses of more than 100,000 iPad 3G owners. To make it harder for hackers to get at this information, Path Intelligence scrambles those numbers twice.
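Path Intelligence hasn't published how it scrambles the numbers; one standard way to achieve the same goal is repeated salted hashing, sketched here purely as an illustration:

```python
import hashlib

def scramble_twice(device_id: str, salt: bytes) -> str:
    """Hash a phone's identifier twice with a secret salt. The stored
    token can still be matched against later sightings of the same
    phone, but cannot feasibly be reversed to recover the number."""
    first = hashlib.sha256(salt + device_id.encode()).digest()
    return hashlib.sha256(salt + first).hexdigest()

token = scramble_twice("310150123456789", b"secret-salt")
assert token == scramble_twice("310150123456789", b"secret-salt")  # stable
assert token != scramble_twice("310150123456789", b"other-salt")   # salt-dependent
```

Without the salt, even an attacker who steals the tokens cannot link them back to real device IDs by brute force over known numbers.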
"I'm sure as more people get more cell phones, it's probably inevitable that it will continue as a resource," Mulpuru said. "But I think the future is going to have to be opt in, not opt out."
Personal comment:
One step further. I guess we have to be thankful to be given the ability to opt out of the system by 'just' switching off our cell phones!
The Kinect for Windows team has announced that Microsoft is releasing PC-specific Kinect hardware in 2012, along with a Kinect SDK that will enable developers to build all sorts of apps for Kinect. Will the emergence of motion-based interfaces eventually overtake touchscreens in terms of intuitiveness in computing user interfaces?
Microsoft earlier announced the beta release of the Kinect for Windows SDK, which enabled hackers and enthusiasts to build programs that can take advantage of the Kinect’s motion-sensing capabilities. Just recently, Kinect for Windows GM Craig Eisler announced that the company’s commercial program will launch in 2012, and will involve not only software but also new Kinect hardware specifically designed for personal computers. The new hardware will take advantage of computers’ different capabilities, and address the drawbacks of using the Kinect sensor meant for Microsoft’s Xbox 360 console.
In particular, the benefits of the upcoming hardware will include the following:
Shorter USB cable to ensure reliable communication between the Kinect sensor and the different computer hardware. A small dongle will be included as a USB hub, so that other USB peripherals can also work alongside Kinect.
New “Near Mode” that changes the minimum sensing distance from the usual 6 to 8 feet down to 40 to 50 cm, so that users sitting in front of their computers can still manipulate elements through movement, even at close range. Firmware optimizations will also make the Kinect more sensitive and responsive. It is hoped that these changes can lead to various new applications like virtual keyboards, hand gestures and even the use of facial expressions to manipulate programs on a computer.
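As a toy illustration of why that minimum range matters for seated users, here is a sketch that filters depth readings against the two cutoffs mentioned above; the thresholds and the function itself are invented for the example, not part of the Kinect SDK:

```python
def usable_depths(samples_m, near_mode=False):
    """Keep only depth readings (in metres) inside the sensor's valid
    range. Cutoffs are illustrative: roughly 6 ft (~1.8 m) for the
    standard mode described above, ~0.4 m for "Near Mode"."""
    min_range = 0.4 if near_mode else 1.8
    return [d for d in samples_m if d >= min_range]

desk_user = [0.55, 0.6, 2.1]  # hands near the screen, torso further back
print(usable_depths(desk_user))                   # only the 2.1 m reading survives
print(usable_depths(desk_user, near_mode=True))   # all three readings survive
```

In standard mode the user's hands are simply invisible to the sensor; Near Mode is what makes desk-distance gestures possible at all.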
Microsoft is also launching a new Kinect Accelerator business incubator through Microsoft BizSpark, through which the company will actively provide assistance to 10 companies in using the Kinect as a development platform. Microsoft sees businesses and end-users taking advantage of Kinect in more than just gaming. This can include developing special apps for people with disabilities, special needs, or injuries, and can also open the floodgates to other gesture and motion-based UI engineering.
Will Kinect be the next big thing in user experience engineering after multi-touch touchscreens and accelerometers?
According to Nieman Journalism Lab, a graduate student at MIT is developing a way to check political writing for lies as easily as you check for spelling errors.
In a partnership with PolitiFact, Dan Schultz is looking to “bridge the gap between the corpus of facts and the actual media consumption experience.” That’s a lot of words to say this – it will know when you’re full of crap.
The project is using natural language processing to verify facts, via an API, against the information contained in PolitiFact. That is to say, it can’t tell a lie from the truth on its own; rather, it matches phrases against statements that have already been fact-checked. Sometime next year, when the project is finished, Schultz plans to open-source it, and its abilities should grow from there.
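Schultz hasn't released his code, but the matching step he describes might be caricatured like this, using a tiny invented stand-in for the PolitiFact corpus and naive string similarity in place of real NLP:

```python
import difflib

# A tiny invented stand-in for the PolitiFact corpus: claim -> ruling.
FACT_CHECKS = {
    "the stimulus created zero jobs": "False",
    "unemployment is at its highest level since the depression": "Half True",
}

def check_sentence(sentence, threshold=0.6):
    """Find the fact-checked claim closest to `sentence` and return its
    ruling, or None if nothing is similar enough. Real claim matching
    is far more involved than character-level similarity."""
    best, best_score = None, threshold
    for claim, ruling in FACT_CHECKS.items():
        score = difflib.SequenceMatcher(None, sentence.lower(), claim).ratio()
        if score > best_score:
            best, best_score = ruling, score
    return best

print(check_sentence("The stimulus created zero jobs."))  # False
print(check_sentence("The weather is nice today."))       # None – no match
```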
As NJL posits, and we hope this to be the eventual truth, Schultz’s work could eventually end up being built into software that would scan sites such as Snopes, allowing you to easily debunk claims that so often get passed around as facts on the Internet.
“I’m very interested in looking at ways to trigger people’s critical abilities so they think a little bit harder about what they’re reading…before adopting it into their worldview.”
For a bit less B.S. in this world? Here’s hoping that this project gets the time, attention and money that it will undoubtedly need.
The Kilobots are an inexpensive system for testing synchronized and collaborative behavior in a very large swarm of robots. Photo courtesy of Michael Rubenstein
The Kilobots are coming. Computer scientists and engineers at Harvard University have developed and licensed technology that will make it easy to test collective algorithms on hundreds, or even thousands, of tiny robots.
Called Kilobots, the quarter-sized bug-like devices scuttle around on three toothpick-like legs, interacting and coordinating their own behavior as a team. A June 2011 Harvard Technical Report demonstrated a collective of 25 machines implementing swarming behaviors such as foraging, formation control, and synchronization.
Once up and running, the machines are fully autonomous, meaning there is no need for a human to control their actions.
The communicative critters were created by members of the Self-Organizing Systems Research Group led by Radhika Nagpal, the Thomas D. Cabot Associate Professor of Computer Science at the Harvard School of Engineering and Applied Sciences (SEAS) and a Core Faculty Member at the Wyss Institute for Biologically Inspired Engineering at Harvard. Her team also includes Michael Rubenstein, a postdoctoral fellow at SEAS; and Christian Ahler, a fellow of SEAS and the Wyss Institute.
Thanks to a technology licensing deal with the K-Team Corporation, a Swiss manufacturer of high-quality mobile robots, researchers and robotics enthusiasts alike can now take command of their own swarm.
One key to achieving high-value applications for multi-robot systems in the future is the development of sophisticated algorithms that can coordinate the actions of tens to thousands of robots.
"The Kilobot will provide researchers with an important new tool for understanding how to design and build large, distributed, functional systems," says Michael Mitzenmacher, Area Dean for Computer Science at SEAS.
The name "Kilobot" does not refer to anything nefarious; rather, it describes the researchers' goal of quickly and inexpensively creating a collective of a thousand bots.
Inspired by nature, such swarms resemble social insects, such as ants and bees, that can efficiently search for and find food sources in large, complex environments, collectively transport large objects, and coordinate the building of nests and other structures.
Due to reasons of time, cost, and simplicity, the algorithms being developed today in research labs are only validated in computer simulation or using a few dozen robots at most.
In contrast, the design by Nagpal's team allows a single user to easily oversee the operation of a large Kilobot collective, including programming, powering on, and charging all robots, all of which would be difficult (if not impossible) using existing robotic systems.
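To give a flavour of the kind of collective algorithm such a testbed is for, here is a toy synchronization rule (not the Kilobots' actual firmware) in which each robot nudges its internal clock toward the swarm average, much as a real robot might use the timing of its neighbours' infrared messages:

```python
def sync_step(phases, coupling=0.5):
    """One round of a toy synchronization rule: every robot moves its
    clock phase a fraction of the way toward the swarm average."""
    avg = sum(phases) / len(phases)
    return [p + coupling * (avg - p) for p in phases]

# Four robots start with clocks out of step
phases = [0.0, 0.3, 0.9, 0.5]
for _ in range(20):
    phases = sync_step(phases)

spread = max(phases) - min(phases)
print(spread < 1e-5)  # True – the clocks have converged
```

The interesting research questions start where this toy stops: real robots only see a few neighbours, messages are lossy, and there is no global average to consult.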
So, what can you do with a thousand tiny little bots?
Robot swarms might one day tunnel through rubble to find survivors, monitor the environment and remove contaminants, and self-assemble to form support structures in collapsed buildings.
They could also be deployed to autonomously perform construction in dangerous environments, to assist with pollination of crops, or to conduct search and rescue operations.
For now, the Kilobots are designed to provide scientists with a physical testbed for advancing the understanding of collective behavior and realizing its potential to deliver solutions for a wide range of challenges.
-----
Personal comment:
This reminds me of a project I worked on back in 2007, called "Variable Environment", which involved swarm-based robots called "e-pucks" developed at EPFL. The e-pucks reacted autonomously to human activity around them.
The future of augmented-reality technology is here - as long as you're a rabbit. Bioengineers have placed the first contact lenses containing electronic displays into the eyes of rabbits as a first step on the way to proving they are safe for humans. The bunnies suffered no ill effects, the researchers say.
The first version may only have one pixel, but higher-resolution lens displays, like those seen in Terminator, could one day be used as satnav enhancers showing you directional arrows, for example, or to flash up texts and emails - perhaps even video. In the shorter term, the breakthrough also means people suffering from conditions like diabetes and glaucoma may find they have a novel way to monitor their conditions.
In February, New Scientist revealed the litany of research projects underway in the field of contact lens enhancement. While one company has fielded a contact lens technology using a surface-mounted strain gauge to assess glaucoma risk, none have built in a display, or the lenses needed for focused projection onto the retina - and then tested it in vivo. They have now.
"We have demonstrated the operation of a contact lens display powered by a remote radiofrequency transmitter in free space and on a live rabbit," says a US and Finnish team led by Babak Parviz of the University of Washington in Seattle.
"This verifies that antennas, radio chips, control circuitry, and micrometre-scale light sources can be integrated into a contact lens and operated on live eyes."
The test lens was powered remotely using a 5-millimetre-long antenna printed on the lens to receive gigahertz-range radio-frequency energy from a transmitter placed ten centimetres from the rabbit's eye. To focus the light on the rabbit's retina, the contact lens itself was fabricated as a Fresnel lens - in which a series of concentric annular sections is used to generate the ultrashort focal length needed.
They found their lens LED glowed brightly up to a metre away from the radio source in free space, but needed to be 2 centimetres away when the lens was placed in a rabbit's eye and the wireless reception was affected by body fluids. All the 40-minute-long tests on live rabbits were performed under general anaesthetic and showed that the display worked well - and fluorescence tests showed no damage or abrasions to the rabbits' eyes after the lenses were removed.
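The free-space part of that result is consistent with simple inverse-square propagation. The Friis equation below is a textbook model, and the transmit power and antenna gains are placeholder values, not figures from the study:

```python
import math

def friis_received_power(pt_w, d_m, freq_hz, gain_tx=1.0, gain_rx=1.0):
    """Free-space received power via the Friis transmission equation.
    Illustrative only: real links through eye tissue attenuate far more."""
    lam = 3e8 / freq_hz  # wavelength in metres
    return pt_w * gain_tx * gain_rx * (lam / (4 * math.pi * d_m)) ** 2

# Power reaching the lens antenna at 1 m vs 2 cm, assuming a ~2 GHz carrier
p_far = friis_received_power(1.0, 1.0, 2e9)
p_near = friis_received_power(1.0, 0.02, 2e9)
print(round(p_near / p_far))  # 2500 – being 50x closer yields 50^2 more power
```

That factor of 2500 gives a feel for how much of the link budget body fluids absorbed: the lens needed all of it just to keep the LED lit.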
While making a higher resolution display is next on their agenda, there are uses for this small one, say the researchers: "A display with a single controllable pixel could be used in gaming, training, or giving warnings to the hearing impaired."
"This is clearly way off in the future. But we're aware of the research that is ongoing in this field and we're watching the technology's potential for biosensing and drug delivery applications in particular," says a spokesperson for the British Contact Lens Association in London.
The rush to make computers smaller and smaller has been going on for some time now, but we may have a winner, at least for now, in terms of the small computer race. It's called the Cotton Candy from FXI Tech, and though it just looks like your standard USB thumb drive, it turns out it's packing an entire very small computer in its tiny packaging.
The specs, admittedly, aren't anything truly spectacular, offering up a dual-core ARM Cortex-A9 on the processor end, backed up by an ARM Mali-400MP GPU, Wi-Fi and Bluetooth connectivity, USB and HDMI plugs and a microSD card slot, as well as its own Android operating system. But when you consider that it's all encased in a device that's the size of a basic key chain, well, suddenly the whole picture looks a lot more interesting.
What this is designed to do is hook into much larger displays, thanks to that HDMI plug, and allow you to perform many of your basic computer functions. You’ve got Bluetooth for the peripherals, microSD for the storage, cloud access from the Android app…it’s a very simple, very basic, but extremely portable setup. And, you can even hook it into another computer with the USB plug included, which in turn will let you borrow the peripherals hooked into that computer (great if you needed to print something, I’d say) to do the various jobs you want done.
And if you want an ultra-small computer to take with you most anywhere you go, Cotton Candy should be on hand in time for Christmas 2012, and the pricing is expected to land at the $200 mark, which isn’t half bad. Though it does make me wonder why most wouldn’t just buy a full on laptop for not too much more, especially if they buy used.
Still though, an ultra-small PC for an ultra-small price tag is in the offing, so what do you guys think? Will the Cotton Candy catch on? Or will we be seeing these go for half that or less just to clear them out? No matter what you think, we love hearing from you, so head on down to the comments section and tell us what you think!
A new installation at the Amsterdam Foam gallery by Erik Kessels takes a literal look at the digital deluge of photos online by printing out 24 hours' worth of uploads to Flickr. The result is rooms filled with over 1,000,000 printed photos, piled up against the walls.
There’s a sense of waste and a maddening disorganization to it all, both of which are apparently intentional. According to Creative Review, Kessels said of his own project:
“We’re exposed to an overload of images nowadays,” says Kessels. “This glut is in large part the result of image-sharing sites like Flickr, networking sites like Facebook, and picture-based search engines. Their content mingles public and private, with the very personal being openly and un-selfconsciously displayed. By printing all the images uploaded in a 24-hour period, I visualise the feeling of drowning in representations of other peoples’ experiences.”
Humbling, and certainly thought-provoking, Kessels' work challenges the notion that everything can and should be shared, which has become fundamental to the modern web. Then again, perhaps it's only wasteful and overwhelming when you print all the pictures and divorce them from their original context.
Focus in future will be on HTML5 as mobile world shifts towards non-proprietary open standards – and now questions will linger over use of Flash on desktop

Adobe is killing off development of its mobile Flash plugin, and laying off 750 staff as part of broader restructuring. Photograph: Paul Sakuma/AP
Mobile Flash is being killed off. The plugin that launched a thousand online forum arguments and a technology standoff between Apple and the format's creator, Adobe, will no longer be developed for mobile browsers, the company said in a note that will accompany a financial briefing to analysts.

Instead the company will focus on development around HTML5 technologies, which enable modern browsers to perform essentially the same functions as Flash but without relying on Adobe's proprietary technologies, and which can be implemented across platforms.
The existing plugins for the Android and BlackBerry platforms will be given bug fixes and security updates, the company said in a statement first revealed by ZDNet. But further development will end.
The decision also raises a question mark over the future of Flash on desktop PCs. Security vulnerabilities in Flash on the desktop have been repeatedly exploited to infect PCs in the past 18 months, while Microsoft has also said that the default browser in its forthcoming Windows 8 system, expected at the end of 2012, will not include the Flash plugin by default. Apple, which in the third quarter captured 5% of the world market, does not include Flash in its computers by default.
John Nack, a principal product manager at Adobe, commented on his personal blog (which does not necessarily reflect Adobe views) that: "Adobe saying that Flash on mobile isn't the best path forward [isn't the same as] Adobe conceding that Flash on mobile (or elsewhere) is bad technology. Its quality is irrelevant if it's not allowed to run, and if it's not allowed to run, then Adobe will have to find different ways to meet customers' needs."
Around 250m iOS (iPhone, iPod Touch and iPad) devices have been sold since 2007. There are no clear figures for how many are now in use. More recently Larry Page, chief executive of Google, said that a total of 190m Android devices have been activated. It is not clear how many of those include a Flash plugin in the browser.
"Our future work with Flash on mobile devices will be focused on enabling Flash developers to package native apps with Adobe Air for all the major app stores," Adobe said in the statement. "We will no longer adapt Flash Player for mobile devices to new browser, OS version or device configurations.

"Some of our source code licensees may opt to continue working on and releasing their own implementations. We will continue to support the current Android and PlayBook configurations with critical bug fixes and security updates."
The decision comes as Adobe plans to cut 750 staff, principally in North America and Europe. An Adobe spokesperson declined to give any figures for the extent of layoffs in the UK. The company reiterated its expectation that it will meet revenue targets for the fourth quarter.
The reversal by Adobe – and its decision to focus on the open HTML5 platform for mobile – brings to an end a long and tumultuous row between Apple and Adobe over the usefulness of Flash on the mobile platform. The iPhone launched in 2007 without Flash capability, as did the iPad in 2010.
Steve Jobs, then Apple's chief executive, and Apple's engineers insisted that Flash was a "battery hog" and introduced security and stability flaws; Adobe countered that it was broadly implemented in desktop PCs and used widely on the web.
Jobs's antagonism was partly driven, his biography reveals, by Adobe's reluctance after he rejoined Apple in 1996 to port its movie-editing programs to the Mac and to keep its Photoshop suite comparable on the Mac platform with the Windows one.
But Jobs also insisted that mobile Flash failed in the role of providing a good user experience, and also would restrict Apple's ability to push forward on the iOS platform. Studies of browser crash reports by Apple's teams showed that Flash was responsible for a significant proportion of user problems; Apple was also not satisfied that a Flash plugin would be available for the first iPhone in 2007 which would not consume more battery power than would be acceptable.
Jobs managed to persuade Eric Schmidt, then Google's chief executive and a member of the Apple board, to get YouTube to make videos available in the H.264 format without a Flash "wrapper", as was then used for the desktop implementation.
But the disagreements between Apple and Adobe intensified, especially when Android devices began appearing which did use the Flash plugin. Apple refused to use it, and banned apps from its App Store which tried to use or include Flash.
In "Thoughts on Flash", an open letter published by Jobs in April 2010, he asserted that "Flash was created during the PC era – for PCs and mice. Flash is a successful business for Adobe, and we can understand why they want to push it beyond PCs. But the mobile era is about low power devices, touch interfaces and open web standards – all areas where Flash falls short.

"New open standards created in the mobile era, such as HTML5, will win on mobile devices (and PCs too). Perhaps Adobe should focus more on creating great HTML5 tools for the future, and less on criticizing Apple for leaving the past behind."
The first human genome cost $3 billion to complete; now we can sequence the entire population of Chicago for the same price
The mythical "$1,000 genome" is almost upon us, said Jonathan Rothberg, CEO of sequencing technology company Ion Torrent, at MIT's Emerging Technology conference. If his prediction comes true, it will represent an astonishing triumph in rapid technological development. The rate at which genome sequencing has become more affordable is faster than Moore's law. (You can read a Q&A TR did with Rothberg earlier this year here, and a profile of his company here.)
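The "faster than Moore's law" claim is easy to check with back-of-the-envelope arithmetic; the 2001 and 2012 endpoints used below are approximate assumptions, not figures from the talk:

```python
import math

def halving_time_months(cost_start, cost_end, years):
    """How often a cost halved, on average, over the given period."""
    halvings = math.log2(cost_start / cost_end)
    return years * 12 / halvings

# $3 billion (first human genome, ~2001) down to a projected $1,000 (~2012)
t = halving_time_months(3e9, 1e3, 11)
print(round(t, 1))  # ~6.1 months per halving
```

Moore's law corresponds to a halving of cost per transistor roughly every 18 to 24 months, so sequencing costs have been falling about three times faster.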
"By this time next year [we'll be] sequencing human genomes as fast and cheap as bacterial genomes," said Rothberg. (Earlier, he'd commented that his company can now do an entire bacterial genome in about two hours.)
I was in the room on October 19 when he said it, and I would have thought it pure hubris were it not for Rothberg's incredible track record in this area, from founding successful previous-generation sequencing company 454 Life Sciences to recent breakthroughs made with the same technology he proposes will get us to the $1,000 genome.
The Personal Genome Machine is already showing up in clinical labs, even doctors' offices
The key to this breakthrough, says Rothberg, is that the PGM does not rely on conventional wet chemistry to sequence DNA. Instead, it works almost entirely through conventional microchip technology, which means Ion Torrent is leveraging decades of investment in conventional transistors and chips.
So what does the age of the $1,000 genome look like? Until we know what more of those genes actually correlate with, for most of us it won't be so different from the present.
"Right now [we] don't have very many correlations between those 3 billion base pairs [of the human genome] and outcomes or medicines," says Rothberg. He predicts it will take at least 10 years of clinical experiments with full genome sequencing to get us to the point where we can begin to unlock its value.
"And it will be 20 years before we understand cancer at same level as HIV and can come up with combinations of medicine [tailored] for each individual," says Rothberg.
The next generation of mobile processors has arrived in the form of the NVIDIA Tegra 3, formerly known as Project Kal-El, a quad-core chipset with aspirations to dominate the Android landscape in 2012 as the Tegra 2 dual-core processor dominated the majority of 2011. Though NVIDIA had already revealed many of the details before today on how Tegra 3 functions and is able to bring you, the consumer, more power, less battery consumption, and more effective workload distribution, this marks both the official naming of the chip and the official release of easy-to-digest videos on how Tegra 3 will affect the average mobile device user.
NVIDIA’s Tegra 3 chipset was covered in full detail by your humble narrator here on SlashGear just a few weeks ago in two posts: one on how there are actually [five cores, not just four], and another all about [Variable Symmetric Multiprocessing], aka vSMP. Note that NVIDIA had not yet revealed the final market name "Tegra 3" when those posts were published, instead still using the codename "Project Kal-El" to identify the chipset. The most important thing to take away from those posts is this: your battery life will be better, and the power demanded by your processor cores will be distributed more intelligently.
NVIDIA has provided a few videos that will explain again in some rather easy to process detail what we’re dealing with here in the Tegra 3. The first of these videos shows visually what cores use which amount of power as several different tasks are performed. Watch as a high-powered game uses all four cores while browsing a webpage might only use a single core. This is the power of Variable Symmetric Multiprocessing in action.
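As a caricature of the idea behind vSMP (the thresholds and core-selection rule here are invented for illustration, not NVIDIA's actual scheduler):

```python
def pick_cores(load, n_main_cores=4):
    """Toy variable symmetric multiprocessing: very light loads run on
    the low-power companion core alone; anything heavier wakes one to
    four main cores, and the companion core shuts off."""
    if load <= 0.1:  # idle / background sync
        return ["companion"]
    active = max(1, min(n_main_cores, round(load * n_main_cores)))
    return [f"main{i}" for i in range(active)]

print(pick_cores(0.05))  # ['companion'] – e.g. screen off, email sync
print(pick_cores(0.3))   # ['main0'] – light web browsing
print(pick_cores(1.0))   # all four main cores for a demanding game
```

The real scheduler works on the same principle: match the number and type of powered cores to the instantaneous workload, so the battery only pays for the performance actually in use.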
NVIDIA Tegra 3: Fifth Companion Core
Next there’s a demonstration of an upcoming game that would never have been able to exist on a mobile platform if it hadn’t been for NVIDIA’s new chip architecture and the power of a quad-core chipset – along with NVIDIA’s twelve GPU cores, of course. We had a look at this game back earlier this year in the first Glowball post – now we go underwater:
Glowball Video 2: Tegra 3 goes underwater
Finally there’s a lovely set of videos showing you exactly what it means for game developers and gamers to be working with the Tegra 3 chipset. The first video shows off how next-generation games are being made specifically for this chipset, developers working hand in hand with NVIDIA to optimize their games for the Tegra 3 so that gamers can get the most awesome experience in mobile history. Devour this, if you will:
NVIDIA Tegra 3: Developers bring Next-Generation Games to Mobile
You can also see several examples of the games in the video and how they’ve been improved in the Tegra 3 world. Riptide GP as well as Shadowgun have been reviewed and given hands-on videos by your humble narrator in the past – can’t wait for the enhanced versions! Next have a look at these games running side-by-side with their original versions. Make sure you’re sitting down, because you’re going to get pumped up.
Side-by-side Gameplay Competition vs Tegra 3
Down to the frames per second, this new chipset will change the world you live in as far as gaming goes. Of course it doesn’t stop there, but since gaming is one of the best ways to test a processor on this platform, one made with gaming in mind of course, you’ve got to appreciate the power. Have a peek at this tiny chart to see what we mean:
Then head over to the post from ASUS on what the very first hardware running the Tegra 3 will look like. It’s the ASUS Eee Pad Transformer Prime, a 10.1-inch tablet from the makers of the original Transformer, a device made to pummel the competition and usher in a whole new age in mobile computing. We look forward to the future, NVIDIA: bring on another year of complete and total annihilation of the competition!