Thursday, May 02. 2013
Via Slash Gear
We’ve been hearing a lot about Google’s self-driving car lately, and we’re all probably wondering how exactly the search giant manages to build a car that can drive itself without hitting anything or anyone. A new photo has surfaced that demonstrates what Google’s self-driving vehicles see while they’re out on the town, and it looks rather frightening.
The image was tweeted by Idealab founder Bill Gross, along with a claim that the self-driving car collects almost 1GB of data every second (yes, every second). This data includes imagery of the car’s surroundings in order to effectively and safely navigate roads. The image shows that the car sees its surroundings through an infrared-like camera sensor, and it can even pick out people walking on the sidewalk.
Of course, 1GB of data every second isn’t too surprising when you consider that the car has to get a 360-degree image of its surroundings at all times. The image we see above even distinguishes different objects by color and shape. For instance, pedestrians are in bright green, cars are shaped like boxes, and the road is in dark blue.
However, we’re not sure where this photo came from, so it could simply be a rendering of someone’s idea of what Google’s self-driving car sees. Either way, Google says that we could see self-driving cars make their way to public roads in the next five years or so, which actually isn’t that far off, and Tesla Motors CEO Elon Musk is even interested in developing self-driving cars as well. However, they certainly don’t come without their problems, and we’re guessing that the first batch of self-driving cars probably won’t be in 100% tip-top shape.
Tuesday, March 19. 2013
At SXSW this afternoon, Google provided developers with a first glance at the Google Glass Mirror API, the main interface between Google Glass, Google’s servers and the apps that developers will write for them. In addition, Google showed off a first round of applications that work on Glass, including how Gmail works on the device, as well as integrations from companies like the New York Times, Evernote, Path and others.
The Mirror API is essentially a REST API, which should make developing for it very easy for most developers. The Glass device talks to Google’s servers; developers’ applications get the data from there and also push data to Glass through Google’s APIs. All of this data is then presented on Glass through what Google calls “timeline cards.” These cards can include text, images, rich HTML and video. Besides single cards, Google also lets developers use what it calls bundles, which are basically sets of cards that users can navigate using their voice or the touchpad on the side of Glass.
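Based on that description, here is a minimal sketch, in C++ with libcurl, of what pushing a single timeline card to a user might look like. Since Google hasn’t published the documentation yet, the endpoint path, the JSON field names and the OAuth token handling are assumptions rather than confirmed details.

```cpp
// Hedged sketch: POST one timeline card to the (assumed) Mirror API
// REST endpoint. An OAuth2 access token is presumed to have been
// obtained already; the token below is a placeholder.
#include <curl/curl.h>
#include <string>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    const std::string token = "ya29.PLACEHOLDER_TOKEN";    // hypothetical
    const std::string card =                               // one "timeline card"
        R"({"text": "Hello Glass!", "menuItems": [{"action": "READ_ALOUD"}]})";

    struct curl_slist *headers = nullptr;
    headers = curl_slist_append(headers, ("Authorization: Bearer " + token).c_str());
    headers = curl_slist_append(headers, "Content-Type: application/json");

    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://www.googleapis.com/mirror/v1/timeline"); // assumed path
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, card.c_str());          // JSON body

    CURLcode res = curl_easy_perform(curl); // Google's servers sync the card to Glass
    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}
```

Even in this toy example the architecture is visible: the app never talks to the headset directly, it only posts cards to Google’s servers, which handle delivery to the device.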
It looks like sharing to Google+ is a built-in feature of the Mirror API, but as Google’s Timothy Jordan noted in today’s presentation, developers can always add their own sharing options, as well. Other built-in features seem to include voice recognition, access to the camera and a text-to-speech engine.
Because Glass is a new and unique form factor, Jordan also noted, Google is setting a few rules for Glass apps. They shouldn’t, for example, show full news stories but only headlines, as everything else would be too distracting. For longer stories, developers can always just use Glass to read text to users.
Essentially, developers should make sure that they don’t annoy users with too many notifications, and the data they send to Glass should always be relevant. Developers should also make sure that everything that happens on Glass is something the user expects, said Jordan. Glass isn’t the kind of device, he said, where a push notification about an update to your app makes sense.
Using Glass With Gmail, Evernote, Path and Others
As part of today’s presentation, Jordan also detailed some Glass apps Google has been working on itself, and apps that some of its partners have created. The New York Times app, for example, shows headlines and then lets you listen to a summary of the article by telling Glass to “read aloud.” Google’s own Gmail app uses voice recognition to answer emails (and it obviously shows you incoming mail, as well). Evernote’s Skitch can be used to take and share photos, and Jordan also showed a demo of social network Path running on Glass to share your location.
So far, there is no additional information about the Mirror API on any of Google’s usual sites, but we expect the company to release more information shortly and will update this post once we hear more.
Monday, December 10. 2012
The Museum of Modern Art in New York (MoMA) last week announced that it is bolstering its collection of work with 14 videogames, and plans to acquire a further 26 over the next few years. And that’s just for starters. The games will join the likes of Hector Guimard’s Paris Metro entrances, the Rubik’s Cube, M&Ms and Apple’s first iPod in the museum’s Architecture & Design department.
The move recognises the design achievements behind each creation, of course, but despite MoMA’s savvy curatorial decision, the institution risks becoming a catalyst for yet another wave of awkward ‘are games art?’ blog posts. And it doesn’t exactly go out of its way to avoid that particular quagmire in the official announcement.
“Are video games art? They sure are,” it begins, worryingly, before switching to a more considered tack, “but they are also design, and a design approach is what we chose for this new foray into this universe. The games are selected as outstanding examples of interaction design — a field that MoMA has already explored and collected extensively, and one of the most important and oft-discussed expressions of contemporary design creativity.”
MoMA worked with scholars, digital conservation and legal experts, historians and critics to come up with its criteria and final list of games, and among the yardsticks the museum looked at for inclusion are the visual quality and aesthetic experience of each game, the ways in which the game manipulates or stimulates player behaviour, and even the elegance of its code.
That initial list of 14 games makes for convincing reading, too: Pac-Man, Tetris, Another World, Myst, SimCity 2000, Vib-Ribbon, The Sims, Katamari Damacy, Eve Online, Dwarf Fortress, Portal, flOw, Passage and Canabalt.
But the wishlist also extends to Spacewar!, a selection of Magnavox Odyssey games, Pong, Snake, Space Invaders, Asteroids, Zork, Tempest, Donkey Kong, Yars’ Revenge, M.U.L.E., Core War, Marble Madness, Super Mario Bros, The Legend Of Zelda, NetHack, Street Fighter II, Chrono Trigger, Super Mario 64, Grim Fandango, Animal Crossing, and, of course, Minecraft.
Art, design or otherwise, MoMA’s focused collection is an uncommonly informed and well-considered list. And their inclusion within MoMA’s hallowed walls, and the recognition of their cultural and historical relevance that is implied, is certainly a boon for videogames on the whole. But reactions to the move have been mixed. The Guardian’s Jonathan Jones posted a blog last week titled Sorry MoMA, Videogames Are Not Art, in which he suggests that exhibiting Pac-Man and Tetris alongside work by Picasso and Van Gogh will mean “game over for any real understanding of art”.
“The worlds created by electronic games are more like playgrounds where experience is created by the interaction between a player and a programme,” he writes. “The player cannot claim to impose a personal vision of life on the game, while the creator of the game has ceded that responsibility. No one ‘owns’ the game, so there is no artist, and therefore no work of art.”
While he clearly misunderstands the capacity of a game to manifest the personal – and singular – vision of its creator, he nonetheless raises valid fears that the creative motivations behind many videogames – predominantly commercially driven entertainment – are incompatible with those of serious art, and that their inclusion in established museums risks muddying its definition. But while many commentators have fallen into the same trap of invoking comparisons with cubist and impressionist painters, MoMA has drawn no such parallels.
“We have to keep in mind it’s the design collection that is snapping up video games,” Passage creator Jason Rohrer tells us when we put the question to him. “This is the same collection that houses Lego, teapots, and barstools. I’m happy with that, because I primarily think of myself as a designer. But sadly, even the mightiest games in this acquisition look silly when stood up next to serious works of art. I mean, what’s the artistic payload of Passage? ‘You’re gonna die someday.’ You can’t find a sentiment that’s more artistically worn out than that.”
But while he doesn’t see these games’ inclusion as a significant landmark – in fact, he even raises concerns over bandwagon-hopping – he’s still elated to have been included.
“I’m shocked to see my little game standing there next to landmarks like Pac-Man, Tetris, Another World, and… all of them really, all the way up to Canabalt,” he says. “The most pleasing aspect of it, for me, is that something I have made will be preserved and maintained into the future, after I croak. The ephemeral nature of digital-download video games has always worried me. Heck, the Mac version of Passage has already been broken by Apple’s updates, and it’s only been five years!”
Talking of Canabalt, creator Adam Saltsman echoes Rohrer’s sentiment: “Obviously it is a pretty huge honour, but I think it’s also important to note that these selections are part of the design wing of the museum, so Tetris won’t exactly be right next to Dali or Picasso! That doesn’t really diminish the excitement for me though. The MoMA is an incredible institution, and to have my work selected for archival alongside obvious masterpieces like Tetris is pretty overwhelming.”
MoMA’s not the only art institution with an interest in videogames, of course. The Smithsonian American Art Museum ran an exhibition titled The Art of Video Games earlier this year, while the Barbican has put its weight behind all manner of events, including 2002’s The History, Culture and Future of Computer Games, Ear Candy: Video Game Music, and the touring Game On exhibition.
Chris Melissinos, who was one of the guest curators who put the Smithsonian exhibition together and subsequently acted as an adviser to MoMA as it selected its list, doesn’t think such interest is damaging to art, or indeed a sign of out-of-step institutions jumping on the bandwagon. It’s simply, he believes, a reaction to today’s culture.
“This decision indicates that videogames have become an important cultural, artistic form of expression in society,” he told the Independent. “It could become one of the most important forms of artistic expression. People who apply themselves to the craft view themselves as [artists], because they absolutely are. This is an amalgam of many traditional forms of art.”
Of the initial selection, Eve is arguably the most ambitious, and potentially divisive, choice, but perhaps also the best placed to challenge Jones’ predispositions on experiential ownership and creative limitation. It is, after all, renowned for its vociferous, self-governing player community.
“Eve’s been around for close to a decade, is still growing, and through its lifetime has won several awards and achievements, but being acquired into the permanent collection of a world leading contemporary art and design museum is a tremendous honour for us,” Eve Online creative director Torfi Frans Ólafsson tells us. “Eve is born out of a strong ideology of player empowerment and sandbox openness, which especially in our earlier days was often at the cost of accessibility and mainstream appeal.
“Sitting up there along with industrial design like the original iPod, and fancy, unergonomic lemon presses tells us that we were right to stand by our convictions, so in that sense, it’s somewhat of a vindication of our efforts.”
But how do you present an entire universe to an audience that is likely to spend a few short minutes looking at each exhibit? Developer CCP is turning to its many players for help.
“We’ve decided to capture a single day of Eve: Sunday the 9th of December,” explains Ólafsson, “through a variety of player-made videos, CCP videos, massive data analysis and infographics.”
In presenting Eve in this way, CCP and the game’s players are collaborating on a strong, coherent vision of the alternative reality they’ve collectively helped to build, and more importantly, reinforcing and redefining the notion of authorship. It doesn’t matter whether you’re an apologist for videogames’ entitlement to the status of art, or someone who appreciates the aesthetics of their design; the important thing here is that their cultural importance is recognised. Sure, the notion of a game exhibit that doesn’t include gameplay might stick in the craw of some, but MoMA’s interest is clearly broader. Ólafsson isn’t too worried, either.
“Even if we don’t fully succeed in making the 3.5 million people that visit the MoMA every year visually grok the entire universe in those few minutes they might spend checking Eve out, I can promise you it sure will look pretty there on the wall.”
Wednesday, June 20. 2012
We've been writing a lot recently about how the private space industry is poised to make space cheaper and more accessible. But in general, this is for outfits such as NASA, not people like you and me.
Today, a company called NanoSatisfi is launching a Kickstarter project to send an Arduino-powered satellite into space, and you can send an experiment along with it.
Whether it's private industry or NASA or the ESA or anyone else, sending stuff into space is expensive. It also tends to take approximately forever to go from having an idea to getting funding to designing the hardware to building it to actually launching something. NanoSatisfi, a tech startup based out of NASA's Ames Research Center here in Silicon Valley, is trying to change all of that (all of it) by designing a satellite made almost entirely of off-the-shelf (or slightly modified) hobby-grade hardware, launching it quickly, and then using Kickstarter to give you a way to get directly involved.
ArduSat is based on the CubeSat platform, a standardized satellite framework that measures about four inches on a side and weighs under three pounds. It's just about as small and cheap as you can get when it comes to launching something into orbit, and while it seems like a very small package, NanoSatisfi is going to cram as much science into that little cube as it possibly can.
Here's the plan: ArduSat, as its name implies, will run on Arduino boards, which are open-source microcontrollers that have become wildly popular with hobbyists. They're inexpensive, reliable, and packed with features. ArduSat will be packing between five and ten individual Arduino boards, but more on that later. Along with the boards, there will be sensors. Lots of sensors, probably 25 (or more), all compatible with the Arduinos and all very tiny and inexpensive.
That's a lot of potential for science, but the entire Arduino sensor suite is only going to cost about $1,500. The rest of the satellite (the power system, control system, communications system, solar panels, antennae, etc.) will run about $50,000, with the launch itself costing about $35,000. This is where you come in.
NanoSatisfi is looking for Kickstarter funding to pay for just the launch of the satellite itself: the funding goal is $35,000. Thanks to some outside investment, it's able to cover the rest of the cost itself. And in return for your help, NanoSatisfi is offering you a chance to use ArduSat for your own experiments in space, which has to be one of the coolest Kickstarter rewards ever.
Now, NanoSatisfi itself doesn't really expect to get involved with a lot of the actual experiments that the ArduSat does: rather, it's saying "here's this hardware platform we've got up in space, it's got all these sensors, go do cool stuff." And if the stuff that you can do with the existing sensor package isn't cool enough for you, backers of the project will be able to suggest new sensors and new configurations, even for the very first generation ArduSat.
To make sure you don't brick the satellite with buggy code, NanoSatisfi will have a duplicate satellite in a space-like environment here on Earth that it'll use to test out your experiment first. If everything checks out, your code gets uploaded to the satellite, runs in whatever timeslot you've picked, and then the results get sent back to you after your experiment is completed. Basically, you're renting time and hardware on this satellite up in space, and you can do (almost) whatever you want with that.
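To make the idea concrete, here is what a rented-timeslot experiment might look like as an Arduino sketch. Everything specific in it (the photodiode on pin A0, the one-minute slot, the serial downlink format) is invented for illustration; NanoSatisfi hasn't published the actual payload interface.

```cpp
// Hypothetical ArduSat experiment: sample a light sensor once per second
// for the duration of the rented timeslot and stream readings over the
// serial downlink. Pin choice, slot length and output format are assumptions.
#include <Arduino.h>

const int LUX_PIN = A0;                   // hypothetical photodiode channel
const unsigned long SLOT_MS = 60000UL;    // pretend we rented a one-minute slot

void setup() {
    Serial.begin(9600);                   // downlink shared by the payload bus
}

void loop() {
    static unsigned long start = millis();
    if (millis() - start > SLOT_MS) return;   // stay inside our timeslot

    int raw = analogRead(LUX_PIN);            // 10-bit ADC reading
    Serial.print("t=");
    Serial.print(millis() - start);
    Serial.print(" lux_raw=");
    Serial.println(raw);
    delay(1000);                              // one sample per second
}
```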
ArduSat has a lifetime of anywhere from six months to two years. None of this payload stuff (neither the sensors nor the Arduinos) is specifically space-rated or radiation-hardened or anything like that, and some of it will be exposed directly to space. There will be some backups and redundancy, but partly, this will be a learning experience to see what works and what doesn't. The next generation of ArduSat will take all of this knowledge and put it to good use making a more capable and more reliable satellite.
This, really, is part of the appeal of ArduSat: with a fast, efficient, and (relatively) inexpensive crowd-sourced model, there's a huge potential for improvement and growth. For example, if this Kickstarter goes bananas and NanoSatisfi runs out of room for people to get involved on ArduSat, no problem, it can just build and launch another ArduSat along with the first, jammed full of (say) fifty more Arduinos so that fifty more experiments can be run at the same time. Or it can launch five more ArduSats. Or ten more. From the decision to start developing a new ArduSat to the actual launch of that ArduSat is a period of just a few months. If enough of them get up there at the same time, there's potential for networking multiple ArduSats together up in space and even creating a cheap and accessible global satellite array.
Longer term, there's also potential for making larger ArduSats with more complex and specialized instrumentation. Take ArduSat's camera: being a little tiny satellite, it only has a little tiny camera, meaning that you won't get much more detail than a few kilometers per pixel. In the future, though, NanoSatisfi hopes to boost that to 50 meters (or better) per pixel using a double or triple-sized satellite that it'll call OptiSat. OptiSat will just have a giant camera or two, and in addition to taking high-resolution pictures of Earth, it'll also be able to turn around to take pictures of other stuff out in space. It's not going to be the next Hubble, but remember, it'll be under your control.
Assuming the Kickstarter campaign goes well, NanoSatisfi hopes to complete construction and integration of ArduSat by about the end of the year, and launch it during the first half of 2013. If you don't manage to get in on the Kickstarter, don't worry: NanoSatisfi hopes that there will be many more ArduSats with many more opportunities for people to participate in the idea. Having said that, you should totally get involved right now: there's no cheaper or better way to start doing a little bit of space exploration of your very own.
Check out the ArduSat Kickstarter video below, and head on through the link to reserve your spot on the satellite.
Wednesday, April 18. 2012
Via Christian Babski
Here (@American Scientist) is an interesting and rather complete article on programming language [evolution,war], with some infographics on programming history and methods. It is not that technical, which makes it accessible to anyone.
Tuesday, April 17. 2012
Develop looks at the platform with ambitions to challenge Adobe's ubiquitous Flash web player
Initially heralded as the future of browser gaming and the next step beyond the monopolised world of Flash, HTML5 has since faced criticism for being tough to code with and possessing a string of broken features.
The coding platform, the fifth iteration of the HTML standard, was supposed to be a one-stop shop for developers looking to create and distribute their game to a multitude of platforms and browsers, but things haven’t been plain sailing.
And whilst this has worked to a certain degree – a number of companies such as Microsoft, Apple, Google and Mozilla have collaborated under the W3C to bring together a single open standard – the problems it possesses cannot be ignored.
Thursday, April 12. 2012
Paranoid Shelter is a recent installation / architectural device that fabric | ch finalized late in 2011, after a six-month residency at the EPFL-ECAL Lab in Renens (Switzerland). It was realized with the support of Pro Helvetia, the OFC, the City of Lausanne and the State of Vaud. It was initiated and first presented as sketches back in 2008 (!), in the context of a colloquium about surveillance at the Palais de Tokyo in Paris.
Created in the context of a theatrical collaboration with French writer and essayist Eric Sadin around his books about contemporary surveillance (Surveillance globale and Globale paranoïa, both published back in 2009), Paranoid Shelter revisits the old figure/myth of the architectural shelter, articulated by the use of surveillance technologies as building blocks.
Additional information on the overall project can be found through the following link:
A compressed preview and short of the play by NOhista.
On the first technical drawings and sketches of the Paranoid Shelter project, the entire system looked like a (big) mess of wires, sensors and video cameras, all concentrated in a pretty tiny space in which humans would have difficulty moving. The entire space is consciously organised around tracking methods/systems, the space being delimited by 3 [augmented] posts which host a set of sensors, video cameras and microphones. It includes networked [power over ethernet] video cameras, microphones and a set of wireless ambient sensors (measuring temperature, O2 and CO2 gas concentrations, current atmospheric pressure, light, etc.).
Based on real-time analysis of the main sensors' data, the system is able to control DMX lights and a set of two displays (one LCD screen and one projector), and to produce sound through a dynamically generated text-to-speech process.
All programs were developed using openFrameworks, enhanced by a set of dedicated in-house C++ libraries, in order to capture the networked cameras' video streams, control any DMX-compatible piece of hardware and collect the wireless Libelium sensors' data. The sound analysis programs, the LCD display program and the main program are all connected to each other via a local network. The main program is in charge of collecting the other programs' data, performing the global analysis of the system's activity, recording the system's raw information to a database and controlling the system's [re]actions (lights, displays).
The overall system can act in an [autonomous] way, controlling the entire installation's behaviour, while it can also be remotely controlled when used on stage, in the context of a theatre play.
Collecting all the sensors' flows is one of the basic tasks. Cameras are used to track movements, microphones measure sound activity and sensors collect a set of ambient parameters. Even if data capture consists of some basic network-based tasks, it is easily raised to an upper level of complexity when every data collection has to occur simultaneously, in real time, [without,with] a [limited,acceptable] delay. The main raw-data analysis has to occur directly after data acquisition, in order to minimize the time-shift in the system's awareness of its space. This first level of data analysis mainly brings out frequency information, quantity of activity and 2D location tracking (from the point of view of each camera).

Every single piece of raw information is systematically recorded in a dedicated database: this keeps the system's memory footprint almost constant without losing any activity information. From time to time, the system can access this recorded information in its post-analysis process, when required, mainly to add a time-scale dimension to the global activity that occurred in the monitored space. A piece of information isolated in time can only be interpreted in a rough and basic way, while the composition over time of the same information, or of a set of information, may bring additional meaning by verifying its consistency (in a negative or a positive way, by confirming or refuting a first-level deduction about activity).

Another level of analysis can be reached by taking into account the spatial distribution of the sensors in the overall installation. The system is then able to compute 3D information, gaining an awareness of activities within the space it is monitoring. This generates a second, spatialised level of data analysis that increases the system's global understanding of the captured data.
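As a concrete (if much simplified) illustration of that first analysis level, the sketch below computes a per-camera quantity-of-activity measure by frame differencing and timestamps it for later time-scale analysis. The real installation runs on openFrameworks with in-house libraries; the frame data, the threshold and the printed "record" here stand in for the camera streams and the database.

```cpp
// Standalone sketch of the first level of analysis: quantity of activity
// per camera, computed by frame differencing, then recorded with a
// timestamp so post-analysis can add the time dimension. Illustrative only.
#include <cstdint>
#include <cstdlib>
#include <ctime>
#include <vector>
#include <iostream>

struct ActivitySample {
    std::time_t when;      // lets post-analysis verify consistency over time
    int cameraId;
    double activity;       // fraction of pixels that changed between frames
};

double quantityOfActivity(const std::vector<uint8_t>& prev,
                          const std::vector<uint8_t>& curr,
                          int threshold = 25) {
    std::size_t changed = 0;
    for (std::size_t i = 0; i < curr.size(); ++i)
        if (std::abs(int(curr[i]) - int(prev[i])) > threshold) ++changed;
    return double(changed) / double(curr.size());
}

int main() {
    // Two tiny fake grayscale frames in place of a networked camera stream.
    std::vector<uint8_t> prev = {10, 10, 10, 10};
    std::vector<uint8_t> curr = {10, 200, 10, 180};

    ActivitySample s{std::time(nullptr), 1, quantityOfActivity(prev, curr)};
    // In the installation this record would go to the dedicated database.
    std::cout << "camera " << s.cameraId << " activity " << s.activity << "\n";
}
```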
Recorded activities are made available to the [audience,visitors] through a wifi access point. The networked cameras can be accessed in real time, giving humans the ability to see some of the system's [inputs]. Network activity is thus also monitored as another sign of human presence, so the system can [detect] activity elsewhere than in its dedicated space.
However numerous the collected data may be, the system faces a real problem when it comes to interpreting them without the benefit of a human brain. Events that are quite obvious to humans do not mean anything to computers and software. In order to avoid the use of some artificial neural network simulation (which may still be a good option to explore), I have decided to compute a limited set of parameters, all based on previously analysed data, computed only late in the process, when the system may decide to react to perceived activities. This defines a kind of global [mood] of the system, based on which it will [decide] whether to be aggressive (from a human point of view), by making the global tracking activity [noticeable] to the humans evolving in the installation's space, by focusing the tracking sensors on a given area or by trying to enhance some sensor's information analysis, or whether to settle into a kind of silent mode.
Moreover, the evolution of these parameters is also studied over time, making the [mood] evolve in a human way, increasing and decreasing [analogically]. The system's [mood] may be wrong or [unjustified,weird] from a human point of view, but that is where [multi-dimensional] software becomes interesting. Beyond a certain complexity, with computation layers added on top of each other, having written every single line of code no longer allows the programmer to predict precisely what the system's next [re]action will be.
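One possible reading of this mechanism, sketched under the assumption that the [mood] is a weighted blend of the analysed activity measures, smoothed so that it rises and falls gradually rather than flipping instantly. The weights, the inertia constant and the threshold are all invented for the example.

```cpp
// Sketch of a smoothed global [mood]: a blend of activity parameters with
// enough inertia that it evolves "analogically" rather than switching
// abruptly. All constants are illustrative assumptions.
#include <iostream>

class Mood {
    double value = 0.0;            // 0 = silent mode, 1 = aggressive tracking
    const double inertia = 0.95;   // higher = slower, more "human" evolution
public:
    void update(double movement, double sound, double networkUse) {
        double target = 0.5 * movement + 0.3 * sound + 0.2 * networkUse;
        value = inertia * value + (1.0 - inertia) * target;
    }
    bool aggressive() const { return value > 0.6; }   // arbitrary threshold
    double level() const { return value; }
};

int main() {
    Mood mood;
    for (int t = 0; t < 100; ++t)
        mood.update(0.9, 0.4, 0.2);   // sustained human activity in the space
    std::cout << "mood " << mood.level()
              << (mood.aggressive() ? " -> make the tracking noticeable\n"
                                    : " -> stay in silent mode\n");
}
```

Because the smoothed value depends on the whole history of inputs, even this toy version hints at why the system's next [re]action is hard to predict line by line.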
Here we reach the limitation of monitoring systems, which is obviously [interpretation,comprehension]. As long as an automatic system cannot correctly [understand] data, humans will need to stay in the loop, making all these monitoring systems quite useless [as expert systems], except for producing an enormous quantity of data that still needs to be post-analysed by a human brain. As the system produces a large set of heterogeneous data, a set of rules may suggest some sort of data correlation to it. These rules should not be too [tight,precise], in order to avoid producing obvious interpretations, while keeping them slightly [out of focus] may allow [smart,astonishing] conclusions to be produced. So there is room here for additional implementation of the data-analysis processes that can still completely change the way the entire installation [can,may] behave.
Thursday, April 05. 2012
Insect printable robot. Photo: Jason Dorfman, CSAIL/MIT
Today, MIT announced a new project, “An Expedition in Computing Printable Programmable Machines,” that aims to give everyone a chance to have his or her own robot.
Need help peering into that unreasonably hard-to-reach cabinet, or wiping down your grimy 15th-story windows? Walk on over to robo-Kinko’s to print, and within 24 hours you could have a fully programmed working origami bot doing your dirty work.
“No system exists today that will take, as specification, your functional needs and will produce a machine capable of fulfilling that need,” MIT robotics engineer and project manager Daniela Rus said.
Unfortunately, the very earliest you’d be able to get your hands on an almost-instant robot might be 2017. The MIT scientists, along with collaborators at Harvard University and the University of Pennsylvania, received a $10 million grant from the National Science Foundation for the 5-year project. Right now, it’s at very early stages of development.
So far, the team has prototyped two mechanical helpers: an insect-like robot and a gripper. The 6-legged tick-like printable robot could be used to check your basement for gas leaks or to play with your cat, Rus says. And the gripper claw, which picks up objects, might be helpful in manufacturing, or for people with disabilities, she says.
The two prototypes cost about $100 and took about 70 minutes to build. The real cost to customers will depend on the robot’s specifications, its capabilities and the types of parts that are required for it to work.
The researchers want to create a one-size-fits-most platform to circumvent the high costs and special hardware and software often associated with robots. If their project works out, you could go to a local robo-printer, pick a design from a catalog and customize a robot according to your needs. Perhaps down the line you could even order-in your designer bot through an app.
Their approach to machine building could “democratize access to robots,” Rus said. She envisions producing devices that could detect toxic chemicals, aid science education in schools, and help around the house.
Although bringing robots to the masses sounds like a great idea (a sniffing bot to find lost socks would come in handy), there are still several potential roadblocks to consider — for example, how users, especially novice ones, will interact with the printable robots.
“Maybe this novice user will issue a command that will break the device, and we would like to develop programming environments that have the capability of catching these bad commands,” Rus said.
As it stands now, a robot would come pre-programmed to perform a set of tasks, but if a user wanted more advanced actions, he or she could build up those actions using the bot’s basic capabilities. That advanced set of commands could be programmed in a computer and beamed wirelessly to the robot. And as voice parsing systems get better, Rus thinks you might be able to simply tell your robot to do your bidding.
Durability is another issue. Would these robots be single-use only? If so, trekking to robo-Kinko’s every time you needed a bot to look behind the fridge might get old. These are all considerations the scientists will be grappling with in the lab. They’ll have at least five years to tease out some solutions.
In the meantime, it’s worth noting that other groups are also building robots using printers. German engineers printed a white robotic spider last year. The arachnoid carried a camera and equipment to assess chemical spills.
And at Drexel University, paleontologist Kenneth Lacovara and mechanical engineer James Tangorra are trying to create a robotic dinosaur from dino-bone replicas. The 3-D-printed bones are scaled versions of laser-scanned fossils. By the end of 2012, Lacovara and Tangorra hope to have a fully mobile robotic dinosaur, which they want to use to study how dinosaurs, like large sauropods, moved.
Lacovara thinks the MIT project is an exciting and promising one: “If it’s a plug-and-play system, then it’s feasible,” he said. But “obviously, it [also] depends on the complexity of the robot.” He’s seen complex machines with working gears printed in one piece, he says.
Right now, the MIT researchers are developing an API that would facilitate custom robot design and writing algorithms for the assembly process and operations.
If their project works out, we could all have a bot to call our own in a few years. Who said print was dead?
Friday, March 30. 2012
Facial-recognition platform Face.com could foil the plans of all those under-age kids looking to score some booze. Fake IDs might not fool anyone for much longer, because Face.com claims its new application programming interface (API) can be used to detect a person’s age by scanning a photo.
With its facial recognition system, Face.com has built two Facebook apps that can scan photos and tag them for you. The company also offers an API for developers to use its facial recognition technology in the apps they build.
Its latest update to the API can scan a photo and supposedly determine a person’s minimum age, maximum age, and estimated age. It might not be spot-on accurate, but it could get close enough to determine your age group.
“Instead of trying to define what makes a person young or old, we provide our algorithms with a ton of data and the system can reverse-engineer what makes someone young or old,” Face.com chief executive Gil Hirsch told VentureBeat in an interview. “We use the general structure of a face to determine age. As humans, our features are either heightened or softened depending on age. Kids have round, soft faces and as we age, we have elongated faces.”
The algorithms also take wrinkles, facial smoothness, and other telling age signs into account to place each scanned face into a general age group. The accuracy, Hirsch told me, is determined by how old a person looks, not necessarily how old they actually are. The API also provides a confidence level on how well it could determine the age, based on image quality and how the person looks in the photo, i.e. if they are turned to one side or are making a strange face.
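For a sense of what calling the API might look like from a developer’s side, here is a hedged C++/libcurl sketch. The endpoint path, parameter names and attribute list are reconstructed from Face.com’s public API of the period and should be treated as assumptions; the key and secret are placeholders.

```cpp
// Hedged sketch: ask the (assumed) Face.com detection endpoint for age
// attributes on a photo URL and print the raw JSON response.
#include <curl/curl.h>
#include <iostream>
#include <string>

static size_t collect(char *data, size_t size, size_t nmemb, void *out) {
    static_cast<std::string *>(out)->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    const std::string url =
        "http://api.face.com/faces/detect.json"
        "?api_key=YOUR_KEY&api_secret=YOUR_SECRET"        // placeholders
        "&urls=http://example.com/photo.jpg"
        "&attributes=age_est,age_min,age_max";            // assumed attribute names

    std::string body;
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);

    if (curl_easy_perform(curl) == CURLE_OK)
        std::cout << body << "\n";   // JSON with per-face age estimates and confidence
    curl_easy_cleanup(curl);
    curl_global_cleanup();
}
```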
“Adults are much harder to figure out [their age], especially celebrities. On average, humans are much better at detecting ages than machines,” said Hirsch.
The hope is to build the technology into apps that restrict or tailor content based on age. For example the API could be built into a Netflix app, scan a child’s face when they open the app, determine they’re too young to watch The Hangover, and block it. Or — and this is where the tech could get futuristic and creepy — a display with a camera could scan someone’s face when they walk into a store and deliver ads based on their age.
In addition to the age-detection feature, Face.com says it has updated its API with 30 percent better facial recognition accuracy and new recognition algorithms. The updates were announced Thursday and the API is available for any developer to use.
One developer has already used the API to build an app called Age Meter, which is available in the Apple App Store. On its iTunes page, the entertainment-purposes-only app shows pictures of Justin Bieber and Barack Obama with approximate ages above their photos.
Other companies in this space include Cognitec, with its FaceVACS software development kit, and Bayometric, which offers FaceIt Face Recognition. Google has also developed facial-recognition technology for Android 4.0 and Apple applied for a facial recognition patent last year.
The technology behind scanning someone’s picture, or even their face, to figure out their age still needs to be developed for complete accuracy. But, the day when bouncers and liquor store cashiers can use an app to scan a fake ID’s holder’s face, determine that they are younger than the legal drinking age, and refuse to sell them wine coolers may not be too far off.