Monday, January 30. 2017
Hungry penguins have inspired a novel way of making sure computer code in smart cars does not crash.
Tools based on the way the birds co-operatively hunt for fish are being developed to test different ways of organising in-car software.
The tools look for safe ways to organise code in the same way that penguins seek food sources in the open ocean.
Experts said such testing systems would be vital as cars get more connected.
Engineers have often turned to nature for good solutions to tricky problems, said Prof Yiannis Papadopoulos, a computer scientist at the University of Hull who developed the penguin-inspired testing system.
The way ants pass messages among nest-mates has helped telecoms firms keep telephone networks running, and many robots get around using methods of locomotion based on the ways animals move.
Penguins were another candidate, said Prof Papadopoulos, because millions of years of evolution have helped them develop very efficient hunting strategies.
This was useful behaviour to copy, he said, because it showed that penguins had solved a tricky optimisation problem – how to ensure as many penguins as possible get enough to eat.
“Penguins are social birds and we know they live in colonies that are often very large and can include hundreds of thousands of birds. This raises the question of how can they sustain this kind of big society given that together they need a vast amount of food.
“There must be something special about their hunting strategy,” he said, adding that an inefficient strategy would mean many birds starved.
Prof Papadopoulos said many problems in software engineering could be framed as a search among all hypothetical solutions for the one that produces the best results. Evolution, through penguins and many other creatures, has already searched through and discarded a lot of bad solutions.
Studies of hunting penguins have hinted at how they organised themselves.
“They forage in groups and have been observed to synchronise their dives to get fish,” said Prof Papadopoulos. “They also have the ability to communicate using vocalisations and possibly convey information about food resources.”
The communal, co-ordinated action helps the penguins get the most out of a hunting expedition. Groups of birds are regularly reconfigured to match the shoals of fish and squid they find. It helps the colony as a whole optimise the amount of energy they have to expend to catch food.
“This solution has generic elements which can be abstracted and be used to solve other problems,” he said, “such as determining the integrity of software components needed to reach the high safety requirements of a modern car.”
Integrity in this sense means ensuring the software does what is intended, handles data well, and does not introduce errors or crash.
By mimicking penguin behaviour in a testing system that seeks out the safest ways to arrange code instead of shoals of fish, it becomes possible to slowly zero in on the best way for that software to be structured.
The Hull researchers, in conjunction with Dr Youcef Gheraibia, a postdoctoral researcher from Algeria, turned to search tools based on the collaborative foraging behaviour of penguins.
The foraging-based system helped to quickly search through the many possible ways software can be specified, homing in on the best solutions in terms of safety and cost.
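The article does not publish the algorithm itself, but the general shape of such a colony-based search can be sketched: a colony of candidate solutions forages, the better "hunting grounds" attract more of the colony, and the search gradually concentrates on promising regions. Everything below (the cost model, the component count, the penalty) is invented for illustration, not taken from the Hull work:

```python
import random

# Hypothetical cost model: assign an integrity level (0-4) to each of N
# software components; higher levels cost more, but the assignment must
# meet a minimum total-integrity requirement. All figures are illustrative.
N_COMPONENTS = 8
LEVEL_COST = [1, 2, 5, 12, 30]   # development cost per integrity level
REQUIRED_TOTAL = 18              # safety requirement on the summed levels

def cost(assignment):
    """Total cost, with a heavy penalty if the safety requirement is missed."""
    c = sum(LEVEL_COST[l] for l in assignment)
    shortfall = max(0, REQUIRED_TOTAL - sum(assignment))
    return c + 1000 * shortfall  # infeasible solutions are priced out

def mutate(assignment):
    """A penguin explores near its current position: tweak one component."""
    a = list(assignment)
    i = random.randrange(len(a))
    a[i] = min(4, max(0, a[i] + random.choice([-1, 1])))
    return a

def penguin_search(colony_size=30, generations=200, seed=0):
    random.seed(seed)
    colony = [[random.randrange(5) for _ in range(N_COMPONENTS)]
              for _ in range(colony_size)]
    best = min(colony, key=cost)
    for _ in range(generations):
        # Rank the colony; the best half "signals" richer waters, and the
        # worst half is reassigned to forage near those solutions.
        colony.sort(key=cost)
        half = colony_size // 2
        colony = colony[:half] + [mutate(random.choice(colony[:half]))
                                  for _ in range(colony_size - half)]
        candidate = min(colony, key=cost)
        if cost(candidate) < cost(best):
            best = candidate
    return best

best = penguin_search()
print(best, cost(best))
```

The key property, shared with the penguins, is that effort is continually reallocated towards the regions of the search space that are paying off.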
At present, complex software is put together and tested manually, with only experience and engineering judgement to guide the process, said Prof Papadopoulos. While this can produce decent results, it considers only a small fraction of all possible good solutions.
The penguin-based system could crank through more solutions and do a better job of assessing which was best, he said.
Mike Ahmadi, global director of critical systems security at Synopsys, which helps vehicle-makers secure code, said modern car manufacturing methods made optimisation necessary.
“When you look at a car today, it’s essentially something that’s put together from a vast and extended supply chain,” he said.
Building a car was about getting sub-systems made by different manufacturers to work together well, rather than being something made wholly in one place.
That was a tricky task given how much code was present in modern cars, he added.
“There’s about a million lines of code in the average car today and there’s far more in connected cars.”
Carmakers were under pressure, said Mr Ahmadi, to adapt cars quickly so they could interface with smartphones and act as mobile entertainment hubs, as well as make them more autonomous.
“From a performance point of view carmakers have gone as far as they can,” he said. “What they have discovered is that the way to offer features now is through software.”
Security would become a priority as cars got smarter and started taking in and using data from other cars, traffic lights and online sources, said Nick Cook from software firm Intercede, which is working with carmakers on safe in-car software.
“If somebody wants to interfere with a car today then generally they have to go to the car itself,” he said. “But as soon as it’s connected they can be anywhere in the world.
“Your threat landscape is quite significantly different and the opportunity for a hack is much higher.”
Tuesday, December 16. 2014
If the brain is a collection of electrical signals then, if you could catalog all those signals digitally, you might be able to upload your brain into a computer, thus achieving digital immortality.
While the plausibility—and ethics—of this upload for humans can be debated, some people are forging ahead in the field of whole-brain emulation. There are massive efforts to map the connectome—all the connections in the brain—and to understand how we think. Simulating brains could lead us to better robots and artificial intelligence, but the first steps need to be simple.
So, one group of scientists started with the roundworm Caenorhabditis elegans, a critter whose genes and simple nervous system we know intimately.
The OpenWorm project has mapped the connections between the worm’s 302 neurons and simulated them in software. (The project’s ultimate goal is to completely simulate C. elegans as a virtual organism.) Recently, they put that software program in a simple Lego robot.
The worm’s body parts and neural networks now have LegoBot equivalents: The worm’s nose neurons were replaced by a sonar sensor on the robot, and the motor neurons running down both sides of the worm now correspond to motors on the left and right of the robot, explains Lucy Black for I Programmer.
Timothy Busbice, a founder of the OpenWorm project, posted a video of the Lego-Worm-Bot stopping and backing up.
The simulation isn’t exact—the program has some simplifications on the thresholds needed to trigger a "neuron" firing, for example. But the behavior is impressive considering that no instructions were programmed into this robot. All it has is a network of connections mimicking those in the brain of a worm.
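A minimal sketch of that idea, with "neurons" that accumulate weighted stimulation and fire past a threshold, might look like the toy below. The wiring and weights are invented for illustration; the real OpenWorm connectome wires all 302 neurons from published C. elegans data:

```python
# Toy threshold-neuron network: neurons accumulate weighted input and
# "fire" past a threshold, propagating to downstream neurons. The wiring
# and weights here are invented -- the real OpenWorm model is far richer.

THRESHOLD = 1.0

class Neuron:
    def __init__(self, name):
        self.name = name
        self.level = 0.0   # accumulated stimulus
        self.outputs = []  # (target neuron, synaptic weight)

    def connect(self, target, weight):
        self.outputs.append((target, weight))

    def stimulate(self, amount, fired):
        self.level += amount
        if self.level >= THRESHOLD:      # fire and reset, in the spirit of
            self.level = 0.0             # the simplified OpenWorm model
            fired.append(self.name)
            for target, weight in self.outputs:
                target.stimulate(weight, fired)

# "Nose" sensor neuron wired to left and right motor neurons, echoing the
# sonar-to-motors mapping described above.
nose = Neuron("nose")
motor_l, motor_r = Neuron("motor_left"), Neuron("motor_right")
nose.connect(motor_l, 0.6)
nose.connect(motor_r, 0.6)

fired = []
nose.stimulate(1.0, fired)  # first sonar ping: nose fires, motors charge up
nose.stimulate(1.0, fired)  # second ping: both motors cross threshold
print(fired)                # -> ['nose', 'nose', 'motor_left', 'motor_right']
```

As in the robot, no behaviour is programmed explicitly; stopping and backing fall out of which motor neurons happen to cross threshold.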
Of course, the goal of uploading our brains assumes that we aren’t already living in a computer simulation. Hear out the logic: Technologically advanced civilizations will eventually make simulations that are indistinguishable from reality. If that can happen, odds are it has. And if it has, there are probably billions of simulations making their own simulations. Work out that math, and "the odds are nearly infinity to one that we are all living in a computer simulation," writes Ed Grabianowski for io9.
Is your mind spinning yet?
Thursday, November 20. 2014
MAME (standing for Multiple Arcade Machine Emulator) is an emulator able to load and run classic arcade games on your computer. In the late 90s, you could download digital images of arcade game boards (backups of the game programs, called 'ROM files') from a number of websites. Arcade game lovers tracked down as many old game boards as possible in order to extract the ROM programs from the physical hardware and make them available within the MAME emulator, which in turn drove MAME to support more and more games by adding more and more emulated environments. Running a game within MAME is like switching on the original machine: you go through the same ROM and hardware checks as if you were in front of the original game hardware. One of these well-known websites was mame.dk.
While for years no one seemed to care about those forgotten games brought back to life for free, the owners of classic arcade games then started to redistribute their forgotten titles on consoles like the Xbox or PlayStation (rediscovering that one might be able to make money with those old games), and began shutting down MAME-based websites, suing them for illegal distribution of games (freely distributing game ROM files is an obvious violation of copyright law). Copyright issues notwithstanding, the MAME community has performed an impressive work of preservation, saving thousands of vintage games that would otherwise have disappeared.
The Internet Arcade is a web-based library of arcade (coin-operated) video games from the 1970s through to the 1990s, emulated in JSMAME, part of the JSMESS software package. Containing hundreds of games ranging through many different genres and styles, the Arcade provides research, comparison, and entertainment in the realm of the Video Game Arcade.
The game collection ranges from early "bronze-age" videogames, with black and white screens and simple sounds, through to large-scale games containing digitized voices, images and music. Most games are playable in some form, although some are useful more for verification of behavior or programming due to the intensity and requirements of their systems.
Many games have a "boot-up" sequence when first turned on, where the systems run through a check and analysis, making sure all systems are go. In some cases, odd controllers make proper playing of the systems on a keyboard or joypad a pale imitation of the original experience. Please report any issues to the Internet Arcade Operator, Jason Scott.
If you are encountering issues with control, sound, or other technical problems, read this entry of some common solutions.
Also, Armchair Arcade (a video game review site) has written an excellent guide to playing on the Internet Arcade as well.
Friday, September 05. 2014
The Internet of Things is still too hard. Even some of its biggest backers say so.
For all the long-term optimism at the M2M Evolution conference this week in Las Vegas, many vendors and analysts are starkly realistic about how far the vaunted set of technologies for connected objects still has to go. IoT is already saving money for some enterprises and boosting revenue for others, but it hasn’t hit the mainstream yet. That’s partly because it’s too complicated to deploy, some say.
For now, implementations, market growth and standards are mostly concentrated in specific sectors, according to several participants at the conference who would love to see IoT span the world.
Cisco Systems has estimated IoT will generate $14.4 trillion in economic value between last year and 2022. But Kevin Shatzkamer, a distinguished systems architect at Cisco, called IoT a misnomer, for now.
“I think we’re pretty far from envisioning this as an Internet,” Shatzkamer said. “Today, what we have is lots of sets of intranets.” Within enterprises, it’s mostly individual business units deploying IoT, in a pattern that echoes the adoption of cloud computing, he said.
In the past, most of the networked machines in factories, energy grids and other settings have been linked using custom-built, often local networks based on proprietary technologies. IoT links those connected machines to the Internet and lets organizations combine those data streams with others. It’s also expected to foster an industry that’s more like the Internet, with horizontal layers of technology and multivendor ecosystems of products.
What’s holding back the Internet of Things
The good news is that cities, utilities, and companies are getting more familiar with IoT and looking to use it. The less good news is that they’re talking about limited IoT rollouts for specific purposes.
“You can’t sell a platform, because a platform doesn’t solve a problem. A vertical solution solves a problem,” Shatzkamer said. “We’re stuck at this impasse of working toward the horizontal while building the vertical.”
“We’re no longer able to just go in and sort of bluff our way through a technology discussion of what’s possible,” said Rick Lisa, Intel’s group sales director for Global M2M. “They want to know what you can do for me today that solves a problem.”
One of the most cited examples of IoT’s potential is the so-called connected city, where myriad sensors and cameras will track the movement of people and resources and generate data to make everything run more efficiently and openly. But now, the key is to get one municipal project up and running to prove it can be done, Lisa said.
The conference drew stories of many successful projects: A system for tracking construction gear has caught numerous workers on camera walking off with equipment and led to prosecutions. Sensors in taxis detect unsafe driving maneuvers and alert the driver with a tone and a seat vibration, then report it to the taxi company. Major League Baseball is collecting gigabytes of data about every moment in a game, providing more information for fans and teams.
But for the mass market of small and medium-size enterprises that don’t have the resources to do a lot of custom development, even targeted IoT rollouts are too daunting, said analyst James Brehm, founder of James Brehm & Associates.
There are software platforms that pave over some of the complexity of making various devices and applications talk to each other, such as the Omega DevCloud, which RacoWireless introduced on Tuesday. The DevCloud lets developers write applications in the language they know and make those apps work on almost any type of device in the field, RacoWireless said. Thingworx, Xively and Gemalto also offer software platforms that do some of the work for users. But the various platforms on offer from IoT specialist companies are still too fragmented for most customers, Brehm said. There are too many types of platforms—for device activation, device management, application development, and more. “The solutions are too complex.”
He thinks that’s holding back the industry’s growth. Though the past few years have seen rapid adoption in certain industries in certain countries, sometimes promoted by governments—energy in the U.K., transportation in Brazil, security cameras in China—the IoT industry as a whole is only growing by about 35 percent per year, Brehm estimates. That’s a healthy pace, but not the steep “hockey stick” growth that has made other Internet-driven technologies ubiquitous, he said.
What lies ahead
Brehm thinks IoT is in a period where customers are waiting for more complete toolkits to implement it—essentially off-the-shelf products—and the industry hasn’t consolidated enough to deliver them. More companies have to merge, and it’s not clear when that will happen, he said.
“I thought we’d be out of it by now,” Brehm said. What’s hard about consolidation is partly what’s hard about adoption, in that IoT is a complex set of technologies, he said.
And don’t count on industry standards to simplify everything. IoT’s scope is so broad that there’s no way one standard could define any part of it, analysts said. The industry is evolving too quickly for traditional standards processes, which are often mired in industry politics, to keep up, according to Andy Castonguay, an analyst at IoT research firm Machina.
Instead, individual industries will set their own standards while software platforms such as Omega DevCloud help to solve the broader fragmentation, Castonguay believes. Even the Industrial Internet Consortium, formed earlier this year to bring some coherence to IoT for conservative industries such as energy and aviation, plans to work with existing standards from specific industries rather than write its own.
Ryan Martin, an analyst at 451 Research, compared IoT standards to human languages.
“I’d be hard pressed to say we are going to have one universal language that everyone in the world can speak,” and even if there were one, most people would also speak a more local language, Martin said.
Thursday, July 03. 2014
Published on Jun 10, 2014. The beauty of hackers, says cybersecurity expert Keren Elazari, is that they force us to evolve and improve. Yes, some hackers are bad guys, but many are working to fight government corruption and advocate for our rights. By exposing vulnerabilities, they push the Internet to become stronger and healthier, wielding their power to create a better world.
Tuesday, May 13. 2014
Forensic experts have long been able to match a series of prints to the hand that left them, or a bullet to the gun that fired it. Now, the same thing is being done with the photos taken by digital cameras, and is ushering in a new era of digital crime fighting.
New technology is now allowing law enforcement officers to search through any collection of images to help track down the identity of photo-taking criminals, such as smartphone thieves and child pornographers.
Investigations in the past have shown that a digital photo can be paired with the exact camera that took it, due to the Sensor Pattern Noise (SPN) imprinted on the photos by the camera's sensor.
Since each pattern is idiosyncratic, this allows law enforcement to "fingerprint" any photos taken. And once the signature has been identified, the police can track the criminal across the Internet, through social media and anywhere else they've kept photos.
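The core of the technique can be sketched in a few lines: the residual an image leaves behind after denoising retains the sensor's fixed pattern, averaging residuals over many photos yields the camera's fingerprint, and a normalized correlation matches a query photo against it. The synthetic "cameras" and the crude box-blur denoiser below are stand-ins for the wavelet-based denoising used in the actual forensics literature:

```python
import numpy as np

def denoise(img):
    """Crude smoothing stand-in for the wavelet denoisers used in practice:
    average each pixel with its four neighbours."""
    padded = np.pad(img, 1, mode="edge")
    return (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:] + img) / 5.0

def noise_residual(img):
    # The sensor's fixed pattern survives in what denoising removes.
    return img - denoise(img)

def fingerprint(images):
    # Averaging residuals over many photos cancels scene content and
    # leaves the camera's characteristic pattern.
    return np.mean([noise_residual(i) for i in images], axis=0)

def correlation(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Synthetic demo: two "cameras" with fixed pattern noise, random scenes.
rng = np.random.default_rng(0)
pattern_a = rng.normal(0, 4, (64, 64))   # camera A's sensor defects
pattern_b = rng.normal(0, 4, (64, 64))   # camera B's sensor defects
shots_a = [rng.normal(128, 10, (64, 64)) + pattern_a for _ in range(20)]
fp_a = fingerprint(shots_a)

query = rng.normal(128, 10, (64, 64)) + pattern_a  # new photo, camera A
decoy = rng.normal(128, 10, (64, 64)) + pattern_b  # photo from camera B
match = correlation(noise_residual(query), fp_a)
miss = correlation(noise_residual(decoy), fp_a)
print(match > miss)  # the query correlates with camera A's fingerprint
```

Because each sensor's pattern is effectively unique, a sufficiently strong correlation against a known fingerprint ties a photo to one physical camera, which is what makes the "fingerprinting" analogy apt.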
Researchers have grabbed photos from Facebook, Flickr, Tumblr, Google+, and personal blogs to see whether one individual image could be matched to a specific user's account.
In a paper entitled "On the usage of Sensor Pattern Noise for Picture-to-Identity linking through social network accounts", the team argues that "digital imaging devices have gained an important role in everyone's life, due to a continuously decreasing price, and of the growing interest on photo sharing through social networks".
Friday, March 07. 2014
Via The Telegraph
Scientists have developed the ultimate lie detector for social media – a system that can tell whether a tweeter is telling the truth.
The creators of the system called Pheme, named after the Greek mythological figure known for scandalous rumour, say it can judge instantly between truth and fiction in 140 characters or less.
Researchers across Europe are joining forces to analyse the truthfulness of statements that appear on social media in “real time” and hope their system will prevent scurrilous rumours and false statements from taking hold, the Times reported.
The creators believe that the system would have proved useful to the police and authorities during the London Riots of 2011. Tweeters spread false reports that animals had been released from London Zoo and landmarks such as the London Eye and Selfridges had been set on fire, which caused panic and led to police being diverted.
Kalina Bontcheva, from the University of Sheffield’s engineering department, said that the system would be able to test information quickly and trace its origins. This would enable governments, emergency services, health agencies, journalists and companies to respond to falsehoods.
Tuesday, March 04. 2014
The Kentucky Senate just passed a law that will let students take computer programming classes to satisfy their foreign language requirements. Do you think that's a good move?
What this new law means is, rather than taking three years of Spanish or French or whatever, kids can choose to learn to code. Sure, whether it's Java or German, they're both technically languages. But they're also two very different skills. You could easily argue that it's still very necessary for students to pick up a few years of a foreign tongue—though, on the other hand, coding is a skill that's probably a hell of a lot more practically applicable for today's high school students.
I, for one, have said countless times that if I could travel through time, I probably would have taken some computer science classes in college. Too late now, but not for Kentucky teenagers. So what do you think of this new law?
Image by Olly/Shutterstock
Wednesday, October 30. 2013
15.10.13 - Two EPFL spin-offs, senseFly and Pix4D, have modeled the Matterhorn in 3D, at a level of detail never before achieved. It took senseFly’s ultralight drones just six hours to snap the high altitude photographs that were needed to build the model.
They weigh less than a kilo each, but they’re as agile as eagles in the high mountain air. These “eBee” flying robots, developed by senseFly, a spin-off of EPFL’s Intelligent Systems Laboratory (LIS), took off in September to photograph the Matterhorn from every conceivable angle. The drones are completely autonomous, requiring nothing more than a computer-conceived flight plan before being launched by hand into the air to complete their mission.
Three of them were launched from a 3,000m “base camp,” and the fourth made the final assault from the summit of the iconic Swiss landmark, at 4,478m above sea level. In their six-hour flights, the completely autonomous flying machines took more than 2,000 high-resolution photographs. The only remaining task was for software developed by Pix4D, another EPFL spin-off from the Computer Vision Lab (CVLab), to assemble them into an impressive 300-million-point 3D model. The model was presented last weekend to participants of the Drone and Aerial Robots Conference (DARC), in New York, by Henri Seydoux, CEO of the French company Parrot, majority shareholder in senseFly.
All-terrain and even in swarms
Last week the dynamic Swiss company – which has just moved into new, larger quarters in Cheseaux-sur-Lausanne – also announced that it had made software improvements enabling drones to avoid colliding with each other in flight; now a swarm of drones can be launched simultaneously to undertake even more rapid and precise mapping missions.
Tuesday, July 02. 2013
As a programmer who wants to write decent performing code, I am very interested in understanding the architectures of CPUs and GPUs. However, unlike desktop and server CPUs, mobile CPU and GPU vendors tend to do very little architectural disclosure - a fact that we've been working hard to change over the past few years. Oftentimes all that's available are marketing slides with fuzzy performance claims. This situation frustrates me to no end personally. We've done quite a bit of low-level mobile CPU analysis at AnandTech in pursuit of understanding architectures where there is no publicly available documentation. In this spirit, I wrote a few synthetic tests to better understand the performance of current-gen ARM CPU cores without having to rely upon vendor-supplied information. For this article I'm focusing exclusively on floating point performance.
However, as it turns out, the method I used for measuring the time spent in each frequency state does not work on aSMP designs like the Krait 200 based Snapdragon S4 and Krait 300 based Snapdragon 600. For Krait 200, the results reported here are for the MSM8960, which shouldn't really have thermal throttling issues. My results on the MSM8960 also line up quite neatly with the assumption that the CPU spent most or all of its time in the test in the peak frequency state. Brian also ran the test on a Nexus 4 and the results were essentially identical, as both have the same peak, which is additional confirmation that our results are likely correct. Thus I will assume a frequency of 1.5 GHz while discussing Krait 200 results. Results on Krait 300 (Snapdragon 600), however, are more mixed. I am not sure if it is reaching peak frequency on all the tests, and thus I am less sure of the per-cycle estimates on this chip. Brian also ran the tests on another handset (LG Optimus G Pro) with the same Snapdragon 600, and the results were qualitatively similar.
Before we discuss the results, it is important to keep in mind that the results and per-cycle timing estimates reported are what I observed from the tests. I did my best to ensure that the design of the tests was very conducive to achieving high throughput. However, it is possible there may be some cases where an architecture can achieve higher performance than what I was able to get out of my tests. With that out of the way, let's look at the results.
In the data, we need to distinguish between number of instructions and number of flops. I count scalar addition and multiply as one flop and scalar MACs as two flops. I count NEON addition and multiply as four flops and NEON MACs are counted as eight flops. Thus, we get the following per-cycle instruction throughput estimates:
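That accounting is straightforward to put into code, together with the peak-GFLOPS figure a given issue rate implies at a given clock (the 1.5 GHz used in the example is the frequency assumed for Krait 200 above; the instruction names are just labels for this sketch):

```python
# Flop weights per instruction, as counted in the text: scalar add/mul
# = 1 flop, scalar MAC = 2, NEON add/mul = 4, NEON MAC = 8.
FLOPS_PER_INSTR = {
    "scalar_add": 1, "scalar_mul": 1, "scalar_mac": 2,
    "neon_add": 4, "neon_mul": 4, "neon_mac": 8,
}

def flops_per_cycle(instr, instr_per_cycle):
    """Convert a measured issue rate into a flops/cycle estimate."""
    return FLOPS_PER_INSTR[instr] * instr_per_cycle

def gflops(instr, instr_per_cycle, freq_ghz):
    """Peak GFLOPS implied by an issue rate at a given clock frequency."""
    return flops_per_cycle(instr, instr_per_cycle) * freq_ghz

# Example: a core issuing one NEON MAC per cycle at 1.5 GHz peaks at
# 8 flops/cycle * 1.5 GHz = 12 single-precision GFLOPS.
print(gflops("neon_mac", 1.0, 1.5))  # -> 12.0

# A core issuing a MAC only every other cycle (0.5/cycle) gets half that.
print(flops_per_cycle("neon_mac", 0.5))  # -> 4.0
```

This is why the MAC throughput regressions discussed below matter so much: halving the MAC issue rate halves the achievable peak flops.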
We start with the Cortex A9. Cortex A9 achieves throughput of 1 operation/cycle for most scalar instructions, except for fp64 MUL and fp64 MAC, which can only be issued once every two cycles. The mixed test reveals that though fp64 muls can only be issued every two cycles, Cortex A9 can issue an fp64 add in the otherwise empty pipeline slot. Thus, in the mixed test it was able to achieve throughput of 1 instruction/cycle. The NEON implementation in Cortex A9 has a 64-bit datapath and all NEON instructions take 2 cycles. Qualcomm's Scorpion handles scalar instructions similarly to Cortex A9, except that it seems unable to issue fp64 adds immediately after fp64 muls in the mixed test. Scorpion uses a full 128-bit datapath for NEON and has twice the throughput of Cortex A9.
Krait 200 features an improved multiplier, and offers 1 instruction/cycle throughput for most scalar and NEON instructions. Interestingly, Krait 200 has half the per-cycle throughput for MAC instructions, which is a regression compared to Scorpion. Krait 300 improves the MAC throughput compared to Krait 200, but still appears to be unable to reach throughput of 1 instruction/cycle, possibly revealing some issues in the pipeline. An alternate explanation is that Snapdragon 600 reduced the frequency in the MAC tests for some unknown reason. Without accurate frequency information, it is currently difficult to make that judgment. Cortex A15 is the clear leader here, and offers throughput of 1 FP instruction/cycle in all our tests.