Friday, March 07. 2014
Via The Telegraph
Scientists have developed the ultimate lie detector for social media – a system that can tell whether a tweeter is telling the truth.
The creators of the system, called Pheme after the Greek mythological figure known for scandalous rumour, say it can judge instantly between truth and fiction in 140 characters or fewer.
Researchers across Europe are joining forces to analyse the truthfulness of statements that appear on social media in “real time” and hope their system will prevent scurrilous rumours and false statements from taking hold, the Times reported.
The creators believe that the system would have proved useful to the police and authorities during the London Riots of 2011. Tweeters spread false reports that animals had been released from London Zoo and landmarks such as the London Eye and Selfridges had been set on fire, which caused panic and led to police being diverted.
Kalina Bontcheva, from the University of Sheffield’s engineering department, said that the system would be able to test information quickly and trace its origins. This would enable governments, emergency services, health agencies, journalists and companies to respond to falsehoods.
Thursday, March 06. 2014
Great news for connectivity connoisseurs: the analyst firm TeleGeography just published this year’s edition of its world map, featuring all the submarine cable systems that comprise the arteries of the internet.
The map also shows the cables’ landing points (easier to see if you zoom in on the interactive version), which is handy for those who take an interest in the current surveillance scandal. Why is British intelligence so good at tapping cables? Here’s why – so many of them pass through the U.K.
The 2014 edition includes 263 cables that are lit (in service), and 22 that should be lit by the end of 2015, so 285 cable systems in total. Last year’s map showed 244 cables, and the year before that just 150, so the cable-laying boom of a few years back has definitely slowed down.
Unfortunately this year’s edition lacks a neat feature of TeleGeography’s 2012 and 2013 maps: a breakdown of how much of the cable systems’ capacity is actually being used. It also doesn’t have the 2013 edition’s Olde Worlde appeal. On the plus side, it does offer a good breakdown of cable faults over recent years, cable-laying ships and maintenance zones, if that’s your thing.
One cable system that’s not on the map, probably because it will only go live in 2016, is the Asia Africa Europe-1 (AAE-1) cable that was detailed on Tuesday. AAE-1 will run from South-East Asia to Africa and Europe via the Middle East, and yesterday the backing consortium announced membership including the likes of China Unicom, PCCW, Etisalat and Ooredoo.
Tuesday, October 22. 2013
Current wireless networks have a problem: The more popular they become, the slower they are. Researchers at Fudan University in Shanghai have just become the latest to demonstrate a technology that transmits data as light instead of radio waves, which gets around the congestion issue and could be ten times faster than traditional Wi-Fi.
In dense urban areas, the spectrum within which Wi-Fi signals are transmitted is increasingly crowded with noise—mostly, other Wi-Fi signals. What’s more, the physics of electromagnetic waves sets an upper limit to the bandwidth of traditional Wi-Fi. The short version: you can only transmit so much data at a given frequency, and the lower the frequency of the wave, the less data it can carry.
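As a rough illustration of that reasoning: the Shannon–Hartley theorem caps a channel’s capacity at C = B·log2(1 + SNR), so capacity scales with usable bandwidth, and a higher-frequency carrier leaves room for far wider channels. The sketch below puts illustrative numbers on this; the channel widths and signal-to-noise ratio are assumptions, not figures from the article.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: C = B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative assumptions, not measured figures:
wifi_channel = 20e6   # a classic 20 MHz Wi-Fi channel
light_channel = 2e9   # a conservative 2 GHz slice of the visible band
snr = 1000            # roughly 30 dB signal-to-noise, for both

print(f"Wi-Fi channel limit: {shannon_capacity_bps(wifi_channel, snr) / 1e6:.0f} Mbit/s")
print(f"Light channel limit: {shannon_capacity_bps(light_channel, snr) / 1e9:.1f} Gbit/s")
```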
But what if you could transmit data using waves of much higher frequencies, and without needing a spectrum license from your country’s telecoms regulator? Light, like radio, is an electromagnetic wave, but it has about 100,000 times the frequency of a Wi-Fi signal, and nobody needs a license to make a light bulb. All you need is a way to make its brightness flicker very rapidly and accurately so it can carry a signal.
First, data are transmitted to an LED light bulb—it could be the one illuminating the room in which you’re sitting now. Then the lightbulb is flicked on and off very quickly, up to billions of times per second. That flicker is so fast that the human eye cannot perceive it. (For comparison, the average energy-saving compact fluorescent bulb already flickers between 10,000 and 40,000 times per second.) Then a receiver on a computer or mobile device—basically, a little camera that can see visible light—decodes that flickering into data. LED bulbs can be flicked on and off quickly enough to transmit data around ten times as fast as the fastest Wi-Fi networks. (If they could be manipulated faster, the bandwidth would be even higher.)
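What’s described here is essentially on-off keying: a lit bulb encodes a 1, a dark bulb a 0, at rates far beyond what the eye can follow. A minimal sketch of that encode/decode round trip (an illustration of the general idea, not any team’s actual implementation):

```python
def encode(data: bytes) -> list[int]:
    """Map each byte to 8 LED on/off states (most significant bit first)."""
    return [(byte >> bit) & 1 for byte in data for bit in range(7, -1, -1)]

def decode(states: list[int]) -> bytes:
    """Reassemble the receiver's brightness samples into bytes."""
    out = bytearray()
    for i in range(0, len(states), 8):
        byte = 0
        for bit in states[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

message = b"Li-Fi"
assert decode(encode(message)) == message  # lossless round trip
```

A real system also needs framing, clock recovery and error correction, which is where most of the engineering effort goes.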
Li-Fi has one big drawback compared to Wi-Fi: you, or rather your device, need to be within sight of the bulb. It wouldn’t necessarily need to be a special bulb; in principle, overhead lights at work or at home could be wired to the internet. But it would mean that, unlike with Wi-Fi, you couldn’t go into the next room unless there were wired bulbs there too.
However, a new generation of ultrafast Wi-Fi devices that we’re likely to start using soon faces a similar limitation. They use a higher range of radio frequencies, which aren’t as crowded with other signals (at least for now) and have a higher bandwidth, but, like visible light, they cannot penetrate walls.
Engineers and a handful of startups, like Oledcomm, have been experimenting with Li-Fi technology. The Fudan University team unveiled an experimental Li-Fi network in which four PCs were all connected to the same light bulb. Other researchers are working on transmitting data via different colors of LED light—imagine, for example, transmitting a different signal through each of the red, green and blue LEDs inside a multi-colored LED light bulb.

Because of its limitations, Li-Fi won’t do away with other wireless networks. But it could supplement them in congested areas, and replace them in places where radio signals need to be kept to a minimum, like hospitals, or where they don’t work, such as underwater.
Wednesday, October 02. 2013
Via The Verge
For years, scientists have struggled to collect accurate real-time data on earthquakes, but a new article published today in the Bulletin of the Seismological Society of America suggests a better tool for the job: the same MEMS accelerometers found in most modern smartphones. The article finds that the accelerometers in current smartphones are sensitive enough to detect earthquakes of magnitude five or higher when located near the epicenter. Because the devices are so widely used, the scientists speculate that future smartphone models could be used to create an "urban seismic network," transmitting real-time geological data to authorities whenever a quake takes place.
The authors pointed to Stanford's Quake-Catcher Network, which connects seismographic equipment to volunteer computers to create a similar network, as an inspiration. But smartphone accelerometers would be cheaper and easier to carry into extreme environments. The sensors will need to become more sensitive before they can be used in the field, but the authors say that once the technology catches up, the smartphone accelerometer could be the perfect earthquake research tool. As one researcher told The Verge, "right from the start, this technology seemed to have all the requirements for monitoring earthquakes — especially in extreme environments, like volcanoes or underwater sites."
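A standard way to turn raw accelerometer samples into a quake trigger, and a plausible building block for such a network (though not necessarily the paper's method), is the short-term-average over long-term-average (STA/LTA) ratio: declare an event when recent shaking jumps well above the background level. A minimal sketch:

```python
import random

def sta_lta_trigger(samples, sta_len=50, lta_len=1000, threshold=4.0):
    """Return the first index where short-term shaking energy jumps
    well above the long-term background, or None if nothing triggers."""
    energy = [x * x for x in samples]
    for i in range(lta_len, len(energy) - sta_len):
        lta = sum(energy[i - lta_len:i]) / lta_len   # background level
        sta = sum(energy[i:i + sta_len]) / sta_len   # recent shaking
        if lta > 0 and sta / lta > threshold:
            return i
    return None

random.seed(1)
quiet = [random.gauss(0, 0.01) for _ in range(2000)]  # sensor noise at rest
quake = [random.gauss(0, 0.5) for _ in range(500)]    # simulated strong shaking
print(sta_lta_trigger(quiet + quake))                 # triggers near index 2000
```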
Tuesday, October 01. 2013
Google faces financial sanctions in France after failing to comply with an order to alter how it stores and shares user data to conform to the nation's privacy laws.
Google was ordered in June by the CNIL, France's data protection authority, to comply with French data protection laws within three months. But Google had not changed its policies by Friday's deadline because, the CNIL said, the company argued that France's data protection laws did not apply to users of certain Google services in France.
The company "has not implemented the requested changes," the CNIL said.
As a result, "the chair of the CNIL will now designate a rapporteur for the purpose of initiating a formal procedure for imposing sanctions, according to the provisions laid down in the French data protection law," the watchdog said. Google could be fined a maximum of €150,000 ($202,562), or €300,000 for a second offense, and could in some circumstances be ordered to refrain from processing personal data in certain ways for three months.
What bothers France
The CNIL took issue with several areas of Google's data policies, in particular how the company stores and uses people's data. It also cited as areas of concern how Google informs users about the data it processes and how it obtains consent before storing tracking cookies.
Google is also embroiled with European authorities in an antitrust case for allegedly breaking competition rules. The company recently submitted proposals to avoid fines in that case.
Monday, September 09. 2013
Cards are fast becoming the best design pattern for mobile devices.
We are currently witnessing a re-architecture of the web: away from pages and destinations, towards completely personalised experiences built on an aggregation of many individual pieces of content. This breaking down of content into individual components, re-aggregated into one experience, is the result of the rise of mobile technologies, billions of screens of all shapes and sizes, and unprecedented access to data from all kinds of sources through APIs and SDKs. It is driving the web away from many pages of content linked together, towards individual pieces of content aggregated into a single experience.
The aggregation depends on:
If the predominant medium of our time is set to be the portable screen (think phones and tablets), then the predominant design pattern is set to be cards. The signs are already here…
Twitter is moving to cards
Twitter recently launched Cards, a way to attach multimedia inline with tweets. The NYT should now care more about how its stories appear on the Twitter card than on its own web properties, because the likelihood is that the content will be seen more often in card format.
Google is moving to cards
Everyone is moving to cards
Pinterest is built around cards. The new Discover feature on Spotify is built around cards. Much of Facebook is now built from cards. Many parts of iOS 7 are now card-based, for example the app switcher and AirDrop.
The list goes on. The most exciting thing is that despite these many early card-based designs, I think we’re only getting started. Cards are an incredible design pattern, and they have been around for a long time.
Cards give bursts of information
Cards as an information dissemination medium have been around for a very long time. Imperial China used them in the 9th century for games. Trade cards in 17th-century London helped people find businesses. In 18th-century Europe, footmen of aristocrats used cards to announce the impending arrival of a distinguished guest. For hundreds of years people have handed around business cards.
We send birthday cards, greeting cards. My wallet is full of debit cards, credit cards, my driving licence card. During my childhood, I was surrounded by games with cards. Top Trumps, Pokemon, Panini sticker albums and swapsies. Monopoly, Cluedo, Trivial Pursuit. Before computer technology, air traffic controllers used cards to manage the planes in the sky. Some still do.
Cards are a great medium for communicating quick stories. Indeed the great (and terrible) films of our time are all storyboarded using a card-like format. Each card representing a scene. Card, card, card. Telling the story. Think about flipping through printed photos, each photo telling its own little tale. When we travelled we sent back postcards.
What about commerce? Cards are the predominant pattern for coupons. Remember cutting out the corner of the breakfast cereal box? Or being handed coupon cards as you walk through a shopping mall? Circulars, sent out to hundreds of millions of people every week, are a full-page aggregation of many individual cards. People cut them out and stick them to their fridge for later.
Cards can be manipulated
In addition to their reputable past as an information medium, the most important thing about cards is that they are almost infinitely manipulable. See the simple example above from Samuel Couto. Think about cards in the physical world: they can be turned over to reveal more, folded for a summary and expanded for more detail, stacked to save space, sorted, grouped, and spread out to survey more than one at a time.
When designing for screens, we can take advantage of all these things. In addition, we can take advantage of animation and movement. We can hint at what is on the reverse, or that the card can be folded out. We can embed multimedia content, photos, videos, music. There are so many new things to invent here.
Cards are perfect for mobile devices and varying screen sizes. Remember, mobile devices are the heart and soul of the future of your business, no matter who you are and what you do. On mobile devices, cards can be stacked vertically, like an activity stream on a phone. They can be stacked horizontally, adding a column as a tablet is turned 90 degrees. They can be a fixed or variable height.
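To make the pattern concrete, here is one way a card and the manipulations above might be modelled in code. This is a hypothetical sketch; the class and field names are mine, not taken from any of the products mentioned:

```python
from dataclasses import dataclass

@dataclass
class Card:
    """One self-contained piece of content, aggregated into a stream."""
    title: str
    summary: str
    details: str = ""
    expanded: bool = False          # folded summary vs. unfolded detail

    def flip(self) -> None:
        """'Turn the card over': reveal or hide the details."""
        self.expanded = not self.expanded

    def render(self) -> str:
        body = self.details if self.expanded else self.summary
        return f"{self.title}\n{body}"

def stack(cards: list[Card]) -> str:
    """A vertical stack, like an activity stream on a phone."""
    return "\n---\n".join(card.render() for card in cards)

feed = [Card("Weather", "18°C and sunny", "Hourly: 16, 17, 18, 18..."),
        Card("Message", "1 new message", "Lunch at noon?")]
feed[0].flip()      # expand the first card
print(stack(feed))
```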
Cards are the new creative canvas
It’s already clear that product and interaction designers will heavily use cards. I think the same is true for marketers and creatives in advertising. As social media continues to rise, and continues to fragment into many services, taking up more and more of our time, marketing dollars will inevitably follow. The consistent thread through these services, the predominant canvas for creativity, will be card based. Content consumption on Facebook, Twitter, Pinterest, Instagram, Line, you name it, is all built on the card design metaphor.
I think there is no getting away from it. Cards are the next big thing in design and the creative arts. To me that’s incredibly exciting.
Wednesday, September 04. 2013
Robotics engineer Taylor Alexander needed to lift a nuclear cooling tower off its foundation using 19 high-strength steel cables, and the Android app that was supposed to accomplish it, for which he’d just paid a developer $20,000, was essentially worthless. Undaunted and on deadline—the tower needed a new foundation, and delays meant millions of dollars in losses—he re-wrote the app himself. That’s when he discovered just how hard it is to connect to sensors via the standard long-distance industrial wireless protocol, known as Zigbee.
It took him months of hacking just to create a system that could send him a single number—which represented the strain on each of the cables—from the sensors he was using. Surely, he thought, there must be a better way. And that’s when he realized that the solution to his problem would also unlock the potential of what’s known as the “internet of things” (the idea that every object we own, no matter how mundane, is connected to the internet and can be monitored and manipulated remotely, whether it’s a toaster, a lightbulb or your car).
The result is an in-the-works project called Flutter. It’s what Taylor calls a “second network”—an alternative to Wi-Fi that can cover 100 times as great an area, with a range of 3,200 feet, using relatively little power, and is either the future of the way that all our connected devices will talk to each other or a reasonable prototype for it.
“We have Wi-Fi in our homes, but it’s not a good network for our things,” says Taylor. Wi-Fi was designed for applications that require fast connections, like streaming video, but it’s vulnerable to interference and has a limited range—often, not enough even to cover an entire house.
For applications with a very limited range—for example anything on your body that you might want to connect with your smartphone—Bluetooth, the wireless protocol used by keyboards and smart watches, is good enough. For industrial applications, the Zigbee standard has been in use for at least a decade. But there are two problems with Zigbee: the first is that, as Alexander discovered, it’s difficult to use. The second is that Zigbee devices are not open source, which makes them difficult to integrate into the sort of projects that hardware startups might want to create.
Flutter’s nearest competitors, Spark Core and Electric Imp, both use Wi-Fi, which limits their usability to home-bound projects like adding your eggs to the internet of things and klaxons that tell you when your favorite Canadian hockey team has scored a goal.
Flutter’s other differentiator is cost: a Flutter radio costs just $20, which still allows Taylor a healthy margin above the $6 in parts that comprise the Flutter.
Making Flutter cheap means that hobbyists can connect that many more devices—say, all the lights in a room, or temperature and moisture sensors in a greenhouse. No one is quite sure what the internet of things will lead to because the enabling technologies, including cheap wireless radios like Flutter, have yet to become widespread. The present-day internet of things is a bit like where personal computers were around the time Steve Wozniak and Steve Jobs were showing off their Apple I at the Homebrew Computer Club in Palo Alto: it’s mostly hobbyists, with a few big corporations sniffing around the periphery.
“I think the internet of things is not going to start with products, but projects,” says Taylor. His goal is to use the current crowd-funding effort for Flutter to pay for the coding of the software protocol that will run Flutter, since the microchips it uses are already available from manufacturers. The resulting software will allow Flutter to create a “mesh network,” which would allow individual Flutter radios to re-transmit data from any other Flutter radio that’s in range, potentially giving hobbyists or startups the ability to cover whole cities with networks of Flutter radios and their attached sensors.
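A mesh of this kind typically works by controlled flooding: each radio re-broadcasts any message it has not seen before, so data hops from node to node until it has crossed the network. A toy sketch of the idea (my illustration, not Flutter’s actual protocol):

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.neighbors = []   # radios within the ~3,200-foot range
        self.seen = set()     # message IDs already re-transmitted

    def receive(self, msg_id, payload, hops=0):
        if msg_id in self.seen:        # drop duplicates to stop broadcast loops
            return
        self.seen.add(msg_id)
        print(f"{self.name} got {payload!r} after {hops} hop(s)")
        for peer in self.neighbors:    # flood to everyone in range
            peer.receive(msg_id, payload, hops + 1)

# A chain of radios where a cannot reach c directly:
a, b, c = Node("a"), Node("b"), Node("c")
a.neighbors, b.neighbors = [b], [a, c]
a.receive("msg-1", "strain: 42")       # reaches c via b
```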
Wednesday, July 10. 2013
Via Slash Gear
Samsung and HTC are flirting with advanced home automation control in future Galaxy and One smartphones, it’s reported, turning new smartphones into universal remotes for lighting, entertainment, and more. The two companies are each separately working on plans for what Pocket-lint‘s source describes as “home smartphones” that blur the line between mobile products and gadgets found around the home.
For Samsung, the proposed solution is to embed ZigBee into its new phones, it’s suggested. The low-power networking system – already found in products like Philips’ Hue remote-controlled LED lightbulbs, along with Samsung’s own ZigBee bulbs – creates mesh networks for whole-house coverage, and can be embedded into power switches, thermostats, and more.
Samsung is already a member of the ZigBee Alliance, and has been flirting with remote control functionality – albeit using the somewhat more mundane infrared standard – in its more recent Galaxy phones. The Galaxy S 4, for instance, has an IR blaster that, with the accompanying app, can be used to control TVs and other home entertainment kit.
HTC, meanwhile, is also bundling infrared with its recent devices; the HTC One’s power button is actually also a hidden IR blaster, for instance, and like Samsung the smartphone comes with a TV remote app that can pull in real-time listings and control cable boxes and more. It’s said to be looking to ZigBee RF4CE, a newer iteration which is specifically focused on home entertainment and home automation hardware.
Samsung is apparently considering a standalone ZigBee-compliant accessory dongle, though exactly what the add-on would do is unclear. HTC already has a limited range of accessories for wireless home use, though these currently focus on streaming media, such as the Media Link HD.
When we can expect to see the new devices with ZigBee support is unclear, and of course it will take more than just a handset update to get a home equipped for automation. There will need to be greater availability – and understanding – of automation accessories, though Samsung could have an edge there, given that its other divisions make TVs, fridges, air conditioners, and other home tech.
Wednesday, May 15. 2013
Researchers with the NASA Jet Propulsion Laboratory have undertaken a large project to measure the carbon footprint of megacities – those with millions of residents, such as Los Angeles and Paris. The endeavour relies on sensors mounted in high locations above the cities, such as a peak in the San Gabriel Mountains and a level of the Eiffel Tower that is closed to tourists.
The sensors are designed to detect a variety of greenhouse gases, including methane and carbon dioxide, augmenting other stations that are already located in various places globally that measure greenhouse gases. These particular sensors are designed to achieve two purposes: monitor the specific carbon footprint effects of large cities, and as a by-product of that information to show whether such large cities are meeting – or are even capable of meeting – their green initiative goals.
Such measuring efforts will be intensified this year. In Los Angeles, for example, scientists working on the project will add a dozen gas analyzers to rooftop locations throughout the city, as well as to a Prius that will be driven around the city and to a research aircraft that will be flown to “methane hotspots.” The data gathered from all these sensors, both existing and slated for installation, is then analyzed with software that determines whether levels have increased, decreased, or remained stable, and where the gases originated.
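That increased/decreased/stable analysis can be as simple as fitting a least-squares line through a series of readings and bucketing the slope. A minimal sketch with made-up numbers and tolerance (an illustration, not JPL’s actual software):

```python
def classify_trend(readings: list[float], tolerance: float = 0.01) -> str:
    """Fit a least-squares slope over time and bucket it into a label."""
    n = len(readings)
    mean_x = (n - 1) / 2
    mean_y = sum(readings) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in enumerate(readings))
             / sum((x - mean_x) ** 2 for x in range(n)))
    if slope > tolerance:
        return "increasing"
    if slope < -tolerance:
        return "decreasing"
    return "stable"

# e.g. monthly CO2 readings in parts per million (made-up numbers):
print(classify_trend([400.1, 400.4, 400.9, 401.2, 401.8]))  # "increasing"
```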
One example given is vehicle emissions: using this data, scientists can determine the effects of switching from traditional to green vehicles, and whether the results suggest the switch is worth pursuing or needs further analysis. According to the Associated Press, three years ago 58 percent of California’s carbon dioxide came from gasoline-powered cars.
California is looking to reduce its emissions by 2030 to at least 35 percent below 1990 levels, a rather ambitious goal. In 2010, the state produced 408 million tons of carbon dioxide, more than most countries on the planet and about on par with all of Spain. Thus far, the United States and France have each spent approximately $3 million on the project.
Thursday, May 02. 2013
A team at the European Organisation for Nuclear Research (Cern) has launched a project to re-create the first web page.
The aim is to preserve the original hardware and software associated with the birth of the web.
The world wide web was developed by Prof Sir Tim Berners-Lee while working at Cern.
The initiative coincides with the 20th anniversary of the research centre giving the web to the world.
According to Dan Noyes, the web manager for Cern's communication group, re-creation of the world's first website will enable future generations to explore, examine and think about how the web is changing modern life.
"I want my children to be able to understand the significance of this point in time: the web is already so ubiquitous - so, well, normal - that one risks failing to see how fundamentally it has changed," he told BBC News
"We are in a unique moment where we can still switch on the first web server and experience it. We want to document and preserve that".
At the heart of the original web is technology to decentralise control and make access to information freely available to all. It is this architecture that seems to imbue those who work with the web with a culture of free expression, a belief in universal access and a tendency toward decentralising information.

Subversive
It is the early technology's innate ability to subvert that makes re-creation of the first website especially interesting.
While I was at Cern it was clear in speaking to those involved with the project that it means much more than refurbishing old computers and installing them with early software: it is about enshrining a powerful idea that they believe is gradually changing the world.
I went to Sir Tim's old office where he worked at Cern's IT department trying to find new ways to handle the vast amount of data the particle accelerators were producing.
I was not allowed in because apparently the present incumbent is fed up with people wanting to go into the office.
But waiting outside was someone who worked at Cern as a young researcher at the same time as Sir Tim. James Gillies has since risen to be Cern's head of communications. He is occasionally referred to as the organisation's half-spin doctor, a reference to one of the properties of some sub-atomic particles.
Mr Gillies is among those involved in the project. I asked him why he wanted to restore the first website.
"One of my dreams is to enable people to see what that early web experience was like," was the reply.
"You might have thought that the first browser would be very primitive but it was not. It had graphical capabilities. You could edit into it straightaway. It was an amazing thing. It was a very sophisticated thing."
Those not heavily into web technology may be sceptical of the idea that using a 20-year-old machine and software to view text on a web page might be a thrilling experience.
But Mr Gillies and Mr Noyes believe that the first web page and web site is worth resurrecting because embedded within the original systems developed by Sir Tim are the principles of universality and universal access that many enthusiasts at the time hoped would eventually make the world a fairer and more equal place.
The first browser, for example, allowed users to edit and write directly into the content they were viewing, a feature not available on present-day browsers.

Ideals eroded
And early on in the world wide web's development, Nicola Pellow, who worked with Sir Tim at Cern on the www project, produced a simple browser that did not require an expensive, powerful computer, making the technology available to anyone with a basic machine.
According to Mr Noyes, many of the values that went into that original vision have now been eroded. His aim, he says, is to "go back in time and somehow preserve that experience".
"This universal access of information and flexibility of delivery is something that we are struggling to re-create and deal with now.
"Present-day browsers offer gorgeous experiences but when we go back and look at the early browsers I think we have lost some of the features that Tim Berners-Lee had in mind."
Mr Noyes is reaching out to those who were involved with the NeXT computers used by Sir Tim for advice on how to restore the original machines.

Awe
The machines were the most advanced of their time. Sir Tim used two of them to construct the web. One of them is on show in an out-of-the-way cabinet outside Mr Noyes's office.
I told him that as I approached the sleek black machine I felt drawn towards it and compelled to pause, reflect and admire in awe.
"So just imagine the reaction of passers-by if it was possible to bring the machine back to life," he responded, with a twinkle in his eye.
The initiative coincides with the 20th anniversary of Cern giving the web away to the world free.
There was a serious discussion by Cern's management in 1993 about whether the organisation should remain the home of the web or whether it should focus on its core mission of basic research in physics.
Sir Tim and his colleagues on the project argued that Cern should not claim ownership of the web.

Great giveaway
Management agreed and signed a legal document that made the web publicly available in such a way that no one could claim ownership of it and that would ensure it was a free and open standard for everyone to use.
Mr Gillies believes that the document is "the single most valuable document in the history of the world wide web".
He says: "Without it you would have had web-like things but they would have belonged to Microsoft or Apple or Vodafone or whoever else. You would not have a single open standard for everyone."
The web has not brought about the degree of social change some had envisaged 20 years ago. Most web sites, including this one, still tend towards one-way communication. The web space is still dominated by a handful of powerful online companies.
But those who study the world wide web, such as Prof Nigel Shadbolt, of Southampton University, believe the principles on which it was built are worth preserving and there is no better monument to them than the first website.
"We have to defend the principle of universality and universal access," he told BBC News.
"That it does not fall into a special set of standards that certain organisations and corporations control. So keeping the web free and freely available is almost a human right."