Wednesday, September 10. 2014
Project «SurPRISE – Surveillance, Privacy and Security: A large scale participatory assessment of criteria and factors determining acceptability and acceptance of security technologies in Europe»
About the Project
SurPRISE re-examines the relationship between security and privacy, which is commonly positioned as a ‘trade-off’. Where security measures and technologies involve the collection of information about citizens, questions arise as to whether and to what extent their privacy has been infringed. This infringement of individual privacy is sometimes seen as an acceptable cost of enhanced security. Similarly, it is assumed that citizens are willing to trade off their privacy for enhanced personal security in different settings. This common understanding of the security-privacy relationship, both at state and citizen level, has informed policymakers, legislative developments and best practice guidelines concerning security developments across the EU.
However, an emergent body of work questions the validity of the security-privacy trade-off. This work suggests that it has over-simplified how the impact of security measures on citizens is considered in current security policies and practices. Thus, the more complex issues underlying privacy concerns and public skepticism towards surveillance-oriented security technologies may not be apparent to legal and technological experts. In response to these developments, this project will consult with citizens from several EU member and associated states on the question of the security-privacy trade-off as they evaluate different security technologies and measures.
When it comes to speeding up Web traffic over the Internet, sometimes too much of a good thing may not be such a good thing at all.
The Internet Engineering Task Force is putting the final touches on HTTP/2, the second version of the Hypertext Transfer Protocol (HTTP). The working group has issued a last call draft, urging interested parties to voice concerns before it becomes a full Internet specification.
Not everyone is completely satisfied with the protocol, however.
“There is a lot of good in this proposed standard, but I have some deep reservations about some bad and ugly aspects of the protocol,” wrote Greg Wilkins, lead developer of the open source Jetty server software, noting his concerns in a blog item posted Monday.
Others, however, praise HTTP/2 and say it is long overdue.
“A lot of our users are experimenting with the protocol,” said Owen Garrett, head of products for server software provider NGINX. “The feedback is that generally, they have seen big performance benefits.”
First created by Web originator Tim Berners-Lee and associates, HTTP quite literally powers today’s Web, providing the language for a browser to request a Web page from a server.
SPDY is speediest
Version 2.0 of HTTP, based largely on the SPDY protocol developed by Google, promises to be a better fit for how people use the Web.
“The challenge with HTTP is that it is a fairly simple protocol, and it can be quite laborious to download all the resources required to render a Web page. SPDY addresses this issue,” Garrett said.
While the first generation of Web sites were largely simple and relatively small, static documents, the Web today is used as a platform for delivering applications and bandwidth intensive real-time multimedia content.
HTTP/2 speeds up basic HTTP in a number of ways. HTTP/2 allows servers to send all the different elements of a requested Web page at once, eliminating the serial sets of messages that have to be sent back and forth under plain HTTP.
HTTP/2 also allows the server and the browser to compress HTTP headers, which cuts the amount of data that needs to be communicated between the two.
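HTTP/2 uses a purpose-built header-compression scheme called HPACK, while SPDY before it used general-purpose zlib compression. As a rough, illustrative sketch of why compressing headers pays off (the request below is made up), zlib alone shrinks a typical header block considerably:

```python
import zlib

# A made-up but typical HTTP/1.1 request: long, highly repetitive
# header strings are resent with every single request.
headers = (
    b"GET /index.html HTTP/1.1\r\n"
    b"Host: www.example.com\r\n"
    b"User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:31.0) "
    b"Gecko/20100101 Firefox/31.0\r\n"
    b"Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n"
    b"Accept-Encoding: gzip, deflate\r\n"
    b"Cookie: session=abc123; tracking=xyz789\r\n\r\n"
)

compressed = zlib.compress(headers)
print(len(headers), "bytes raw ->", len(compressed), "bytes compressed")
```

HPACK goes further than this by keeping a shared table of previously sent headers on each side of the connection, so headers repeated across requests cost only a few bytes each.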
As a result, HTTP/2 “is really useful for organizations with sophisticated Web sites, particularly when their users are distributed globally or using slower networks—mobile users for instance,” Garrett said.
While enthusiastic about the protocol, Wilkins did have several areas of concern. For instance, HTTP/2 could make it more difficult to incorporate new Web protocols, most notably the communications protocol WebSocket, Wilkins asserted.
Wilkins noted that HTTP/2 blurs what were previously two distinct layers of HTTP—the semantic layer, which describes functionality, and the framing layer, which defines the structure of the message. The idea is that it is simpler to write protocols for a specification with discrete layers.
The protocol also makes it possible to hide content, including malicious content, within the headers, bypassing the notice of today’s firewalls, Wilkins said.
Servers need more power
HTTP/2 could also put a lot more strain on existing servers, Wilkins noted, given that they will now be fielding many more requests at once.
HTTP/2 “clients will send requests much more quickly, and it is quite likely you will see spikier traffic as a result,” Garrett agreed.
As a result, a Web application, if it doesn’t already rely on caching or load balancing, may have to do so with HTTP/2, Garrett said.
The SPDY protocol is already used by almost 1 percent of all websites, according to an estimate from survey company W3Techs.
NGINX has been a big supporter of SPDY and HTTP/2, not surprising given that the company’s namesake server software was designed for high-traffic websites.
Approximately 88 percent of sites that offer SPDY do so with NGINX, according to W3Techs.
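For NGINX builds of the era compiled with the SPDY module, enabling the protocol was a one-line change to the listen directive. A minimal sketch (the server name and certificate paths are placeholders):

```nginx
server {
    # "spdy" here was later replaced by "http2" once the final spec landed
    listen 443 ssl spdy;
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;
}
```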
Yet NGINX has characterized SPDY to its users as “experimental,” Garrett said, largely because the technology is still evolving and hasn’t been nailed down yet by the formal specification.
“We’re really looking forward to when the protocol is rubber-stamped,” Garrett said. Once HTTP/2 is approved, “we can recommend it to our customers with confidence.”
Tuesday, September 09. 2014
Via Tech Crunch
Mimosa Networks is finally ready to help make gigabit wireless technology a reality. The company, which recently came out of stealth, is launching a series of products that it hopes to sell to a new generation of wireless ISPs.
Its wireless Internet products include its new B5 Backhaul radio hardware and its Mimosa Cloud Services planning and analytics offering. By using the two in combination, new ISPs can build high-capacity wireless networks at a fraction of the cost it would take to lay a fiber network.
The B5 backhaul radio is a piece of hardware that uses multiple-input and multiple-output (MIMO) technology to provide up to 16 streams and 4 Gbps of output when multiple radios are using the same channel.
With a single B5 radio, customers can provide a gigabit of throughput for up to eight or nine miles, according to co-founder and chief product officer Jaime Fink. The longer the distance, the less bandwidth is available, of course. But Fink said the company is running one link of about 60 miles that still gets several hundred megabits of throughput along the California coast.
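The article gives no link-budget figures, but the distance penalty Fink describes is consistent with free-space path loss, which grows about 6 dB per doubling of distance. A quick illustrative calculation (real links lose further capacity to interference, terrain, and Fresnel-zone obstruction):

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# An 8-to-9-mile (~14 km) link versus a 60-mile (~97 km) link at 5.8 GHz:
for km in (14, 97):
    print(f"{km:3d} km: {fspl_db(km, 5800):.1f} dB of path loss")
```

The roughly 17 dB of extra loss over the longer hop comes straight out of the signal-to-noise ratio, which is why a 60-mile link delivers hundreds of megabits rather than a full gigabit.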
Not only does the product offer high data speeds on 5 GHz wireless spectrum, but it also makes that spectrum more efficient. It uses spectrum analysis and load balancing to optimize bandwidth, frequency, and power use based on historical and real-time data to adapt to wireless interference and other issues.
In addition to the hardware, Mimosa’s cloud services will help customers plan and deploy networks, with analytics tools that gauge how powerfully and efficiently their existing equipment is running. That will enable new ISPs to more effectively determine where to place new hardware to link up with other base stations.
The product also is designed to support networks as they grow, and it makes sure that ISPs can spot problems as they happen. The Cloud Services product is available now, but the backhaul radio will be available for about $900 later this fall.
Mimosa is launching these first products after raising $38 million in funding from New Enterprise Associates and Oak Investment Partners. That includes a recently closed $20 million Series C round led by returning investor NEA.
The company was founded by Brian Hinman, who had previously co-founded PictureTel, Polycom, and 2Wire, along with Fink, who previously served as CTO of 2Wire and SVP of technology for Pace after it acquired the home networking equipment company. Now they’re hoping to make wireless gigabit speeds available for new ISPs.
Monday, September 08. 2014
After Leap Motion's somewhat disappointing debut, you'd be forgiven for wanting to wave off the idea of third-party gesture control peripherals. But wait! Unlike Leap, Reactiv isn't trying to revolutionize human-computer interactions with its Touch+ controller—there's no wizard-like finger waggling or Minority Report-style hand waving here. Instead, the Touch+'s dual cameras turn any surface into a multi-touch input device.
Touch+ was born out of Haptix, a Kickstarter project that raised more than $180,000 from backers. Over the past year, Reactiv refined the Haptix vision to eventually become Touch+.
While Touch+ certainly won't be for everyone, Reactiv is positioning the multitouch PC controller as more than a mere tool for games and art projects. A demo video shows the device being used in an office meeting, acting as cursor control for a businessman's laptop before being repositioned on the fly to point at a projected display, instantly allowing the man to reach up with his hands to circle objects on the image.
What's more, PCWorld sister site CITEWorld managed to snag a live demo with Touch+, and the founders focused on the potential productivity uses of the device: Enabling mouse-free control of Excel and PowerPoint, naturally manipulating pictures in Photoshop, creating designs in CAD, the aforementioned presentation capabilities, and so forth.
The Touch+ works with Windows PCs or Macs, connecting via USB 2.0 or 3.0. If you choose to point it at your keyboard, the device will temporarily suspend its multitouch capabilities while you type, then resume when your fingers stop bobbing up and down.
Sound interesting? An alpha version of Touch+ is available now on the Reactiv website for $75. Until we get our own hands on the device, however, we won't know for sure how it stacks up to competitors like the Leap Motion.
Friday, September 05. 2014
The Internet of Things is still too hard. Even some of its biggest backers say so.
For all the long-term optimism at the M2M Evolution conference this week in Las Vegas, many vendors and analysts are starkly realistic about how far the vaunted set of technologies for connected objects still has to go. IoT is already saving money for some enterprises and boosting revenue for others, but it hasn’t hit the mainstream yet. That’s partly because it’s too complicated to deploy, some say.
For now, implementations, market growth and standards are mostly concentrated in specific sectors, according to several participants at the conference who would love to see IoT span the world.
Cisco Systems has estimated IoT will generate $14.4 trillion in economic value between last year and 2022. But Kevin Shatzkamer, a distinguished systems architect at Cisco, called IoT a misnomer, for now.
“I think we’re pretty far from envisioning this as an Internet,” Shatzkamer said. “Today, what we have is lots of sets of intranets.” Within enterprises, it’s mostly individual business units deploying IoT, in a pattern that echoes the adoption of cloud computing, he said.
In the past, most of the networked machines in factories, energy grids and other settings have been linked using custom-built, often local networks based on proprietary technologies. IoT links those connected machines to the Internet and lets organizations combine those data streams with others. It’s also expected to foster an industry that’s more like the Internet, with horizontal layers of technology and multivendor ecosystems of products.
What’s holding back the Internet of Things
The good news is that cities, utilities, and companies are getting more familiar with IoT and looking to use it. The less good news is that they’re talking about limited IoT rollouts for specific purposes.
“You can’t sell a platform, because a platform doesn’t solve a problem. A vertical solution solves a problem,” Shatzkamer said. “We’re stuck at this impasse of working toward the horizontal while building the vertical.”
“We’re no longer able to just go in and sort of bluff our way through a technology discussion of what’s possible,” said Rick Lisa, Intel’s group sales director for Global M2M. “They want to know what you can do for me today that solves a problem.”
One of the most cited examples of IoT’s potential is the so-called connected city, where myriad sensors and cameras will track the movement of people and resources and generate data to make everything run more efficiently and openly. But now, the key is to get one municipal project up and running to prove it can be done, Lisa said.
The conference drew stories of many successful projects: A system for tracking construction gear has caught numerous workers on camera walking off with equipment and led to prosecutions. Sensors in taxis detect unsafe driving maneuvers and alert the driver with a tone and a seat vibration, then report it to the taxi company. Major League Baseball is collecting gigabytes of data about every moment in a game, providing more information for fans and teams.
But for the mass market of small and medium-size enterprises that don’t have the resources to do a lot of custom development, even targeted IoT rollouts are too daunting, said analyst James Brehm, founder of James Brehm & Associates.
There are software platforms that pave over some of the complexity of making various devices and applications talk to each other, such as the Omega DevCloud, which RacoWireless introduced on Tuesday. The DevCloud lets developers write applications in the language they know and make those apps work on almost any type of device in the field, RacoWireless said. Thingworx, Xively and Gemalto also offer software platforms that do some of the work for users. But the various platforms on offer from IoT specialist companies are still too fragmented for most customers, Brehm said. There are too many types of platforms—for device activation, device management, application development, and more. “The solutions are too complex.”
He thinks that’s holding back the industry’s growth. Though the past few years have seen rapid adoption in certain industries in certain countries, sometimes promoted by governments—energy in the U.K., transportation in Brazil, security cameras in China—the IoT industry as a whole is only growing by about 35 percent per year, Brehm estimates. That’s a healthy pace, but not the steep “hockey stick” growth that has made other Internet-driven technologies ubiquitous, he said.
What lies ahead
Brehm thinks IoT is in a period where customers are waiting for more complete toolkits to implement it—essentially off-the-shelf products—and the industry hasn’t consolidated enough to deliver them. More companies have to merge, and it’s not clear when that will happen, he said.
“I thought we’d be out of it by now,” Brehm said. What’s hard about consolidation is partly what’s hard about adoption, in that IoT is a complex set of technologies, he said.
And don’t count on industry standards to simplify everything. IoT’s scope is so broad that there’s no way one standard could define any part of it, analysts said. The industry is evolving too quickly for traditional standards processes, which are often mired in industry politics, to keep up, according to Andy Castonguay, an analyst at IoT research firm Machina.
Instead, individual industries will set their own standards while software platforms such as Omega DevCloud help to solve the broader fragmentation, Castonguay believes. Even the Industrial Internet Consortium, formed earlier this year to bring some coherence to IoT for conservative industries such as energy and aviation, plans to work with existing standards from specific industries rather than write its own.
Ryan Martin, an analyst at 451 Research, compared IoT standards to human languages.
“I’d be hard pressed to say we are going to have one universal language that everyone in the world can speak,” and even if there were one, most people would also speak a more local language, Martin said.
Via The Register
Google is attempting to shunt users away from old browsers by intentionally serving up a stale version of the ad giant's search homepage to those holdouts.
The tactic appears to be falling in line with Mountain View's policy on its other Google properties, such as Gmail, which the company declines to fully support on aged browsers.
However, it was claimed on Friday in a Google discussion thread that the multinational had unceremoniously dumped a past-its-sell-by-date version of the Larry Page-run firm's search homepage on users who have declined to upgrade their Opera and Safari browsers.
A user with the moniker DJSigma first flagged the problem on the forum.
In a later post, DJSigma added that there seemed to be a glitch on Google search.
The Opera user then said that the problem appeared to be "intermittent". Others flagged up similar issues on the Google forum and said they hoped it was just a bug.
Someone going by the name MadFranko008 chimed in with a similar report.
Some Safari 5.1.x and Opera 12.x netizens were able to fudge the system by customising their browser's user agent. But others continued to complain about Google's "clunky", old search homepage.
A Google employee, meanwhile, said that the tactic was deliberate in a move to flush out stick-in-the-mud types who insisted on using older versions of browsers.
"Thanks for the reports. I want to assure you this isn't a bug, it's working as intended," said a Google worker going by the name nealem.
In a separate thread, as spotted by a Reg reader who brought this sorry affair to our attention, user MadFranko008 was able to show that even modern browsers - including the current version of Chrome - were apparently spitting out glitches on Apple Mac computers.
Google then appeared to have resolved the search "bug" spotted in Chrome.
Monday, August 11. 2014
Aiming to do for Machine Learning what MySQL did for database servers, U.S. and UK-based PredictionIO has raised $2.5 million in seed funding from a raft of investors including Azure Capital, QuestVP, CrunchFund (of which TechCrunch founder Mike Arrington is a Partner), Stanford University's StartX Fund, France-based Kima Ventures, IronFire, Sood Ventures and XG Ventures. The additional capital will be used to further develop its open source Machine Learning server, which significantly lowers the barriers for developers to build more intelligent products, such as recommendation or prediction engines, without having to reinvent the wheel.
Being an open source company — after pivoting from offering a “user behavior prediction-as-a-service” under its old TappingStone product name — PredictionIO plans to generate revenue in the same way MySQL and other open source products do. “We will offer an Enterprise support license and, probably, an enterprise edition with more advanced features,” co-founder and CEO Simon Chan tells me.
The problem PredictionIO is setting out to solve is that building Machine Learning into products is expensive and time-consuming — and in some instances is only really within the reach of major and heavily-funded tech companies, such as Google or Amazon, who can afford a large team of PhDs/data scientists. By utilising the startup’s open source Machine Learning server, startups or larger enterprises no longer need to start from scratch, while also retaining control over the source code and the way in which PredictionIO integrates with their existing wares.
In fact, the degree of flexibility and reassurance an open source product offers is the very reason why PredictionIO pivoted away from a SaaS model and chose to open up its codebase. It did so within a couple of months of launching its original TappingStone product. Fail fast, as they say.
“We changed from TappingStone (Machine Learning as a Service) to PredictionIO (open source server) in the first 2 months once we built the first prototype,” says Chan. “As developers ourselves, we realise that Machine Learning is useful only if it’s customizable to each unique application. Therefore, we decided to open source the whole product.”
The pivot appears to be working, too, and not just validated by today’s funding. To date, Chan says its open source Machine Learning server is powering thousands of applications with 4000+ developers engaged with the project. “Unlike other data science tools that focus on solving data researchers’ problems, PredictionIO is built for every programmer,” he adds.
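PredictionIO's actual engine API isn't shown here; purely as an illustration of the kind of logic a recommendation engine packages up so that "every programmer" doesn't have to rewrite it, here is a toy co-occurrence recommender (all users and items below are invented):

```python
from collections import defaultdict

def recommend(interactions, user, top_n=3):
    """Recommend items via simple co-occurrence counting.

    interactions: dict mapping user -> set of item ids.
    Items that other users viewed alongside this user's items
    score higher; items the user already has are excluded.
    """
    seen = interactions[user]
    scores = defaultdict(int)
    for other, items in interactions.items():
        if other == user:
            continue
        overlap = len(seen & items)
        if overlap == 0:
            continue
        for item in items - seen:
            scores[item] += overlap
    # Rank by score (descending), breaking ties alphabetically.
    ranked = sorted(scores, key=lambda i: (-scores[i], i))
    return ranked[:top_n]

data = {
    "alice": {"a", "b", "c"},
    "bob":   {"a", "b", "d"},
    "carol": {"b", "e"},
}
print(recommend(data, "alice"))  # ['d', 'e']
```

A production engine layers model training, serving, and evaluation on top of logic like this, which is exactly the plumbing PredictionIO aims to make reusable.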
Other competitors Chan cites include closed "black box" MLaaS services and software, such as Google Prediction API, Wise.io, BigML, and Skytree.
Examples of who is currently using PredictionIO include Le Tote, a clothing subscription/rental service that is using PredictionIO to predict customers’ fashion preferences, and PerkHub, which is using PredictionIO to personalize product recommendations in the weekly ‘group buying’ emails they send out.
Friday, August 08. 2014
Can a silicon chip act like a human brain? Researchers at IBM say they’ve built one that mimics the brain better than any that has come before it.
In a paper published in the journal Science today, IBM said it used conventional silicon manufacturing techniques to create what it calls a neurosynaptic processor that could rival a traditional supercomputer by handling highly complex computations while consuming no more power than that supplied by a typical hearing aid battery.
The chip is also one of the biggest ever built, boasting some 5.4 billion transistors, which is about a billion more than the number of transistors on an Intel Xeon chip.
To do this, researchers designed the chip with a mesh network of 4,096 neurosynaptic cores. Each core contains elements that handle computing, memory and communicating with other parts of the chip. Each core operates in parallel with the others.
Multiple chips can be connected together seamlessly, IBM says, and they could be used to create a neurosynaptic supercomputer. The company even went so far as to build one using 16 of the chips.
The new design could shake up the conventional approach to computing, which has been more or less unchanged since the 1940s and is known as the Von Neumann architecture. In English, a Von Neumann computer — you’re using one right now — stores the data for a program in memory.
This chip, which has been dubbed TrueNorth, relies on its network of neurons to detect and recognize patterns in much the same way the human brain does. If you’ve read your Ray Kurzweil, this is one way to understand how the brain works — recognizing patterns. Put simply, once your brain knows the patterns associated with different parts of letters, it can string them together in order to recognize words and sentences. If Kurzweil is correct, you’re doing this right now, using some 300 million pattern-recognizing circuits in your brain’s neocortex.
The chip would seem to represent a breakthrough in one of the long-term problems in computing: Computers are really good at doing math and reading words, but discerning and understanding meaning and context, or recognizing and classifying objects — things that are easy for humans — have been difficult for traditional computers. One way IBM tested the chip was to see if it could detect people, cars, trucks and buses in video footage and correctly recognize them. It worked.
In terms of complexity, the TrueNorth chip has a million neurons, which is about the same number as in the brain of a common honeybee. A typical human brain averages 100 billion. But given time, the technology could be used to build computers that can not only see and hear, but understand what is going on around them.
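The million-neuron figure lines up with the core count: IBM's Science paper specifies 256 neurons per core (a detail not quoted above), so:

```python
cores = 4096
neurons_per_core = 256  # per IBM's paper; not stated in the article itself

total_neurons = cores * neurons_per_core
print(total_neurons)  # 1,048,576 -- "a million neurons"
```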
Currently, the chip is capable of 46 billion synaptic operations per second per watt, or SOPS. That’s a tricky apples-to-oranges comparison to a traditional supercomputer, where performance is measured in floating point operations per second, or FLOPS. But the most energy-efficient supercomputer now running tops out at about 4.5 billion FLOPS per watt.
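Synaptic operations and floating-point operations are not directly comparable work, but taking the article's per-watt numbers at face value gives a rough sense of the efficiency gap:

```python
truenorth_sops_per_watt = 46e9        # 46 billion synaptic ops/sec per watt
supercomputer_flops_per_watt = 4.5e9  # most efficient supercomputer cited

ratio = truenorth_sops_per_watt / supercomputer_flops_per_watt
print(f"~{ratio:.1f}x more operations per watt")
```

On that admittedly loose basis, TrueNorth squeezes roughly ten times as many operations out of each watt.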
Down the road, the researchers say in their paper, they foresee TrueNorth-like chips being combined with traditional systems, each solving problems it is best suited to handle. But it also means that systems that in some ways will rival the capabilities of current supercomputers will fit into a machine the size of your smartphone, while consuming even less energy.