Wednesday, October 15. 2014
Via NBC News
Two Silicon Valley giants now offer women a game-changing perk: Apple and Facebook will pay for employees to freeze their eggs.
Facebook recently began covering egg freezing, and Apple will start in January, spokespeople for the companies told NBC News. The firms appear to be the first major employers to offer this coverage for non-medical reasons.
“Having a high-powered career and children is still a very hard thing to do,” said Brigitte Adams, an egg-freezing advocate and founder of the patient forum Eggsurance.com. By offering this benefit, companies are investing in women, she said, and supporting them in carving out the lives they want.
When successful, egg freezing allows women to put their fertility on ice, so to speak, until they’re ready to become parents. But the procedure comes at a steep price: Costs typically add up to at least $10,000 for every round, plus $500 or more annually for storage.
With notoriously male-dominated Silicon Valley firms competing to attract top female talent, the coverage may give Apple and Facebook a leg up among the many women who devote key childbearing years to building careers. Covering egg freezing can be viewed as a type of “payback” for women’s commitment, said Philip Chenette, a fertility specialist in San Francisco.
The companies offer egg-freezing coverage under slightly different terms: Apple covers costs under its fertility benefit, and Facebook under its surrogacy benefit, both up to $20,000. Women at Facebook began taking advantage of the coverage this year.
While techniques and success rates are improving, there's no guarantee the procedure will lead to a baby down the road. The American Society for Reproductive Medicine doesn’t keep comprehensive stats on babies born from frozen eggs – in fact, the group cautions against relying on egg freezing to extend fertility – though experts say the earlier a woman freezes her eggs, the greater her chances of success. Doctors often recommend women freeze at least 20 eggs, which can require two costly rounds.
But in the two years since the ASRM lifted the “experimental” label from egg freezing, experts say they’ve seen a surge in women seeking out the procedure. Fertility doctors in New York and San Francisco report that egg-freezing cases have nearly doubled over the past year.
For many women, taking the step to boost their chances of having kids in the future is worth the uncertainty. A majority of patients who froze their eggs reported feeling “empowered” in a 2013 survey published in the journal Fertility and Sterility. Women who know they want kids someday “can go on with their lives and know that they've done everything that they can,” said Chenette.
Egg freezing has even been described as a key to “leveling the playing field” between men and women: Without the crushing pressure of a ticking biological clock, women have more freedom in making life choices, say advocates. A Bloomberg Businessweek magazine cover story earlier this year asked: Will freezing your eggs free your career? “Not since the birth control pill has a medical technology had such potential to change family and career planning,” wrote author Emma Rosenblum.
News of the firms’ egg-freezing coverage comes in the midst of what’s been described as a Silicon Valley “perks arms race.” It’s only the latest in a generous list of family and wellness-oriented health benefits from Apple and Facebook (whose COO, of course, is feminist change agent and “Lean In” author Sheryl Sandberg). Both companies offer benefits for fertility treatment and adoption. Facebook famously gives new parents $4,000 in so-called “baby cash” to use however they’d like.
Silicon Valley firms are hardly alone in offering generous benefits to attract and keep talent, but they appear to be leading the way with egg freezing. Advocates say they’ve heard murmurs of large law, consulting, and finance firms helping to cover the costs, but no companies are broadcasting this support. “It’s very forward-looking,” said Eggsurance’s Adams.
Companies may be concerned about the public relations implications of the benefit – in the most cynical light, egg-freezing coverage could be viewed as a ploy to entice women to sell their souls to their employer, sacrificing childbearing years for the promise of promotion.
“Would potential female associates welcome this option knowing that they can work hard early on and still reproduce, if they so desire, later on?” asked Glenn Cohen, co-director of Harvard Law School’s Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics, in a blog post last year. “Or would they take this as a signal that the firm thinks that working there as an associate and pregnancy are incompatible?”
But the more likely explanation for lack of coverage is simply that egg freezing is still new, and conversation around the procedure has only recently gone mainstream. “I think we've reached a tipping point,” said Adams. “When I used to say ‘egg freezing,’ people would stare at me with their mouths open.” Now? Most people know someone who’s done or considered it.
Many large companies adopt new benefits in response to employee demand – firms have recently started to offer benefits for transgender employees, for example. As women’s awareness of egg freezing grows, more employers may jump on the bandwagon.
“The attitude toward egg freezing is very different,” and more positive, than just a few years ago, said Christy Jones, founder of Extend Fertility, a company that offers and promotes egg freezing across the country. Women are making the proactive decision to freeze their eggs at a younger age, and the choice is "more one of empowerment than 'this is my last chance.'"
EggBanxx, the first service to help women finance egg freezing, has recently begun to capitalize on this shift by hosting “egg-freezing parties,” where experts educate guests. “Maybe you haven’t found Mr. Right just yet or perhaps you would like more time to focus on your education or career,” the company website says. “Whatever the reasons, freezing your eggs now will allow you to tackle conception later.”
‘Back to work the next day’
Women generally need about two weeks of flexibility for one cycle of egg freezing. After about ten days of fertility drug injections, patients undergo a relatively short outpatient procedure – and they’re “back to work the next day,” said Lynn Westphal, associate professor of obstetrics and gynecology at Stanford University Medical Center. From there, eggs are frozen and stored until a woman is ready to use them, at which point she’ll begin the process of in vitro fertilization.
Once a woman freezes her eggs, she may never return to use them, fertility doctors report. Some women get pregnant the old-fashioned way, others make different life plans. Westphal compares egg freezing to car insurance: You hope you don’t have to use what you’ve put away, but if you find yourself in a situation where you need to, you’re glad to have the protection.
Will the perk pay off for companies? The benefit will likely encourage women to stay with their employer longer, cutting down on recruiting and hiring costs. And practically speaking, when women freeze their eggs early, firms may save on pregnancy costs in the long run, said Westphal. A woman could avoid paying to use a donor egg down the road, for example, or undergoing more intensive fertility treatments when she’s ready to have a baby.
But the emotional and cultural payoff may be more valuable, said Jones: Offering this benefit “can help women be more productive human beings.”
Friday, October 10. 2014
Robert Alexander, NASA/University of Michigan
Robert Alexander spends parts of his day listening to a soft white noise, similar to water falling on the outside of a house during a rainstorm. Every once in a while, he hears an anomalous sound and marks the corresponding time in the audio file. Alexander is listening to the sun’s magnetic field and marking potential areas of interest. After only ten minutes, he has listened to one month’s worth of data.
Alexander is a PhD candidate in design science at the University of Michigan. He is a sonification specialist who trains heliophysicists at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, to pick out subtle differences by listening to satellite data instead of looking at it.
Sonification is the process of displaying any type of data or measurement as sound, such as the beep from a heart rate monitor measuring a person’s pulse, a doorbell ringing every time a person enters a room, or, in this case, explosions indicating large events occurring on the sun. In certain cases, scientists can use their ears instead of their eyes to process data more rapidly, and to detect more details, than through visual analysis. A paper on the effectiveness of sonification in analyzing data from NASA satellites was published in the July issue of the Journal of Geophysical Research: Space Physics.
“NASA produces a vast amount of data from its satellites. Exploring such large quantities of data can be difficult,” said Alexander. "Sonification offers a promising supplement to standard visual analysis techniques.”
LISTENING TO SPACE
Alexander's focus is on improving and quantifying the success of these techniques. The team created audio clips from the data and shared them with researchers. While the original data from the Wind satellite was not in an audio file format, the satellite records electromagnetic fluctuations that can be converted directly to audio samples. Alexander and his team used custom-written computer algorithms to convert those electromagnetic frequencies into sound. Listen to the following multimedia clips to hear the sounds of space.
This clip has three distinct sections: a warble noise leading up to a short knock at a slightly higher frequency, followed by a quieter segment containing broadband noise that is both rising and hissing. This clip, gathered from NASA's Wind satellite on Nov. 20, 2007, contains a reverse shock. This type of event occurs when a fast stream of plasma – that is, the superhot, charged gas that fills space – is followed by a slower one, resulting in a shock wave that travels toward the sun.
This audio clip is the previous clip played backwards. Here, trained listeners will notice that the reverse shock event played backwards sounds similar to a forward shock event.
This clip contains audified data from the joint European Space Agency (ESA) and NASA Ulysses satellite gathered on October 26, 1995. The participant in Alexander's study was able to detect artificial noise produced from the instrument, which he did not notice in previous visual analysis. Here, the artificial noise can be heard as a drifting tone.
PROCESSING AN OVERWHELMING AMOUNT OF DATA
Alexander's focus is on using clips like these to quantify and improve sonification techniques in order to speed up access to the incredible amounts of data provided by space satellites. For example, he works with space scientist Robert Wicks at NASA Goddard to analyze the high-resolution observations of the sun. Wicks studies the constant stream of particles from our closest star, known as the solar wind – a wind that can cause space weather effects that interfere with human technology near Earth. The team uses data from NASA's Wind satellite. Launched in 1994, Wind orbits a point in between Earth and the sun, constantly observing the temperature, density, speed and the magnetic field of the solar wind as it rushes past.
Wicks analyzes changes in Wind's magnetic field data. Such data not only carries information about the solar wind, but understanding such changes better might help give a forewarning of problematic space weather that can affect satellites near Earth. The Wind satellite also provides an abundance of magnetometer data points, as the satellite measures the magnetic field 11 times per second. Such incredible amounts of information are beneficial -- but only if all the data can be analyzed.
“There is a very long, accurate time series of data, which gives a fantastic view of solar wind changes and what’s going on at small scales,” said Wicks. “There's a rich diversity of physical processes going on, but it is more data than I can easily look through.”
The traditional method of processing the data involves making an educated guess about where a certain event in the solar wind -- such as subtle wave movements made by hot plasma -- might show up and then visually searching, which can be very time-consuming. Instead, Alexander listens to sped-up versions of the Wind data and compiles a list of noteworthy regions that scientists like Wicks can return to and analyze further, expediting the process.
In one example, Alexander’s team analyzed data points from the Wind satellite from November 2007, condensing three hours of real-time recording into a three-second audio clip. To an untrained ear, the data sounds like a microphone recording on a windy day. When Alexander presented these sounds to a researcher, however, the researcher could identify a distinct chirping at the beginning of the audio clip followed by a percussive event, culminating in a loud boom.
By listening only to the auditory representation of the data, the study’s participant was able to correctly predict what it would look like on a more traditional graph. He correctly deduced that the chirp would show up as a particular kind of peak on a spectrogram, a graph that shows the different frequencies present in the waves that Wind recorded. The researcher also correctly predicted that the corresponding spectrogram representation of the percussive event would display a steep slope.
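The kind of spectrogram described here can be computed by sliding a window along the signal and taking the FFT magnitude of each slice. The sketch below is illustrative only; the `spectrogram` function, window size, and hop size are assumptions, not the tools or settings used in the study.

```python
import numpy as np

def spectrogram(x, window=256, hop=128):
    """Magnitude spectrogram: slide a window along the signal and take
    the FFT magnitude of each slice. Rows are time frames, columns are
    frequency bins. A chirp traces a drifting peak across columns; a
    percussive event puts energy into every bin of a single frame,
    which reads as a steep vertical slope."""
    x = np.asarray(x, dtype=float)
    frames = []
    for start in range(0, len(x) - window + 1, hop):
        seg = x[start:start + window] * np.hanning(window)  # taper edges
        frames.append(np.abs(np.fft.rfft(seg)))
    return np.array(frames)
```

Feeding it a pure tone produces a single bright column that stays put; sweeping the tone's frequency over time reproduces the drifting peak a trained listener hears as a chirp.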
CONVERTING DATA INTO SOUND
Alexander translates the data into audio files through a process known as audification, a specific type of sonification that involves directly listening to raw, unedited satellite data. Translating this data into audio can be likened to recording a person singing into a microphone at a studio with reel-to-reel tape. When a person sings into a microphone, it detects changes in pressure and converts them into an electrical signal, which is stored as changes in magnetic intensity on the reel tape. Magnetometers on the Wind satellite measure changes in the magnetic field directly, creating a similar kind of electrical signal. Alexander writes a computer program to translate this data into an audio file.
“The tones come out of the data naturally. If there is a frequency embedded in the data, then that frequency becomes audible as a sound,” said Alexander.
Listening to data is not new. In a 1982 study, researchers used audification to identify micrometeoroids, or small ring particles, hitting the Voyager 2 spacecraft as it traversed Saturn's rings. The impacts were visually obscured in the data but could easily be heard – sounding like intense impulses, almost like a hailstorm.
However, the method is not often used in the science community because it requires a certain level of familiarity with the sounds. For instance, the listener needs to have an understanding of what typical solar wind turbulence sounds like in order to identify atypical events. “It’s about using your ear to pick out subtle differences,” Alexander said.
Alexander initially spent several months with Wicks teaching him how to listen to magnetometer data and highlighting certain elements. But the hard work is paying off as analysis gets faster and easier, leading to new assessments of the data.
“I’ve never listened to the data before,” said Wicks. “It has definitely opened up a different perspective.”
Gartner sees things like robots and drones replacing a third of all workers by 2025, and whether you want to believe it is entirely your business.
This is Gartner being provocative, as it typically is, at the start of its major U.S. conference, the Symposium/ITxpo.
Take drones, for instance.
"One day, a drone may be your eyes and ears," said Peter Sondergaard, Gartner's research director. In five years, drones will be a standard part of operations in many industries, used in agriculture, geographical surveys and oil and gas pipeline inspections.
"Drones are just one of many kinds of emerging technologies that extend well beyond the traditional information technology world -- these are smart machines," said Sondergaard.
Smart machines are an emerging "super class" of technologies that perform a wide variety of work, both the physical and the intellectual kind, said Sondergaard. Machines, for instance, have been grading multiple-choice tests for years, but now they are grading essays and unstructured text.
This cognitive capability in software will extend to other areas, including financial analysis, medical diagnostics and data analytic jobs of all sorts, says Gartner.
"Knowledge work will be automated," said Sondergaard, as will physical jobs with the arrival of smart robots.
"Gartner predicts one in three jobs will be converted to software, robots and smart machines by 2025," said Sondergaard. "New digital businesses require less labor; machines will make sense of data faster than humans can."
Among those listening in this audience was Lawrence Strohmaier, the CIO of Nuverra Environmental Solutions, who said Gartner's prediction is similar to what happened in other eras of technological advance.
"The shift is from doing to implementing, so the doers go away but someone still has to implement," said Strohmaier. It is a shift, although a slow one, to new types of jobs, no different from what happened in the machine age, he said.
The forecast of the impact of technology on jobs was also a warning to the CIOs and IT managers at this conference to consider how they will adapt.
"The door is open for the CIO and the IT organization to be a major player in digital leadership," said David Aron, a Gartner analyst.
CIOs have been steadily gaining authority, and 41% of CIOs now report to the CEO, a record level, said Aron. That's based on data from 2,810 CIOs globally.
To be effective leaders, Gartner argues, CIOs must shift from measuring things like cost to leading with vision and describing what their business or government agency must do to take advantage of smarter technologies.
Thursday, October 02. 2014
A startup called Algorithmia has a new twist on online matchmaking. Its website is a place for businesses with piles of data to find researchers with a dreamboat algorithm that could extract insights (and profits) from it all.
The aim is to make better use of the many algorithms that are developed in academia but then languish after being published in research papers, says cofounder Diego Oppenheimer. Many have the potential to help companies sort through and make sense of the data they collect from customers or on the Web at large. If Algorithmia makes a fruitful match, a researcher is paid a fee for the algorithm’s use, and the matchmaker takes a small cut. The site is currently in a private beta test with users including academics, students, and some businesses, but Oppenheimer says it already has some paying customers and should open to more users in a public test by the end of the year.
“Algorithms solve a problem. So when you have a collection of algorithms, you essentially have a collection of problem-solving things,” says Oppenheimer, who previously worked on data-analysis features for the Excel team at Microsoft.
Oppenheimer and cofounder Kenny Daniel, a former graduate student at USC who studied artificial intelligence, began working on the site full time late last year. The company raised $2.4 million in seed funding earlier this month from Madrona Venture Group and others, including angel investor Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence and a computer science professor at the University of Washington.
Etzioni says that many good ideas are essentially wasted in papers presented at computer science conferences and in journals. “Most of them have an algorithm and software associated with them, and the problem is very few people will find them and almost nobody will use them,” he says.
One reason is that academic papers are written for other academics, so people from industry can’t easily discover their ideas, says Etzioni. Even if a company does find an idea it likes, it takes time and money to interpret the academic write-up and turn it into something testable.
To change this, Algorithmia requires algorithms submitted to its site to use a standardized application programming interface that makes them easier to use and compare. Oppenheimer says some of the algorithms currently looking for love could be used for machine learning, extracting meaning from text, and planning routes within things like maps and video games.
Early users of the site have found algorithms to do jobs such as extracting data from receipts so they can be automatically categorized. Over time the company expects around 10 percent of users to contribute their own algorithms. Developers can decide whether they want to offer their algorithms free or set a price.
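The value of a standardized interface can be sketched with a toy marketplace listing in which every algorithm is invoked the same way and calls are metered for royalties. This is purely illustrative; the `AlgorithmEntry` class and its fields are invented for the example and are not Algorithmia's actual API.

```python
class AlgorithmEntry:
    """Toy model of a marketplace listing behind one uniform interface.

    Whatever an algorithm does internally, callers invoke it through
    the same apply() signature, so the platform can swap one algorithm
    for another, compare them, and meter usage for author royalties."""

    def __init__(self, name, fn, price_per_call=0.0):
        self.name = name
        self.fn = fn                     # the wrapped algorithm
        self.price_per_call = price_per_call
        self.calls = 0                   # usage counter shown on the site

    def apply(self, payload):
        """Run the algorithm on one input and record the billable call."""
        self.calls += 1
        return self.fn(payload)

    def royalties(self):
        """Fee owed to the algorithm's author so far."""
        return self.calls * self.price_per_call
```

Because every listing exposes the same `apply()` call, a company can trial several competing algorithms against the same data without reading any academic write-ups, which is the friction Etzioni describes above.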
All algorithms on Algorithmia’s platform are live, Oppenheimer says, so users can immediately use them, see results, and try out other algorithms at the same time.
The site lets users vote and comment on the utility of different algorithms and shows how many times each has been used. Algorithmia encourages developers to let others see the code behind their algorithms so they can spot errors or ways to improve on their efficiency.
One potential challenge is that it’s not always clear who owns the intellectual property for an algorithm developed by a professor or graduate student at a university. Oppenheimer says it varies from school to school, though he notes that several make theirs open source. Algorithmia itself takes no ownership stake in the algorithms posted on the site.
Eventually, Etzioni believes, Algorithmia can go further than just matching up buyers and sellers as its collection of algorithms grows. He envisions it leading to a new, faster way to compose software, in which developers join together many different algorithms from the selection on offer.
Wednesday, September 10. 2014
Project «SurPRISE – Surveillance, Privacy and Security: A large scale participatory assessment of criteria and factors determining acceptability and acceptance of security technologies in Europe»
About the Project
SurPRISE re-examines the relationship between security and privacy, which is commonly positioned as a ‘trade-off’. Where security measures and technologies involve the collection of information about citizens, questions arise as to whether and to what extent their privacy has been infringed. This infringement of individual privacy is sometimes seen as an acceptable cost of enhanced security. Similarly, it is assumed that citizens are willing to trade off their privacy for enhanced personal security in different settings. This common understanding of the security-privacy relationship, both at state and citizen level, has informed policymakers, legislative developments and best practice guidelines concerning security developments across the EU.
However, an emergent body of work questions the validity of the security-privacy trade-off. This work suggests that it has over-simplified how the impact of security measures on citizens is considered in current security policies and practices. Thus, the more complex issues underlying privacy concerns and public skepticism towards surveillance-oriented security technologies may not be apparent to legal and technological experts. In response to these developments, this project will consult with citizens from several EU member and associated states on the question of the security-privacy trade-off as they evaluate different security technologies and measures.
When it comes to speeding up Web traffic over the Internet, sometimes too much of a good thing may not be such a good thing at all.
The Internet Engineering Task Force is putting the final touches on HTTP/2, the second version of the Hypertext Transport Protocol (HTTP). The working group has issued a last call draft, urging interested parties to voice concerns before it becomes a full Internet specification.
Not everyone is completely satisfied with the protocol however.
“There is a lot of good in this proposed standard, but I have some deep reservations about some bad and ugly aspects of the protocol,” wrote Greg Wilkins, lead developer of the open source Jetty server software, noting his concerns in a blog item posted Monday.
Others, however, praise HTTP/2 and say it is long overdue.
“A lot of our users are experimenting with the protocol,” said Owen Garrett, head of products for server software provider NGINX. “The feedback is that generally, they have seen big performance benefits.”
First created by Web originator Tim Berners-Lee and associates, HTTP quite literally powers today’s Web, providing the language for a browser to request a Web page from a server.
SPDY is speediest
Version 2.0 of HTTP, based largely on the SPDY protocol developed by Google, promises to be a better fit for how people use the Web.
“The challenge with HTTP is that it is a fairly simple protocol, and it can be quite laborious to download all the resources required to render a Web page. SPDY addresses this issue,” Garrett said.
While the first generation of Web sites were largely simple and relatively small, static documents, the Web today is used as a platform for delivering applications and bandwidth intensive real-time multimedia content.
HTTP/2 speeds up basic HTTP in a number of ways. It allows servers to send all the different elements of a requested Web page at once, eliminating the serial sets of messages that must be sent back and forth under plain HTTP.
HTTP/2 also allows the server and the browser to compress HTTP headers, which cuts the amount of data that needs to be communicated between the two.
As a result, HTTP/2 “is really useful for organizations with sophisticated Web sites, particularly when its users are distributed globally or using slower networks—mobile users for instance,” Garrett said.
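The payoff of multiplexing can be seen in a toy model of how many sequential rounds of requests a page load needs. This is a deliberate simplification: the six-connection limit reflects a common browser default, and real page loads also depend on bandwidth, priorities, and dependencies between resources.

```python
import math

def request_rounds(n_resources, http2=False, parallel_connections=6):
    """Sequential request rounds needed to fetch a page's resources.

    Over plain HTTP, a browser issues one request at a time on each of
    a handful of parallel connections (six is a common default), so the
    requests are serialized into ceil(n / connections) rounds. Over
    HTTP/2, every request is multiplexed onto a single connection and
    sent at once, so one round suffices."""
    if http2:
        return 1
    return math.ceil(n_resources / parallel_connections)
```

For a page with 90 images, scripts, and stylesheets, the model gives 15 request rounds over plain HTTP versus 1 over HTTP/2, and each round saved matters most on the high-latency mobile links Garrett mentions.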
While enthusiastic about the protocol, Wilkins did have several areas of concern. For instance, HTTP/2 could make it more difficult to incorporate new Web protocols, most notably the communications protocol WebSocket, Wilkins asserted.
Wilkins noted that HTTP/2 blurs what were previously two distinct layers of HTTP – the semantic layer, which describes functionality, and the framing layer, which defines the structure of messages. The idea is that it is simpler to write protocols for a specification with discrete layers.
The protocol also makes it possible to hide content, including malicious content, within the headers, bypassing the notice of today’s firewalls, Wilkins said.
Servers need more power
HTTP/2 could also put a lot more strain on existing servers, Wilkins noted, given that they will now be fielding many more requests at once.
HTTP/2 “clients will send requests much more quickly, and it is quite likely you will see spikier traffic as a result,” Garrett agreed.
As a result, a Web application, if it doesn’t already rely on caching or load balancing, may have to do so with HTTP/2, Garrett said.
The SPDY protocol is already used by almost 1 percent of all websites, according to an estimate by the survey company W3Techs.
NGINX has been a big supporter of SPDY and HTTP/2, not surprising given that the company’s namesake server software was designed for high-traffic websites.
Approximately 88 percent of sites that offer SPDY do so with NGINX, according to W3Techs.
Yet NGINX has characterized SPDY to its users as “experimental,” Garrett said, largely because the technology is still evolving and hasn’t been nailed down yet by the formal specification.
“We’re really looking forward to when the protocol is rubber-stamped,” Garrett said. Once HTTP/2 is approved, “we can recommend it to our customers with confidence.”
Tuesday, September 09. 2014
Via Tech Crunch
Mimosa Networks is finally ready to help make gigabit wireless technology a reality. The company, which recently came out of stealth, is launching a series of products that it hopes to sell to a new generation of wireless ISPs.
Its wireless Internet products include its new B5 Backhaul radio hardware and its Mimosa Cloud Services planning and analytics offering. By using the two in combination, new ISPs can build high-capacity wireless networks at a fraction of the cost it would take to lay a fiber network.
The B5 backhaul radio is a piece of hardware that uses multiple-input and multiple-output (MIMO) technology to provide up to 16 streams and 4 Gbps of output when multiple radios are using the same channel.
With a single B5 radio, customers can provide a gigabit of throughput for up to eight or nine miles, according to co-founder and chief product officer Jaime Fink. The longer the distance, the less bandwidth is available, of course. But Fink said the company is running one link of about 60 miles along the California coast that still gets several hundred megabits of throughput.
Not only does the product offer high data speeds on 5 GHz wireless spectrum, but it also makes that spectrum more efficient. It uses spectrum analysis and load balancing to optimize bandwidth, frequency, and power use based on historical and real-time data to adapt to wireless interference and other issues.
In addition to the hardware, Mimosa’s cloud services will help customers plan and deploy networks, with analytics tools to determine how powerfully and efficiently their existing equipment is running. That will enable new ISPs to more effectively determine where to place new hardware to link up with other base stations.
The product also is designed to support networks as they grow, and it makes sure that ISPs can spot problems as they happen. The Cloud Services product is available now, but the backhaul radio will be available for about $900 later this fall.
Mimosa is launching these first products after raising $38 million in funding from New Enterprise Associates and Oak Investment Partners. That includes a $20 million Series C round led by return investor NEA that was recently closed.
The company was founded by Brian Hinman, who had previously co-founded PictureTel, Polycom, and 2Wire, along with Fink, who previously served as CTO of 2Wire and SVP of technology for Pace after it acquired the home networking equipment company. Now they’re hoping to make wireless gigabit speeds available for new ISPs.
Monday, September 08. 2014
After Leap Motion's somewhat disappointing debut, you'd be forgiven for wanting to wave off the idea of third-party gesture control peripherals. But wait! Unlike Leap, Reactiv isn't trying to revolutionize human-computer interactions with its Touch+ controller—there's no wizard-like finger waggling or Minority Report-style hand waving here. Instead, the Touch+'s dual cameras turn any surface into a multi-touch input device.
Touch+ was born out of Haptix, a Kickstarter project that raised more than $180,000 from backers. Over the past year, Reactiv refined the Haptix vision to eventually become Touch+.
While Touch+ certainly won't be for everyone, Reactiv is positioning the multitouch PC controller as more than a mere tool for games and art projects. The video above shows the device being used in an office meeting, acting as a cursor control for a businessman's laptop before being repositioned on the fly to point at a projected display, instantly allowing the man to reach up with his hands to circle objects on the image.
What's more, PCWorld sister site CITEWorld managed to snag a live demo with Touch+, and the founders focused on the potential productivity uses of the device: enabling mouse-free control of Excel and PowerPoint, naturally manipulating pictures in Photoshop, creating designs in CAD, the aforementioned presentation capabilities, and so forth.
The Touch+ works with Windows PCs or Macs, connecting via USB 2.0 or 3.0. If you choose to point it at your keyboard, the device will temporarily suspend its multitouch capabilities while you type, then resume when your fingers stop bobbing up and down.
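Reactiv hasn't published how Touch+ decides when typing has stopped, but the suspend-and-resume behaviour described above amounts to a simple cooldown timer on keyboard activity. The sketch below is a hypothetical illustration of that logic; the `GestureGate` class and the half-second idle window are assumptions, not the device's actual implementation.

```python
import time

# Assumed idle window before gestures resume; the real device's timing is unpublished.
TYPING_COOLDOWN_S = 0.5

class GestureGate:
    """Toy sketch of the suspend-while-typing behaviour described above."""

    def __init__(self):
        self.last_keypress = 0.0

    def on_keypress(self, now=None):
        # Record the most recent keystroke; gestures stay suspended until
        # the cooldown window has elapsed with no further typing.
        self.last_keypress = now if now is not None else time.monotonic()

    def gestures_enabled(self, now=None):
        now = now if now is not None else time.monotonic()
        return (now - self.last_keypress) >= TYPING_COOLDOWN_S
```

The `now` parameters exist only so the timing can be driven explicitly in a test; in live use the monotonic clock would be consulted directly.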
Sound interesting? An alpha version of Touch+ is available now on the Reactiv website for $75. Until we get our own hands on the device, however, we won't know for sure how it stacks up to competitors like the Leap Motion.
Friday, September 05. 2014
The Internet of Things is still too hard. Even some of its biggest backers say so.
For all the long-term optimism at the M2M Evolution conference this week in Las Vegas, many vendors and analysts are starkly realistic about how far the vaunted set of technologies for connected objects still has to go. IoT is already saving money for some enterprises and boosting revenue for others, but it hasn’t hit the mainstream yet. That’s partly because it’s too complicated to deploy, some say.
For now, implementations, market growth and standards are mostly concentrated in specific sectors, according to several participants at the conference who would love to see IoT span the world.
Cisco Systems has estimated IoT will generate $14.4 trillion in economic value between last year and 2022. But Kevin Shatzkamer, a distinguished systems architect at Cisco, called IoT a misnomer, for now.
“I think we’re pretty far from envisioning this as an Internet,” Shatzkamer said. “Today, what we have is lots of sets of intranets.” Within enterprises, it’s mostly individual business units deploying IoT, in a pattern that echoes the adoption of cloud computing, he said.
In the past, most of the networked machines in factories, energy grids and other settings have been linked using custom-built, often local networks based on proprietary technologies. IoT links those connected machines to the Internet and lets organizations combine those data streams with others. It’s also expected to foster an industry that’s more like the Internet, with horizontal layers of technology and multivendor ecosystems of products.
What’s holding back the Internet of Things
The good news is that cities, utilities, and companies are getting more familiar with IoT and looking to use it. The less good news is that they’re talking about limited IoT rollouts for specific purposes.
“You can’t sell a platform, because a platform doesn’t solve a problem. A vertical solution solves a problem,” Shatzkamer said. “We’re stuck at this impasse of working toward the horizontal while building the vertical.”
“We’re no longer able to just go in and sort of bluff our way through a technology discussion of what’s possible,” said Rick Lisa, Intel’s group sales director for Global M2M. “They want to know what you can do for me today that solves a problem.”
One of the most cited examples of IoT’s potential is the so-called connected city, where myriad sensors and cameras will track the movement of people and resources and generate data to make everything run more efficiently and openly. But now, the key is to get one municipal project up and running to prove it can be done, Lisa said.
The conference drew stories of many successful projects: A system for tracking construction gear has caught numerous workers on camera walking off with equipment and led to prosecutions. Sensors in taxis detect unsafe driving maneuvers and alert the driver with a tone and a seat vibration, then report it to the taxi company. Major League Baseball is collecting gigabytes of data about every moment in a game, providing more information for fans and teams.
But for the mass market of small and medium-size enterprises that don’t have the resources to do a lot of custom development, even targeted IoT rollouts are too daunting, said analyst James Brehm, founder of James Brehm & Associates.
There are software platforms that pave over some of the complexity of making various devices and applications talk to each other, such as the Omega DevCloud, which RacoWireless introduced on Tuesday. The DevCloud lets developers write applications in the language they know and make those apps work on almost any type of device in the field, RacoWireless said. Thingworx, Xively and Gemalto also offer software platforms that do some of the work for users. But the various platforms on offer from IoT specialist companies are still too fragmented for most customers, Brehm said. There are too many types of platforms—for device activation, device management, application development, and more. “The solutions are too complex.”
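RacoWireless hasn't published DevCloud internals here, so the snippet below is a purely generic, hypothetical illustration of what such a platform layer does: normalising many vendor-specific device dialects into one schema the application can consume. All class and field names are invented for the example.

```python
class DeviceAdapter:
    """Hypothetical base class: normalises vendor payloads into one schema."""

    def to_common(self, raw: dict) -> dict:
        raise NotImplementedError

class VendorATempSensor(DeviceAdapter):
    # Invented vendor format: tenths of a degree Celsius under key "tmp".
    def to_common(self, raw):
        return {"metric": "temperature_c", "value": raw["tmp"] / 10.0}

class VendorBTempSensor(DeviceAdapter):
    # Invented vendor format: degrees Fahrenheit under key "temp_f".
    def to_common(self, raw):
        return {"metric": "temperature_c", "value": (raw["temp_f"] - 32) * 5 / 9}

def ingest(adapter: DeviceAdapter, raw: dict) -> dict:
    """The application sees one shape regardless of which device sent the data."""
    return adapter.to_common(raw)
```

The point of the pattern, and of the platforms Brehm describes, is that the per-device quirks live in the adapters, so application code is written once.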
He thinks that’s holding back the industry’s growth. Though the past few years have seen rapid adoption in certain industries in certain countries, sometimes promoted by governments—energy in the U.K., transportation in Brazil, security cameras in China—the IoT industry as a whole is only growing by about 35 percent per year, Brehm estimates. That’s a healthy pace, but not the steep “hockey stick” growth that has made other Internet-driven technologies ubiquitous, he said.
What lies ahead
Brehm thinks IoT is in a period where customers are waiting for more complete toolkits to implement it—essentially off-the-shelf products—and the industry hasn’t consolidated enough to deliver them. More companies have to merge, and it’s not clear when that will happen, he said.
“I thought we’d be out of it by now,” Brehm said. What’s hard about consolidation is partly what’s hard about adoption, in that IoT is a complex set of technologies, he said.
And don’t count on industry standards to simplify everything. IoT’s scope is so broad that no single standard could cover all of it, analysts said. The industry is evolving too quickly for traditional standards processes, which are often mired in industry politics, to keep up, according to Andy Castonguay, an analyst at IoT research firm Machina.
Instead, individual industries will set their own standards while software platforms such as Omega DevCloud help to solve the broader fragmentation, Castonguay believes. Even the Industrial Internet Consortium, formed earlier this year to bring some coherence to IoT for conservative industries such as energy and aviation, plans to work with existing standards from specific industries rather than write its own.
Ryan Martin, an analyst at 451 Research, compared IoT standards to human languages.
“I’d be hard pressed to say we are going to have one universal language that everyone in the world can speak,” and even if there were one, most people would also speak a more local language, Martin said.
Via The Register
Google is attempting to shunt users away from old browsers by intentionally serving up a stale version of the ad giant's search homepage to those holdouts.
The tactic appears to be falling in line with Mountain View's policy on its other Google properties, such as Gmail, which the company declines to fully support on aged browsers.
However, it was claimed on Friday in a Google discussion thread that the multinational had unceremoniously dumped a past-its-sell-by-date version of the Larry Page-run firm's search homepage on users who had declined to upgrade their Opera and Safari browsers.
A user with the moniker DJSigma wrote on the forum:
In a later post, DJSigma added that there seemed to be a glitch on Google search.
The Opera user then said that the problem appeared to be "intermittent". Others flagged up similar issues on the Google forum and said they hoped it was just a bug.
While someone going by the name MadFranko008 added:
Some Safari 5.1.x and Opera 12.x netizens were able to fudge the system by customising their browser's user agent. But others continued to complain about Google's "clunky", old search homepage.
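User-agent sniffing of this kind is simple to sketch. The function below is a hypothetical illustration, not Google's actual logic: the regular expressions and version thresholds are assumptions chosen to match the Safari 5.1.x and Opera 12.x cases mentioned above, and they show why editing the user agent string is enough to fudge the check.

```python
import re

def is_legacy_browser(user_agent: str) -> bool:
    """Illustrative sketch: flag Safari <= 5 and Presto-era Opera <= 12 as 'legacy'."""
    # Safari UAs carry "Version/X.Y ... Safari/..."; treat Version <= 5 as legacy.
    m = re.search(r"Version/(\d+)[\d.]* Safari/", user_agent)
    if m:
        return int(m.group(1)) <= 5
    # Presto-era Opera UAs look like "Opera/9.80 ... Version/12.16".
    if "Opera/" in user_agent:
        m = re.search(r"Version/(\d+)", user_agent)
        return bool(m) and int(m.group(1)) <= 12
    return False
```

A browser that rewrites its user agent to mimic a current release simply never matches either pattern, which is consistent with the workaround users reported.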
A Google employee, meanwhile, said that the tactic was deliberate in a move to flush out stick-in-the-mud types who insisted on using older versions of browsers.
"Thanks for the reports. I want to assure you this isn't a bug, it's working as intended," said a Google worker going by the name nealem. She added:
In a separate thread, as spotted by a Reg reader who brought this sorry affair to our attention, user MadFranko008 was able to show that even modern browsers - including the current version of Chrome - were apparently spitting out glitches on Apple Mac computers.
Google then appeared to have resolved the search "bug" spotted in Chrome.