It turns out that a vital missing ingredient in the long-sought-after goal of getting machines to think like humans—artificial intelligence—has been lots and lots of data.
 
Last week, at the O’Reilly Strata + Hadoop World Conference in New York, Salesforce.com’s head of artificial intelligence, Beau Cronin, asserted that AI has gotten a shot in the arm from the big data movement.
 “Deep learning on its own, done in academia, doesn’t have the [same] 
impact as when it is brought into Google, scaled and built into a new 
product,” Cronin said.
 
In the week since Cronin’s talk, we saw a whole slew of companies—startups mostly—come out of stealth mode to offer new ways of analyzing big data, using machine learning, natural language processing and other AI techniques that researchers have been developing for decades.
 
One such startup, Cognitive Scale, applies IBM Watson-like learning capabilities to draw insights from vast amounts of what it calls “dark data,” buried either in the Web—Yelp reviews, online photos, discussion forums—or on the company network, such as employee and payroll files, noted KM World.
 
Cognitive
 Scale offers a set of APIs (application programming interfaces) that 
businesses can use to tap into cognitive-based capabilities designed to 
improve search and analysis jobs running on cloud services such as IBM’s Bluemix, detailed the Programmable Web.
 
Cognitive Scale was founded by Matt Sanchez, who headed up IBM’s Watson Labs,
 helping bring to market some of the first e-commerce applications based
 on the Jeopardy-winning Watson technology, pointed out CRN.
 
Sanchez,
 now chief technology officer for Cognitive Scale, is not the only 
Watson alumnus who has gone on to commercialize cognitive technologies.
 
Alert reader Gabrielle Sanchez pointed out that another Watson alum, engineer Pete Bouchard, recently joined the team of cognitive computing startup Zintera as its chief innovation officer. Gabrielle Sanchez, who studied cognitive computing in college, found a demonstration of the company’s “deep learning” cognitive computing platform to be “pretty impressive.”
 
AI-based deep learning with big data was certainly on the minds of senior Google executives. This week the company snapped up two Oxford University technology spin-off companies that focus on deep learning, Dark Blue Labs and Vision Factory.
 
The teams will work on image recognition and natural language understanding, Sharon Gaudin reported in Computerworld.
 
Sumo Logic has found a way to apply machine learning to large amounts of machine data. An update to its analysis platform now allows the software to pinpoint causal relationships within sets of data, Inside Big Data concluded.
 
A company could, for instance, use the Sumo Logic cloud service to analyze log data to troubleshoot a faulty application.
 
While companies such as Splunk have long offered search engines for machine data, Sumo Logic moves that technology a step forward, the company claimed.
 
“The trouble with search is that you need to know what you are searching for. If you don’t know everything about your data, you can’t, by definition, search for it. Machine learning became a fundamental part of how we uncover interesting patterns and anomalies in data,” explained Sumo Logic chief marketing officer Sanjay Sarathy, in an interview.
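Sumo Logic has not published the details of its algorithms, but the general idea behind surfacing anomalies that nobody thought to search for can be sketched in a few lines of Python. The example below is purely illustrative and assumes per-minute error counts have already been parsed out of the logs; it flags any minute whose count deviates sharply from the recent baseline.

```python
# Illustrative only, not Sumo Logic's actual algorithm: flag time buckets
# whose error counts deviate sharply from the mean of a trailing window.
from statistics import mean, stdev

def find_anomalies(error_counts, window=12, threshold=3.0):
    """error_counts: per-minute error totals parsed from logs (hypothetical input)."""
    anomalies = []
    for i in range(window, len(error_counts)):
        baseline = error_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline, skip to avoid division by zero
        z_score = (error_counts[i] - mu) / sigma
        if z_score > threshold:
            anomalies.append((i, error_counts[i], round(z_score, 1)))
    return anomalies

# A sudden spike at minute 14 stands out against an otherwise quiet baseline.
counts = [2, 3, 1, 2, 2, 4, 3, 2, 1, 3, 2, 2, 3, 2, 40, 3]
print(find_anomalies(counts))  # [(14, 40, 43.6)]
```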
 
For
 instance, the company, which processes about 5 petabytes of customer 
data each day, can recognize similar queries across different users, and
 suggest possible queries and dashboards that others with similar setups
 have found useful.
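The article does not say how those suggestions are computed, but one common approach is simple text similarity over previously run queries. The sketch below uses TF-IDF and cosine similarity as an assumed stand-in for whatever Sumo Logic actually does; the sample queries are made up.

```python
# Assumed approach for illustration (not Sumo Logic's actual method):
# suggest related queries by TF-IDF text similarity to past queries.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_queries = [
    "error rate by host last 24h",
    "5xx responses by endpoint",
    "login failures by user",
    "checkout latency p95 by region",
]

def suggest(new_query, corpus, top_n=2):
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(corpus + [new_query])
    n = len(corpus)
    scores = cosine_similarity(matrix[n], matrix[:n]).ravel()
    ranked = scores.argsort()[::-1][:top_n]
    return [(corpus[i], round(float(scores[i]), 2)) for i in ranked]

print(suggest("checkout errors by region", past_queries))
```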
 
“Crowd-sourcing intelligence around different 
infrastructure items is something you can only do as a native cloud 
service,” Sarathy said.
 
With Sumo Logic, an e-commerce company could ensure that each transaction conducted on its site takes no longer than three seconds. If the response time is longer, an administrator can pinpoint where in the transaction flow the holdup is occurring.
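A simplified version of that three-second check is easy to picture, assuming per-stage timings can be pulled out of the transaction logs. The stage names and budget below are hypothetical.

```python
# Simplified sketch of the three-second check described above; the stage
# names and the 3.0-second budget are hypothetical.
BUDGET_SECONDS = 3.0

def slowest_stage(stage_timings):
    """stage_timings: dict mapping stage name -> seconds for one transaction."""
    total = sum(stage_timings.values())
    if total <= BUDGET_SECONDS:
        return None  # within budget, nothing to flag
    return max(stage_timings, key=stage_timings.get), total

transaction = {"cart": 0.4, "inventory": 0.3, "payment_gateway": 2.9, "confirmation": 0.2}
result = slowest_stage(transaction)
if result:
    stage, total = result
    print(f"transaction took {total:.1f}s; likely holdup: {stage}")
```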
 
One existing Sumo Logic customer, fashion retailer Tobi, plans to use the new capabilities to better understand how its customers interact with its website.
 
One-upping IBM on the name game is DataRPM, which crowned its own big-data-crunching natural language query engine Sherlock (named after Sherlock Holmes who, after all, employed Watson to execute his menial tasks).
 
Sherlock
 is unique in that it can automatically create models of large data 
sets. Having a model of a data set can help users pull together 
information more quickly, because the model describes what the data is 
about, explained DataRPM CEO Sundeep Sanghavi.
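DataRPM has not described how Sherlock builds those models, but the most basic version of “describing what the data is about” is an automated profile of each column. The pandas sketch below is illustrative only; the sample data is invented.

```python
# Illustrative only, not DataRPM's modeling approach: automatically profile
# each column of a data set (type, missing values, cardinality, sample value).
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for col in df.columns:
        series = df[col]
        rows.append({
            "column": col,
            "dtype": str(series.dtype),
            "missing_pct": round(100 * series.isna().mean(), 1),
            "distinct": series.nunique(),
            "sample": series.dropna().iloc[0] if series.notna().any() else None,
        })
    return pd.DataFrame(rows)

sample = pd.DataFrame({
    "region": ["EMEA", "APAC", None, "AMER"],
    "revenue": [1.2e6, 8.4e5, 9.9e5, 1.1e6],
})
print(profile(sample))
```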
 
DataRPM can analyze
 a staggeringly wide array of structured, semi-structured and 
unstructured data sources. “We’ll connect to anything and everything,” 
Sanghavi said.
 
The service can then look for ways that different data sets could be combined to provide more insight.
 
“We
 believe that data warehousing is where data goes to die. Big data is 
not just about size, but also about how many different sources of data 
you are processing, and how fast you can process that data,” Sanghavi 
said, in an interview.
 
For instance, Sherlock can pull together different sources of data and respond to a query such as “What was our revenue for last year, based on geography?” with a visualization. The system can suggest other possible queries as well.
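Stripped of the natural-language front end, a question like that reduces to a filter, a group-by and a chart. The hand-written pandas equivalent below uses invented column names and figures; the point of Sherlock is that it generates this kind of aggregation and visualization automatically.

```python
# Hand-written equivalent of "revenue for last year, by geography";
# column names and figures are invented for illustration.
import pandas as pd
import matplotlib.pyplot as plt

sales = pd.DataFrame({
    "region": ["AMER", "EMEA", "APAC", "AMER", "EMEA"],
    "year": [2013, 2013, 2013, 2014, 2014],
    "revenue": [2.1e6, 1.4e6, 0.9e6, 2.6e6, 1.7e6],
})

last_year = sales[sales["year"] == 2013]
by_region = last_year.groupby("region")["revenue"].sum()

by_region.plot(kind="bar", title="Revenue for last year, by geography")
plt.ylabel("revenue (USD)")
plt.tight_layout()
plt.show()
```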
 
Sherlock has a few advantages over Watson, Sanghavi claimed. The training period is not as long, and the software can be run on premises, rather than as a cloud service from IBM, for those shops that want to keep their computations in-house. “We’re far more affordable than Watson,” Sanghavi said.
 
Initially, DataRPM is marketing to the finance, telecommunications, manufacturing, transportation and retail sectors.
 
One company that certainly does not think data warehousing is going to die is Snowflake Computing, a startup run by Bob Muglia that recently came out of stealth mode.
 
Publicly
 launched this week, Snowflake aims “to do for the data warehouse what 
Salesforce did for CRM—transforming the product from a piece of 
infrastructure that has to be maintained by IT into a service operated 
entirely by the provider,” wrote Jon Gold at Network World.
 
Founded
 in 2012, the company brought in Muglia earlier this year to run the 
business. Muglia was the head of Microsoft’s server and tools division, 
and later, head of the software unit at Juniper Networks.
 
While Snowflake could offer its software as a product, it chooses to do so as a service, noted Timothy Prickett Morgan at Enterprise Tech.
 
“Sometime
 either this year or next year, we will see more data being created in 
the cloud than in an on-premises environment,” Muglia told Morgan. 
“Because the data is being created in the cloud, analysis of that data 
in the cloud is very appropriate.”