Welcome to the Moral Machine! A platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars.
We show you moral dilemmas, where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, you judge which outcome you think is more acceptable. You can then see how your responses compare with those of other people.
If you’re feeling creative, you can also design your own scenarios, for you and other users to browse, share, and discuss.
Remember the movie Surrogates, where everyone lives at home plugged into virtual reality screens while robot versions of them run around and do their bidding? Or Her, where a Siri-like personality is so attractive and real that it’s possible to fall in love with an AI?
Replika may be the seedling of such a world, and there’s no denying the future implications of that kind of technology.
Replika is a personal chatbot that you can raise through SMS. By chatting with it, you teach it your personality and gain in-app points. Through the app it can chat with your friends and learn enough about you that maybe one day it will take over some of your responsibilities, like social media or checking in with your mom. It’s in beta right now, so we haven’t gotten our hands on one, and there is little information available about it, but color me fascinated and freaked out based on what we already know.
AI is just that – artificial
I have never been a fan of technological advances that replace human interaction. Social media, in general, seems to replace and enhance traditional communication, which is fine. But robots that hug you when you’re grieving a lost child, or AI personalities that text your girlfriend sweet nothings for you don’t seem like human enhancement, they feel like human loss.
You know that feeling when you get a text message from someone you care about, who cares about you? It’s that little dopamine rush, that little charge that gets your blood pumping, your heart racing. Just one ding, and your day could change. What if a robot’s text could make you feel that way?
It’s a real boy?
Replika began when one of its founders, Eugenia Kyuda, lost her roommate, Roman, to a car accident. Kyuda had created Luka, a restaurant-recommending chatbot, and she realized that with all of her old text messages from Roman, she could create an AI that texts and chats just like him. When she offered the use of his chatbot to his friends and family, she found a new business.
People were communicating with their deceased friend, sibling, or child as if he were still there. They wanted to tell him things, to talk about changes in their lives, to tell him they missed him, to hear what he had to say back.
This is human loss and grieving tempered by an AI version of the dead, and the implications are severe. Your personality could be preserved in servers after you die. Your loved ones could feel like they’re talking to you when you’re six feet under. Doesn’t that make anyone else feel uncomfortable?
Bringing an X-file to life
If you think about your closest loved one dying, talking to them via chatbot may seem like a dream come true. But in the long run, continuing a relationship with a dead person via their AI avatar is dangerous. Maybe it will help you grieve in the short term, but what are we replacing? And is it worth it?
Imagine texting a friend, a parent, a sibling, a spouse instead of Replika. Wouldn’t your interaction with them be more valuable than your conversation with this personality? Because you’re building on a lifetime of friendship, one that has value after the conversation is over. One that can exist in real tangible life. One that can actually help you grieve when the AI replacement just isn’t enough. One that can give you a hug.
“One day it will do things for you,” Kyuda said in an interview with Bloomberg, “including keeping you alive. You talk to it, and it becomes you.”
Replacing you is so easy
This kind of rhetoric from Replika’s founder has to make you wonder if this app was intended as a sort of technological fountain of youth. You never have to “die” as long as your personality sticks around to comfort your loved ones after you pass. I could even see myself trying to cope with a terminal diagnosis by creating my own Replika to assist family members after I’m gone.
But it’s wrong, isn’t it? Isn’t it? Psychologically and socially wrong?
It all starts with a chatbot that replicates your personality. It begins with a woman who was just trying to grieve. This is a taste of the future, and a scary one too: one of clones, downloaded personalities, and lives that stick around after you’re gone.
SAN FRANCISCO — Uber has for years engaged in a worldwide program to deceive the authorities in markets where its low-cost ride-hailing service was resisted by law enforcement or, in some instances, had been banned.
The program, involving a tool called Greyball, uses data collected from the Uber app and other techniques to identify and circumvent officials who were trying to clamp down on the ride-hailing service. Uber used these methods to evade the authorities in cities like Boston, Paris and Las Vegas, and in countries like Australia, China and South Korea.
Greyball was part of a program called VTOS, short for “violation of terms of service,” which Uber created to root out people it thought were using or targeting its service improperly. The program, including Greyball, began as early as 2014 and remains in use, predominantly outside the United States. Greyball was approved by Uber’s legal team.
Greyball and the VTOS program were described to The New York Times by four current and former Uber employees, who also provided documents. The four spoke on the condition of anonymity because the tools and their use are confidential and because of fear of retaliation by Uber.
Uber’s use of Greyball was recorded on video in late 2014, when Erich England, a code enforcement inspector in Portland, Ore., tried to hail an Uber car downtown in a sting operation against the company.
At the time, Uber had just started its ride-hailing service in Portland without seeking permission from the city, which later declared the service illegal. To build a case against the company, officers like Mr. England posed as riders, opening the Uber app to hail a car and watching as miniature vehicles on the screen made their way toward the potential fares.
But unknown to Mr. England and other authorities, some of the digital cars they saw in the app did not represent actual vehicles. And the Uber drivers they were able to hail also quickly canceled. That was because Uber had tagged Mr. England and his colleagues — essentially Greyballing them as city officials — based on data collected from the app and in other ways. The company then served up a fake version of the app, populated with ghost cars, to evade capture.
At a time when Uber is already under scrutiny for its boundary-pushing workplace culture, its use of the Greyball tool underscores the lengths to which the company will go to dominate its market. Uber has long flouted laws and regulations to gain an edge against entrenched transportation providers, a modus operandi that has helped propel it into more than 70 countries and to a valuation close to $70 billion.
Yet using its app to identify and sidestep the authorities where regulators said Uber was breaking the law goes further toward skirting ethical lines — and, potentially, legal ones. Some at Uber who knew of the VTOS program and how the Greyball tool was being used were troubled by it.
In a statement, Uber said, “This program denies ride requests to users who are violating our terms of service — whether that’s people aiming to physically harm drivers, competitors looking to disrupt our operations, or opponents who collude with officials on secret ‘stings’ meant to entrap drivers.”
The mayor of Portland, Ted Wheeler, said in a statement, “I am very concerned that Uber may have purposefully worked to thwart the city’s job to protect the public.”
Uber, which lets people hail rides using a smartphone app, operates multiple types of services, including a luxury Black Car offering in which drivers are commercially licensed. But an Uber service that many regulators have had problems with is the lower-cost version, known in the United States as UberX.
UberX essentially lets people who have passed a background check and vehicle inspection become Uber drivers quickly. In the past, many cities have banned the service and declared it illegal.
That is because the ability to summon a noncommercial driver — which is how UberX drivers using private vehicles are typically categorized — was often unregulated. In barreling into new markets, Uber capitalized on this lack of regulation to quickly enlist UberX drivers and put them to work before local regulators could stop them.
After the authorities caught on to what was happening, Uber and local officials often clashed. Uber has encountered legal problems over UberX in cities including Austin, Tex., Philadelphia and Tampa, Fla., as well as internationally. Eventually, agreements were reached under which regulators developed a legal framework for the low-cost service.
That approach has been costly. Law enforcement officials in some cities have impounded vehicles or issued tickets to UberX drivers, with Uber generally picking up those costs on the drivers’ behalf. The company has estimated thousands of dollars in lost revenue for every vehicle impounded and ticket received.
This is where the VTOS program and the use of the Greyball tool came in. When Uber moved into a new city, it appointed a general manager to lead the charge. This person, using various technologies and techniques, would try to spot enforcement officers.
One technique involved drawing a digital perimeter, or “geofence,” around the government offices on a digital map of a city that Uber was monitoring. The company watched for people who frequently opened and closed the app — a process known internally as eyeballing — near such locations, taking that as evidence that those users might be associated with city agencies.
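To make the geofencing and eyeballing idea concrete, here is a minimal sketch of how such a screen might work. It is an illustration only: the coordinates, radius, threshold, and function names are assumptions, not Uber’s actual code.

```python
import math

# Hypothetical geofence around a city enforcement office (example coordinates).
OFFICE = (45.5155, -122.6789)   # latitude, longitude
FENCE_RADIUS_M = 300            # assumed fence radius in meters
EYEBALL_THRESHOLD = 10          # assumed count of app opens that looks suspicious

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def looks_like_enforcement(app_open_events):
    """Flag accounts that repeatedly open and close the app inside the fence."""
    inside = [e for e in app_open_events
              if haversine_m(e["lat"], e["lon"], *OFFICE) <= FENCE_RADIUS_M]
    return len(inside) >= EYEBALL_THRESHOLD
```

A signal like this would presumably be combined with the other signifiers described next, rather than used on its own.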
Other techniques included looking at a user’s credit card information and determining whether the card was tied directly to an institution like a police credit union.
Enforcement officials involved in large-scale sting operations meant to catch Uber drivers would sometimes buy dozens of cellphones to create different accounts. To circumvent that tactic, Uber employees would go to local electronics stores to look up device numbers of the cheapest mobile phones for sale, which were often the ones bought by city officials working with budgets that were not large.
In all, there were at least a dozen signifiers in the VTOS program that Uber employees could use to assess whether users were regular new riders or probably city officials.
If such clues did not confirm a user’s identity, Uber employees would search social media profiles and other information available online. If users were identified as being linked to law enforcement, Uber Greyballed them by tagging them with a small piece of code that read “Greyball” followed by a string of numbers.
When someone tagged this way called a car, Uber could scramble a set of ghost cars in a fake version of the app for that person to see, or show that no cars were available. Occasionally, if a driver accidentally picked up someone tagged as an officer, Uber called the driver with instructions to end the ride.
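A minimal sketch of how such tagging and ghost-car serving could fit together follows. Only the tag format (“Greyball” followed by a string of numbers) comes from the reporting; the data structures and function names are assumptions for illustration.

```python
import random
import uuid

def greyball_tag(account):
    """Attach a tag of the reported form: 'Greyball' plus a string of numbers."""
    account["tag"] = "Greyball" + str(random.randrange(10**8, 10**9))
    return account

def cars_for(account, real_cars):
    """Serve real cars to normal riders; ghost cars (or none) to tagged accounts."""
    if account.get("tag", "").startswith("Greyball"):
        # Fabricate a few decoy vehicles near the rider; returning [] instead
        # would show "no cars available".
        return [{"id": uuid.uuid4().hex[:8],
                 "lat": account["lat"] + random.uniform(-0.01, 0.01),
                 "lon": account["lon"] + random.uniform(-0.01, 0.01),
                 "ghost": True}
                for _ in range(3)]
    return real_cars
```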
Uber employees said the practices and tools were born in part out of safety measures meant to protect drivers in some countries. In France, India and Kenya, for instance, taxi companies and workers targeted and attacked new Uber drivers.
“They’re beating the cars with metal bats,” the singer Courtney Love posted on Twitter from an Uber car in Paris at a time of clashes between the company and taxi drivers in 2015. Ms. Love said that protesters had ambushed her Uber ride and had held her driver hostage. “This is France? I’m safer in Baghdad.”
Uber has said it was also at risk from tactics used by taxi and limousine companies in some markets. In Tampa, for instance, Uber cited collusion between the local transportation authority and taxi companies in fighting ride-hailing services.
In those areas, Greyballing started as a way to scramble the locations of UberX drivers to prevent competitors from finding them. Uber said that was still the tool’s primary use.
But as Uber moved into new markets, its engineers saw that the same methods could be used to evade law enforcement. Once the Greyball tool was put in place and tested, Uber engineers created a playbook with a list of tactics and distributed it to general managers in more than a dozen countries on five continents.
At least 50 people inside Uber knew about Greyball, and some had qualms about whether it was ethical or legal. Greyball was approved by Uber’s legal team, led by Salle Yoo, the company’s general counsel. Ryan Graves, an early hire who became senior vice president of global operations and a board member, was also aware of the program.
Ms. Yoo and Mr. Graves did not respond to requests for comment.
Outside legal specialists said they were uncertain about the legality of the program. Greyball could be considered a violation of the federal Computer Fraud and Abuse Act, or possibly intentional obstruction of justice, depending on local laws and jurisdictions, said Peter Henning, a law professor at Wayne State University who also writes for The New York Times.
“With any type of systematic thwarting of the law, you’re flirting with disaster,” Professor Henning said. “We all take our foot off the gas when we see the police car at the intersection up ahead, and there’s nothing wrong with that. But this goes far beyond avoiding a speed trap.”
On Friday, Marietje Schaake, a member of the European Parliament for the Dutch Democratic Party in the Netherlands, said that she had written to the European Commission asking, among other things, if it planned to investigate the legality of Greyball.
To date, Greyballing has been effective. In Portland on that day in late 2014, Mr. England, the enforcement officer, did not catch an Uber, according to local reports.
And two weeks after Uber began dispatching drivers in Portland, the company reached an agreement with local officials that said that after a three-month suspension, UberX would eventually be legally available in the city.
HyperFace is being developed for Hyphen Labs’ NeuroSpeculative AfroFeminism project at the Sundance Film Festival and is a collaboration with Hyphen Labs members Ashley Baccus-Clark, Carmen Aguilar y Wedge, Ece Tankal, Nitzan Bartov, and JB Rubinovitz.
NeuroSpeculative AfroFeminism is a transmedia exploration of black women and the roles they play in technology, society and culture—including speculative products, immersive experiences and neurocognitive impact research. Using fashion, cosmetics and the economy of beauty as entry points, the project illuminates issues of privacy, transparency, identity and perception.
HyperFace is a new kind of camouflage that aims to reduce the confidence score of facial detection and recognition by providing false faces that distract computer vision algorithms. HyperFace development began in 2013 and was first presented at 33c3 in Hamburg, Germany on December 30th, 2016. HyperFace will launch as a textile print at Sundance Film Festival on January 16, 2017.
Together HyperFace and NeuroSpeculative AfroFeminism will explore an Afrocentric countersurveillance aesthetic.
For more information about NeuroSpeculative AfroFeminism visit nsaf.space
How Does HyperFace Work?
HyperFace works by providing maximally activated false faces based on ideal algorithmic representations of a human face. These maximal activations are targeted at specific algorithms. The first prototype is specific to OpenCV’s default frontalface profile. Other patterns target convolutional neural networks and HoG/SVM detectors. The technical concept is an extension of earlier work on CV Dazzle.
The difference between the two projects is that HyperFace aims to alter the surrounding area (ground) while CV Dazzle targets the facial area (figure). In camouflage, the objective is often to minimize the difference between figure and ground. HyperFace reduces the confidence score of the true face (figure) by redirecting more attention to the nearby false face regions (ground).
Conceptually, HyperFace recognizes that completely concealing a face from facial detection algorithms remains a technical and aesthetic challenge. Instead of seeking computer vision anonymity through minimizing the confidence score of a true face (i.e. CV Dazzle), HyperFace offers a higher confidence score for a nearby false face by exploiting a common algorithmic preference for the highest-confidence facial region. In other words, if a computer vision algorithm is expecting a face, give it what it wants.
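As a rough illustration of that algorithmic preference, the sketch below (assuming the opencv-python package) ranks face candidates by confidence using OpenCV’s stock frontal-face Haar cascade, the detector family the first prototype targets. The file name and detection parameters are assumptions; comparing scores for photos with and without the pattern would show whether the false faces outscore the true one.

```python
import cv2

# OpenCV's stock frontal-face Haar cascade, the detector the prototype targets.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_candidates(image_path):
    """Return detected face boxes ranked by rough confidence (level weights)."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    boxes, _levels, weights = cascade.detectMultiScale3(
        gray, scaleFactor=1.1, minNeighbors=5, outputRejectLevels=True)
    if len(boxes) == 0:
        return []
    return sorted(zip(boxes.tolist(), weights), key=lambda bw: bw[1], reverse=True)

# A HyperFace-style print should add false-face candidates whose scores rival or
# exceed the wearer's true face, diluting the detector's attention.
for box, weight in face_candidates("wearer_with_pattern.jpg"):  # hypothetical file
    print(box, round(float(weight), 2))
```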
How Well Does This Work?
The patterns are still under development and are expected to change. Please check back towards the end of January for more information.
Product Photos
Please check back towards the end of January for product photos
Notifications
If you’re interested in purchasing one of the first commercially available HyperFace textiles, please add yourself to my mailing list at Undisclosed.studio
Notes
Designs subject to change
Displayed patterns are prototypes and are currently undergoing testing.
First prototype designed for OpenCV Haarcascade. Future iterations will include patterns for HoG/SVM and CNN detectors
Will not make you invisible
Please credit image as HyperFace Prototype by “Adam Harvey / ahprojects.com”
Please credit scarf rendering prototype as “Rendering by Ece Tankal / hyphen-labs.com”
Not affiliated with the IARPA-funded HyperFace algorithm for pose and gender recognition
In 2011, when Stanford computer scientists Sebastian Thrun and Peter Norvig came up with the bright idea of streaming their artificial intelligence lectures over the Internet,
they knew it was an inventive departure from the usual college course.
For hundreds of years, professors had lectured to groups of no more than
a few hundred students. But MOOCs—massive open online courses—made it
possible to reach many thousands at once. Through the extraordinary
reach of the Internet, learners could log on to lectures streamed to
wherever they happened to be. To date, about 58 million people have signed up for a MOOC.
Familiar with the technical elements required for a MOOC—video
streaming, IT infrastructure, the Internet—MOOC developers put code
together to send their lectures into cyberspace. When more than 160,000
enrolled in Thrun and Norvig’s introduction to artificial intelligence
MOOC, the professors thought they held a tiger by the tail. Not long
after, Thrun cofounded Udacity to commercialize MOOCs. He predicted that in 50 years, streaming lectures would so subvert face-to-face education that only 10 higher-education institutions would remain.
Our quaint campuses would become obsolete, replaced by star faculty
streaming lectures on computer screens all over the world. Thrun and
other MOOC evangelists imagined they had inspired a revolution,
overthrowing a thousand years of classroom teaching.
These MOOC pioneers were therefore stunned when their online courses
didn’t perform anything like they had expected. At first, the average
completion rate for MOOCs was less than 7 percent. Completion rates have since gone up a bit, to a median of about 12.6 percent, although there’s considerable variation from course to course. While
a number of factors contribute to the completion rate, my own
observation is that students who have to pay a fee to enroll tend to be
more committed to finishing the course.
Looking closer at students’ MOOC habits, researchers found that some
people quit watching within the first few minutes. Many others were
merely “grazing,” taking advantage of the technology to quickly log in,
absorb just the morsel they were hunting for, and then log off as soon
as their appetite was satisfied. Most of those who did finish a MOOC
were accomplished learners, many with advanced degrees.
What accounts for MOOCs’ modest performance? While the technological
solution they devised was novel, most MOOC innovators were unfamiliar
with key trends in education. That is, they knew a lot about computers
and networks, but they hadn’t really thought through how people learn.
It’s unsurprising then that the first MOOCs merely replicated the
standard lecture, an uninspiring teaching style but one with which the
computer scientists were most familiar. As the education technology consultant Phil Hill recently observed in the Chronicle of Higher Education,
“The big MOOCs mostly employed smooth-functioning but basic video
recording of lectures, multiple-choice quizzes, and unruly discussion
forums. They were big, but they did not break new ground in pedagogy.”
Indeed, most MOOC founders were unaware that a pedagogical revolution
was already under way at the nation’s universities: The traditional
lecture was being rejected by many scholars, practitioners, and, most
tellingly, tech-savvy students. MOOC advocates also failed to appreciate
the existing body of knowledge about learning online, built over the
last couple of decades by adventurous faculty who were attracted to
online teaching for its innovative potential, such as peer-to-peer
learning, virtual teamwork, and interactive exercises. These modes of
instruction, known collectively as “active” learning, encourage student
engagement, in stark contrast to passive listening in lectures. Indeed,
even as the first MOOCs were being unveiled, traditional lectures were
on their way out.
The impact of active learning can be significant. In a 2014 meta-analysis published in Proceedings of the National Academy of Sciences
[PDF], researchers looked at 225 studies in which standard lectures
were compared with active learning for undergraduate science, math, and
engineering. The results were unambiguous: Average test scores went up
about 6 percent in active-learning sections, while students in
traditional lecture classes were 1.5 times more likely to fail than
their peers in active-learning classes.
Even lectures by “star” faculty were no match for active-learning
sections taught by novice instructors: Students still performed better
in active classes. “We’ve yet to see any evidence that celebrated
lecturers can help students more than even first-generation active
learning does,” Scott Freeman, the lead author of the study, told Wired.
Unfortunately, early MOOCs failed to incorporate active learning
approaches or any of the other innovations in teaching and learning
common in other online courses. The three principal MOOC
providers—Coursera, Udacity, and edX—wandered into a territory they
thought was uninhabited. Yet it was a place that was already well
occupied by accomplished practitioners who had thought deeply and
productively over the last couple of decades about how students learn
online. Like poor, baffled Columbus, MOOC makers believed they had
“discovered” a new world. It’s telling that in their latest offerings,
these vendors have introduced a number of active-learning innovations.
To be sure, MOOCs have been wildly successful in giving millions of
people all over the world access to a wide range of subjects presented
by eminent scholars at the world’s elite schools. Some courses attract
so many students that a 7 percent completion rate still translates into
several thousand students finishing—greater than the total enrollment of
many colleges.
But MOOC pioneers were presumptuous to imagine they could not only
topple the university—an institution that has successfully withstood
revolutions far more devastating than the Web—but also ignore common
experience. They erroneously assumed they could open the minds of
millions who were unprepared to tackle sophisticated curriculum. MOOCs
will never sweep away face-to-face classrooms, nor can they take the
place of more intensive and intimate online degree programs. The real
contribution of MOOCs is likely to be much more modest, as yet another
digital education option.
As humanity debates the threats and opportunities of advanced artificial intelligence, we are simultaneously enabling that technology through the increasing use of personalization: sophisticated machine learning solutions that understand and anticipate our needs.
In effect, while using personalization technologies in our everyday
lives, we are contributing in a real way to the development of the intelligent systems we purport to fear.
Perhaps opening up these currently inaccessible personalization systems is crucial for creating a sustainable relationship between humans and super-intelligent machines.
From Machines Learning About You…
Industry giants are currently racing to develop more intelligent and lucrative AI
solutions. Google is extending the ways machine learning can be applied
in search, and beyond. Facebook’s messenger assistant M is
combining deep learning and human curators to achieve the next level in
personalization.
With your iPhone you’re carrying Apple’s digital assistant Siri with you everywhere; Microsoft’s counterpart Cortana can live in your
smartphone, too. IBM’s Watson has highlighted its diverse features,
varying from computer vision and natural language processing to cooking
skills and business analytics.
At the same time, your data and personalized
experiences are used to develop and train the machine learning systems
that are powering the Siris, Watsons, Ms and Cortanas. Be it a speech
recognition solution or a recommendation algorithm, your actions and personal data affect how these sophisticated systems learn more about you and the world around you.
The less obvious fact is that your diverse interactions — your likes, photos, locations, tags, videos, comments, route selections, recommendations and ratings — feed learning systems that could someday transform into super-intelligent AIs, with unpredictable consequences.
As of today, you can’t directly affect how your personal data is used in these systems.
At a time when we’re starting to devote serious resources to the creation of ethical frameworks for super-intelligent AIs-to-be, we should also focus on creating ethical terms for the use of personal data and for the personalization technologies that are powering the development of such systems.
To make sure that you as an individual continue to have meaningful agency in the emerging algorithmic reality, we need learning algorithms that are on your side and solutions that augment and extend your abilities. How could this happen?
…To Machines That Learn For You
Smart devices extend and augment your memory (no forgotten birthdays) and brain processing power (no calculating in your head anymore). And they augment your senses by letting you experience things beyond your immediate environment (think AR and VR).
The web itself gives you access to a huge amount of diverse
information and collective knowledge. The next step would be that smart
devices and systems enhance and expand your abilities even more. What is required for that to happen in a human-centric way?
Data Awareness And Algorithmic Accountability
Algorithmic systems and personal data are too
often seen as something abstract, incomprehensible and uncontrollable.
Concretely, how many really stopped using Facebook or Google after PRISM
came out in the open? Or after we learned that we are exposed
to continuous A/B testing that is used to develop even more powerful
algorithms?
More and more people are getting interested in data ethics and algorithmic accountability. Academics are already analyzing the effects of current data policies and algorithmic systems. Educational organizations are starting to emphasize the importance of coding and digital literacy.
Initiatives such as VRM, Indie Web and MyData are raising
awareness on alternative data ecosystems and data management practices.
Big companies like Apple and various upcoming startups are bringing
personal data issues to the mainstream discussion.
Yet we still need new tools and techniques to become more data aware
and to see how algorithms can be more beneficial for us as
unique individuals. We need apps and data visualizations with great user
experience to illuminate the possibilities of more human-centric
personalization.
It’s time to create systems that evaluate algorithmic
biases and keep them in check. More accessible algorithms and
transparent data policies are created only through wider collaboration
that brings together companies, developers, designers, users and
scientists alike.
Personal Machine Learning Systems
Personalization technologies already augment your decision making and future thinking by learning from you and recommending what to see and do next. However, this does not happen on your own terms. Rather than letting someone else, with their own motives and values, dictate how the algorithms work and affect your life, it’s time to create solutions, such as algorithmic angels, that let you develop and customize your own algorithms and choose how they use your data.
When you’re in control, you can let your personal learning system access previously hidden data and surface intimate insights about your own behavior, thus increasing your self-awareness in an actionable way.
Personal learners could help you develop skills related to work or personal life, augmenting and expanding your abilities: learning languages, writing, or playing new games, for example. Fitness or meditation apps powered by your personal algorithms would know you better than any personal trainer.
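As a toy example of a personal learner that runs entirely on data you keep yourself, consider the sketch below. Every field, rule, and session record is invented for illustration; a real personal learner would use an actual learning algorithm rather than this simple matching.

```python
from collections import Counter

# Toy "personal learner": the data and the model both stay with the user.
my_sessions = [
    {"hours_slept": 8, "stress": "low",  "did": "run",     "felt_good": True},
    {"hours_slept": 5, "stress": "high", "did": "run",     "felt_good": False},
    {"hours_slept": 6, "stress": "high", "did": "yoga",    "felt_good": True},
    {"hours_slept": 7, "stress": "low",  "did": "weights", "felt_good": True},
]

def recommend(today):
    """Suggest the activity that most often felt good on similar days."""
    similar = [s for s in my_sessions
               if s["stress"] == today["stress"] and s["felt_good"]]
    if not similar:
        return "rest"
    return Counter(s["did"] for s in similar).most_common(1)[0][0]

print(recommend({"hours_slept": 6, "stress": "high"}))  # -> yoga
```

Because the user owns both the data and the rules, the trade-offs the paragraph describes (what to track, what to optimize for) stay under the user’s control.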
Google’s experiments with deep learning and image manipulation showed
us how machine learning could be used to augment creative output.
Systems capable of combining your data with different materials like images, text, sound and video could expand your abilities to see and utilize new and unexpected connections around you.
In effect, your personal algorithm can take a mind-expanding “trip” on your
behalf, letting you see music or sense other dimensions beyond normal
human abilities. By knowing you, personal algorithms can expose you to
new diverse information, thus breaking your existing filter bubbles.
Additionally, people tinkering with their personal algorithms would create more “citizen algorithm experts,” like “citizen scientists,” coming up with new ideas, solutions and observations stemming from real-life situations and experiences.
However, personally adjustable algorithms for the general public are
not happening overnight, even though Google recently open-sourced parts
of its machine learning framework. But it’s possible to see how today’s
personalization experiences can someday evolve into customizable
algorithms that strengthen your agency and capacity to deal with other algorithmic systems.
Algorithmic Self
The next step is that your personal algorithms become a more concrete part of you, continuously evolving with you by learning from your interactions both in digital and physical environments. Your algorithmic self combines your personal abilities and knowledge with machine learning systems that adapt to you and work for you. Be it your smartwatch, self-driving car or an intelligent home system, they can all be spirited by your algorithmic self.
Your algorithmic self also can connect with other algorithmic selves, thus empowering you with the accumulating collective knowledge and intelligence. To expand your existing skills and faculties, your algorithmic self also starts to learn and act on its own, filtering information, making online transactions and comparing best options on your behalf. It makes you more resourceful, and even a better person, when you can concentrate on things that really require your human presence and attention.
Partly algorithmic humans are not bound by existing human capabilities; new skills and abilities emerge when human intelligence is extended with algorithmic selves. For example, your algorithmic self can multiply to execute different actions simultaneously. Algorithmic selves could also create simple simulations by playing out different scenarios involving your real-life choices and their consequences, helping you to make better decisions in the future.
Algorithmic selves — tuned by your data and personal learners — could also be the key to creating invasive human-computer interfaces that connect digital systems directly to your brain, expanding the human brain concretely beyond the “wetware.”
But to ensure that your algorithmic self works for your benefit, could you trust someone to build it for you without you participating in the process?
Machine learning expert Pedro Domingos says in his new book “The Master Algorithm” that “[m]achine learning will not single-handedly determine the future… it’s what we decide to do with it that counts.”
Machines are still far from human intelligence, and no one knows exactly when super-intelligent AIs will become a concrete reality. But developing personal machine learning systems could prepare us to interact with any algorithmic entity, be it an obtrusive recommendation algorithm or a super-intelligent AI.
In general, being more transparent about how learning algorithms work and use our data could be crucial for creating ethical and sustainable artificial intelligence. And then, perhaps, we wouldn’t need to fear being overpowered by our own creations.
According to the Labor Department, the U.S. economy is in
its strongest stretch in corporate hiring since 1997. Given the rapidly
escalating competition for talent, it is important for employers, job
seekers, and policy leaders to understand the dynamics behind some of
the fastest growing professional roles in the job market.
For adults with a bachelor’s degree or above, the unemployment rate stood at just 2.7 percent in May 2015.
The national narrative about “skills gaps” often focuses on
middle-skill jobs that rely on shorter-term or vocational training – but
the more interesting pressure point is arguably at the professional
level, which has accounted for much of the wage and hiring growth in the
U.S. economy in recent years. Here, the reach and impact of technology
into a range of professional occupations and industry sectors is
impressive.
Software is eating the world
In 2011, Netscape and Andreessen Horowitz co-founder Marc Andreessen coined the phrase “software is eating the world” in an article
outlining his hypothesis that economic value was increasingly being
captured by software-focused businesses disrupting a wide range of
industry sectors. Nearly four years later, it is fascinating that around
1 in every 20 open job postings in the U.S. job market relates to
software development/engineering.
The shortage of software developers is well-documented and
increasingly discussed. It has spawned an important national dialogue
about economic opportunity and encouraged more young people, women, and
underrepresented groups to pursue computing careers – as employers seek
individuals skilled in programming languages such as Python, JavaScript, and SQL.
Discussion about the robust demand and competition for software
developers in the job market is very often focused around high-growth
technology firms such as Uber, Facebook, and the like. But from the
“software is eating the world” perspective, it is notable that
organizations of all types are competing for this same talent – from
financial firms and hospitals to government agencies. The demand for
software skills is remarkably broad.
For example, the top employers with the greatest number of developer
job openings over the last year include JP Morgan Chase, UnitedHealth,
Northrop Grumman, and General Motors, according to job market database
firm Burning Glass Technologies.
Data science is just the tip of the iceberg
Another surging skills need related to technology is analytics: the ability to work with, process, and interpret insights from big data.
Far more than just a fad or buzzword, references to analytical and
data-oriented skills appeared in 4 million postings over the last year –
and data analysis is one of the most demanded skills by U.S. employers,
according to Burning Glass data.
The Harvard Business Review famously labeled data scientist roles “the sexiest job of the 21st century”
– but while this is a compelling new profession by any measure, data
scientists sit at the top of the analytics food chain and likely only
account for tens of thousands of positions in a job market of 140
million.
What often goes unrecognized is that similar to and even more so than
software development, the demand for analytical skills cuts across all
levels and functions in an organization, from financial analysts and web
developers to risk managers. Further, a wide range of industries is
hungry for analytics skills – ranging from the nursing field and public
health to criminal justice and even the arts and cultural sector.
As suggested by analytics experts such as Tom Davenport,
organizations that are leveraging analytics in their strategy have not
only world-class data scientists – but they also support “analytical
amateurs” and embed analytics throughout all levels of their
organization and culture. For this reason, the need for analytics skills
is exploding within a variety of employers, and analytics and
data-related themes top many corporate strategy agendas.
Digital marketing demands experienced talent
Change is also afoot as digital and mobile channels are disrupting the marketing landscape. According to the CMO Council, spending on mobile marketing is doubling each year,
and two-thirds of the growth in consumer advertising is in digital. In
an economic expansion cycle, awareness-building and customer acquisition
is where many companies are investing. For these reasons, marketing
managers are perhaps surprisingly hard to find.
For example, at high-growth tech companies such as Amazon and
Facebook, the highest volume job opening after software
developer/engineer is marketing manager. These individuals are
navigating new channels, as well as approaches to customer acquisition,
and they are increasingly utilizing analytics. The marketing manager is
an especially critical station in the marketing and sales career ladder
and corporate talent bench – with junior creative types aspiring to it
and senior product and marketing leadership coming from it.
The challenge is that marketing management requires experience: Those
with a record of results in the still nascent field of digital
marketing will be especially in demand.
Social media: not just a marketing and communications skill
Traditionally thought of in a marketing context, social media skills
represent a final “softer” area that is highly in demand and spans a
range of functional silos and levels in the job market — as social media
becomes tightly woven into the fabric of how we live, work, consume and
play.
While many organizations are, of course, hiring for social
media-focused marketing roles, a quick search of job listings at an
aggregator site such as Indeed.com reveals 50,000 job openings referencing social media.
These range from privacy officers in legal departments that need to
account for social media in policy and practice, to technologists who
need to integrate social media APIs with products, and project managers
and chiefs of staff to CEOs who will manage and communicate with
internal and external audiences through social media.
Just as skills in Microsoft Office have become a universal foundation
for most professional roles, it will be important to monitor how the
use of social media platforms, including optimization and analytics,
permeates the job market.
The aforementioned in-demand skills areas represent more of a
structural shift than an issue du jour or passing trend. It is precisely
the rapid, near daily change in software- and technology-related skills
needs that necessitates new approaches to human capital development.
While traditional long-term programs such as college degrees remain
meaningful, new software platforms, languages, apps and tools rise
annually. Who in the mainstream a few years ago had heard of Hadoop or
Ruby?
Each month, new partnerships and business models are being formed
between major employers, educational institutions and startups – all
beginning to tackle novel approaches to skills development in these
areas. Certificate programs, boot camps, new forms of executive
education, and credentialing are all targeting the problem of producing
more individuals with acumen in these areas.
As technology continues to extend its reach and reshape the
workforce, it will be important to monitor these issues and explore new
solutions to talent development.
GitHub has been hammered by a continuous DDoS attack for three days. It's the "largest DDoS attack in github.com's history." The attack is aimed at anti-censorship GreatFire and CN-NYTimes
projects, but affected all of GitHub. The traffic is reportedly coming
from China, as attackers are using the Chinese search engine Baidu for
the purpose of "HTTP hijacking."
According to tweeted GitHub status messages,
GitHub has been the victim of a Distributed Denial of Service (DDoS)
attack since Thursday, March 26. Twenty-four hours later, GitHub said it had "all
hands on deck" working to mitigate the continuous attack. After GitHub
later deployed "volumetric attack defenses," the attack morphed to
include GitHub pages and then "pages and assets." Today, GitHub said it
was 71 hours into defending against the attack.
These days, there are plenty of people out there
who like to code, either for money or for fun or for both. But,
apparently, there’s also a growing interest in watching other people
code. How else to explain the new site Watch People Code, which lets you, well, watch people code, live?
The Watch People Code site simply makes it a little easier to watch streams from the WatchPeopleCode subreddit that are currently live or coming up. It’s generated some chatter among developers, and the feedback has generally been positive, with most people feeling that it’s a good way to learn and improve your own coding. For example, here are a few comments written by developers on discussion forums such as Hacker News.
“I like this idea. It isn't the most common thing to shadow someone while they code, you can learn a lot from how people ‘flow’.” – nirkalimi
“I am probably going to actually use this a lot. I find there's no better way to pick up new things than from pair coding.” – Itamar Kestenbaum
“Watching someone code teaches many subtle techniques that people don't even realize they do. Especially the holistic set of techniques an individual uses and how they interact.” – endergen
Live streams of people coding aren’t a totally new thing. Twitch.TV offers live streams of game development, as does Ludum Dare. Plus, you can also go directly to YouTube’s collection of live streams and find people programming.
It’s not only coders who may enjoy watching people program. A non-programmer friend of mine, for example, after I brought Watch People Code to her attention, found herself mesmerized by watching someone code. “This is oddly interesting,” she said.
SAN DIEGO, Jan. 5, 2015 /PRNewswire/ -- Qualcomm Incorporated (NASDAQ: QCOM) today announced that its subsidiary, Qualcomm Life, Inc.,
has been selected by Novartis, a global pharmaceutical leader, as a
global digital health collaborator for its Trials of The Future program.
Qualcomm Life's 2net™ Platform will serve as a global connectivity
platform for collecting and aggregating medical device data during
clinical trials to improve the convenience and speed of capturing study
participant data and test results to ultimately gain more trial
efficiencies and connected experiences for participants.
The Trials of The Future program is designed to leverage health care technology to improve the experience of clinical trial participants and patients using Novartis products, and to provide connectivity with future products marketed by Novartis. Novartis will combine the 2net Platform, 2net Hub and 2net Mobile technologies with designated medical devices to automate the collection of vital patient data at patients’ homes during clinical trials.
"Novartis is a pioneer in putting technology to use in advancing pharmaceutical innovation," said Rick Valencia,
senior vice president and general manager, Qualcomm Life.
"Standardizing on the tech-agnostic 2net Platform and accessing the
robust ecosystem of integrated medical devices will provide them a great
range of flexibility and scalability, ultimately accelerating their
efforts to design more efficient, cost-effective clinical trials."
Novartis is using the 2net Platform in a recently launched clinical study evaluating the use of mobile devices with chronic lung disease patients. The study, which is observational in nature and does not involve any Novartis pharmaceutical product, leverages 2net Mobile-enabled smartphones and 2net Hubs to seamlessly collect and aggregate biometric data from medical devices and transmit it to the cloud-based 2net Platform, which securely sends the data to the study coordinator.
Qualcomm Life to Have Significant Presence at CES 2015
Qualcomm Life will showcase chronic care and transitional care management demos at booth #8525 in Central Hall, Las Vegas Convention Center. James Mault, MD, F.A.C.S., vice president and chief medical officer, Qualcomm Life, will participate in a fireside chat on what is new in Quantified Self from 3:30-4:30 p.m. on Monday, January 5, in Room N621 of the Las Vegas Convention Center. Rick Valencia, senior vice president and general manager, Qualcomm Life, will participate in a fireside chat with Corinne Savill, head of business development and licensing, Novartis, titled "Pharma goes Techy and Consumers Score" from 9:40-10:04 a.m. on Thursday, January 8, at the Digital Health Summit, Venetian Level 2, Bellini Room 2004.
About Qualcomm Incorporated
Qualcomm Incorporated (NASDAQ: QCOM)
is a world leader in 3G, 4G and next-generation wireless technologies.
Qualcomm Incorporated includes Qualcomm's licensing business, QTL, and
the vast majority of its patent portfolio. Qualcomm Technologies, Inc., a
wholly-owned subsidiary of Qualcomm Incorporated, operates, along with
its subsidiaries, substantially all of Qualcomm's engineering, research
and development functions, and substantially all of its products and
services businesses, including its semiconductor business, QCT. For more
than 25 years, Qualcomm ideas and inventions have driven the evolution
of digital communications, linking people everywhere more closely to
information, entertainment and each other. For more information, visit
Qualcomm's website, OnQ blog, Twitter and Facebook pages.
Qualcomm and 2net are trademarks of QUALCOMM Incorporated, registered in the United States and other countries. Other product and brand names may be trademarks or registered trademarks of their respective owners.