More and more, governments are using powerful spying software to target human rights activists and journalists, often the forgotten victims of cyberwar. Now, these victims have a new tool to protect themselves.
Called Detekt, it scans a person's computer for traces of surveillance software, or spyware. A coalition of human rights organizations, including Amnesty International and the Electronic Frontier Foundation, launched Detekt on Wednesday with the goal of equipping activists and journalists with a free tool to discover if they've been hacked.
"Our ultimate aim is for human rights defenders, journalists and civil society groups to be able to carry out their legitimate work without fear of surveillance, harassment, intimidation, arrest or torture," Amnesty wrote Thursday in a statement.
The open-source tool was developed by Claudio Guarnieri, a security researcher who has been investigating government abuse of spyware for years. He often collaborates with other researchers at the University of Toronto's Citizen Lab.
During their investigations, Guarnieri and his colleagues discovered, for example, that the Bahraini government used software created by German company FinFisher to spy on human rights activists. They also found out that the Ethiopian government spied on journalists in the U.S. and Europe, using software developed by Hacking Team, another company that sells off-the-shelf surveillance tools.
Guarnieri developed Detekt from software he and the other researchers used during those investigations.
"I decided to release it to the public because keeping it private made no sense," he told Mashable. "It's better to give more people as possible the chance to test and identify the problem as quickly as possible, rather than keeping this knowledge private and let it rot."
Detekt only works with Windows, and it's designed to discover both malware developed by commercial firms and popular spyware used by cybercriminals, such as BlackShades RAT (Remote Access Tool) and Gh0st RAT.
The tool has some limitations, though: it's only a scanner and doesn't remove the malware infection, which is why Detekt's official site warns that if there are traces of malware on your computer, you should stop using it "immediately" and look for help. It also might not detect newer versions of the spyware developed by FinFisher, Hacking Team and similar companies.
"If Detekt does not find anything, this unfortunately cannot be considered a clean bill of health," the software's "readme" file warns.
For some, given these limitations, Detekt won't help much.
"The tool appears to be a simple signature-based black list that does not promise it knows all the bad files, and admits that it can be fooled," John Prisco, president and CEO of security firm Triumfant, said. "Given that, it seems worthless to me, but that’s probably why it can be offered for free."
Joanna Rutkowska, a researcher who develops the security-minded operating system Qubes, said computers with traditional operating systems are inherently insecure, and that tools like Detekt can't help with that.
"Releasing yet another malware scanner does nothing to address the primary problem," she told Mashable. "Yet, it might create a false sense of security for users."
But Guarnieri disagrees, saying that Detekt is not a silver-bullet solution intended to be used in place of commercial anti-virus software or other security tools.
"Telling activists and journalists to spend 50 euros a year for some antivirus license in emergency situations isn't very helpful," he said, adding that Detekt is not "just a tool," but also an initiative to spark discussion around the government use of intrusive spyware, which is extremely unregulated.
For Mikko Hypponen, a renowned security expert and chief research officer for anti-virus vendor F-Secure, Detekt is a good project because its target audience — activists and journalists — don't often have access to expensive commercial tools.
“Since Detekt only focuses on detecting a handful of spy tools — but detecting them very well — it might actually outperform traditional antivirus products in this particular area,” he told Mashable.
Many of you probably think that the National Security Agency (NSA) and open-source software get along like a house on fire. That's to say, flaming destruction. You would be wrong.
In partnership with the Apache Software Foundation, the NSA announced on Tuesday that it is releasing the source code for Niagarafiles (Nifi). The spy agency said that Nifi "automates data flows among multiple computer networks, even when data formats and protocols differ".
Details on how Nifi does this are scant at this point, while the ASF continues to set up the site where Nifi's code will reside.
In a statement, Nifi's lead developer Joseph L Witt said the software "provides a way to prioritize data flows more effectively and get rid of artificial delays in identifying and transmitting critical information".
The NSA is making this move because, according to the director of the NSA's Technology Transfer Program (TTP), Linda L Burger, the agency's research projects "often have broad, commercial applications".
"We use open-source releases to move technology from the lab to the marketplace, making state-of-the-art technology more widely available and aiming to accelerate U.S. economic growth," she added.
The NSA has long worked hand-in-glove with open-source projects. Indeed, Security-Enhanced Linux (SELinux), which is used for top-level security in all enterprise Linux distributions — Red Hat Enterprise Linux, SUSE Linux Enterprise Server, and Debian Linux included — began as an NSA project.
More recently, the NSA created Accumulo, a NoSQL database store that's now supervised by the ASF.
More NSA technologies are expected to be open sourced soon. After all, as the NSA pointed out: "Global reviews and critiques that stem from open-source releases can broaden a technology's applications for the US private sector and for the good of the nation at large."
Following broad security scares like that caused by the Heartbleed bug, it can be frustratingly difficult to find out if a site you use often still has gaping flaws. But a little known community of software developers is trying to change that, by creating a searchable, public index of websites with known security issues.
Think of Project Un1c0rn as a Google for site security. Launched on May 15th, the site's creators say that so far it has indexed 59,000 websites and counting. The goal, according to its founders, is to document open leaks caused by the Heartbleed bug, as well as "access to users' databases" in MongoDB and MySQL.
According to the developers, those three types of vulnerabilities are the most widespread because they rely on commonly used tools. For example, Mongo databases are used by popular sites like LinkedIn, Expedia, and SourceForge, while MySQL powers applications such as WordPress, Drupal or Joomla, and is even used by Twitter, Google and Facebook.
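To make the kind of exposure described above concrete, here is a minimal sketch, assuming the pymongo driver, of how one might check whether a single MongoDB instance answers queries without credentials. It illustrates the misconfiguration the project says it indexes, not Un1c0rn's actual scanner, and should only be pointed at hosts you are authorised to test.

```python
# Minimal sketch: does this MongoDB instance list its databases without credentials?
# Run only against hosts you own or are authorised to test.
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

def mongo_is_open(host, port=27017, timeout_ms=2000):
    """Return True if the server lists its databases without authentication."""
    client = MongoClient(host, port, serverSelectionTimeoutMS=timeout_ms)
    try:
        names = client.list_database_names()   # fails if auth is enforced
        print(f"{host}:{port} exposes databases: {names}")
        return True
    except OperationFailure:
        return False   # reachable, but authentication is required
    except ServerSelectionTimeoutError:
        return False   # unreachable or filtered
    finally:
        client.close()

if __name__ == "__main__":
    mongo_is_open("127.0.0.1")
```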
Having a website’s vulnerability indexed publicly is like advertising that you leave your front doors unlocked and your flat screen in the basement. But Un1c0rn’s founder sees it as teaching people the value of security. And his motto is pretty direct. “Raising total awareness by ‘kicking in the nuts’ is our target,” said the founder, who goes by the alias SweetCorn.
“The exploits and future exploits that will be added are just exploiting people's stupidity or misconception about security from a company selling or buying bullshit protections,” he said. SweetCorn thinks Project Un1c0rn is exposing what is already visible without a lot of effort.
While the Heartbleed bug alerted the general public to how easily hackers can exploit widely used code, clearly vulnerabilities don’t begin and end with the bug. Just last week the CCS Injection vulnerability was discovered, and the OpenSSL foundation posted a security advisory.
“Billions of people are leaving information and trails in billions of different databases, some just left with default configurations that can be found in a matter of seconds for whoever has the resources,” SweetCorn said. Changing and updating passwords is a crucial practice.
I reached out to José Fernandez, a computer security expert and professor at the Polytechnique school in Montreal, to get his take on Project Un1c0rn. "The (vulnerability) tests are quite objective," he said. "There are no reasons not to believe the vulnerabilities listed."
Fernandez added that the only caveat for the search engine was that a listed server could have been patched after the vulnerability scan had been run.
The project is still in its very early stages, with some indexed websites not yet updated, which means not all of the 58,000 websites listed are currently vulnerable to the same weaknesses.
“The Un1c0rn is still weak”, admitted SweetCorn. “We did this with 0.4 BitCoin, I just can't imagine what someone having enough money to spend on information mining could do.” According to SweetCorn, those funds were used to buy the domain name and rent servers.
SweetCorn is releasing few details about the backend of the project, although he says it relies heavily on the Tor network. Motherboard couldn’t independently confirm what kind of search functions SweetCorn is operating or whether they are legal. In any case, he has bigger plans for his project: making it the first peer-to-peer decentralized exploit system, where individuals could host their own scanning nodes.
“We took some easy steps, Disqus is one of them, we would love to see security researchers going on Un1c0rn, leave comments and help (us) fix stuff,” he said.
He hopes that the attention raised by his project will make people understand “what their privacy really (looks like).”
A quick scan through Un1c0rn’s database brings up some interesting results. McGill University in Montreal had some trouble with one of their MySQL databases. The university has since been notified, and their IT team told me the issue had been addressed.
The UK’s cyberspies at GCHQ probably forgot they had a test database open (unless it’s a honeypot), though requests for comments were not answered. A search for “credit card” retrieves 573 websites, some of which might just host card data if someone digs enough.
In an example of how bugs can pervade all corners of the web, the IT team in charge of the VPN for the town of Mandurah in Australia were probably napping while the rest of the world was patching their broken version of OpenSSL. Tests run with the Qualys SSL Labs and filippo.io tools confirmed the domain was indeed vulnerable to Heartbleed.
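For readers who want to run the same kind of check, the sketch below polls the public Qualys SSL Labs API for a host and prints its reported Heartbleed status. The endpoint and field names ("status", "endpoints", "details", "heartbleed") reflect my reading of the v3 API and should be verified against its documentation; this is an illustration, not the exact tooling used for the article.

```python
# Sketch: ask the public SSL Labs API whether a host's endpoints are Heartbleed-vulnerable.
import time
import requests

API = "https://api.ssllabs.com/api/v3/analyze"

def heartbleed_status(host):
    params = {"host": host, "all": "done"}
    requests.get(API, params={**params, "startNew": "on"}, timeout=30)  # kick off a fresh scan
    while True:
        report = requests.get(API, params=params, timeout=30).json()
        if report.get("status") in ("READY", "ERROR"):
            break
        time.sleep(15)   # assessments take a few minutes
    for endpoint in report.get("endpoints", []):
        vulnerable = endpoint.get("details", {}).get("heartbleed")
        print(f"{host} ({endpoint.get('ipAddress')}): heartbleed={vulnerable}")

if __name__ == "__main__":
    heartbleed_status("example.com")
```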
Tools to scan for vulnerabilities across the Internet already exist. Last year, the project critical.io did a mass scan of the Internet to look for vulnerabilities for research purposes. The data was released online and further analyzed by security experts.
But Project Un1c0rn is certainly one of the first to publicly index the vulnerabilities found. Ultimately, if Project Un1c0rn or something like it is successful and open sourced, checking if your bank or online dating site is vulnerable to exploits will be a click away.
A new hacker-developed drone can lift your smartphone’s private data from your GPS location to mobile applications’ usernames and passwords — without you ever knowing. The drone’s power lies in new software, Snoopy, which can turn a benign video-capturing drone into a nefarious data thief.
Snoopy intercepts Wi-Fi signals when mobile devices try to find a network connection. London researchers have been testing the drone and plan to debut it at the Black Hat Asia cybersecurity conference in Singapore next week. Snoopy-equipped drones can aid in identity theft and pose a security threat to mobile device users.
Despite its capabilities, the drone software project was built to raise awareness and show consumers how vulnerable their data is to theft, Glenn Wilkinson, a Sensepost security researcher and Snoopy’s co-creator, told CNN Money.
As a part of its controversial surveillance programs, the U.S. National Security Agency already uses similar technology to tap into Wi-Fi connections and control mobile devices. And even though Snoopy hasn’t hit the market, phone-hacking drones could become a reality in the United States now that a federal judge recently overturned the U.S. Federal Aviation Administration’s commercial drone ban. Because the ban was lifted, filmmakers and tech companies such as Facebook and Amazon are now allowed to fly drones — be it to increase Web access or deliver packages — for profit.
Before latching onto a Wi-Fi signal, mobile devices first check to see if any previously connected networks are nearby. The Snoopy software picks up on this and pretends to be one of those old network connections. Once attached, the drone can access all of your phone’s Internet activity. That information can tell hackers your spending habits and where you work.
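The "previously connected networks" check described above is visible to anyone listening: phones broadcast probe requests naming the networks they are looking for. The sketch below, which assumes a wireless card already in monitor mode (the interface name "wlan0mon" is a placeholder), passively prints those leaked network names and device MAC addresses using scapy. It demonstrates the information leak Snoopy exploits; it does not impersonate networks or intercept traffic.

```python
# Passive sketch: print the Wi-Fi networks nearby phones are probing for.
# Requires root and a wireless interface in monitor mode; "wlan0mon" is a placeholder.
from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11Elt, Dot11ProbeReq

def show_probe(pkt):
    """Print the device MAC address and requested SSID from an 802.11 probe request."""
    if pkt.haslayer(Dot11ProbeReq):
        ssid = pkt[Dot11Elt].info.decode(errors="replace") or "<broadcast>"
        print(f"{pkt[Dot11].addr2} is looking for '{ssid}'")

if __name__ == "__main__":
    sniff(iface="wlan0mon", prn=show_probe, store=False)
```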
With the right tools, Wi-Fi hacks are relatively simple to pull off, and are becoming more common. Personal data can even be sapped from your home’s Wi-Fi router. And because the number of Wi-Fi hotspots keeps growing, consumers must take steps, such as using encrypted sites, to protect their data.
Data breaches overall are happening more often. Customers are still feeling the effect of Target’s breach last year that exposed more than 100 million customers’ personal data. But as smartphones increasingly become the epicenter of personal data storage, hacks targeting the device rather than individual apps pose a greater privacy and security threat.
According to a recent Pew Research study, about 50 percent of Web users publicly post their birthday, email or place of work — all of which can be used in ID theft. Nearly 25 percent of people whose credit card information is stolen also suffer identity theft, according to a study published by Javelin Strategy & Research of customers who received data breach notifications in 2012. Moreover, most people manage about 25 online accounts and only use six passwords, quadrupling the potential havoc from one account’s password breach.
Scientists have developed the ultimate lie detector for social media – a system that can tell whether a tweeter is telling the truth. The creators of the system, called Pheme, named after the Greek mythological figure known for scandalous rumour, say it can judge instantly between truth and fiction in 140 characters or less.
Researchers across Europe are joining forces to analyse the truthfulness of statements that appear on social media in “real time” and hope their system will prevent scurrilous rumours and false statements from taking hold, the Times reported.
The creators believe that the system would have proved useful to the police and authorities during the London Riots of 2011. Tweeters spread false reports that animals had been released from London Zoo and landmarks such as the London Eye and Selfridges had been set on fire, which caused panic and led to police being diverted.
Kalina Bontcheva, from the University of Sheffield’s engineering department, said that the system would be able to test information quickly and trace its origins. This would enable governments, emergency services, health agencies, journalists and companies to respond to falsehoods.
NSA spying, as revealed by the whistleblower Edward Snowden, may cause countries to create separate networks, according to experts. Photograph: Alex Milan Tracy/NurPhoto/Corbis
The vast scale of online surveillance revealed by Edward Snowden is leading to the breakup of the internet as countries scramble to protect private or commercially sensitive emails and phone records from UK and US security services, according to experts and academics.
They say moves by countries, such as Brazil and Germany, to encourage regional online traffic to be routed locally rather than through the US are likely to be the first steps in a fundamental shift in the way the internet works. The change could potentially hinder economic growth.
"States may have few other options than to follow
in Brazil's path," said Ian Brown, from the Oxford Internet Institute.
"This would be expensive, and likely to reduce the rapid rate of
innovation that has driven the development of the internet to date … But
if states cannot trust that their citizens' personal data – as well as
sensitive commercial and government information – will not otherwise be
swept up in giant surveillance operations, this may be a price they are
willing to pay."
Since the Guardian's revelations about the scale of state surveillance, Brazil's government has published ambitious plans to promote Brazilian networking technology and encourage regional internet traffic to be routed locally, and is moving to set up a secure national email service.
In India, it has been reported that government employees are being advised not to use Gmail, and last month Indian diplomatic staff in London were told to use typewriters rather than computers when writing up sensitive documents.
In Germany, privacy commissioners have called for a review of whether Europe's internet traffic can be kept within the EU – and by implication out of the reach of British and US spies.
Surveillance dominated last week's Internet Governance Forum 2013, held in Bali. The forum is a UN body that brings together more than 1,000 representatives of governments and leading experts from 111 countries to discuss the "sustainability, robustness, security, stability and development of the internet".
Debates on child protection, education and infrastructure were overshadowed by widespread concerns from delegates who said the public's trust in the internet was being undermined by reports of US and British government surveillance.
Lynn St Amour, the Internet Society's chief executive, condemned government surveillance as "interfering with the privacy of citizens".
Johan Hallenborg, Sweden's foreign ministry representative, proposed that countries introduce a new constitutional framework to protect digital privacy and human rights and to reinforce the rule of law.
Meanwhile, the Internet Corporation for Assigned Names and Numbers – which is partly responsible for the infrastructure of the internet – last week voiced "strong concern over the undermining of the trust and confidence of internet users globally due to recent revelations of pervasive monitoring and surveillance".
Daniel Castro, a senior analyst at the Information Technology & Innovation Foundation in Washington, said the Snowden revelations were pushing the internet towards a tipping point with huge ramifications for the way online communications worked.
"We
are certainly getting pushed towards this cliff and it is a cliff we do
not want to go over because if we go over it, I don't see how we stop.
It is like a run on the bank – the system we have now works unless
everyone decides it doesn't work then the whole thing collapses."
Castro
said that as the scale of the UK and US surveillance operations became
apparent, countries around the globe were considering laws that would
attempt to keep data in-country, threatening the cloud system – where
data stored by US internet firms is accessible from anywhere in the
world.
He said this would have huge implications for the way large companies operated.
"What
this would mean is that any multinational company suddenly has lots of
extra costs. The benefits of cloud computing that have given us
flexibility, scaleability and reduced costs – especially for large
amounts of data – would suddenly disappear."
Large internet-based firms, such as Facebook and Yahoo, have already raised concerns about the impact of the NSA revelations on their ability to operate around the world. "The government response was, 'Oh don't worry, we're not spying on any Americans'," said Facebook founder Mark Zuckerberg. "Oh, wonderful: that's really helpful to companies trying to serve people around the world, and that's really going to inspire confidence in American internet companies."
Castro wrote a report for Itif in August predicting as much as $35bn could be lost from the US cloud computing market by 2016 if foreign clients pull out their businesses. And he said the full economic impact of the potential breakup of the internet was only just beginning to be recognised by the global business community.
"This
is changing how companies are thinking about data. It used to be that
the US government was the leader in helping make the world more secure
but the trust in that leadership has certainly taken a hit … This is
hugely problematic for the general trust in the internet and e-commerce
and digital transactions."
Brown said that although a localised
internet would be unlikely to prevent people in one country accessing
information in another area, it may not be as quick and would probably
trigger an automatic message telling the user that they were entering a
section of the internet that was subject to surveillance by US or UK
intelligence.
"They might see warnings when information is about
to be sent to servers vulnerable to the exercise of US legal powers – as
some of the Made in Germany email services that have sprung up over the summer are."
He said despite the impact on communications and economic development, a localised internet might be the only way to protect privacy even if, as some argue, a set of new international privacy laws could be agreed.
"How could such rules be verified and enforced? Unlike nuclear tests, internet surveillance cannot be detected halfway around the world."
Abstract: While playing around with the Nmap Scripting Engine (NSE) we discovered an amazing number of open embedded devices on the Internet. Many of them are based on Linux and allow login to standard BusyBox with empty or default credentials. We used these devices to build a distributed port scanner to scan all IPv4 addresses. These scans include service probes for the most common ports, ICMP ping, reverse DNS and SYN scans. We analyzed some of the data to get an estimation of the IP address usage.
All data gathered during our research is released into the public domain for further study.
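As an illustration of one building block the abstract mentions, here is a minimal sketch of a single-host TCP connect scan with a reverse DNS lookup. The actual census used SYN scans and service probes distributed across thousands of devices; this simplified version only shows the basic idea and should be run solely against hosts you are authorised to probe.

```python
# Minimal sketch: TCP connect scan of a few common ports plus a reverse-DNS lookup.
# Scan only hosts you are authorised to probe.
import socket
from concurrent.futures import ThreadPoolExecutor

COMMON_PORTS = [21, 22, 23, 80, 443, 8080]

def probe(host, port, timeout=1.0):
    """Return the port number if a TCP connection succeeds, else None."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return port
    except OSError:
        return None

def scan(host):
    try:
        name = socket.gethostbyaddr(host)[0]     # reverse DNS
    except (socket.herror, socket.gaierror):
        name = "(no PTR record)"
    with ThreadPoolExecutor(max_workers=len(COMMON_PORTS)) as pool:
        open_ports = [p for p in pool.map(lambda p: probe(host, p), COMMON_PORTS) if p]
    print(f"{host} [{name}] open ports: {open_ports}")

if __name__ == "__main__":
    scan("127.0.0.1")
```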
Security cameras that watch you, and predict what you'll do next, sound like science fiction. But a team from Carnegie Mellon University says their computerized surveillance software will be capable of "eventually predicting" what you're going to do.
Computerized surveillance can predict what people will do next -- it's called "activity forecasting" -- and eventually sound the alarm if the action is not permitted. (Credit: Carnegie Mellon University)
Computer software programmed to detect and report illicit behavior could eventually replace the fallible humans who monitor surveillance cameras.
The U.S. government has funded the development of so-called automatic video surveillance technology by a pair of Carnegie Mellon University researchers who disclosed details about their work this week -- including that it has an ultimate goal of predicting what people will do in the future.
"The main applications are in video surveillance, both civil and military," Alessandro Oltramari, a postdoctoral researcher at Carnegie Mellon who has a Ph.D. from Italy's University of Trento, told CNET yesterday.
Oltramari and fellow researcher Christian Lebiere say automatic video surveillance can monitor camera feeds for suspicious activities like someone at an airport or bus station abandoning a bag for more than a few minutes. "In this specific case, the goal for our system would have been to detect the anomalous behavior," Oltramari says.
Think of it as a much, much smarter version of a red light camera: the unblinking eye of computer software that monitors dozens or even thousands of security camera feeds could catch illicit activities that human operators -- who are expensive and can be distracted or sleepy -- would miss. It could also, depending on how it's implemented, raise similar privacy and civil liberty concerns.
Alessandro Oltramari, left, and Christian Lebiere say their software will "automatize video-surveillance, both in military and civil applications." (Credit: Carnegie Mellon University)
A paper (PDF) the researchers presented this week at the Semantic Technology for Intelligence, Defense, and Security conference outside of Washington, D.C. -- today's sessions are reserved only for attendees with top secret clearances -- says their system aims "to approximate human visual intelligence in making effective and consistent detections."
Their Army-funded research, Oltramari and Lebiere claim, can go further than merely recognizing whether any illicit activities are currently taking place. It will, they say, be capable of "eventually predicting" what's going to happen next.
This approach relies heavily on advances by machine vision researchers, who have made remarkable strides in the last few decades in recognizing stationary and moving objects and their properties. It's the same vein of work that led to Google's self-driving cars, face recognition software used on Facebook and Picasa, and consumer electronics like Microsoft's Kinect.
When it works well, machine vision can detect objects and people -- call them nouns -- that are on the other side of the camera's lens.
But to figure out what these nouns are doing, or are allowed to do, you need the computer science equivalent of verbs. And that's where Oltramari and Lebiere have built on the work of other Carnegie Mellon researchers to create what they call a "cognitive engine" that can understand the rules by which nouns and verbs are allowed to interact.
Their cognitive engine incorporates research, called activity forecasting, conducted by a team led by postdoctoral fellow Kris Kitani, which tries to understand what humans will do by calculating which physical trajectories are most likely. They say their software "models the effect of the physical environment on the choice of human actions."
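To give a flavour of what "calculating which physical trajectories are most likely" can mean, here is a toy, hypothetical sketch: a small cost map stands in for the physical environment, value iteration computes a cost-to-goal for every cell, and a softmax over the neighbouring cells turns those costs into probabilities for a person's next step. It is an illustration of the general idea, not Kitani's algorithm or the CMU cognitive engine.

```python
# Toy sketch of trajectory forecasting: the environment's cost map shapes which next steps are likely.
import numpy as np

# 0-based grid: 1 = walkable, 9 = obstacle (e.g. a wall or parked car)
cost = np.array([
    [1, 1, 1, 1, 1],
    [1, 9, 9, 1, 1],
    [1, 1, 9, 1, 1],
    [1, 1, 1, 1, 1],
])
goal = (0, 4)
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

# Value iteration: V[cell] converges to the cheapest cost-to-goal from that cell.
V = np.full(cost.shape, 1e6)
V[goal] = 0.0
for _ in range(100):
    for r in range(cost.shape[0]):
        for c in range(cost.shape[1]):
            if (r, c) == goal:
                continue
            neighbours = [(r + dr, c + dc) for dr, dc in MOVES
                          if 0 <= r + dr < cost.shape[0] and 0 <= c + dc < cost.shape[1]]
            V[r, c] = min(cost[n] + V[n] for n in neighbours)

def step_probabilities(cell, temperature=1.0):
    """Softmax over neighbouring cells: cheaper routes to the goal get higher probability."""
    neighbours = [(cell[0] + dr, cell[1] + dc) for dr, dc in MOVES
                  if 0 <= cell[0] + dr < cost.shape[0] and 0 <= cell[1] + dc < cost.shape[1]]
    scores = np.array([-(cost[n] + V[n]) / temperature for n in neighbours])
    probs = np.exp(scores - scores.max())
    return {n: round(float(p), 3) for n, p in zip(neighbours, probs / probs.sum())}

# A person standing beside the obstacle is far more likely to step around it than into it.
print(step_probabilities((2, 1)))
```

Run from the cell beside the obstacle, the sketch assigns nearly all of the probability to the two walkable moves, which is the sense in which the physical environment shapes the forecast.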
Both projects are components of Carnegie Mellon's Mind's Eye architecture, a DARPA-created project that aims to develop smart cameras for machine-based visual intelligence.
Predicts Oltramari: "This work should support human operators and automatize video-surveillance, both in military and civil applications."
Stuxnet proved that any actor with sufficient know-how in terms of cyber-warfare can physically inflict serious damage upon any infrastructure in the world, even without an internet connection. In the words of former CIA Director Michael Hayden:
“The rest of the world is looking at this and saying, ‘Clearly someone has legitimated this kind of activity as acceptable international conduct’.”
Governments are now alert to the enormous uncertainty created by cyber-instruments and especially worried about cyber-sabotage against critical infrastructure. As US Secretary of Defense Leon Panetta warned in front of the Senate Armed Services Committee in June 2011: “the next Pearl Harbor we confront could very well be a cyber-attack that cripples our power systems, our grid, our security systems, our financial systems, our governmental systems.” On the other hand, a lack of understanding about instances of cyber-warfare such as Stuxnet has led to confused expectations about what cyber-attacks can achieve. Some, however, remain excited about the possibilities of this new form of warfare. For example, retired US Air Force Colonel Phillip Meilinger expects that “[a]… bloodless yet potentially devastating new method of war” is coming. However, under current technological conditions, instruments of cyber-warfare are not yet sophisticated enough to deliver a decisive blow to the adversary. As a result, cyber-capabilities still need to be used alongside kinetic instruments of war.
Advantages of Cyber-Capabilities
Cyber-capabilities provide three principal advantages to those actors that possess them. First, they can deny or degrade electronic devices including telecommunications and global positioning systems in a stealthy manner irrespective of national borders. This means potentially crippling an adversary’s intelligence, surveillance, and reconnaissance capabilities, delaying an adversary’s ability to retaliate (or even identify the source of an attack), and causing serious dysfunction in an adversary’s command and control and radar systems.
Second, precise and timely attribution is particularly challenging in cyberspace because skilled perpetrators can obfuscate their identity. This means that responsibility for attacks needs to be attributed forensically, which not only complicates retaliatory measures but also compromises attempts to seek international condemnation of the attacks.
Finally, attackers can elude penalties because there is currently no international consensus as to what actually constitutes an ‘armed attack’ or ‘imminent threat’ (which can invoke a state’s right of self-defense under Article 51 of the UN Charter) involving cyber-weapons. Moreover, while some countries - including the United States and Japan - insist that the principles of international law apply to the 'cyber' domain, others such as China argue that cyber-attacks “do not threaten territorial integrity or sovereignty.”
Disadvantages of Cyber-Capabilities
On the other hand, high-level cyber-warfare has three major disadvantages for would-be attackers. First, the development of a sophisticated ‘Stuxnet-style’ cyber-weapon for use against well-protected targets is time- and resource-intensive. Only a limited number of states possess the resources required to produce such weapons. For instance, the deployment of Stuxnet required arduous reconnaissance and an elaborate testing phase in a mirrored environment. F-Secure Labs estimates that Stuxnet took more than ten man-years of work to develop, underscoring just how resource- and labor-intensive a sophisticated cyber-weapon is to produce.
Second, sophisticated and costly cyber-weapons are unlikely to be adapted for generic use. As Thomas Rid argues in “Think Again: Cyberwar,” different system configurations need to be targeted by different types of computer code. The success of a highly specialized cyber-weapon therefore requires the specific vulnerabilities of a target to remain in place. If, on the contrary, a targeted vulnerability is ‘patched’, the cyber-operation will be set back until new malware can be prepared. Moreover, once the existence of malware is revealed, it will tend to be quickly neutralized – and can even be reverse-engineered by the target to assist in future retaliation on their part.
Finally, it is difficult to develop precise, predictable, and controllable cyber-weapons for use in a constantly evolving network environment. The growth of global connectivity makes it difficult to assess the implications of malware infection and challenging to predict the consequences of a given cyber-attack. Stuxnet, for instance, was not supposed to leave Iran’s Natanz enrichment facility, yet the worm spread to the World Wide Web, infecting other computers and alerting the international community to its existence.
According to the Washington Post, US forces contemplated launching cyber-attacks on Libya’s air defense system before NATO’s airstrikes. But this idea was quickly abandoned due to the possibility of unintended consequences for civilian infrastructure such as the power grid or hospitals.
Implications for Military Strategy
Despite such disadvantages, cyber-attacks are nevertheless part of contemporary military strategy including espionage and offensive operations. In August 2012, US Marine Corps Lieutenant General Richard Mills confirmed that operations in Afghanistan included cyber-attacks against the adversary. Recalling an incident in 2010, he said, "I was able to get inside his nets, infect his command-and-control, and in fact defend myself against his almost constant incursions to get inside my wire, to affect my operations."
His comments confirm that the US military now employs a combination of cyber- and traditional offensive measures in wartime. As Thomas Mahnken points out in “Cyber War and Cyber Warfare,” cyber-attacks can produce disruptive and surprising effects. And while cyber-attacks are not a direct cause of death, their consequences may lead to injuries and loss of life. As Mahnken argues, it would be inconceivable to directly cause Hiroshima-type damage and casualties merely with cyber-attacks. While a “cyber Pearl Harbor” might shock the adversary on a similar scale as its namesake in 1941, the ability to inflict a decisive, extensive, and foreseeable blow requires kinetic support – at least under current technological conditions.
Implications for Critical Infrastructure
Nevertheless, the shortcomings of cyber-attacks will not discourage malicious actors in peacetime. The anonymous, borderless, and stealthy nature of the 'cyber' domain offers an extremely attractive asymmetrical platform for inflicting physical and psychological damage to critical infrastructure such as finance, energy, power and water supply, telecommunication, and transportation as well as to society at large. Since well before Stuxnet, successful cyber-attacks have been launched against vulnerable yet important infrastructure systems. For example, in 2000 a cyber-attack against an Australian sewage control system resulted in millions of liters of raw sewage leaking into a hotel, parks, and rivers.
Accordingly, the safeguarding of cyber-security is an increasingly important consideration for heads of state. In July 2012, for example, President Barack Obama published an op-ed in the Wall Street Journal warning about the unique risk cyber-attacks pose to national security. In particular, the US President emphasized that cyber-attacks have the potential to severely compromise the increasingly wired and networked lifestyle of the global community. Since the 1990s, the process control systems of critical infrastructures have been increasingly connected to the Internet. This has unquestionably improved efficiencies and lowered costs, but has also left these systems alarmingly vulnerable to penetration. In March 2012, McAfee and Pacific Northwest National Laboratory released a report which concluded that power grids are rendered vulnerable due to their common computing technologies, growing exposure to cyberspace, and increased automation and interconnectivity.
Despite such concerns, private companies may be tempted to prioritize short-term profits rather than allocate more funds to (or accept more regulation of) cyber-security, especially in light of prevailing economic conditions. After all, it takes time and resources to probe vulnerabilities and hire experts to protect them. Nevertheless, leadership in both the public and private sectors needs to recognize that such an attitude provides opportunities for perpetrators to take advantage of security weaknesses to the detriment of economic and national security. It is, therefore, essential for governments to educate and encourage – and if necessary, fund – the private sector to provide appropriate cyber-security in order to protect critical infrastructure.
Implications for Espionage
The sophistication of the Natanz incident, in which Stuxnet was able to exploit Iranian vulnerabilities, stunned the world. Advanced Persistent Threats (APTs) were employed to find weaknesses by stealing data, which made it possible to sabotage Iran’s nuclear program. Yet APTs can also be used in many different ways, for example against small companies in order to exploit larger business partners that may be in possession of valuable information or intellectual property. As a result, both the public and private sectors must brace themselves for daily cyber-attacks and espionage on their respective infrastructures. Failure to do so may result in the theft of intellectual property as well as trade and defense secrets that could undermine economic competitiveness and national security.
In an age of cyber espionage, the public and private sectors must also reconsider what types of information should be deemed “secret,” how to protect that information, and how to share alerts with others without the sensitivity being compromised. While realization of the need for this kind of wholesale re-evaluation is growing, many actors remain hesitant. Indeed, such hesitancy is often driven by fears that doing so may reveal their vulnerabilities, harm their reputations, and benefit their competitors. Of course, there are certain types of information that should remain unpublicized so as not to damage business, the economy, and national security. However, such classifications must not be abused in a way that upsets the balance between the public interest and security.
Cyber-Warfare Is Here to Stay
The Stuxnet incident is set to encourage the use of cyber-espionage and sabotage in warfare. However, not all countries can afford to acquire offensive cyber-capabilities as sophisticated as Stuxnet, and a lack of predictability and controllability continues to make the deployment of cyber-weapons a risky business. As a result, many states and armed forces will continue to combine both kinetic and 'cyber' tactics for the foreseeable future. Growing interconnectivity also means that the number of potential targets is set to grow. This, in turn, means that national cyber-security strategies will need to confront the problem of prioritization. Both the public and private sectors will have to decide which information and physical targets need to be protected and work together to share information effectively.
Facedeals - a new camera that can recognise shoppers from their Facebook pictures as they enter a shop, and then offer them discounts
A promotional video shows drinkers entering a bar and then being offered cheap drinks as they are recognised.
'Facebook check-ins are a powerful mechanism for businesses to deliver discounts to loyal customers, yet few businesses—and fewer customers—have realized it,' said Nashville-based advertising agency Redpepper.
They are already trialling the scheme in firms close to their office.
'A search for businesses with active deals in our area turned up a measly six offers.
'The odds we’ll ever be at one of those six spots are low (a strip club and photography studio among them), and the incentives for a check-in are not nearly enticing enough for us to take the time.
'So we set out to evolve the check-in and sweeten the deal, making both irresistible.
'We call it Facedeals.'
The Facedeal camera can identify faces when people walk in by comparing Facebook pictures of people who have signed up to the service
Facebook recently hit the headlines when it bought face.com, an Israeli firm that pioneered the use of face recognition technology online.
The social networking giant uses the software to recognise people in uploaded pictures, allowing it to accurately spot friends.
The software uses a complex algorithm to find the correct person from their Facebook pictures
The Facedeals camera requires people to have authorised the Facedeals app through their Facebook account.
This verifies your most recent photo tags and maps the biometric data of your face.
The system then learns what a user looks like as more pictures are approved.
This data is then used to identify you in the real world.
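The enrol-then-match flow described above can be sketched with the open-source face_recognition library: build face encodings from photos a user has approved, then compare faces seen by a camera against those encodings. The file names are placeholders, and this is a generic illustration of the approach rather than Facedeals' or face.com's actual system.

```python
# Sketch of an enrol-then-match flow using the open-source face_recognition library.
# File names are placeholders for photos the user has approved.
import face_recognition

# "Enrolment": build face encodings from approved, tagged photos.
approved_photos = ["tagged_photo_1.jpg", "tagged_photo_2.jpg"]
known_encodings = []
for path in approved_photos:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:                      # keep the first face found in each photo
        known_encodings.append(encodings[0])

# "Check-in": compare a frame from the shop camera against the enrolled encodings.
frame = face_recognition.load_image_file("camera_frame.jpg")
for encoding in face_recognition.face_encodings(frame):
    matches = face_recognition.compare_faces(known_encodings, encoding, tolerance=0.6)
    if any(matches):
        print("Recognised an enrolled customer; offer the deal")
```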
In a demonstration video, the firm behind the camera showed it being used to offer free drinks to customers if they signed up to the system.