For some time Facebook has studied your Likes, comments, and clicks
to help create better ads and new products, but soon, the company might
also track the location of your cursor on screen. Facebook analytics
chief Ken Rudin told The Wall Street Journal about several new measures
the company is testing to improve its user tracking, such as how long
you hover your cursor over an ad (and whether you click it), and whether
certain on-screen elements are within view or have scrolled off the
page. New data gathered using these methods could help Facebook
create more engaging News Feed layouts and ads.
The Journal notes that
this kind of tracking is hardly uncommon, but until now, Facebook hadn't
gone this deep in its behavioral data measurement. Sites like
Shutterstock, for example, track how long users hover their cursors over
an image before deciding to buy it. Facebook is famous for its liberal use of A/B testing
to try out new products on consumers, but it's using the same method to
judge the efficacy of its new testing methods. "Facebook should know
within months whether it makes sense to incorporate the new data
collection into the business," reports the Journal.
Assuming Facebook's tests go
well, it shouldn't be long before our every flinch is tracked on the
site. So what might come next? Our eyeballs.
15.10.13 - Two EPFL
spin-offs, senseFly and Pix4D, have modeled the Matterhorn in 3D, at a
level of detail never before achieved. It took senseFly’s ultralight
drones just six hours to snap the high altitude photographs that were
needed to build the model.
They weigh less than a kilo each, but they’re as agile as eagles
in the high mountain air. These “eBee” flying robots, developed by
senseFly, a spin-off of EPFL’s Intelligent Systems Laboratory (LIS),
took off in September to photograph the Matterhorn from every
conceivable angle. The drones are completely autonomous, requiring
nothing more than a computer-conceived flight plan before being launched
by hand into the air to complete their mission.
Three
of them were launched from a 3,000m “base camp,” and the fourth made
the final assault from the summit of the iconic Swiss landmark,
at 4,478m above sea level. In their six-hour flights, the completely
autonomous flying machines took more than 2,000 high-resolution
photographs. The only remaining task was for software developed by
Pix4D, another EPFL spin-off from the Computer Vision Lab (CVLab), to
assemble them into an impressive 300-million-point 3D model. The model
was presented last weekend to participants of the Drone and Aerial
Robots Conference (DARC), in New York, by Henri Seydoux, CEO of the
French company Parrot, majority shareholder in senseFly.
All-terrain and even in swarms
“We want above all to demonstrate what our devices are capable of
achieving in the extreme conditions that are found at high altitudes,”
explains Jean-Christophe Zufferey, head of senseFly. In addition to the
challenges of altitude and atmospheric turbulence, the drones also had
to take into consideration, for the first time, the volume of the object
being photographed. Up to this point they had only been used to survey
relatively flat terrain.
Last week the dynamic Swiss company –
which has just moved into new, larger quarters in Cheseaux-sur-Lausanne –
also announced that it had made software improvements enabling drones
to avoid colliding with each other in flight; now a swarm of drones can
be launched simultaneously to undertake even more rapid and precise
mapping missions.
Small cubes with no exterior moving parts can propel themselves forward,
jump on top of each other, and snap together to form arbitrary shapes.
A prototype of a new modular robot,
with its innards exposed and its
flywheel — which gives it the ability to move independently — pulled
out.
Photo: M. Scott Brauer
In 2011, when an MIT senior named John Romanishin proposed a new design
for modular robots to his robotics professor, Daniela Rus, she said,
“That can’t be done.”
Two years later, Rus showed her colleague
Hod Lipson, a robotics researcher at Cornell University, a video of
prototype robots, based on Romanishin’s design, in action. “That can’t
be done,” Lipson said.
In November, Romanishin — now a research
scientist in MIT’s Computer Science and Artificial Intelligence
Laboratory (CSAIL) — Rus, and postdoc Kyle Gilpin will establish once
and for all that it can be done, when they present a paper describing
their new robots at the IEEE/RSJ International Conference on Intelligent
Robots and Systems.
Known as M-Blocks, the robots are cubes with
no external moving parts. Nonetheless, they’re able to climb over and
around one another, leap through the air, roll across the ground, and
even move while suspended upside down from metallic surfaces.
Inside
each M-Block is a flywheel that can reach speeds of 20,000 revolutions
per minute; when the flywheel is braked, it imparts its angular momentum
to the cube. On each edge of an M-Block, and on every face, are
cleverly arranged permanent magnets that allow any two cubes to attach
to each other.
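To get a feel for the mechanism, here is a back-of-the-envelope sketch in Python of the momentum hand-off described above: the braked flywheel gives its angular momentum to the cube, which then pivots about one magnet-anchored edge. The cube and flywheel figures below are illustrative assumptions, not values from the MIT paper.

```python
import math

# Rough check of the M-Block mechanism: a braked flywheel hands its angular
# momentum to the cube, which then pivots about one edge (held by the edge
# magnets). All numbers below are assumptions for illustration.

s   = 0.05    # cube edge length (m), assumed
m   = 0.14    # cube mass (kg), assumed
m_f = 0.03    # flywheel mass (kg), assumed
r_f = 0.02    # flywheel radius (m), assumed
rpm = 20_000  # flywheel speed quoted in the article
g   = 9.81

omega_f = rpm * 2 * math.pi / 60          # flywheel angular velocity (rad/s)
L_f     = 0.5 * m_f * r_f**2 * omega_f    # flywheel angular momentum (disc: I = 1/2 m r^2)

I_edge  = (2.0 / 3.0) * m * s**2          # cube moment of inertia about one edge
omega_c = L_f / I_edge                    # cube angular velocity just after braking
ke      = 0.5 * I_edge * omega_c**2       # kinetic energy available for the flip

# Energy needed to raise the centre of mass over the pivot edge
delta_u = m * g * s * (math.sqrt(2) - 1) / 2

print(f"energy available: {ke*1000:.1f} mJ, energy needed to tip: {delta_u*1000:.1f} mJ")
# With these assumed numbers the available energy exceeds the tipping energy
# many times over, consistent with cubes that can leap rather than merely roll.
```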
“It’s one of these things that the
[modular-robotics] community has been trying to do for a long time,”
says Rus, a professor of electrical engineering and computer science and
director of CSAIL. “We just needed a creative insight and somebody who
was passionate enough to keep coming at it — despite being discouraged.”
Embodied abstraction
As Rus explains,
researchers studying reconfigurable robots have long used an abstraction
called the sliding-cube model. In this model, if two cubes are face to
face, one of them can slide up the side of the other and, without
changing orientation, slide across its top.
The sliding-cube
model simplifies the development of self-assembly algorithms, but the
robots that implement them tend to be much more complex devices. Rus’
group, for instance, previously developed a modular robot called the Molecule,
which consisted of two cubes connected by an angled bar and had 18
separate motors. “We were quite proud of it at the time,” Rus says.
According
to Gilpin, existing modular-robot systems are also “statically stable,”
meaning that “you can pause the motion at any point, and they’ll stay
where they are.” What enabled the MIT researchers to drastically
simplify their robots’ design was giving up on the principle of static
stability.
“There’s a point in time when the cube is essentially
flying through the air,” Gilpin says. “And you are depending on the
magnets to bring it into alignment when it lands. That’s something
that’s totally unique to this system.”
That’s also what made Rus
skeptical about Romanishin’s initial proposal. “I asked him to build a
prototype,” Rus says. “Then I said, ‘OK, maybe I was wrong.’”
Sticking the landing
To
compensate for its static instability, the researchers’ robot relies on
some ingenious engineering. On each edge of a cube are two cylindrical
magnets, mounted like rolling pins. When two cubes approach each other,
the magnets naturally rotate, so that north poles align with south, and
vice versa. Any face of any cube can thus attach to any face of any
other.
The cubes’ edges are also beveled, so when two cubes are
face to face, there’s a slight gap between their magnets. When one cube
begins to flip on top of another, the bevels, and thus the magnets,
touch. The connection between the cubes becomes much stronger, anchoring
the pivot. On each face of a cube are four more pairs of smaller
magnets, arranged symmetrically, which help snap a moving cube into
place when it lands on top of another.
As with any modular-robot
system, the hope is that the modules can be miniaturized: the ultimate
aim of most such research is hordes of swarming microbots that can
self-assemble, like the “liquid steel” androids in the movie “Terminator
II.” And the simplicity of the cubes’ design makes miniaturization
promising.
But the researchers believe that a more refined
version of their system could prove useful even at something like its
current scale. Armies of mobile cubes could temporarily repair bridges
or buildings during emergencies, or raise and reconfigure scaffolding
for building projects. They could assemble into different types of
furniture or heavy equipment as needed. And they could swarm into
environments hostile or inaccessible to humans, diagnose problems, and
reorganize themselves to provide solutions.
Strength in diversity
The
researchers also imagine that among the mobile cubes could be
special-purpose cubes, containing cameras, or lights, or battery packs,
or other equipment, which the mobile cubes could transport. “In the vast
majority of other modular systems, an individual module cannot move on
its own,” Gilpin says. “If you drop one of these along the way, or
something goes wrong, it can rejoin the group, no problem.”
“It’s
one of those things that you kick yourself for not thinking of,”
Cornell’s Lipson says. “It’s a low-tech solution to a problem that
people have been trying to solve with extraordinarily high-tech
approaches.”
“What they did that was very interesting is they
showed several modes of locomotion,” Lipson adds. “Not just one cube
flipping around, but multiple cubes working together, multiple cubes
moving other cubes — a lot of other modes of motion that really open the
door to many, many applications, much beyond what people usually
consider when they talk about self-assembly. They rarely think about
parts dragging other parts — this kind of cooperative group behavior.”
In
ongoing work, the MIT researchers are building an army of 100 cubes,
each of which can move in any direction, and designing algorithms to
guide them. “We want hundreds of cubes, scattered randomly across the
floor, to be able to identify each other, coalesce, and autonomously
transform into a chair, or a ladder, or a desk, on demand,” Romanishin
says.
Newly discovered mechanism could help researchers understand ageing process and lead to ways of slowing it down
Horvath looked at the
DNA of nearly 8,000 samples of 51 different healthy and cancerous cells
and tissues. Photograph: Zoonar GmbH/Alamy
A US scientist has discovered an internal body clock based on DNA that measures the biological age of our tissues and organs.
The
clock shows that while many healthy tissues age at the same rate as the
body as a whole, some of them age much faster or slower. The age of
diseased organs varied hugely, with some many tens of years "older" than
healthy tissue in the same person, according to the clock.
Researchers say that unravelling the mechanisms behind the clock will help them understand the ageing process and hopefully lead to drugs and other interventions that slow it down.
Therapies
that counteract natural ageing are attracting huge interest from
scientists because they target the single most important risk factor for
scores of incurable diseases that strike in old age.
"Ultimately, it would be very exciting to develop therapy interventions to reset the clock and hopefully keep us young," said Steve Horvath, professor of genetics and biostatistics at the University of California in Los Angeles.
Horvath
looked at the DNA of nearly 8,000 samples of 51 different healthy and
cancerous cells and tissues. Specifically, he looked at how methylation,
a natural process that chemically modifies DNA, varied with age.
Horvath
found that the methylation of 353 DNA markers varied consistently with
age and could be used as a biological clock. The clock ticked fastest in
the years up to around age 20, then slowed down to a steadier rate.
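For readers curious how such a clock is built in practice, the basic recipe is a penalized regression of (transformed) age on methylation levels, so that only a small set of markers ends up carrying weight. The sketch below uses synthetic data and illustrative settings; it is not Horvath's published model or coefficients.

```python
# Minimal sketch of an epigenetic "clock": regress age on DNA-methylation
# levels (beta values in [0, 1]) with a penalized linear model, so only a
# small subset of markers gets non-zero weights. Synthetic data, assumed
# settings -- not the published 353-marker clock.
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)
n_samples, n_sites = 200, 1000
ages = rng.uniform(1, 90, n_samples)

# Simulate methylation: a handful of sites drift with age, the rest are noise.
beta = rng.uniform(0.1, 0.9, (n_samples, n_sites))
informative = rng.choice(n_sites, 30, replace=False)
beta[:, informative] += 0.004 * ages[:, None] + rng.normal(0, 0.02, (n_samples, 30))
beta = np.clip(beta, 0, 1)

# The clock "ticks fastest" before ~20, then slows; a log-style transform of
# age below an adult threshold captures that shape.
adult = 20.0
def transform(a):
    return np.where(a <= adult, np.log(a + 1) - np.log(adult + 1), (a - adult) / (adult + 1))

model = ElasticNetCV(l1_ratio=0.5, cv=5).fit(beta, transform(ages))
print("markers with non-zero weight:", np.count_nonzero(model.coef_))
```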
Whether the DNA changes cause ageing or are caused by ageing is an
unknown that scientists are now keen to work out.
"Does this
relate to something that keeps track of age, or is a consequence of age?
I really don't know," Horvath told the Guardian. "The development of
grey hair is a marker of ageing, but nobody would say it causes ageing,"
he said.
The clock has already revealed some intriguing results.
Tests on healthy heart tissue showed that its biological age – how worn
out it appears to be – was around nine years younger than expected.
Female breast tissue aged faster than the rest of the body, on average
appearing two years older.
Diseased tissues also aged at different rates, with cancers speeding up the clock by an average of 36 years. Some brain cancer tissues taken from children had a biological age of more than 80 years.
"Female
breast tissue, even healthy tissue, seems to be older than other
tissues of the human body. That's interesting in the light that breast
cancer is the most common cancer in women. Also, age is one of the
primary risk factors of cancer, so these types of results could explain
why cancer of the breast is so common," Horvath said.
Healthy
tissue surrounding a breast tumour was on average 12 years older than
the rest of the woman's body, the scientist's tests revealed.
Writing in the journal Genome Biology,
Horvath showed that the biological clock was reset to zero when cells
plucked from an adult were reprogrammed back to a stem-cell-like state.
The process for converting adult cells into stem cells, which can grow
into any tissue in the body, won the Nobel prize in 2012 for Sir John Gurdon at Cambridge University and Shinya Yamanaka at Kyoto University.
"It
provides a proof of concept that one can reset the clock," said
Horvath. The scientist now wants to run tests to see how
neurodegenerative and infectious diseases affect, or are affected by,
the biological clock.
"These data could prove valuable in
furthering our knowledge of the biological changes that are linked to
the ageing process," said Veryan Codd, who works on the effects of
biological ageing in cardiovascular disease at Leicester University. "It
will be important to determine whether the accelerated ageing, as
described here, is associated with other age-related diseases and if it
is a causal factor in, or a consequence of, disease development.
"As
more data becomes available, it will also be interesting to see whether
a similar approach could identify tissue-specific ageing signatures,
which could also prove important in disease mechanisms," she added.
Researchers at the MIT Media Lab and the Max Planck Institutes
have created a foldable, cuttable multi-touch sensor that works no
matter how you cut it, allowing multi-touch input on nearly any surface.
In traditional sensors the connectors are laid out in a grid, and when
one part of the grid is damaged you lose sensitivity in a wide swathe
of other sensors. This system lays the sensors out like a star, which
means that cutting one part of the sensor only affects the parts further
down the line. For example, you can cut the corners off a square and the
sensor still works; you can even cut all the way down to the main, central
connector array and, as long as there are still sensors on the surface,
it will pick up input.
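The difference between the two layouts is easy to see in a toy model: give every sensing pad its own trace back to a central connector (the "star") and a cut only kills the pads whose traces cross it, whereas shared row and column lines die wholesale. The sketch below is a deliberate simplification and ignores the printed forward error correction the researchers also use.

```python
# Toy comparison of the two wiring schemes. Pads sit on a 5x5 sheet with the
# connector at the centre. In a row/column grid, a pad needs both its row line
# and its column line intact; in a star layout each pad has its own direct
# trace to the centre, so only pads whose trace crosses the cut are lost.

SIZE = 5
CENTRE = (SIZE // 2, SIZE // 2)

def trace_cells(pad):
    """Cells a pad's dedicated star trace passes through (straight walk to centre)."""
    (r, c), cells = pad, []
    while (r, c) != CENTRE:
        r += (CENTRE[0] > r) - (CENTRE[0] < r)
        c += (CENTRE[1] > c) - (CENTRE[1] < c)
        cells.append((r, c))
    return cells

def surviving(cut):
    pads = [(r, c) for r in range(SIZE) for c in range(SIZE)]
    grid = {p for p in pads
            if not any(cc[0] == p[0] or cc[1] == p[1] for cc in cut)}   # shared lines severed
    star = {p for p in pads
            if p not in cut and not any(cell in cut for cell in trace_cells(p))}
    return len(grid), len(star)

# Cut off one corner of the sheet.
corner_cut = {(0, 0), (0, 1), (1, 0)}
print("pads still working (grid, star):", surviving(corner_cut))
```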
The team that created it, Simon Olberding, Nan-Wei Gong, John Tiab, Joseph A. Paradiso, and Jürgen Steimle, write:
This very direct manipulation allows the end-user to easily make real-world objects and surfaces touch interactive,
to augment physical prototypes and to enhance paper craft. We contribute
a set of technical principles for the design of printable circuitry
that makes the sensor more robust against cuts, damages and removed
areas. This includes
novel physical topologies and printed forward error correction.
You can read the research paper here
but this looks to be very useful in the DIY hacker space as well as for
flexible, wearable projects that require some sort of multi-touch
input. While I can’t imagine we need shirts made of this stuff, I could
see a sleeve with lots of inputs or, say, a watch with a multi-touch
band.
Don’t expect this to hit the next iWatch any time soon – it’s still
very much in prototype stages but definitely looks quite cool.
Is liquid fuel the key to zettascale computing? Dr Patrick Ruch with IBM's test kit
IBM has unveiled a prototype of a new brain-inspired computer powered by what it calls "electronic blood".
The firm says it is learning from nature by building computers fuelled and cooled by a liquid, like our minds.
The human brain packs phenomenal computing power into a tiny
space and uses only 20 watts of energy - an efficiency IBM is keen to
match.
Its new "redox flow" system pumps an electrolyte "blood" through a computer, carrying power in and taking heat out.
A very basic model was demonstrated this week at the technology giant's Zurich lab by Dr Patrick Ruch and Dr Bruno Michel.
Their vision is that by 2060 a one-petaflop computer that would fill half a football field today will fit on your desktop.
"We want to fit a supercomputer inside a sugarcube. To do
that, we need a paradigm shift in electronics - we need to be motivated
by our brain," says Michel.
"The human brain is 10,000 times more dense and efficient than any computer today.
"That's possible because it uses only one - extremely
efficient - network of capillaries and blood vessels to transport heat
and energy - all at the same time."
In 2011, IBM's Watson supercomputer famously beat two human champions on the US quiz show Jeopardy. The victory was hailed as a landmark for cognitive computing - machine had surpassed man.
The future of computing? IBM's model uses a liquid to deliver power and remove heat
But the contest was unfair, says Michel. The brains of Ken
Jennings and Brad Rutter ran on only 20 watts of energy, whereas Watson
needed 85,000 watts.
Energy efficiency - not raw computing power - is the guiding principle for the next generation of computer chips, IBM believes.
Our current 2D silicon chips, which for half a century have doubled in power through Moore's Law, are approaching a physical limit where they cannot shrink further without overheating.
Bionic vision
"The computer industry uses $30bn of energy and throws it out of the window. We're creating hot air for $30bn," says Michel.
"Ninety-nine per cent of a computer's volume is devoted to
cooling and powering. Only 1% is used to process information. And we
think we've built a good computer?"
"The brain uses 40% of its volume for functional performance - and only 10% for energy and cooling."
Michel's vision is for a new "bionic" computing architecture,
inspired by one of the laws of nature - allometric scaling - where an
animal's metabolic power increases with its body size.
An elephant, for example, weighs as much as a million mice.
But it consumes 30 times less energy, and can perform a task even a
million mice cannot accomplish.
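That elephant-and-mice figure is consistent with Kleiber's empirical law, under which metabolic power scales roughly with body mass to the 3/4 power; treating the elephant and the million mice as having the same total mass, the arithmetic lands close to the quoted factor. (Kleiber's law is my framing here, not something Michel spells out.)

```python
# Quick check of the elephant-versus-mice claim using Kleiber's empirical law,
# under which metabolic power scales roughly as (body mass)**0.75. One elephant
# and one million mice are treated as having the same total mass.

n_mice = 1_000_000
ratio = n_mice * (1 / n_mice) ** 0.75   # (million mice) / (one elephant)
print(f"a million mice burn roughly {ratio:.0f}x the energy of one elephant")
# ~32x, in line with the "30 times less energy" figure quoted above.
```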
The same principle holds true in computing, says Michel, whose bionic vision has three core design features.
The first is 3D architecture, with chips stacked high, and memory storage units interwoven with processors.
"It's the difference between a low-rise building, where
everything is spread out flat, and a high rise building. You shorten the
connection distances," says Matthias Kaiserswerth, director of IBM
Zurich.
But there is a very good reason today's chips are gridiron
pancakes - exposure to the air is critical to dissipate the intense heat
generated by ever-smaller transistors.
Piling chips on top of one another locks this heat inside - a major roadblock to 3D computing.
IBM's solution is integrated liquid cooling - where chips are interlayered with tiny water pipes.
The art of liquid cooling has been demonstrated by Aquasar and put to work inside the German supercomputer SuperMUC which - perversely - harnesses warm water to cool its circuits.
SuperMUC consumes 40% less electricity as a result.
Liquid engineering
But for IBM to truly match the marvels of the brain, there is a
third evolutionary step it must achieve - simultaneous liquid fuelling
and cooling.
Just as blood delivers sugar with one hand and carries away heat with the other, IBM is looking for a fluid that can multitask.
Vanadium is the best performer in their current laboratory
test system - a type of redox flow unit - similar to a simple battery.
First a liquid - the electrolyte - is charged via electrodes,
then pumped into the computer, where it discharges energy to the chip.
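A rough sense of scale for that charge-and-pump cycle: the electrical power delivered is simply the electrolyte's energy density times the flow rate. Both numbers in the sketch below are assumptions for illustration, not IBM's specifications.

```python
# How fast must charged electrolyte flow to deliver a given electrical power?
# Both figures below are illustrative assumptions.

energy_density_wh_per_litre = 20.0   # order of magnitude for vanadium electrolyte (assumed)
chip_power_w = 10.0                  # power to deliver to the chip (assumed)

litres_per_hour = chip_power_w / energy_density_wh_per_litre
print(f"required flow: {litres_per_hour:.2f} L/h "
      f"({litres_per_hour * 1000 / 3600:.3f} mL/s)")
# ~0.5 L/h for 10 W -- and the same stream leaves the chip carrying waste heat.
```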
SuperMUC uses liquid cooling instead of air - a model for future computer designs
Redox flow is far from a new technology, and neither is it especially complex.
But IBM is the first to stake its chips on this "electronic
blood" as the food of future computers - and will attempt to optimise it
over the coming decades to achieve zettascale computing.
"To power a zettascale computer today would take more electricity than is produced in the entire world," says Michel.
He is confident that the design hurdles in his bionic model
can be surmounted - not least that a whole additional unit is needed to
charge the liquid.
And while other labs are betting on spintronics, quantum computing, or photonics to take us beyond silicon, the Zurich team believes the real answer lies right behind our eyes.
"Just as computers help us understand our brains, if we
understand our brains we'll make better computers," says director
Matthias Kaiserswerth.
He would like to see a future Watson win Jeopardy on a level playing field.
A redox flow test system - the different coloured liquids have different oxidation states
Other experts in computing agree that IBM's 3D principles are sound.
But as to whether bionic computing will be the breakthrough technology,
the jury is out.
"The idea of using a fluid to both power and cool strikes me
as very novel engineering - killing two birds with one stone," says Prof
Alan Woodward, of the University of Surrey's computing department.
"But every form of future computing has its champions -
whether it be quantum computing, DNA computing or neuromorphic
computing.
"There is a long way to go from the lab to having one of these sitting under your desk."
Prof Steve Furber, leader of the SpiNNaker project, agrees that "going into the third dimension" has more to offer than continually shrinking transistors.
"The big issue with 3D computing is getting the heat out -
and liquid cooling could be very effective if integrated into 3D systems
as proposed here," he told the BBC.
"But all of the above will not get electronics down to the energy-efficiency of the brain.
"That will require many more changes, including a move to analogue computation instead of digital.
"It will also involve breakthroughs in new non-Turing models
of computation, for example based on an understanding of how the brain
processes information."
Current wireless networks have a problem: The more popular they become, the slower they are. Researchers at Fudan University in Shanghai have just become the latest to demonstrate
a technology that transmits data as light instead of radio waves, which
gets around the congestion issue and could be ten times faster than
traditional Wi-Fi.
In
dense urban areas, the range within which Wi-Fi signals are transmitted
is increasingly crowded with noise—mostly, other Wi-Fi signals. What’s
more, the physics of electromagnetic waves sets an upper limit to the
bandwidth of traditional Wi-Fi. The short version: you can only transmit
so much data at a given frequency. The lower the frequency of the wave,
the less it can transmit.
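One way to make that "short version" concrete is Shannon's capacity formula, C = B log2(1 + SNR): the achievable data rate grows with the usable bandwidth B, and in practice B can only be a modest slice of the carrier frequency. The fraction and signal-to-noise ratio below are illustrative assumptions.

```python
import math

# Shannon capacity C = B * log2(1 + SNR): data rate scales with usable
# bandwidth B, and B is in practice only a fraction of the carrier frequency.
# The fraction and SNR are illustrative assumptions.

def capacity_bps(carrier_hz, usable_fraction=0.01, snr=100):
    bandwidth_hz = carrier_hz * usable_fraction
    return bandwidth_hz * math.log2(1 + snr)

wifi_carrier  = 5e9    # 5 GHz Wi-Fi band
light_carrier = 5e14   # visible light, roughly 100,000x higher in frequency

print(f"Wi-Fi-like carrier: {capacity_bps(wifi_carrier) / 1e9:.1f} Gbit/s")
print(f"visible light     : {capacity_bps(light_carrier) / 1e12:.1f} Tbit/s")
# Same assumptions, 100,000x the carrier frequency -> 100,000x the ceiling.
```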
Li-Fi doesn’t work in the dark or outdoors, but it only has to be a supplement to existing wireless networks to be valuable. AP Photo/Kin Cheung
But
what if you could transmit data using waves of much higher frequencies,
and without needing a spectrum license from your country’s telecoms
regulator? Light, like radio, is an electromagnetic wave, but it has
about 100,000 times the frequency of a Wi-Fi signal, and nobody needs a
license to make a light bulb. All you need is a way to make its
brightness flicker very rapidly and accurately so it can carry a signal.
First,
data are transmitted to an LED light bulb—it could be the one
illuminating the room in which you’re sitting now. Then the lightbulb is
flicked on and off very quickly, up to billions of times per second.
That flicker is so fast that the human eye cannot perceive it. (For
comparison, the average energy-saving compact fluorescent bulb already flickers between 10,000 and 40,000 times per second.)
Then a receiver on a computer or mobile device—basically, a little
camera that can see visible light—decodes that flickering into data. LED
bulbs can be flicked on and off quickly enough to transmit data around
ten times as fast as the fastest Wi-Fi networks. (If they could be
manipulated faster, the bandwidth would be even higher.)
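What the article describes is, at its simplest, on-off keying: bits drive the LED, a light sensor samples the brightness, and thresholding recovers the bits. The sketch below illustrates only that bare idea; real Li-Fi systems use far more sophisticated modulation and error correction.

```python
import numpy as np

# Minimal on-off-keying sketch: a bit pattern drives the LED's brightness, a
# photodetector samples that brightness, and thresholding recovers the bits.
# Oversampling factor and noise level are illustrative assumptions.

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 64)                  # payload to send
samples_per_bit = 8                            # receiver oversampling factor (assumed)

led = np.repeat(bits, samples_per_bit).astype(float)       # LED on/off waveform
received = led + rng.normal(0, 0.2, led.size)              # ambient light / sensor noise (assumed)

# Receiver: average the samples within each bit period, then threshold.
per_bit = received.reshape(-1, samples_per_bit).mean(axis=1)
decoded = (per_bit > 0.5).astype(int)

print("bit errors:", int(np.sum(decoded != bits)))
```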
If you’ve ever used a solar-powered calculator, you already know how to connect to Li-Fi. Oledcomm
Li-Fi
has one big drawback compared to Wi-Fi: you, or rather your device,
need to be within sight of the bulb. It wouldn’t necessarily need to be a
special bulb; in principle, overhead lights at work or at home could be
wired to the internet. But it would mean that, unlike with Wi-Fi, you
couldn’t go into the next room unless there were wired bulbs there too.
However, a new generation of ultrafast Wi-Fi devices that we’re likely to start using soon
has a similar limitation. These devices use a higher range of radio
frequencies, which aren’t as crowded with other signals (at least for
now), and have a higher bandwidth, but, like visible light, cannot
penetrate walls.
Engineers and a handful of startups, like Oledcomm, have been experimenting with Li-Fi technology.
The Fudan University team unveiled an experimental Li-Fi network in
which four PCs were all connected to the same light bulb. Other
researchers are working on transmitting data via different colors of LED
lights—imagine, for example, transmitting different signals through
each of the red, green and blue LEDs inside a multi-colored LED
light bulb.
Because
of its limitations, Li-Fi won’t do away with other wireless networks.
But it could supplement them in congested areas, and replace them in places where radio signals need to be kept to a minimum, like hospitals, or where they don’t work, such as underwater.
Google puts a lot of work into creating a virtual map of the world with Street View, sending cars and backpackers everywhere with huge cameras. But what if a computer program could do all that automatically? Well, there's one that can. All it needs is Wikipedia and Google Images.
Developed by Bryan Russell at
Intel Labs and some colleagues at the University of Washington in
Seattle, the program is almost deceptively simple. First, it trawls the
internet (mainly Flickr) for a wide variety of pictures of a location.
By looking at them from different angles, it's able to piece together a
pretty good idea of what it looks like in 3D space from the outside.
Then, for
the interior of that 3D shell, the program cruises through Wikipedia,
making note of every single noun-phrase, since its dumb robot brain
can't tell what is important and what is not. Finally, it searches
Google Images for its big stack of phrases, pulls the relevant pictures
(if it can find any), and plasters them roughly where they belong in the
model's interior. When that's all said and done, it can then behave as a
procedurally generated 3D tour that guides you through a recreation of
whatever you're reading about on Wikipedia. Awesome!
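The "note every single noun-phrase" step can be illustrated with a simple part-of-speech chunker. This is not the researchers' code, just a plausible minimal version of that one stage; fetching Google Images results and placing them inside the 3D shell are separate problems not shown here.

```python
import nltk

# Crude noun-phrase extraction: tokenize, tag parts of speech, then group runs
# of determiner/adjective/noun tokens that contain at least one noun.
nltk.download("punkt", quiet=True)                        # tokenizer model
nltk.download("averaged_perceptron_tagger", quiet=True)   # tagger model (name may vary by NLTK version)

text = ("The Sistine Chapel ceiling, painted by Michelangelo, "
        "includes the famous fresco The Creation of Adam.")

tagged = nltk.pos_tag(nltk.word_tokenize(text))

phrases, current = [], []
for word, tag in tagged + [(".", ".")]:       # sentinel flushes the final chunk
    if tag == "DT" or tag.startswith("JJ") or tag.startswith("NN"):
        current.append((word, tag))
    else:
        if any(t.startswith("NN") for _, t in current):
            phrases.append(" ".join(w for w, _ in current))
        current = []

print(phrases)   # each phrase would become an image-search query in the described system
```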
It's a
crazy idea, but it seems to work pretty well for popular landmarks, at
least. The operating logic here is that if a thing is important, there
will be pictures of it on the internet, and text describing the
pictures. So long as that's true, it's possible to start piecing things
together.
For the moment, the program has only compiled complete
virtual representations of exceedingly popular and well-documented
sites like the Sistine Chapel. With less common places, there's less
data to pull. But with the advent of tech like Google Glass, and the
wide proliferation of smartphones that can take a half-decent picture,
the data-voids are slowly getting filled in—and they'll only fill in
faster as time goes on.
So if you ever needed a great reason to keep all your vacation pictures public and publicly indexed, here it is. [Bryan C. Russell via New Scientist]
Sina Weibo, launched in 2010, has more than 500 million registered users with 100 million messages posted daily
More than two million
people in China are employed by the government to monitor web activity,
state media say, providing a rare glimpse into how the state tries to
control the internet.
The Beijing News says the monitors, described as internet opinion analysts, are on state and commercial payrolls.
China's hundreds of millions of web users increasingly use microblogs to criticise the state or vent anger.
Recent research suggested Chinese censors actively target social media.
The report by the Beijing News said that these monitors were not required to delete postings.
They are "strictly to gather and analyse
public opinions on microblog sites and compile reports for
decision-makers", it said. It also added details about how some of these
monitors work.
Tang Xiaotao has been working as a monitor for less than six months, the report says, without revealing where he works.
"He sits in front of a PC every day, and opening up an application, he types in key words which are specified by clients.
"He then monitors negative opinions related to the clients,
and gathers (them) and compile reports and send them to the clients," it
says.
The report says the software used in the office is even more
advanced and supported by thousands of servers. It also monitors
websites outside China.
China rarely reveals any details concerning the scale and sophistication of its internet police force.
It is believed that the two million internet monitors are
part of a huge army which the government relies on to control the
internet.
The government is also to organise training classes for them for the first time from 14 to 18 October, the paper says.
But it is not clear whether the training will be for existing monitors or for new recruits.
The training will have eight modules, and teach participants
how to analyse and judge online postings and deal with crisis
situations, it says.
The most popular microblogging site Sina Weibo, launched in
2010, now has more than 500 million registered users with 100 million
messages posted daily.
Topics cover a wide range - from personal hobbies and health to
celebrity gossip and food safety - but they also include politically
sensitive issues like official corruption.
Postings deemed to be politically incorrect are routinely deleted.
Hollywood likes to paint movie pirates as freeloaders without morals,
but maybe those so-called dastardly downloaders are simply under-served.
A new website called PiracyData.org tracks the most pirated movies of the week, as reported by TorrentFreak, and taps into Can I Stream It? to show whether file sharers could have bought or rented the movie online instead.
As PiracyData's chart below shows, four out of the 10 most pirated
movies cannot be purchased legally online. Out of the remaining six
movies, three are available for full-priced purchase, but not for rent.
None of the top 10 movies can be watched through subscription streaming
services such as Netflix.
On its face, it looks like a missed opportunity for the movie industry.
Viewers may feel pushed toward piracy when they can't purchase or at
least rent the movies they want to watch.
But it's also worth noting the release timing of these movies. Every
single movie on the list came out this year, but is no longer showing in
major theaters. All but three of these movies are now in an awkward
stage where you can buy them on DVD, but cannot rent them online.
DVD sales are plummeting.
People are growing accustomed to pressing a button and streaming a
movie instantly. The idea of withholding new movies from digital rental
just to juice DVD sales looks increasingly antiquated as people move
beyond the optical disc. For disposable films that people only want to
watch once, a mandatory $15 or $20 purchase is undesirable as well.
Of course, there are exceptions. People do have rental options for The Lone Ranger, After Earth, and This is the End,
yet they remain popular to download on BitTorrent. Unscrupulous
freeloaders do exist, and are not going away. But when the only legal
options are DVD or mandatory purchase, it's no surprise that potential
paying customers will turn to piracy instead.