15.10.13 - Two EPFL
spin-offs, senseFly and Pix4D, have modeled the Matterhorn in 3D at a
level of detail never before achieved. It took senseFly’s ultralight
drones just six hours to snap the high-altitude photographs needed to
build the model.
They weigh less than a kilo each, but they’re as agile as eagles
in the high mountain air. These “eBees,” flying robots developed by
senseFly, a spin-off of EPFL’s Intelligent Systems Laboratory (LIS),
took off in September to photograph the Matterhorn from every
conceivable angle. The drones are completely autonomous, requiring
nothing more than a computer-conceived flight plan before being launched
by hand into the air to complete their mission.
Three
of them were launched from a 3,000m “base camp,” and the fourth made
the final assault from the summit of the iconic Swiss landmark,
at 4,478m above sea level. In their six-hour flights, the completely
autonomous flying machines took more than 2,000 high-resolution
photographs. The only remaining task was for software developed by
Pix4D, another EPFL spin-off from the Computer Vision Lab (CVLab), to
assemble them into an impressive 300-million-point 3D model. The model
was presented last weekend to participants of the Drone and Aerial
Robots Conference (DARC), in New York, by Henri Seydoux, CEO of the
French company Parrot, majority shareholder in senseFly.
All-terrain and even in swarms
“We want above all to demonstrate what our devices are capable of
achieving in the extreme conditions that are found at high altitudes,”
explains Jean-Christophe Zufferey, head of senseFly. In addition to the
challenges of altitude and atmospheric turbulence, the drones also had
to take into consideration, for the first time, the volume of the object
being photographed. Up to this point they had only been used to survey
relatively flat terrain.
Last week the dynamic Swiss company –
which has just moved into new, larger quarters in Cheseaux-sur-Lausanne –
also announced that it had made software improvements enabling drones
to avoid colliding with each other in flight; now a swarm of drones can
be launched simultaneously to undertake even more rapid and precise
mapping missions.
As a programmer who wants to write well-performing code, I am very
interested in understanding the architectures of CPUs and GPUs. However,
unlike desktop and server CPUs, mobile CPU and GPU vendors tend to do
very little architectural disclosure - a fact that we've been working
hard to change over the past few years. Often, all that's available
are marketing slides with fuzzy performance claims. This situation
frustrates me to no end personally. We've done quite a bit of low-level
mobile CPU analysis at AnandTech in pursuit of understanding
architectures where there is no publicly available documentation. In
this spirit, I wrote a few synthetic tests to better understand the
performance of current-gen ARM CPU cores without having to rely upon
vendor supplied information. For this article I'm focusing exclusively
on floating point performance.
We will look at 5 CPU cores today: the ARM Cortex A9, ARM Cortex A15,
Qualcomm Scorpion, Qualcomm Krait 200 and Qualcomm Krait 300. The test
devices are listed below.
Devices tested
Device | OS | SoC | CPU | Frequency | Number of cores
Samsung Galaxy SIIX (T989D) | Android 4.0 | Qualcomm APQ8060 | Scorpion | 1.5 GHz | 2
Boundary Devices BD-SL-i.mx6 | Ubuntu Oneiric | Freescale i.mx6 | Cortex-A9 | 1.0 GHz | 4
Blackberry Z10 | Blackberry 10 (10.1) | Qualcomm MSM8960 | Krait 200 | 1.5 GHz | 2
Google Nexus 10 | Android 4.2.2 | Samsung Exynos 5250 | Cortex-A15 | 1.7 GHz | 2
HTC One | Android 4.1.2 | Qualcomm Snapdragon 600 | Krait 300 | 1.7 GHz | 4
I wanted to test the instruction throughput of various floating point
instructions. I wrote a simple benchmark consisting of a loop with a
large number of iterations. The loop body consisted of many (say 20)
floating point instructions with no data dependence between them. The
tests were written in C++ with GCC NEON intrinsics where required, and I
always checked the generated assembly to verify that it was as expected.
There were no memory instructions inside the loop, so memory performance
was not an issue. There were minimal dependencies in
the loop body. I tested the performance of scalar addition,
multiplication and multiply-accumulate for 32-bit and 64-bit floating
point datatypes. All the tested ARM processors also support the NEON
instruction set, which is a SIMD (single instruction multiple data)
instruction set for ARM for integer and floating point operations. I
tested the performance of 128-bit floating point NEON instructions for
addition, multiplication and multiply-accumulate.
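To make the setup concrete, here is a minimal sketch of what such a throughput loop can look like, using GCC/Clang NEON intrinsics on Linux. The iteration count, the number of independent accumulators and the timing code are illustrative choices rather than the exact harness behind the numbers below; as with the original tests, the generated assembly should be inspected to make sure the compiler keeps all the independent operations.

```cpp
// Sketch of a NEON fp32 add throughput test (illustrative, not the exact harness).
#include <arm_neon.h>
#include <cstdio>
#include <ctime>

int main() {
    const long iterations = 100000000L;
    // Independent accumulators so there is no data dependence between the
    // additions issued inside one loop iteration (the real tests used ~20).
    float32x4_t a0 = vdupq_n_f32(1.0f), a1 = vdupq_n_f32(2.0f);
    float32x4_t a2 = vdupq_n_f32(3.0f), a3 = vdupq_n_f32(4.0f);
    float32x4_t a4 = vdupq_n_f32(5.0f), a5 = vdupq_n_f32(6.0f);
    float32x4_t a6 = vdupq_n_f32(7.0f), a7 = vdupq_n_f32(8.0f);
    const float32x4_t b = vdupq_n_f32(0.5f);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < iterations; ++i) {
        // Eight independent 128-bit NEON additions per iteration.
        a0 = vaddq_f32(a0, b);  a1 = vaddq_f32(a1, b);
        a2 = vaddq_f32(a2, b);  a3 = vaddq_f32(a3, b);
        a4 = vaddq_f32(a4, b);  a5 = vaddq_f32(a5, b);
        a6 = vaddq_f32(a6, b);  a7 = vaddq_f32(a7, b);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double seconds = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    // 8 NEON add instructions per iteration, 4 fp32 flops per 128-bit add.
    double gflops = 8.0 * 4.0 * iterations / seconds / 1e9;
    // Consume the accumulators so the compiler cannot remove the work.
    float keep = vgetq_lane_f32(a0, 0) + vgetq_lane_f32(a4, 1);
    printf("Add (fp32 NEON): %.2f GFlops (check %f)\n", gflops, keep);
    return 0;
}
```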
Apart from testing throughput of individual instructions, I also wrote a
test for testing throughput of a program consisting of two types of
instructions: scalar addition and scalar multiplication instructions.
The instructions were interleaved, i.e. the program consisted of an
addition followed by a multiply, followed by another add, then another
multiply and so on. There were no dependencies between the additions and
following multiplies. You may be wondering about the reasoning behind this
mixed test. Some CPU cores (such as AMD's K10 core) have two floating
point units but the two floating point units may not be identical. For
example, one floating point unit may only support addition while another
may only support multiplication. Thus, if we only test the additions
and multiplications separately, we will not see the peak throughput on
such a machine. We perform the mixed test to identify such cases.
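The mixed test follows the same pattern as the other loops. Here is a rough sketch of its loop body (scalar fp64 shown); it is meant to slot into the same timing harness as the sketch above, and the counts are again illustrative rather than the exact code used for the measurements.

```cpp
// Mixed add/mul loop body (scalar fp64), timed like the sketch above.
// Adds and muls are interleaved with no dependence between neighbouring
// instructions, so a core whose two FP pipes are specialized (one add-only,
// one mul-only) can still issue both every cycle.
const long iterations = 100000000L;
double s0 = 1.0, s1 = 1.0, s2 = 1.0, s3 = 1.0;
const double c = 1.000001;
for (long i = 0; i < iterations; ++i) {
    s0 = s0 + c;   // independent add chain
    s1 = s1 * c;   // independent mul chain
    s2 = s2 + c;   // another independent add
    s3 = s3 * c;   // another independent mul
    // The real loop bodies repeated this pattern roughly 20 times.
}
// Use s0..s3 afterwards (e.g. print their sum) so the loop is not optimized away.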
All the tests mentioned above measure the amount of time taken for a
particular number of instructions and thus we get the instructions
executed per-second. We also need to know the frequency to get the
instructions executed per-cycle. Knowing the peak frequency of the
device is not enough because CPUs have multiple frequency states and the
tests may not be running at the advertised peak speeds. Thus, I also
wrote code to monitor the percentage of time spent in each frequency
state as reported by the kernel. The frequency was calculated as the
average of the frequency states weighted by percentage of time spent in
each state. The observed frequencies on the Scorpion (APQ8060), Cortex A9
(i.mx6) and Cortex A15 (Exynos 5250) were 1.242 GHz, 992 MHz and 1.7 GHz
respectively on all tests, except where noted in the results below.
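As a sketch of that monitoring idea, the snippet below reads the cpufreq time-in-state statistics that a Linux kernel with cpufreq-stats enabled exposes in sysfs, samples them before and after a test, and computes the time-weighted average frequency. The exact path, file format and availability vary by device and kernel configuration, so treat this as an illustration of the method rather than the exact code used.

```cpp
// Estimate the average CPU frequency during a test from cpufreq time_in_state.
#include <cstdio>
#include <fstream>
#include <map>

// time_in_state lists "frequency_in_kHz time" pairs (time in 10 ms units).
std::map<long, long> read_time_in_state(const char* path) {
    std::map<long, long> state;   // freq (kHz) -> cumulative time
    std::ifstream in(path);
    long freq_khz, time_units;
    while (in >> freq_khz >> time_units)
        state[freq_khz] = time_units;
    return state;
}

int main() {
    const char* path = "/sys/devices/system/cpu/cpu0/cpufreq/stats/time_in_state";
    std::map<long, long> before = read_time_in_state(path);
    // ... run the floating point test here ...
    std::map<long, long> after = read_time_in_state(path);

    // Weight each frequency state by the time spent in it during the test.
    double weighted = 0.0, total = 0.0;
    for (const auto& kv : after) {
        long delta = kv.second - before[kv.first];
        weighted += static_cast<double>(kv.first) * delta;
        total += delta;
    }
    if (total > 0)
        printf("average frequency: %.0f MHz\n", weighted / total / 1000.0);
    return 0;
}
```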
However, as it turns out, the method I used for measuring the time
spent in each frequency state does not work on aSMP designs like the
Krait 200 based Snapdragon S4 and Krait 300 based Snapdragon 600. For
Krait 200, the results reported here are for MSM8960 which shouldn't
really have thermal throttling issues. My results on the MSM8960 also
line up quite neatly with the assumption that the CPU spent most or all
of its time in the test in the peak frequency state. Brian also ran the
test on a Nexus 4 and the results were essentially identical as both
have the same peak, which is additional confirmation that our results
are likely correct. Thus I will assume a frequency of 1.5 GHz while
discussing Krait 200 results. Results on Krait 300 (Snapdragon 600)
however are more mixed. I am not sure if it is reaching peak frequency
on all the tests and thus I am less sure of the per-cycle estimates on
this chip. Brian also ran the tests on another handset (LG Optimus G
Pro) with the same Snapdragon 600, and the results were qualitatively
very similar.
Now the results. First up, the raw data collected from the tests in gigaflops:
Performance of each CPU in GFlops on different tests
Test | Scorpion (APQ8060) | Cortex-A9 (i.mx6) | Krait 200 (MSM8960) | Cortex-A15 (Exynos 5250) | Krait 300 (Snapdragon 600)
Add (fp64) | 1.23 | 0.99 | 1.33 | 1.55 @ 1.55 GHz | 1.6
Add (fp32) | 1.19 | 0.99 | 1.46 | 1.69 | 1.72
Mul (fp64) | 0.61 | 0.50 | 1.48 | 1.69 | 1.72
Mul (fp32) | 1.22 | 0.99 | 1.49 | 1.69 | 1.72
Mixed (fp64) | 0.82 | 0.99 | 1.48 | 1.63 | 1.72
Mixed (fp32) | 1.23 | 0.99 | 1.47 | 1.69 | 1.72
MAC (fp64) | 1.23 | 0.99 | 1.48 | 3.35 | 2.65
MAC (fp32) | 2.47 | 1.98 | 1.47 | 3.39 | 3.13
Add (fp32 NEON) | 4.94 | 1.99 | 5.86 | 6.77 | 6.89
Mul (fp32 NEON) | 4.89 | 1.99 | 5.76 | 6.77 | 6.89
MAC (fp32 NEON) | 9.88 | 3.98 | 5.91 | 13.55 | 12.5
Before we discuss the results, it is important to keep in mind that the
results and per-cycle timing estimates reported are what I observed
from the tests. I did my best to ensure that the design of the tests was
very conducive to achieving high throughput. However, it is possible
there may be some cases where an architecture can achieve higher
performance than what I was able to get out of my tests. With that
out of the way, let's look at the results.
In the data, we need to distinguish between the number of instructions and
the number of flops. I count a scalar addition or multiply as one flop and a
scalar MAC as two flops; a NEON addition or multiply counts as four flops
and a NEON MAC as eight flops. Thus, we get the following per-cycle
instruction throughput estimates:
Estimated floating point instruction throughput per cycle
Test | Scorpion | Cortex A9 | Krait 200 | Cortex A15 | Krait 300
Add (fp64) | 1 | 1 | 1 | 1 | 1
Add (fp32) | 1 | 1 | 1 | 1 | 1
Mul (fp64) | 1/2 | 1/2 | 1 | 1 | 1
Mul (fp32) | 1 | 1 | 1 | 1 | 1
Mixed (fp64) | 2/3 | 1 | 1 | 1 | 1
Mixed (fp32) | 1 | 1 | 1 | 1 | 1
MAC (fp64) | 1/2 | 1/2 | 1/2 | 1 | 7/9
MAC (fp32) | 1 | 1 | 1/2 | 1 | 10/11
Add (fp32 NEON) | 1 | 1/2 | 1 | 1 | 1
Mul (fp32 NEON) | 1 | 1/2 | 1 | 1 | 1
MAC (fp32 NEON) | 1 | 1/2 | 1/2 | 1 | 10/11
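As a sanity check on the conversion: the Cortex A15's 13.55 GFlops on the NEON MAC test corresponds to 13.55 / 8 ≈ 1.69 billion NEON MAC instructions per second, which at the observed 1.7 GHz is almost exactly 1 instruction per cycle, the value shown in the table.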
We start with the Cortex A9. It achieves a throughput of 1
instruction/cycle for most scalar instructions, except for fp64 MUL and
fp64 MAC, which can only be issued once every two cycles. The mixed test
reveals that although fp64 muls can only be issued every two cycles, the
Cortex A9 can issue an fp64 add in the otherwise empty pipeline slot;
thus, in the mixed test it was able to achieve a throughput of 1
instruction/cycle. The NEON implementation in the Cortex A9 has a 64-bit
datapath, and all NEON instructions take 2 cycles. Qualcomm's Scorpion
handles scalar instructions much like the Cortex A9, except that it seems
unable to issue fp64 adds immediately after fp64 muls in the mixed test.
Scorpion uses a full 128-bit datapath for NEON and has twice the NEON
throughput of the Cortex A9.
Krait 200 features an improved multiplier, and offers 1
instruction/cycle throughput for most scalar and NEON instructions.
Interestingly, Krait 200 has half the per-cycle throughput for MAC
instructions, which is a regression compared to Scorpion. Krait 300
improves the MAC throughput compared to Krait 200, but still appears to
be unable to reach a throughput of 1 instruction/cycle, possibly revealing
some issues in the pipeline. An alternate explanation is that the Snapdragon
600 reduced its frequency in the MAC tests for some unknown reason; without
accurate frequency information, it is currently difficult to make that
judgment. Cortex A15 is the clear leader here, and offers
throughput of 1 FP instruction/cycle in all our tests.
In the big picture, readers may want to know how the floating point
capabilities of these cores compare to those of x86 cores. I consider Intel's
Ivy Bridge and Haswell as datapoints for big x86 cores, and AMD's Jaguar
as a datapoint for a small x86 core. For double-precision (fp64),
current ARM cores appear to be limited to 2 flops/cycle for FMAC-heavy
workloads and 1 flop/cycle for non-FMAC workloads. Ivy Bridge can have a
throughput of up to 8 flops/cycle and Haswell can do 16 flops/cycle
with AVX2 instructions. Jaguar can execute up to 3 flops/cycle. Thus,
current ARM cores are noticeably behind in this case. Apart from the
usual reasons (power and area constraints, very client focused designs),
current ARM cores also particularly lag behind in this case because
currently NEON does not have vector instructions for fp64. ARMv8 ISA
adds fp64 vector instructions and high performance implementations of
the ISA such as Cortex A57 should begin to reduce the gap.
For fp32, Ivy Bridge can execute up to 16 fp32 flops/cycle, Haswell can
do up to 32 fp32 flops/cycle and AMD's Jaguar can perform 8 fp32
flops/cycle. Current ARM cores can do up to 8 flops/cycle using NEON
instructions. However, ARM NEON instructions are not IEEE 754 compliant,
whereas SSE and AVX floating point instructions are IEEE 754 compliant.
Thus, comparing flops obtained with NEON instructions to flops from SSE
instructions is not an apples-to-apples comparison. Applications that require IEEE 754
compliant arithmetic cannot use NEON but more consumer oriented
applications such as multimedia applications should be able to use NEON.
Again, ARMv8 will fix this issue and will bring fully IEEE
754-compliant fp32 vector instructions.
To conclude, Cortex A15 clearly leads amongst the CPUs tested today
with Krait 300 very close behind. It is also somewhat disappointing that
none of the CPU cores tested displayed a throughput of more than 1 FP
instruction/cycle in these tests. I end on a cautionary note: the
tests here are synthetic tests that only stress the FP units. Floating
point ALU peaks are only one part of a microarchitecture. Performance of
real-world applications will depend upon the rest of the microarchitecture,
such as the cache hierarchy, out-of-order execution capabilities and so on.
We will continue to make further investigations into these CPUs to
understand them better.
Ever since the introduction of the Apple TV there has been a
lot of discussion and speculation about apps for the device. I think
those discussions have missed some important technical aspects.
My Basic Assertion
Apple has sold over 13 million Apple TV boxes. This is a good market
size for attracting developers to the platform. It avoids the chicken
and egg problem where nobody wants to buy new hardware until there are
apps for it, and developers don’t want to invest in a new platform until
there are enough potential customers.
Apple TV customers are purchasing over 800,000 TV episodes and
350,000 movies per day. And Apple is continuously adding new services to
the current generation Apple TV, also indicating that this is not a
product that is about to be replaced.
Therefore, my basic assertion which the rest of this article builds upon is that an Apple TV SDK and subsequently apps for the Apple TV need to work on the current generation Apple TV hardware.
An Actual TV from Apple
For years there have been speculation that Apple is just about to
launch a flat screen TV with the Apple logo on it; to revolutionize our
living rooms. For the purposes of this article I will just posit that
any app capable hardware built into an Apple TV set will have to be
compatible with the current Apple TV box, per my basic assertion above.
The Apple TV SDK
The 3rd generation Apple TV already runs iOS, so “all” that’s missing is
an App Store, some people say. Oh, and a way to control apps other than
with the anemic Apple TV remote.
The solution to the latter problem is the new game controller API
introduced with iOS 7. I’m speculating that compatible game controllers
can come from third party accessory manufacturers as snap-ons to your
existing iDevices, and as low cost freestanding devices similar in form
factor to Wii remotes and other game console controllers. A minor
complication is that existing Apple TV owners don’t have game
controllers, so if an App Store is introduced, it will not “just work”
for them.
More problematic is where purchased apps will be saved on the Apple
TV. The “black puck” generation Apple TV officially does not have any
internal storage. However, iFixit’s teardown
showed that the device does have an 8 GB flash memory chip. Allegedly
this memory is used for caching streaming movies to improve the watching
experience.
8 GB seems a bit excessive for just a cache, so say that we allocate
half to storing apps. Remember back in the day when we only had 4 GB
storage on the original iPhone? How many high quality iOS games would
fit into 4 GB today?
So why not stream the apps too? Movies and music are great candidates
for streaming since you typically consume them linearly. Compiled code
is unfortunately not so predictable. There are other systems out there
that stream software, so it’s not an impossible problem. But it doesn’t
seem like a trivial thing to add on top of iOS when it was not initially
designed for this.
For this reason I think it’s unlikely that there will be an Apple TV SDK anytime soon.
Future Apple TV Hardware
Apple is no stranger to releasing new hardware that replaces and
obsoletes their current models. Releasing a new Apple TV that has
built-in storage would be easy for them. But wait, they already did
that. The first generation Apple TV had a built-in 40 or 160 GB hard
drive. Flip-flopping back to the hard drive design after they finally
found success with the current model would be a strange product
evolution path.
What about flash memory? Even though Apple is the world’s largest
buyer of flash memory, it’s not cheap. The main technical difference
between the various iPhone/iPad/iPod models is the amount of flash
memory included. Take a look at the price differences to get a feel for
how expensive flash memory is. At the current $99 price the Apple TV
would be a stand-out in the game console market. At $199 it would be in a
crowd of low powered game machines.
AirPlay
The Apple TV can act as an AirPlay receiver for both audio and video.
iOS apps have been able to send streams over AirPlay since iOS 4.3 and
AirPlay mirroring is available in iDevices starting with iPhone 4S. I’ve
written
about the AirPlay potentials for app developers before. And there are
several games on the App Store that make use of AirPlay. What is new
this time around is the game controller API. This enhances gameplay in
several ways. Significant screen areas no longer need to be dedicated to
touch controls for your fingers, which makes even less sense when you’re
viewing the action on your TV and (hopefully) not touching your TV to
control the game. Also, with physical buttons on a game controller you can
keep your eyes on the big TV screen instead of having to look down at your
iDevice screen to see where your fingers are.
In this regard I agree with Kyle Richter
that the “Apple TV SDK” has already been launched. You will use the
iDevice you already own to purchase and play games on, and then use the
current Apple TV to display the action on your big TV screen so that
your friends and family can be part of the fun.
The game controller API will certainly enhance game play and raise
the awareness of gaming with your Apple TV. But it’s not a requirement,
as all games that support the game controller API presumably have to
work without a game controller connected.
New game console generations are launched about every 5-6 years.
People just don’t upgrade components in their entertainment system as
often as they upgrade their mobile phones. With this upgrade cycle Apple
can take advantage of newer gaming hardware much quicker than the
competitors if the games actually run on iDevices instead of on the
Apple TV.
AirPlay has a drawback in that there is a lag between the bits being
drawn on the iDevice screen and the image showing up on the Apple
TV. This could be irritating for some fast-paced games. But this could
be countered in the app with some clever delay handling and by designing
your game mechanics with this in mind. When this is not possible, the
active player can use the iDevice screen and friends watching would look
at the TV not caring that there is a slight delay.
Multiplayer
iDevices can already communicate with each other, so a multiplayer
game can be done by having one device be the master that renders the
screen for all players, and the other devices just send the movements of
their players to the master.
With stand-alone game controllers (i.e. those that don’t snap on to
the device) you could connect multiple controllers to one iDevice for
multiplayer capability. This is even easier to handle from a programming
perspective.
What Does This Mean for Your App Business?
If you don’t already own an Apple TV go buy one. Also get that new
fancier flat screen TV you’ve been wanting. Write them off as business
expenses since you of course need these new toys to properly test your
apps.
If you are developing games, you should definitely add support for
the game controller API when you update your apps for iOS 7. Remember
that Apple loves to feature apps that make good use of new technologies
and APIs.
You should also consider supporting AirPlay. This is very easy to do.
The next level is to consider the Apple TV environment when you
design a new game. I’m sure there are many new and exciting game ideas
that will be invented over the next several months.
Associating
numbers with specific characters has proved necessary to allow
automated telegraph printers (teleprinters) and then computers to
represent text. The most widely used mapping between numbers and letters
was that approved on June 17, 1963, by the American Standards
Association. It is the American Standard Code for Information
Interchange, better known as ASCII. -- From "What is ASCII?" (The Economist, 2013). Full story and related links: the American Standards Association document (from www.wps.com); "1963: The debut of ASCII" (CNN, 1999); www.asciitable.com; www.ascii-code.com; the entry from www.cryptomuseum.com; and Bob Bemer's website (Bemer helped create and standardize ASCII).
We’ve been hearing a lot about Google‘s
self-driving car lately, and we’re all probably wanting to know how
exactly the search giant is able to construct such a thing and drive
itself without hitting anything or anyone. A new photo has surfaced that
demonstrates what Google’s self-driving vehicles see while they’re out
on the town, and it looks rather frightening.
The image was tweeted
by Idealab founder Bill Gross, along with a claim that the self-driving
car collects almost 1GB of data every second (yes, every second). This
data includes imagery of the car’s surroundings in order to effectively
and safely navigate roads. The image shows that the car sees its
surroundings through an infrared-like camera sensor, and it even can
pick out people walking on the sidewalk.
Of course, 1GB of data every second isn’t too surprising when you
consider that the car has to get a 360-degree image of its surroundings
at all times. The image we see above even distinguishes different
objects by color and shape. For instance, pedestrians are in bright
green, cars are shaped like boxes, and the road is in dark blue.
However, we’re not sure where this photo came from, so it could
simply be a rendering of someone’s idea of what Google’s self-driving
car sees. Either way, Google says that we could see self-driving cars
make their way to public roads in the next five years or so, which isn't that far off, and Tesla Motors CEO Elon Musk is even interested in developing self-driving cars as well. They certainly don’t come without their problems, though, and we’re guessing that the first batch of self-driving cars probably won’t be in 100% tip-top shape.
At SXSW this afternoon, Google provided developers with a first
glance at the Google Glass Mirror API, the main interface between Google
Glass, Google’s servers and the apps that developers will write for
them. In addition, Google showed off a first round of applications that
work on Glass, including how Gmail works on the device, as well as
integrations from companies like the New York Times, Evernote, Path and
others.
The Mirror API is essentially a REST API,
which should make developing for it very easy for most developers. The
Glass device essentially talks to Google’s servers and the developers’
applications then get the data from there and also push it to Glass
through Google’s APIs. All of this data is then presented on Glass
through what Google calls “timeline cards.” These cards can include
text, images, rich HTML and video. Besides single cards, Google also
lets developers use what it calls bundles, which are basically sets of
cards that users can navigate using their voice or the touchpad on the
side of Glass.
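To make that flow a bit more concrete, here is a rough sketch of what inserting a simple text-only timeline card through such a REST API could look like from a developer's server, using libcurl. The endpoint URL, the JSON field name and the placeholder OAuth token are assumptions based on the description above, not confirmed details of the Mirror API, which Google had not fully documented publicly at the time.

```cpp
// Hypothetical sketch: POST a text-only timeline card to a Mirror-style REST API.
#include <curl/curl.h>
#include <string>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    // Assumed endpoint and card payload: a single card carrying plain text.
    const std::string url  = "https://www.googleapis.com/mirror/v1/timeline";
    const std::string body = "{\"text\": \"Hello from my Glass app\"}";

    struct curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "Content-Type: application/json");
    headers = curl_slist_append(headers, "Authorization: Bearer YOUR_OAUTH_TOKEN");

    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());

    // Google's servers would then push the resulting card to the user's Glass.
    CURLcode res = curl_easy_perform(curl);

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}
```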
It looks like sharing to Google+ is a built-in feature of the Mirror
API, but as Google’s Timothy Jordan noted in today’s presentation,
developers can always add their own sharing options, as well. Other
built-in features seem to include voice recognition, access to the
camera and a text-to-speech engine.
Glass Rules
Because Glass is a new and unique form factor, Jordan also noted,
Google is setting a few rules for Glass apps. They shouldn’t, for
example, show full news stories but only headlines, as everything else
would be too distracting. For longer stories, developers can always just
use Glass to read text to users.
Essentially, developers should make sure that they don’t annoy users
with too many notifications, and the data they send to Glass should
always be relevant. Developers should also make sure that everything
that happens on Glass should be something the user expects, said Jordan.
Glass isn’t the kind of device, he said, where a push notification
about an update to your app makes sense.
Using Glass With Gmail, Evernote, Path and Others
As
part of today’s presentation, Jordan also detailed some Glass apps
Google has been working on itself, and apps that some of its partners
have created. The New York Times app, for example, shows headlines and
then lets you listen to a summary of the article by telling Glass to
“read aloud.” Google’s own Gmail app uses voice recognition to answer
emails (and it obviously shows you incoming mail, as well). Evernote’s
Skitch can be used to take and share photos, and Jordan also showed a
demo of social network Path running on Glass to share your location.
So far, there is no additional information about the Mirror API on
any of Google’s usual sites, but we expect the company to release more
information shortly and will update this post once we hear more.
The Museum of Modern Art in New York (MoMA) last week announced that
it is bolstering its collection of work with 14 videogames, and plans to
acquire a further 26 over the next few years. And that’s just for
starters. The games will join the likes of Hector Guimard’s Paris Metro
entrances, the Rubik’s Cube, M&Ms and Apple’s first iPod in the
museum’s Architecture & Design department.
The move recognises the design achievements behind each creation, of
course, but despite MoMA’s savvy curatorial decision, the institution
risks becoming a catalyst for yet another wave of awkward ‘are games
art?’ blog posts. And it doesn’t exactly go out of its way to avoid that
particular quagmire in the official announcement.
“Are video games art? They sure are,” it begins, worryingly, before
switching to a more considered tack, “but they are also design, and a
design approach is what we chose for this new foray into this universe.
The games are selected as outstanding examples of interaction design — a
field that MoMA has already explored and collected extensively, and one
of the most important and oft-discussed expressions of contemporary
design creativity.”
Jason Rohrer
MoMA worked with scholars, digital conservation and legal experts,
historians and critics to come up with its criteria and final list of
games, and among the yardsticks the museum looked at for inclusion are
the visual quality and aesthetic experience of each game, the ways in
which the game manipulates or stimulates player behaviour, and even the
elegance of its code.
That initial list of 14 games makes for convincing reading, too:
Pac-Man, Tetris, Another World, Myst, SimCity 2000, Vib-Ribbon, The
Sims, Katamari Damacy, Eve Online, Dwarf Fortress, Portal, flOw, Passage
and Canabalt.
But the wishlist also extends to Spacewar!, a selection of Magnavox
Odyssey games, Pong, Snake, Space Invaders, Asteroids, Zork, Tempest,
Donkey Kong, Yars’ Revenge, M.U.L.E, Core War, Marble Madness, Super
Mario Bros, The Legend Of Zelda, NetHack, Street Fighter II, Chrono
Trigger, Super Mario 64, Grim Fandango, Animal Crossing, and, of course,
Minecraft.
Art, design or otherwise, MoMA’s focused collection is an uncommonly
informed and well-considered list. And their inclusion within MoMA’s
hallowed walls, and the recognition of their cultural and historical
relevance that is implied, is certainly a boon for videogames on the
whole. But reactions to the move have been mixed. The Guardian’s
Jonathan Jones posted a blog
last week titled Sorry MoMA, Videogames Are Not Art, in which he
suggests that exhibiting Pac-Man and Tetris alongside work by Picasso
and Van Gogh will mean “game over for any real understanding of art”.
Canabalt
“The worlds created by electronic games are more like playgrounds
where experience is created by the interaction between a player and a
programme,” he writes. “The player cannot claim to impose a personal
vision of life on the game, while the creator of the game has ceded that
responsibility. No one ‘owns’ the game, so there is no artist, and
therefore no work of art.”
While he clearly misunderstands the capacity of a game to manifest
the personal – and singular – vision of its creator, he nonetheless
raises valid fears that the creative motivations behind many videogames
– predominantly commercially driven entertainment – are incompatible
with those of serious art, and that their inclusion in established
museums risks muddying its definition. But while many commentators have
fallen into the same trap of invoking comparisons with cubist and
impressionist painters, MoMA has drawn no such parallels.
“We have to keep in mind it’s the design collection that is
snapping up video games,” Passage creator Jason Rohrer tells us when we
put the question to him. “This is the same collection that houses Lego,
teapots, and barstools. I’m happy with that, because I primarily think
of myself as a designer. But sadly, even the mightiest games in this
acquisition look silly when stood up next to serious works of art. I
mean, what’s the artistic payload of Passage? ‘You’re gonna die someday.’ You can’t find a sentiment that’s more artistically worn out than that.”
Adam Saltsman
But while he doesn’t see these games’ inclusion as a significant
landmark – in fact, he even raises concerns over bandwagon-hopping –
he’s still elated to have been included.
“I’m shocked to see my little game standing there next to landmarks
like Pac-Man, Tetris, Another World, and… all of them really, all the
way up to Canabalt,” he says. “The most pleasing aspect of it, for me,
is that something I have made will be preserved and maintained into the
future, after I croak. The ephemeral nature of digital-download video
games has always worried me. Heck, the Mac version of Passage has
already been broken by Apple’s updates, and it’s only been five years!”
Talking of Canabalt, creator Adam Saltsman echoes Rohrer’s sentiment:
“Obviously it is a pretty huge honour, but I think it’s also important
to note that these selections are part of the design wing of the museum,
so Tetris won’t exactly be right next to Dali or Picasso! That doesn’t
really diminish the excitement for me though. The MoMA is an incredible
institution, and to have my work selected for archival alongside obvious
masterpieces like Tetris is pretty overwhelming.”
MoMA’s not the only art institution with an interest in videogames,
of course. The Smithsonian American Art Museum ran an exhibition titled
The Art of Video Games earlier this year, while the Barbican has put its
weight behind all manner of events, including 2002’s The History,
Culture and Future of Computer Games, Ear Candy: Video Game Music, and
the touring Game On exhibition.
Eve Online
Chris Melissinos, who was one of the guest curators who put the
Smithsonian exhibition together and subsequently acted as an adviser to
MoMA as it selected its list, doesn’t think such interest is damaging to
art, or indeed a sign of out-of-step institutions jumping on the
bandwagon. It’s simply, he believes, a reaction to today’s culture.
“This decision indicates that videogames have become an important cultural, artistic form of expression in society,” he told the Independent.
“It could become one of the most important forms of artistic
expression. People who apply themselves to the craft view themselves as
[artists], because they absolutely are. This is an amalgam of many
traditional forms of art.”
Of the initial selection, Eve is arguably the most ambitious, and
potentially divisive, choice, but perhaps also the best placed to
challenge Jones’ predispositions on experiential ownership and creative
limitation. It is, after all, renowned for its vociferous,
self-governing player community.
“Eve’s been around for close to a decade, is still growing, and
through its lifetime has won several awards and achievements, but being
acquired into the permanent collection of a world leading contemporary
art and design museum is a tremendous honour for us,” Eve Online
creative director Torfi Frans Ólafsson tells us. “Eve is born out of a
strong ideology of player empowerment and sandbox openness, which
especially in our earlier days was often at the cost of accessibility
and mainstream appeal.
Torfi Frans Ólafsson
“Sitting up there along with industrial design like the original
iPod, and fancy, unergonomic lemon presses tells us that we were right
to stand by our convictions, so in that sense, it’s somewhat of a
vindication of our efforts.”
But how do you present an entire universe to an audience that is
likely to spend a few short minutes looking at each exhibit? Developer
CCP is turning to its many players for help.
“We’ve decided to capture a single day of Eve: Sunday the 9th of
December,” explains Ólafsson, “through a variety of player-made videos,
CCP videos, massive data analysis and infographics.”
In presenting Eve in this way, CCP and the game’s players are
collaborating on a strong, coherent vision of the alternative reality
they’ve collectively helped to build, and more importantly, reinforcing
and redefining the notion of authorship. It doesn’t matter whether
you’re an apologist for videogames’ entitlement to the status of art, or
someone who appreciates the aesthetics of their design, the important
thing here is that their cultural importance is recognised. Sure, the
notion of a game exhibit that doesn’t include gameplay might stick in
the craw of some, but MoMA’s interest is clearly broader. Ólafsson isn’t
too worried, either.
“Even if we don’t fully succeed in making the 3.5 million people that
visit the MoMA every year visually grok the entire universe in those
few minutes they might spend checking Eve out, I can promise you it sure
will look pretty there on the wall.”
Personal Comments:
Passage, a game developed during Gamma256, is still available here.
Canabalt is available here, while mobile versions are available for a few bucks (Android, iOS).
If you’re a software developer—or if you follow the work of software developers—you’ve probably heard of TouchDevelop,
a Microsoft Research app that enables you to write code for your phone
using scripts on your phone. Its ability to bring the excitement of
programming to Windows Phone 7 has reaped lots of enthusiasm from the
development community over the past year or so.
Now, the team behind TouchDevelop has taken things a step further, with a web app that can work on any Windows 8
device with a touchscreen. You can write Windows Store apps simply by
tapping on the screen of your device. The web app also works with a
keyboard and mouse, but the touchscreen capability means that the
keyboard is not required. To learn more, watch this video.
This reimplementation of TouchDevelop went live just in time for Build,
Microsoft’s annual conference that helps developers learn how to take
advantage of Windows 8. The conference is being held Oct. 30-Nov. 2 in
Redmond, Wash.
The TouchDevelop web app, which requires Internet Explorer 10,
enables developers to publish their scripts so they can be shared with
others using TouchDevelop. As with the Windows Phone version, a
touchdevelop.com cloud service enables scripts to be published and
queried, and when you log in with the same credentials, all of your
scripts are synchronized between all your platforms and devices.
While in the TouchDevelop web app, users can navigate to the properties of
any installed script that has already been created. Videos describing the
operation of the TouchDevelop web app's editor are available on the
project's webpage.
TouchDevelop shipped as a Windows Phone app about a year and a half ago and has seen strong downloads and reviews in the Windows Phone Store.
“Our
TouchDevelop app for Windows Phone has been downloaded more than
200,000 times,” Tillmann says, “and more than 20,000 users have logged
in with a Windows Live ID or via Facebook.”
Since
the app became available, Tillmann and his RiSE colleagues have been
astounded by the creativity the user base has demonstrated. Further
Windows 8 developer excitement will be on display during Build, which is being streamed to audiences worldwide.
We've been writing a lot recently about how the private space industry
is poised to make space cheaper and more accessible. But in general,
this is for outfits such as NASA, not people like you and me.
Today, a company called NanoSatisfi is launching a Kickstarter project to send an Arduino-powered satellite into space, and you can send an experiment along with it.
Whether it's private industry or NASA or the ESA or anyone else,
sending stuff into space is expensive. It also tends to take
approximately forever to go from having an idea to getting funding to
designing the hardware to building it to actually launching something.
NanoSatisfi, a tech startup based out of NASA's Ames Research Center
here in Silicon Valley, is trying to change all of that (all of
it) by designing a satellite made almost entirely of off-the-shelf (or
slightly modified) hobby-grade hardware, launching it quickly, and then
using Kickstarter to give you a way to get directly involved.
ArduSat is based on the CubeSat platform, a standardized satellite
framework that measures about four inches on a side and weighs under
three pounds. It's just about as small and cheap as you can get when it
comes to launching something into orbit, and while it seems like a very
small package, NanoSatisfi is going to cram as much science into that
little cube as it possibly can.
Here's the plan: ArduSat, as its name implies, will run on Arduino
boards, which are open-source microcontrollers that have become wildly
popular with hobbyists. They're inexpensive, reliable, and packed with
features. ArduSat will be packing between five and ten individual
Arduino boards, but more on that later. Along with the boards, there
will be sensors. Lots of sensors, probably 25 (or more), all compatible with the Arduinos and all very tiny and inexpensive.
Yeah, so that's a lot of potential for science, but the entire
Arduino sensor suite is only going to cost about $1,500. The rest of the
satellite (the power system, control system, communications system,
solar panels, antennae, etc.) will run about $50,000, with the launch
itself costing about $35,000. This is where you come in.
NanoSatisfi is looking for Kickstarter funding to pay for just the
launch of the satellite itself: the funding goal is $35,000. Thanks to
some outside investment, it's able to cover the rest of the cost itself.
And in return for your help, NanoSatisfi is offering you a chance to
use ArduSat for your own experiments in space, which has to be one of the coolest Kickstarter rewards ever.
For a $150 pledge, you can reserve 15 imaging slots on ArduSat.
You'll be able to go to a website, see the path that the satellite will
be taking over the ground, and then select the targets you want to
image. Those commands will be uploaded to the ArduSat, and when it's in
the right spot in its orbit, it'll point its camera down at Earth and
take a picture which will be then emailed right to you. From space.
For $300, you can upload your own personal message to ArduSat, where it will be broadcast back to Earth from space
for an entire day. ArduSat is in a polar orbit, so over the course of
that day, it'll circle the Earth seven times and your message will be
broadcast over the entire globe.
For $500, you can take advantage of the whole point of ArduSat and
run your very own experiment for an entire week on a selection of
ArduSat's sensors. You know, in space. Just to be
clear, it's not like you're just having your experiment run on data
that's coming back to Earth from the satellite. Rather, your experiment
is uploaded to the satellite itself and actually runs on one of the
Arduino boards on ArduSat in real time, which is why there are so many
identical boards packed in there.
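As a toy illustration of what such an uploaded experiment might look like, here is a minimal Arduino-style sketch that samples a hypothetical analog sensor once a second and logs timestamped readings over Serial. The pin, the sampling interval and the logging format are made up for illustration; the actual ArduSat experiment interface had not been published at the time of writing.

```cpp
// Hypothetical ArduSat-style experiment: periodically sample a sensor and log it.
const int SENSOR_PIN = A0;                 // assumed analog sensor channel
const unsigned long INTERVAL_MS = 1000;    // one sample per second

void setup() {
    Serial.begin(9600);                    // readings would be stored/downlinked later
}

void loop() {
    int reading = analogRead(SENSOR_PIN);  // sample the sensor
    Serial.print(millis());                // timestamp in milliseconds since boot
    Serial.print(",");
    Serial.println(reading);               // simple timestamped CSV record
    delay(INTERVAL_MS);
}
```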
Now, NanoSatisfi itself doesn't really expect to get involved with a
lot of the actual experiments that the ArduSat does: rather, it's saying
"here's this hardware platform we've got up in space, it's got all
these sensors, go do cool stuff." And if the stuff that you can do with
the existing sensor package isn't cool enough for you, backers of the
project will be able to suggest new sensors and new configurations, even
for the very first generation ArduSat.
To make sure you don't brick the satellite with buggy code,
NanoSatisfi will have a duplicate satellite in a space-like environment
here on Earth that it'll use to test out your experiment first. If
everything checks out, your code gets uploaded to the satellite, runs in
whatever timeslot you've picked, and then the results get sent back to
you after your experiment is completed. Basically, you're renting time
and hardware on this satellite up in space, and you can do (almost)
whatever you want with that.
ArduSat has a lifetime of anywhere from six months to two years. None
of this payload stuff (neither the sensors nor the Arduinos) is
specifically space-rated or radiation-hardened or anything like that,
and some of them will be exposed directly to space. There will be some
backups and redundancy, but partly, this will be a learning experience
to see what works and what doesn't. The next generation of ArduSat will
take all of this knowledge and put it to good use making a more capable
and more reliable satellite.
This, really, is part of the appeal of ArduSat: with a fast,
efficient, and (relatively) inexpensive crowd-sourced model, there's a
huge potential for improvement and growth. For example, if this
Kickstarter goes bananas and NanoSatisfi runs out of room for people to
get involved on ArduSat, no problem, it can just build and launch
another ArduSat along with the first, jammed full of (say) fifty more
Arduinos so that fifty more experiments can be run at the same time. Or
it can launch five more ArduSats. Or ten more. From the decision to
start developing a new ArduSat to the actual launch of that ArduSat is a
period of just a few months. If enough of them get up there at the same
time, there's potential for networking multiple ArduSats together up in
space and even creating a cheap and accessible global satellite array.
If this sounds like a lot of space junk in the making, don't worry: the
ArduSats are set up in orbits that degrade after a year or two, at which
point they'll harmlessly burn up in the atmosphere. And you can totally
rent the time slot corresponding with this occurrence and measure
exactly what happens to the poor little satellite as it fries itself to a
crisp.
Longer term, there's also potential for making larger ArduSats with
more complex and specialized instrumentation. Take ArduSat's camera:
being a little tiny satellite, it only has a little tiny camera, meaning
that you won't get much more detail than a few kilometers per pixel. In
the future, though, NanoSatisfi hopes to boost that to 50 meters (or
better) per pixel using a double or triple-sized satellite that it'll
call OptiSat. OptiSat will just have a giant camera or two, and in
addition to taking high resolution pictures of Earth, it'll also be able
to be turned around to take pictures of other stuff out in space. It's
not going to be the next Hubble, but remember, it'll be under your control.
NanoSatisfi's Peter Platzer holds a prototype ArduSat board,
including the master controller, sensor suite, and camera. Photo: Evan
Ackerman/DVICE
Assuming the Kickstarter campaign goes well, NanoSatisfi hopes to
complete construction and integration of ArduSat by about the end of the
year, and launch it during the first half of 2013. If you don't manage
to get in on the Kickstarter, don't worry: NanoSatisfi hopes that there
will be many more ArduSats with many more opportunities for people to
participate in the idea. Having said that, you should totally get
involved right now: there's no cheaper or better way to start doing a
little bit of space exploration of your very own.
Check out the ArduSat Kickstarter video below, and head on through the link to reserve your spot on the satellite.
Here (American Scientist) is an interesting and rather complete article on programming language evolution and wars, with some infographics on programming history and methods. It is not that technical, making the reading accessible to anyone.