If you follow the world of Android
tablets and phones, you may have heard a lot about Tegra 3 over the
last year. Nvidia's chip currently powers many of the top Android
tablets, and should be found in a few Android smartphones by the end of
the year. It may even form the foundation of several upcoming Windows 8
tablets and possibly future phones running Windows Phone 8. So what is
the Tegra 3 chip, and why should you care whether or not your phone or
tablet is powered by one?
Nvidia's system-on-chip
Tegra is the brand for Nvidia's line of system-on-chip (SoC) products
for phones, tablets, media players, automobiles, and so on. What's a
system-on-chip? Essentially, it's a single chip that combines all the
major functions needed for a complete computing system: CPU cores,
graphics, media encoding and decoding, input-output, and even cellular
or Wi-Fi communications and radios. The Tegra series competes with chips
like Qualcomm's Snapdragon, Texas Instruments' OMAP, and Samsung's
Exynos.
The first Tegra chip was a flop. It was used in very few products,
notably the ill-fated Zune HD and Kin smartphones from Microsoft. Tegra
2, an improved dual-core processor, was far more successful but still
never featured in enough devices to become a runaway hit.
Tegra 3 has been quite the success so far. It is found in a number of popular Android tablets like the Eee Pad Transformer Prime, and is starting to find its way into high-end phones like the global version of the HTC One X
(the North American version uses a dual-core Snapdragon S4 instead, as
Tegra 3 had not been qualified to work with LTE modems yet). Expect to
see it in more Android phones and tablets internationally this fall.
4 + 1 cores
Tegra 3 is based on the ARM processor design and architecture, as are
most phone and tablet chips today. There are many competing ARM-based
SoCs, but Tegra 3 was one of the first to include four processor cores.
There are now other quad-core SoCs from Texas Instruments and Samsung,
but Nvidia's has a unique defining feature: a fifth low-power core.
All five of the processor cores are based on the ARM Cortex-A9
design, but the fifth core is made using a special low-power process
that sips battery at low speeds, but doesn't scale up to high speeds
very well. It is limited to only 500MHz, while the other cores run up to
1.4GHz (or 1.5GHz in single-core mode).
When your phone or tablet is in sleep mode, or you're just performing
very simple operations or using very basic apps, like the music player,
Tegra 3 shuts down its four high-power cores and uses only the
low-power core. It's hard to say if this makes it far more efficient
than other ARM SoCs, but battery life on some Tegra 3 tablets has been
quite good.
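Nvidia's actual core-switching governor is proprietary, but the division of labor can be sketched as a toy heuristic. The clock caps come from the figures above; the threshold logic, function name, and core-count math are purely illustrative:

```python
LOW_POWER_MAX_MHZ = 500    # the companion core tops out at 500MHz
FAST_CORE_MAX_MHZ = 1400   # each main core runs up to 1.4GHz

def choose_cores(demand_mhz):
    """Pick (core_count, which_cores) for a given CPU demand."""
    if demand_mhz <= LOW_POWER_MAX_MHZ:
        # Sleep mode, music playback, basic apps: companion core alone
        return (1, "companion")
    # Wake only as many fast cores as the demand requires, up to four
    fast = -(-demand_mhz // FAST_CORE_MAX_MHZ)  # ceiling division
    return (min(fast, 4), "fast")
```

In this sketch, a 300MHz workload stays on the companion core, while a 2,000MHz one wakes two of the fast cores.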
Tegra 3 under a microscope. You can see the five CPU cores in the center.
Good, not great, graphics
Nvidia's heritage is in graphics processors. The company's claim to
fame has been its GPUs for traditional laptops, desktops, and servers.
You might expect Tegra 3 to have the best graphics processing power of
any tablet or phone chip, but that doesn't appear to be the case. Direct
graphics comparisons can be difficult, but there's a good case to be
made that the A5X processor in the new iPad has a far more powerful
graphics processor. Still, Tegra 3 has plenty of graphics power, and
Nvidia works closely with game developers to help them optimize their
software for the platform. Tegra 3 supports high-res display output (up
to 2560 x 1600) and improved video decoding capabilities compared to
earlier Tegra chips.
Do you need one?
The million-dollar question is: Does the Tegra 3 chip provide a truly
better experience than other SoCs? Do you need four cores, or even "4 +
1"? The answer is no. Most smartphone and tablet apps don't make great
use of multiple CPU cores, and making each core faster can often do more
for the user experience than adding more cores. That said, you
shouldn't avoid a product because it has a Tegra 3 chip, either. Its
performance and battery life appear to be quite competitive in today's
tablet and phone market. Increasingly, the overall quality of a product
is determined by its design, size, weight, display quality, camera
quality, and other features more than mere processor performance.
Consider PCWorld's review of the North American HTC One X; with the dual-core Snapdragon S4 instead of Tegra 3, performance was still very impressive.
IN THE classic science-fiction film “2001”, the ship’s computer, HAL,
faces a dilemma. His instructions require him both to fulfil the ship’s
mission (investigating an artefact near Jupiter) and to keep the
mission’s true purpose secret from the ship’s crew. To resolve the
contradiction, he tries to kill the crew.
As robots become more autonomous, the notion of computer-controlled
machines facing ethical decisions is moving out of the realm of science
fiction and into the real world. Society needs to find ways to ensure
that they are better equipped to make moral judgments than HAL was.
A bestiary of robots
Military technology, unsurprisingly, is at the forefront of the march towards self-determining machines (see Technology Quarterly).
Its evolution is producing an extraordinary variety of species. The
Sand Flea can leap through a window or onto a roof, filming all the
while. It then rolls along on wheels until it needs to jump again. RiSE,
a six-legged robo-cockroach, can climb walls. LS3, a dog-like robot,
trots behind a human over rough terrain, carrying up to 180kg of
supplies. SUGV, a briefcase-sized robot, can identify a man in a crowd
and follow him. There is a flying surveillance drone the weight of a
wedding ring, and one that carries 2.7 tonnes of bombs.
Robots are spreading in the civilian world, too, from the flight deck to the operating theatre (see article).
Passenger aircraft have long been able to land themselves. Driverless
trains are commonplace. Volvo’s new V40 hatchback essentially drives
itself in heavy traffic. It can brake when it senses an imminent
collision, as can Ford’s B-Max minivan. Fully self-driving vehicles are
being tested around the world. Google’s driverless cars have clocked up
more than 250,000 miles in America, and Nevada has become the first
state to regulate such trials on public roads. In Barcelona a few days
ago, Volvo demonstrated a platoon of autonomous cars on a motorway.
As they become smarter and more widespread, autonomous machines are
bound to end up making life-or-death decisions in unpredictable
situations, thus assuming—or at least appearing to assume—moral agency.
Weapons systems currently have human operators “in the loop”, but as
they grow more sophisticated, it will be possible to shift to “on the
loop” operation, with machines carrying out orders autonomously.
As that happens, they will be presented with ethical dilemmas. Should
a drone fire on a house where a target is known to be hiding, which may
also be sheltering civilians? Should a driverless car swerve to avoid
pedestrians if that means hitting other vehicles or endangering its
occupants? Should a robot involved in disaster recovery tell people the
truth about what is happening if that risks causing a panic? Such
questions have led to the emergence of the field of “machine ethics”,
which aims to give machines the ability to make such choices
appropriately—in other words, to tell right from wrong.
One way of dealing with these difficult questions is to avoid them
altogether, by banning autonomous battlefield robots and requiring cars
to have the full attention of a human driver at all times. Campaign
groups such as the International Committee for Robot Arms Control have
been formed in opposition to the growing use of drones. But autonomous
robots could do much more good than harm. Robot soldiers would not
commit rape, burn down a village in anger or become erratic
decision-makers amid the stress of combat. Driverless cars are very
likely to be safer than ordinary vehicles, as autopilots have made
planes safer. Sebastian Thrun, a pioneer in the field, reckons
driverless cars could save 1m lives a year.
Instead, society needs to develop ways of dealing with the ethics of
robotics—and get going fast. In America states have been scrambling to
pass laws covering driverless cars, which have been operating in a legal
grey area as the technology runs ahead of legislation. It is clear that
rules of the road are required in this difficult area, and not just for
robots with wheels.
The best-known set of guidelines for robo-ethics is the “three laws
of robotics” coined by Isaac Asimov, a science-fiction writer, in 1942.
The laws require robots to protect humans, obey orders and preserve
themselves, in that order. Unfortunately, the laws are of little use in
the real world. Battlefield robots would be required to violate the
first law. And Asimov’s robot stories are fun precisely because they
highlight the unexpected complications that arise when robots try to
follow his apparently sensible rules. Regulating the development and use
of autonomous robots will require a rather more elaborate framework.
Progress is needed in three areas in particular.
Three laws for the laws of robotics
First, laws are needed to determine whether the designer, the
programmer, the manufacturer or the operator is at fault if an
autonomous drone strike goes wrong or a driverless car has an accident.
In order to allocate responsibility, autonomous systems must keep
detailed logs so that they can explain the reasoning behind their
decisions when necessary. This has implications for system design: it
may, for instance, rule out the use of artificial neural networks,
decision-making systems that learn from example rather than obeying
predefined rules.
Second, where ethical systems are embedded into robots, the judgments
they make need to be ones that seem right to most people. The
techniques of experimental philosophy, which studies how people respond
to ethical dilemmas, should be able to help. Last, and most important,
more collaboration is required between engineers, ethicists, lawyers and
policymakers, all of whom would draw up very different types of rules
if they were left to their own devices. Both ethicists and engineers
stand to benefit from working together: ethicists may gain a greater
understanding of their field by trying to teach ethics to machines, and
engineers need to reassure society that they are not taking any ethical
short-cuts.
Technology has driven mankind’s progress, but each new advance has
posed troubling new questions. Autonomous machines are no different. The
sooner the questions of moral agency they raise are answered, the
easier it will be for mankind to enjoy the benefits that they will
undoubtedly bring.
Of all the noises that my children will not understand, the one that is
nearest to my heart is not from a song or a television show or a jingle.
It's the sound of a modem connecting with another modem across the
repurposed telephone infrastructure. It was the noise of being part of
the beginning of the Internet.
I heard that sound again this week on Brendan Chillcut's simple and wondrous site: The Museum of Endangered Sounds.
It takes technological objects and lets you relive the noises they
made: Tetris, the Windows 95 startup chime, that Nokia ringtone,
television static. The site archives not just the intentional sounds --
ringtones, etc -- but the incidental ones, like the mechanical noise a
VHS tape made when it entered the VCR or the way a portable CD player
sounded when it skipped. If you grew up at a certain time, these sounds
are like technoaural nostalgia whippets. One minute, you're browsing the
Internet in 2012, the next you're on a bus headed up I-5 to an 8th
grade football game against Castle Rock in 1995.
The noises our technologies make, as much as any music, are the soundtrack to an era. Soundscapes
are not static; completely new sets of frequencies arrive, old things
go. Locomotives rumbled their way through the landscapes of 19th century
New England, interrupting Nathaniel Hawthorne-types' reveries in Sleepy
Hollows. A city used to be synonymous with the sound of horse hooves
and the clatter of carriages on the stone streets. Imagine the people
who first heard the clicks of a bike wheel or the vroom of a car engine.
It's no accident that early films featuring industrial work often
include shots of steam whistles, even though in many (say, Metropolis)
we can't hear that whistle.
When I think of 2012, I will think of the overworked fan of my laptop
and the ding of getting a text message on my iPhone. I will think of
the beep of the FastTrak in my car as it debits my credit card so I can
pass through a toll onto the Golden Gate Bridge. I will think of Siri's
uncanny valley voice.
But to me, all of those sounds -- as symbols of the era in which I've
come up -- remain secondary to the hissing and crackling of the modem
handshake. I first heard that sound as a nine-year-old. To this day, I
can't remember how I figured out how to dial the modem of our old
Zenith. Even more mysterious is how I found the BBS number to call or
even knew what a BBS was. But I did. BBSs were dial-in communities, kind
of like a local AOL.
You could post messages and play games, even chat with people on the
bigger BBSs. It was personal: sometimes, you'd be the only person
connected to that community. Other times, there'd be one other person,
who was almost definitely within your local prefix.
When we moved to Ridgefield, which sits outside Portland, Oregon, I had a summer with no
friends and no school: The telephone wire became a lifeline. I
discovered Country Computing, a BBS I've eulogized before,
located in a town a few miles from mine. The rural Washington BBS world
was weird and fun, filled with old ham-radio operators and
computer nerds. After my parents' closed up
shop for the work day, their "fax line" became my modem line, and I
called across the I-5 to play games and then, slowly, to participate in
the
nascent community.
In the beginning of those sessions, there was the sound, and the sound was data.
Fascinatingly, there's no good guide to what the beeps and hisses
represent that I could find on the Internet. For one, few people care
about the technical details of 1997's hottest 56k modems. And for
another, whatever good information exists out there predates the popular
explosion of the web and the all-knowing Google.
So, I asked on Twitter and was rewarded with an accessible and elegant explanation from another user whose nom-de-plume is Miso Susanowa.
(Susanowa used to run a BBS.) I transformed it into the annotated
graphic below, which explains the modem sound part-by-part. (You can
click it to make it bigger.)
This is a choreographed sequence that allowed these digital devices to
piggyback on an analog telephone network. "A phone line carries only the small range of frequencies in
which most human conversation takes place: about 300 to 3,300 hertz," Glenn Fleishman explained in the Times back in 1998. "The
modem works within these limits in creating sound waves to carry data
across phone lines." What you're hearing is the way 20th century technology tunneled through a 19th century network;
what you're hearing is how a network designed to send the noises made
by your muscles as they pushed around air came to transmit anything, or
the almost-anything that can be coded in 0s and 1s.
The frequencies of the modem's sounds represent
parameters for further communication. In the early going, for example,
the modem that's been dialed up will play a note that says, "I can go
this fast." As a wonderful old 1997 website explained, "Depending on the speed the modem is trying to talk at, this tone will have a
different pitch."
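As a rough sketch of what that means in code: everything the modem "says" has to be synthesized as audio inside the roughly 300-to-3,300-hertz voice band Fleishman describes. Here is one second of a pure tone at 2,100Hz, the frequency classically used as the answering modem's opening note; the sample rate and function are arbitrary choices for illustration, not part of any modem standard's wording:

```python
import math

def tone(freq_hz=2100, sample_rate=8000, seconds=1.0):
    """Samples of a sine tone that fits inside the phone's voice band."""
    n = int(sample_rate * seconds)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n)]
```

A different negotiated speed would simply mean a different `freq_hz`: the pitch is the parameter.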
That is to say, the sounds weren't a sign that data was being
transferred: they were the data being transferred. This noise was the
analog world being bridged by the digital. If you are old enough to
remember it, you still knew a world that was analog-first.
Long before I actually had this answer in hand, I could sense that the
patterns of the beats and noise meant something. The sound would move
me, my head nodding to the beeps that followed the initial connection.
You could feel two things trying to come into sync: Were they computers
or me and my version of the world?
As I learned again today, as I learn every day, the answer is both.
Microsoft has let it be known that their final release of the Internet Explorer 10
web browser software will have “Do Not Track” activated right out of
the box. This information has upset advertisers across the board as web
ad targeting – based on your online activities – is one of the current
mainstays of big-time advertiser profits. What Do Not Track, or DNT,
does is send out a signal from your web browser, Internet Explorer 10 in
this case, to websites, letting them know that the user refuses to be
tracked in that way.
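On the wire, DNT is nothing exotic: the browser adds a "DNT: 1" HTTP request header, and a cooperating site checks it. The header name is real; the cookie-setting logic below is a minimal sketch invented for illustration, not any site's actual code:

```python
def wants_do_not_track(request_headers):
    """True if the browser sent the 'DNT: 1' opt-out header."""
    return request_headers.get("DNT") == "1"

def cookies_for(request_headers):
    """A cooperating server skips its ad-targeting cookie for DNT users."""
    if wants_do_not_track(request_headers):
        return {}
    return {"ad_id": "example-tracking-id"}
```

The catch, as the advertisers' complaint makes clear, is that honoring the header is voluntary; the signal only matters if the site on the other end agrees to respect it.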
A very similar Do Not Track feature currently exists on Mozilla’s Firefox browser
and is swiftly becoming ubiquitous around the web as a must-have
feature for web privacy. This will very likely bring about a large
change in the world of online advertising, since, again,
advertisers rely so heavily on invisible tracking methods. Tracking
also exists today on sites such as Google, where your search
history informs what you see in search results, News posts, and advertisement content.
The Digital Advertising Alliance, or DAA, has countered Microsoft’s
announcement, saying that the IE10 browser release would violate
Microsoft’s agreement with the White House earlier this year. This
agreement had the DAA agreeing to recognize and obey the Do Not Track
signals from IE10 just so long as the option to have DNT activated was
not turned on by default. Microsoft Chief Privacy Officer Brendan Lynch
spoke up on the situation this week as well.
“In a world where consumers live a large part of their
lives online, it is critical that we build trust that their personal
information will be treated with respect, and that they will be given a
choice to have their information used for unexpected purposes.
While there is still work to do in agreeing on an industry-wide
definition of DNT, we believe turning on Do Not Track by default in IE10
on Windows 8 is an important step in this process of establishing
privacy by default, putting consumers in control and building trust
online.” – Lynch
Bigger may be better if you're from Texas, but
it's becoming increasingly clear to the rest of us that it really is a
small world after all.
Case in point? None other than what one might reasonably call the invasion of tiny Linux PCs going on all around us.
We've got the Raspberry Pi,
we've got the Cotton Candy. Add to those the
Mele A1000, the
VIA APC, the
MK802 and more, and it's becoming increasingly difficult not to compute like a Lilliputian.
Where's it all going? That's what Linux bloggers have been pondering
in recent days. Down at the seedy Broken Windows Lounge the other night,
Linux Girl got an earful.
It's 'Fantastic'
"Linux has been heading towards one place for many years now: complete and total world domination!" quipped
Thoughts on Technology blogger and
Bodhi Linux lead developer Jeff Hoogland.
"All joking aside, these new devices simply further showcase Linux's
unmatched ability to be flexible across an array of different devices of
all sizes and power," Hoogland added.
"Having a slew of devices that are powerful enough for users to
browse the web -- which, let's be honest, is all a good deal of people
do these days -- for under 100 USD is fantastic," he concluded.
'It Only Gets More Exciting'
Similarly, "I believe that the medley of tiny Linux PCs we're seeing
hitting the market lately is the true sign of the Post PC Era,"
suggested Google+ blogger
Linux Rants.
"The smartphone started it, but the Post PC Era will begin in earnest
when the functionality that we currently see in the home computer is
replaced by numerous small appliance-type devices," Linux Rants
explained. "These tiny Linux PCs are the harbinger of those appliances
-- small, low-cost, programmable devices that can be made into virtually
anything the owner desires."
Other devices we're already seeing include "the oven that you can
turn on with a text message, the espresso machine that you can control
with a text message, home security
systems and cars that can be controlled from your smartphone," he
added. "This is where the Post PC Era begins, and it only gets more
exciting from here."
'There Is No Reason Not to Do It'
This is "definitely the wave of the future," agreed Google+ blogger Kevin O'Brien.
"Devices of all kinds are getting smaller and smaller, while
simultaneously increasing their power," O'Brien explained. "Exponential
growth does that over time.
"My phone in my pocket right now has more computing power than the
rockets that went to the moon," he added. "And if you look ahead, a few
more turns of exponential growth means we'll have the equivalent of a
full desktop computer the size of an SD card within a few years."
At that point, "everything starts to be computerized, because adding a
little intelligence is so cheap there is no reason not to do it,"
O'Brien concluded.
'We're Approaching That Future'
Indeed, "many including myself have long been harping on the fact that
today's computers are orders of magnitude faster than early systems on
which we ran graphic interfaces and got work done, and yet are dismissed
as toys,"
Hyperlogos blogger Martin Espinoza told Linux Girl.
"A friend suggested to me once that eventually microwave ovens would
contain little Unix servers 'on a chip' because that would be basically
all you could get, because it would actually be cheaper to use
such a system when given the cost of developing an alternative," he
said. "Seeing the cost of these new products it looks to me like we're
approaching that future rapidly.
"There has always been demand for low-cost computers, and the massive
proliferation of low-cost, low-power cores has pushed their price down
to the point where we can finally have them," Espinoza concluded.
"Even adjusted for inflation," he said, "many of these computers are
an order of magnitude cheaper than the cheapest useful home computers
from the time when personal computing began to gain popularity, and yet
they are certainly powerful enough to serve many roles including many
people's main or even only 'computer.'"
'It Gives Me Hope'
Consultant and Slashdot blogger Gerhard Mack was similarly enthusiastic.
"I love it," Mack told Linux Girl.
"When I was a child, my parents brought home all sorts of fun things
to tinker with, and I learned while doing it," he explained. "But these
last few years it seems like the learning electronics and their
equivalents have disappeared into a mass of products that are only for
what the manufacturer designed them for and nothing else.
"I am loving the return of my ability to tinker," Mack concluded. "It
gives me hope that there can be a next generation of kids who can love
the enjoyment of simply creating things."
'They Look a Bit Expensive'
Not everyone was thrilled, however.
"Okay, I am not all that excited about the invasion of the tiny PC," admitted Roberto Lim, a lawyer and blogger on
Mobile Raptor.
"With 7-inch Android tablets with capacitive displays, running
Android 4.0 and with access to Google (Nasdaq: GOOG) Play's Android app
market, 8 GB of storage expandable via a Micro SD card, 1080p video
playback, a USB port and HDMI out and 3000 to 4000 mAh batteries starting at US$90, it is a bit hard to get excited about these tiny PCs," Lim explained.
"Despite the low prices of the tiny PCs, they all look a bit
expensive when compared to what is already in the market," he opined.
'These Devices Have a Niche'
The category really isn't even all that new, Slashdot blogger hairyfeet opined.
"There have been mini ARM-based Linux boxes for several years now,"
he explained. "From portable media players to routers to set-top boxes,
there are a ton of little bitty boxes running embedded Linux."
It's not even quite right to call such devices PCs "because PC has an
already well-defined meaning: it was originally 'IBM PC compatible,'"
hairyfeet added. "Even if you give them the benefit of the doubt, PCs
have always been general use computers, and these things are FAR from
general use."
Rather, "they are designed with a very specific and narrow job in
mind," he said. "Trying to use them as general computers would just be
painful."
So, "in the end these devices have a niche, just as routers and
beagleboards and the Pi do, but that niche is NOT general purpose in
any way, shape, or form," hairyfeet concluded.
'Opportunities for Specialists'
Chris Travers, a Slashdot blogger who works on the Ledger SMB project,
considered the question through the lens of evolutionary ecology.
"In any ecological system an expanding niche allows for
differentiation, and a contracting niche requires specialization,"
Travers pointed out. "So, for example, if a species of moth undergoes a
population explosion, predators of that moth will often specialize and
be more picky as to what prey they go after."
The same thing happens with markets, Travers suggested.
"When a market expands, it provides opportunities for specialists,
but when it contracts, only the generalists can survive," he told Linux
Girl.
'Niche Environments'
The tiny new devices are "replacements for desktop and laptop systems in niche environments," Travers opined.
"In many environments these may be far more capable than traditional systems," he added.
The bottom line, though, "is that the Linux market is growing at a healthy rate," Travers concluded.
'The Right Way to Do IT'
"Moore's Law allows the world to do more with less hardware and so does FLOSS," blogger
Robert Pogson told Linux Girl. "It's the right way to do IT rather than paying a bunch for the privilege of running the hardware we own."
Last year was a turning point, Pogson added.
"More people bought small, cheap computers running Linux than that
other OS, and the world saw that things were fine without Wintel," he
explained. "2012 will bring more of the same." By the end of this year, in fact, "the use of GNU/Linux on small
cheap computers doing what we used to do with huge hair-drying Wintel
PCs will be mainstream in many places on Earth," Pogson predicted. "In
2012 we will see a major decline in the number of PCs running that other
OS. We will see major shelf-space given to Linux PCs at retail."
Periodic lulls in business are a fact of life for most retailers, and we’ve already seen solutions including daily deals that are valid only during those quiet times. Recently, however, we came across a concept that takes such efforts even further. Specifically, Korean Emart recently placed 3D QR code sculptures throughout the city of Seoul that could only be scanned between noon and 1 pm each day — consumers who succeeded were rewarded with discounts at the store during those quiet shopping hours.
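The campaign's gating rule is simple enough to sketch. The noon-to-1pm window and the USD 12 coupon come from the campaign details; the function itself is a hypothetical re-creation for illustration, not Emart's actual code:

```python
from datetime import time

VALID_FROM = time(12, 0)   # codes become readable at noon...
VALID_UNTIL = time(13, 0)  # ...and the offer closes at 1 pm

def coupon_for_scan(scan_time):
    """Coupon value in USD for a scan, or None outside the window."""
    if VALID_FROM <= scan_time < VALID_UNTIL:
        return 12
    return None
```

The clever part of the campaign is that the sculpture enforced the time window physically, via the angle of the sun; the server-side check is just the backstop.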
Dubbed “Sunny Sale,” Emart’s effort involved setting up a series of what it calls “shadow” QR codes that depend on peak sunlight for proper viewing and were scannable only between 12 and 1 pm each day. Successfully scanning a code took consumers to a dedicated home page with special offers including a coupon worth USD 12. Purchases could then be made via smartphone for delivery direct to the consumer’s door. The video below explains the campaign in more detail:
As a result of its creative promotion, Emart reportedly saw membership increase by 58 percent in February over the previous month; it also saw a 25 percent increase in sales during lunch hours. Retailers around the globe: one for inspiration?
Small
satellites capable of docking in orbit could be used as "space building
blocks" to create larger spacecraft, says UK firm Surrey Satellite
Technology. Not content with putting a smartphone app in space, the company now plans to launch a satellite equipped with a Kinect depth camera, allowing it to locate and join with other nearby satellites.
SpaceX's Dragon spacecraft is the latest of many large-scale vehicles to dock in space,
but joining small and low-cost craft has not been attempted before.
Surrey Satellite Technology's Strand-2 mission will launch two 30cm-long
satellites on the same rocket, then attempt to dock them by using the
Kinect sensor to align together in 3D space.
"Once
you can launch low cost nanosatellites that dock together, the
possibilities are endless - like space building blocks," says project
leader Shaun Kenyon. For example, it might be possible to launch
the components for a larger spacecraft one piece at a time, then have
them automatically assemble in space.
Even your most advanced toaster won't ask that much of you these
days. No matter what you're browning, it all boils down to lowering that
lever and knowing that something is about to get toasty.
So, how do you make a complex piece of technology such as a 3D
printer easy enough for everyone to use, like a toaster? Well, to start,
you focus on a one-button design. There are 3D printers on the way
that aim to let you start fabricating cool stuff with the press of a single button. For the most part, it really can be that easy.
Here we preview 3D Systems' forthcoming Cube 3D printer, which is
looking toward a nearer-than-you-think future where 3D fabrication is
commonplace and something anyone can do.
Photo Credit: Kevin Hall/DVICE
With One Press Of This Button
If there really is a 3D printing at home revolution waiting to
happen, then 3D printers need to sort out two big barriers to entry: 1)
the steep learning curve one must overcome to use the technology and 2)
the capability of easily providing people with useful stuff to print.
While there are a number of options available
and on the way, the trailblazer for 3D printing at home was Makerbot's
Thing-O-Matic, followed up by the group's more versatile Replicator.
The Thing-O-Matic epitomizes the 3D printer as geared toward hobbyists:
it's industrial looking, requires technical know-how to get started and
— though you could buy them fully assembled — the Thing-O-Matic was
designed to be put together by someone who can solder. With the
Replicator, Makerbot hasn't left its hobbyists behind, offering a bigger
build space and two-color printing, but the platform now comes fully
assembled and tested, and Makerbot's robust and growing Thingiverse makes finding designs to print easy and free.
That same thinking — making 3D printing easier out of the box — is
shared by 3D Systems but taken a little further with the Cube. The
Cube comes in a box like any old gadget on a shelf. It also doesn't
look industrial and tinkery like other 3D printers, appearing a lot more
like a desktop PC or a sewing machine. Where the Replicator would be at
home in your workshop or garage, the Cube can sit on a kitchen counter
next to your toaster. The Cube also connects to your home network via
Wi-Fi, meaning you can use your PC to push new designs over to it,
although you don't need a PC to get it to work.
The Cube has a build space that's 5.5-inches all around (length,
width and height), which makes it perfect for action figures, cups,
jewelry and anything small. Larger objects can also be made; you just
need to print them out in smaller pieces and put them together. The Cube
will come paired with different apps and software to help you design
specific objects. For instance, one app we saw was like Build-A-Bear, but you were putting together your own robot instead.
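That "print big things in pieces" arithmetic is easy to sketch. The 5.5-inch build cube is from the spec above; the bounding-box math below is our own back-of-envelope illustration, not how real slicing software actually partitions a model:

```python
import math

BUILD_SIDE_IN = 5.5  # the Cube's build space: 5.5 inches in every dimension

def fits(length_in, width_in, height_in):
    """Does a model's bounding box fit in a single print?"""
    return all(d <= BUILD_SIDE_IN for d in (length_in, width_in, height_in))

def pieces_needed(length_in, width_in, height_in):
    """Minimum grid of build-sized chunks covering the bounding box."""
    return math.prod(math.ceil(d / BUILD_SIDE_IN)
                     for d in (length_in, width_in, height_in))
```

By this rough count, an 11-inch-long object would need at least two prints, which is why very large designs are better suited to a print-for-you service than to the Cube itself.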
In our video below, 3D Systems Social Media Manager Adam Reichental
walks us through just how easy it is to fire the Cube up.
Screencap: Cubify.com
If Apple's App Store Sold 3D Objects
The real difference between the Cube and its hobbyist competitors is
how you discover objects to print. For 3D printing enthusiasts,
Thingiverse represents the easiest go-to. Outside of Thingiverse and
Google-fishing for objects, you're really only left with the option of
making your own designs.
3D Systems' solution? Cubify, a Thingiverse-like site with some crucial differences.
Whereas objects on Thingiverse are free, on Cubify they aren't. Think
of Cubify as the Apple App Store of the 3D printing world — it's
curated. Designs uploaded to the site are checked out individually
before they're approved, and any obscene or copyright-infringing
templates won't get through. This also allows 3D Systems to test the
designs and make sure they're ready to print with the Cube, taking out
some of the guesswork on your end.
You buy the 3D models you want to print, which range from a few bucks to this $155 oil rig design,
which is the most expensive model we could find. Cubify also lets folks
who don't own 3D printers buy objects, and 3D Systems will print them out and mail them over. That service also starts out cheap, and goes
all the way up to this $8,799 table,
which would take quite a while to print out in small chunks using the
Cube; 3D Systems also operates an industrial printing arm for heavier
duty print jobs. One upside to charging for 3D objects: you support the
designers. Like app makers, 3D modelers will get a cut of the cash for
objects sold.
Using the Cube doesn't mean you have to use Cubify, however. Any
printable 3D model that conforms to the Cube's build area should work.
That said, paired up with Cubify, the Cube promises an experience that
is as easy as browsing for a design on your computer, sending it to the
Cube via Wi-Fi and then printing that object out with the touch of a
button.
How To Get One
The 3D Systems Cube 3D printer is available for a $1,299 pre-order now and starts shipping this Friday, May 25.
Last September, during the f8 Developers’ Conference, Facebook CTO Bret Taylor said that the company had no plans for a “central app repository” – an app store. Today, Facebook is changing its tune. The social giant has announced App Center,
a section of Facebook dedicated to discovering and deploying
high-quality apps on the company’s platform. The App Center will push
apps to iPhone, Android and the mobile Web, giving Facebook its first
true store for mobile app discovery.
The departure from Facebook’s previous company line
comes as the social platform ramps up its mobile offerings to make money
from its hundreds of millions of mobile users. This is not your
father's app store, though.
Let's start with the requirements. Facebook has announced a strict
set of style and quality guidelines to get apps placed in App
Center. Apps that are considered high-quality, as decided by Facebook’s
Insights analytics platform, will get prominent placement. Quality is
determined by user ratings and app engagement. Apps that receive poor
ratings or do not meet Facebook’s quality guidelines won't be listed.
Whether or not an app is a potential Facebook App Center candidate hinges on several factors. It must
• have a canvas page (a page that sets the app's permissions on Facebook’s platform)
• be built for iOS, Android or the mobile Web
• use a Facebook Login or be a website that uses a Facebook Login.
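The three requirements above amount to a simple eligibility gate. A minimal sketch with invented field names (Facebook's actual review also weighs ratings and engagement via its Insights data):

```python
# Hypothetical candidacy check mirroring the three listed requirements.
# Field names are invented for illustration.

ELIGIBLE_PLATFORMS = {"ios", "android", "mobile_web"}

def app_center_candidate(app):
    return (app.get("has_canvas_page", False)
            and app.get("platform") in ELIGIBLE_PLATFORMS
            and app.get("uses_facebook_login", False))

print(app_center_candidate(
    {"has_canvas_page": True, "platform": "ios", "uses_facebook_login": True}
))  # True
```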
Facebook is in a tricky spot with App Center. It will house not only
apps that are specifically run through its platform but also iOS and
Android apps. Thus it needs to achieve a balance between competition and
cooperation with some of the most powerful forces in the tech universe.
If an app in App Center requires a download, the download link on the
app’s detail page will bring the user to the appropriate app repository,
either Apple's App Store or Android’s Google Play.
One of the more interesting parts of App Center is that Facebook will
allow paid apps. This is a huge move for Facebook as it provides a
boost to its Credits payment service. One of the benefits of having a
store is that whoever controls the store also controls transactions
arising from the items in it, whether payments per download or in-app
purchases. This will go a long way towards Facebook’s goal of monetizing
its mobile presence without relying on advertising.
Facebook App Center Icon Guidelines
Developers interested in publishing apps to Facebook’s App Center should take a look at both the guidelines and the tutorial
that outlines how to upload the appropriate icons, how to request
permissions, how to use Single Sign-On (SSO, a requirement for App Center) and how to set up the app detail page.
This is a good move for Facebook. It will not only give the company several avenues to start making money from mobile but also strengthen its position as one of the backbones of the Web. App Center is both separate from iOS and Android and a part of them. Through it, Facebook can direct traffic to its apps, monitor who is downloading applications and how, and keep itself at the center of the user experience.
A paper-based touch pad on an alarmed cardboard box
detects the change in capacitance associated with the touch of a finger
to one of its buttons.
The keypad requires the appropriate sequence of
touches to disarm the system. Image credit: Mazzeo, et al.
The touch pads are made of metallized paper, which is paper coated in
aluminum and transparent polymer. The paper can function as a capacitor, and a laser can be used to cut several individual capacitors in the paper, each corresponding to a key on the touch pad.
When a person touches a key, the key’s capacitance is increased. Once
the keys are linked to external circuitry and a power source, the system
can detect when a key is touched by detecting the increased
capacitance.
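The detection scheme just described, a key press raising that key's capacitance above its resting value, reduces to a threshold comparison. A minimal sketch with invented readings and threshold values (the researchers' actual sensing circuitry is not detailed here):

```python
# Illustrative threshold-based touch detection for a capacitive keypad.
# Baseline and threshold values are invented for the example.

BASELINE_PF = 10.0   # assumed resting capacitance per key, in picofarads
THRESHOLD_PF = 2.0   # assumed rise that counts as a touch

def touched_keys(readings_pf):
    """Return the keys whose measured capacitance rose past the threshold."""
    return [key for key, c in readings_pf.items()
            if c - BASELINE_PF > THRESHOLD_PF]

readings = {"1": 10.1, "2": 14.5, "3": 9.8}  # key "2" is being pressed
print(touched_keys(readings))  # ['2']
```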
According to lead researcher Aaron Mazzeo of Harvard University, the
next steps will be finding a power source and electronics that are
cheap, flexible, and disposable.
Among the applications, inexpensive touch pads could be used for
security purposes. The researchers have already developed a box with an
alarm and keypad that requires a code to allow authorized access.
Disposable touch pads could also be useful in sterile or contaminated
medical environments.
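The alarmed box described above needs one more piece of logic on top of touch detection: comparing the sequence of presses against a disarm code. A hypothetical sketch (the code value and length are invented; the researchers' implementation may differ):

```python
# Hypothetical disarm logic for the alarmed box: the alarm clears only
# when the most recent key presses match the code. Code value is invented.

DISARM_CODE = ["3", "1", "4"]

def is_disarmed(presses):
    """True once the last len(DISARM_CODE) presses match the code."""
    return presses[-len(DISARM_CODE):] == DISARM_CODE

print(is_disarmed(["9", "3", "1", "4"]))  # wrong key, then the code: True
print(is_disarmed(["3", "4", "1"]))       # wrong order: False
```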