Entries tagged as technology
Thursday, May 02. 2013
Via Slash Gear
We’ve been hearing a lot about Google’s self-driving car lately, and most of us are probably wondering how exactly the search giant built a car that can drive itself without hitting anything or anyone. A new photo has surfaced that shows what Google’s self-driving vehicles see while they’re out on the town, and it looks rather frightening.
The image was tweeted by Idealab founder Bill Gross, along with a claim that the self-driving car collects almost 1GB of data every second (yes, every second). This data includes imagery of the car’s surroundings, which it needs in order to navigate roads effectively and safely. The image shows that the car sees its surroundings through an infrared-like camera sensor, and it can even pick out people walking on the sidewalk.
Of course, 1GB of data every second isn’t too surprising when you consider that the car has to get a 360-degree image of its surroundings at all times. The image we see above even distinguishes different objects by color and shape. For instance, pedestrians are in bright green, cars are shaped like boxes, and the road is in dark blue.
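For a sense of scale, here’s a quick back-of-the-envelope calculation based on that claimed rate (the 1GB-per-second figure comes from the tweet, not from any official Google spec):

```python
# Back-of-the-envelope: data volume at the claimed ~1 GB per second.
# The rate is from Bill Gross's tweet, not an official Google figure.

GB_PER_SECOND = 1

per_minute = GB_PER_SECOND * 60        # 60 GB
per_hour = per_minute * 60             # 3,600 GB, roughly 3.5 TB
per_commute = per_hour * 0.5           # a 30-minute drive

print(f"{per_hour:,} GB per hour (~{per_hour / 1024:.1f} TB)")
print(f"{per_commute:,.0f} GB for a 30-minute commute")
```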
However, we’re not sure where this photo came from, so it could simply be a rendering of someone’s idea of what Google’s self-driving car sees. Either way, Google says that we could see self-driving cars make their way to public roads in the next five years or so, which actually isn’t that far off, and Tesla Motors CEO Elon Musk is even interested in developing self-driving cars as well. However, they certainly don’t come without their problems, and we’re guessing that the first batch of self-driving cars probably won’t be in 100% tip-top shape.
Tuesday, December 11. 2012
Via Slash Gear
IBM has developed a light-based data transfer system delivering more than 25Gbps per channel, opening the door to chip-dense slabs of processing power that could speed up server performance, the internet, and more. The company’s research into silicon integrated nanophotonics addresses concerns that interconnects between increasingly powerful computers, such as mainframe servers, are unable to keep up with the speeds of the computers themselves. Instead of copper or even optical cables, IBM envisages on-chip optical routing, where light blasts data between dense, multi-layer computing hubs.
Optical interconnects are increasingly being used to link different server nodes, but by bringing the individual nodes into a single stack the delays involved in communication could be pared back even further. Off-chip optical communications would also be supported, to link the data-rich hubs together.
Although the photonics system would be considerably faster than existing links – it supports multiplexing, joining multiple 25Gbps+ connections into one cable thanks to light wavelength splitting – IBM says it would also be cheaper thanks to straightforward manufacturing integration.
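The arithmetic behind the multiplexing claim is simple: each wavelength of light carries its own 25Gbps+ stream over the same waveguide, so aggregate bandwidth scales with the number of wavelengths. A minimal sketch (the channel count here is illustrative, not a figure IBM has published):

```python
# Wavelength-division multiplexing: each wavelength (color) of light
# carries an independent data stream over the same fiber or on-chip
# waveguide, so the aggregate rate is channels x per-channel rate.

CHANNEL_RATE_GBPS = 25   # per-channel rate cited by IBM
num_wavelengths = 8      # illustrative count, not IBM's published figure

aggregate_gbps = CHANNEL_RATE_GBPS * num_wavelengths
print(f"{num_wavelengths} wavelengths x {CHANNEL_RATE_GBPS} Gbps "
      f"= {aggregate_gbps} Gbps on a single link")
```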
Technologies like Thunderbolt, co-developed by Intel and Apple, have promised affordable light-based computing connections, but so far rely on more traditional copper-based links, with optical versions further down the line. IBM says its system is “primed for commercial development,” though it warns it may take a few years before products could actually go on sale.
Friday, November 16. 2012
Spoiler alert: a recurring cast member bids farewell in the latest James Bond flick. When the production of Skyfall called for the complete destruction of a classic 1960s-era Aston Martin DB5, filmmakers opted for something a little more lifelike than computer graphics. The movie studio contracted the services of Augsburg-based 3D printing company Voxeljet to make replicas of the vintage ride. Skipping over the residential-friendly MakerBot Replicator, the company used a beastly industrial VX4000 3D printer to craft three 1:3 scale models of the car with a plot to blow them to smithereens. The 18-piece miniatures were shipped off to Propshop Modelmakers in London to be assembled, painted, chromed and outfitted with fake bullet holes. The final product was used in the film during a high-octane action sequence, which resulted in the meticulously crafted prop receiving a Wile E. Coyote-like sendoff. Now, rest easy knowing that no real Aston Martins were harmed during the making of this film. Head past the break to get a look at a completed model prior to its untimely demise.
Wednesday, October 24. 2012
What happens when you combine advances in 3D printing with biosynthesis and molecular construction? Eventually, it might just lead to printers that can manufacture vaccines and other drugs from scratch: email your doc, download some medicine, print it out and you're cured.
This concept (which is surely being worked on as we speak) comes from Craig Venter, whose idea of synthesizing DNA on Mars we posted about last week. You may remember a mention of the possibility of synthesizing Martian DNA back here on Earth, too: Venter says that we can do that simply by having the spacecraft email us genetic information on whatever it finds on Mars, and then recreate it in a lab by mixing together nucleotides in just the right way. This sort of thing has already essentially been done by Venter, who created the world's first synthetic life form back in 2010.
Venter's idea is to do away with complex, expensive and centralized vaccine production and instead develop one single machine that can "print" drugs by carefully combining nucleotides, sugars, amino acids, and whatever else is needed while you wait. Technology like this would mean that vaccines could be produced locally, on demand, simply by emailing the appropriate instructions to your closest drug printer. Pharmacies would no longer consist of shelves upon shelves of different pills, but could instead be kiosks with printers inside them. Ultimately, this could even be something you do at home.
While the benefits to technology like this are obvious, the risks are equally obvious. I mean, you'd basically be introducing the Internet directly into your body. Just ingest that for a second and think about everything that it implies. Viruses. LOLcats. Rule 34. Yeah, you know what, maybe I'll just stick with modern American healthcare and making ritual sacrifices to heathen gods, at least one of which will probably be effective.
Monday, October 15. 2012
Observable consequences of the hypothesis that the observed universe is a numerical simulation performed on a cubic space-time lattice or grid are explored. The simulation scenario is first motivated by extrapolating current trends in computational resource requirements for lattice QCD into the future. Using the historical development of lattice gauge theory technology as a guide, we assume that our universe is an early numerical simulation with unimproved Wilson fermion discretization and investigate potentially-observable consequences. Among the observables that are considered are the muon g-2 and the current differences between determinations of alpha, but the most stringent bound on the inverse lattice spacing of the universe, b^(-1) >~ 10^(11) GeV, is derived from the high-energy cut off of the cosmic ray spectrum. The numerical simulation scenario could reveal itself in the distributions of the highest energy cosmic rays exhibiting a degree of rotational symmetry breaking that reflects the structure of the underlying lattice.
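The logic behind the headline bound compresses into one line: a lattice with spacing b cannot represent momenta above the cutoff it imposes, so observing cosmic rays up to a given energy puts a floor on b^(-1). In rough form (our simplification of the abstract, not the paper’s full derivation):

```latex
% A lattice with spacing b has a momentum cutoff of order pi/b.
% Observing cosmic rays up to an energy E_max therefore requires
% the cutoff to sit at or above E_max:
p_{\max} \sim \frac{\pi}{b} \gtrsim E_{\max} \approx 10^{11}\,\mathrm{GeV}
\quad\Longrightarrow\quad
b^{-1} \gtrsim 10^{11}\,\mathrm{GeV}.
```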
Thursday, September 20. 2012
The iPhone 5 is the latest smartphone to hop on-board the LTE (Long Term Evolution) bandwagon, and for good reason: The mobile broadband standard is fast, flexible, and designed for the future. Yet LTE is still a young technology, full of growing pains. Here’s an overview of where it came from, where it is now, and where it might go from here.
The evolution of ‘Long Term Evolution’
LTE is a mobile broadband standard developed by the 3GPP (3rd Generation Partnership Project), a group that has developed all GSM standards since 1999. (Though GSM and CDMA—the network Verizon and Sprint use in the United States—were at one time close competitors, GSM has emerged as the dominant worldwide mobile standard.)
Cell networks began as analog, circuit-switched systems nearly identical in function to the public switched telephone network (PSTN): every call reserved a dedicated circuit, which placed a finite limit on simultaneous calls regardless of how much was actually being said on any line.
The second generation, GPRS, added data (at dial-up modem speeds). GPRS led to EDGE, and then 3G, which treated both voice and data as bits passing simultaneously over the same network (allowing you to surf the web and talk on the phone at the same time).
GSM-evolved 3G (which brought faster speeds) started with UMTS, and then accelerated into faster and faster variants of 3G, 3G+, and “4G” networks (HSPA, HSDPA, HSUPA, HSPA+, and DC-HSPA).
Until now, the term “evolution” meant that no new standard broke or failed to work with the older ones. GSM, GPRS, UMTS, and so on all work simultaneously over the same frequency bands: They’re intercompatible, which made it easier for carriers to roll them out without losing customers on older equipment. But these networks were being held back by compatibility.
That’s where LTE comes in. The “long term” part means: “Hey, it’s time to make a big, big change that will break things for the better.”
LTE needs its own space, man
LTE has “evolved” beyond 3G networks by incorporating new radio technology and adopting new spectrum. It allows much higher speeds than GSM-compatible standards through better encoding and wider channels. (It’s more “spectrally efficient,” in the jargon.)
LTE is more flexible than earlier GSM-evolved flavors, too: Where GSM’s 3G variants use 5 megahertz (MHz) channels, LTE can use a channel size from 1.4 MHz to 20 MHz; this lets it work in markets where spectrum is scarce and sliced into tiny pieces, or broadly when there are wide swaths of unused or reassigned frequencies. In short, the wider the channel—everything else being equal—the higher the throughput.
Speeds are also boosted through MIMO (multiple input, multiple output), just as in 802.11n Wi-Fi. Multiple antennas allow two separate benefits: better reception, and multiple data streams on the same spectrum.
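As a rough model, peak throughput is the product of channel width, spectral efficiency, and the number of MIMO spatial streams. A simplified sketch (the 5 bits/s/Hz efficiency figure is an assumption for illustration, not a carrier specification):

```python
# Rough model of LTE peak throughput: wider channels, more efficient
# encoding, and more MIMO streams all multiply together.

def peak_throughput_mbps(channel_mhz, bits_per_sec_per_hz, mimo_streams):
    """Idealized peak rate: bandwidth x spectral efficiency x streams."""
    return channel_mhz * bits_per_sec_per_hz * mimo_streams

# Assumed spectral efficiency of 5 bits/s/Hz, for illustration only.
for width_mhz in (1.4, 5, 10, 20):
    rate = peak_throughput_mbps(width_mhz, 5, 2)
    print(f"{width_mhz:>4} MHz channel, 2x2 MIMO: ~{rate:.0f} Mbps peak")
```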
Unfortunately, in practice, LTE implementation gets sticky: There are 33 potential bands for LTE, based on a carrier’s local regulatory domain. In contrast, GSM has just 14 bands, and only five of those are widely used. (In broad usage, a band is two sets of paired frequencies, one devoted to upstream traffic and the other committed to downstream. They can be a few MHz apart or hundreds of MHz apart.)
And while LTE allows voice, no standard has yet been agreed upon; different carriers could ultimately choose different approaches, leaving it to handset makers to build multiple methods into a single phone, though they’re trying to avoid that. In the meantime, in the U.S., Verizon and AT&T use the older CDMA and GSM networks for voice calls, and LTE for data.
LTE in the United States
Of the four major U.S. carriers, AT&T, Verizon, and Sprint have LTE networks, with T-Mobile set to start supporting LTE in the next year. But that doesn’t mean they’re set to play nice. We said earlier that current LTE frequencies are divided up into 33 spectrum bands: With the exception of AT&T and T-Mobile, which share frequencies in band 4, each of the major U.S. carriers has its own band. Verizon uses band 13; Sprint has spectrum in band 26; and AT&T holds band 17 in addition to some crossover in band 4.
In addition, smaller U.S. carriers, like C Spire, U.S. Cellular, and Clearwire, all have their own separate piece of the spectrum pie: C Spire and U.S. Cellular use band 12, while Clearwire uses band 41.
As such, for a manufacturer to support LTE networks in the United States alone, it would need to build a receiver that could tune into seven different LTE bands—to say nothing of the various flavors of GSM-evolved and CDMA networks.
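To make the fragmentation concrete, here’s a toy compatibility check built from the band assignments named above (a sketch only; real devices also differ by duplexing mode and regional variants, and the article’s count of seven bands includes holdings not listed here):

```python
# Toy model of U.S. LTE band fragmentation, using the carrier-to-band
# assignments named in the article.

CARRIER_BANDS = {
    "Verizon":       {13},
    "AT&T":          {17, 4},
    "Sprint":        {26},
    "T-Mobile":      {4},
    "C Spire":       {12},
    "U.S. Cellular": {12},
    "Clearwire":     {41},
}

def carriers_supported(device_bands):
    """Which carriers' LTE networks a device's radio could attach to."""
    return [c for c, bands in CARRIER_BANDS.items() if device_bands & bands]

# A device built only for AT&T's bands also happens to reach T-Mobile:
print(carriers_supported({17, 4}))                   # ['AT&T', 'T-Mobile']
print(sorted(set().union(*CARRIER_BANDS.values())))  # every band required
```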
With the iPhone, Apple tried to cut through the current Gordian Knot by releasing two separate models, the A1428 and A1429, which cover a limited number of different frequencies depending on where they’re activated. (Apple has kindly released a list of countries that support its three iPhone 5 models.) Other companies have chosen to restrict devices to certain frequencies, or to make numerous models of the same phone.
Other solutions are coming. Qualcomm made a regulatory filing in June regarding a seven-band LTE chip, which could be in shipping devices before the end of 2012 and could allow a future iPhone to be activated in different fashions. Within a year or so, we should see most-of-the-world phones, tablets, and other LTE mobile devices that work on the majority of large-scale LTE networks.
That will be just in time for the next big thing: LTE-Advanced, the true fulfillment of what was once called 4G networking, with rates that could hit 1 Gbps in the best possible cases of wide channels and short distances. By then, perhaps the chip, handset, and carrier worlds will have converged to make it all work neatly together.
Wednesday, September 19. 2012
Via PLOS genetics
Inter-individual variation in facial shape is one of the most noticeable phenotypes in humans, and it is clearly under genetic regulation; however, almost nothing is known about the genetic basis of normal human facial morphology. We therefore conducted a genome-wide association study for facial shape phenotypes in multiple discovery and replication cohorts, considering almost ten thousand individuals of European descent from several countries. Phenotyping of facial shape features was based on landmark data obtained from three-dimensional head magnetic resonance images (MRIs) and two-dimensional portrait images. We identified five independent genetic loci associated with different facial phenotypes, suggesting the involvement of five candidate genes—PRDM16, PAX3, TP63, C5orf50, and COL17A1—in the determination of the human face. Three of them have been implicated previously in vertebrate craniofacial development and disease, and the remaining two genes potentially represent novel players in the molecular networks governing facial development. Our finding at PAX3 influencing the position of the nasion replicates a recent GWAS of facial features. In addition to the reported GWA findings, we established links between common DNA variants previously associated with NSCL/P at 2p21, 8q24, 13q31, and 17q22 and normal facial-shape variations based on a candidate gene approach. Overall, our study implies that DNA variants in genes essential for craniofacial development contribute with relatively small effect sizes to the spectrum of normal variation in human facial morphology. This observation has important consequences for future studies aiming to identify more genes involved in human facial morphology, as well as for potential applications of predicting facial shape from DNA, such as in forensics.
The morphogenesis and patterning of the face is one of the most complex events in mammalian embryogenesis. Signaling cascades initiated from both facial and neighboring tissues mediate transcriptional networks that act to direct fundamental cellular processes such as migration, proliferation, differentiation and controlled cell death. The complexity of human facial development is reflected in the high incidence of congenital craniofacial anomalies, and almost certainly underlies the vast spectrum of subtle variation that characterizes facial appearance in the human population.
Facial appearance has a strong genetic component; monozygotic (MZ) twins look more similar than dizygotic (DZ) twins or unrelated individuals. The heritability of craniofacial morphology is as high as 0.8 in twins and families. Some craniofacial traits, such as facial height and position of the lower jaw, appear to be more heritable than others. The general morphology of craniofacial bones is largely genetically determined and partly attributable to environmental factors. Although genes have been mapped for various rare craniofacial syndromes largely inherited in Mendelian form, the genetic basis of normal variation in human facial shape is still poorly understood. An appreciation of the genetic basis of facial shape variation has far-reaching implications for understanding the etiology of facial pathologies, the origin of major sensory organ systems, and even the evolution of vertebrates. In addition, it is feasible to speculate that once the majority of genetic determinants of facial morphology are understood, predicting facial appearance from DNA found at a crime scene will become useful as an investigative tool in forensic casework. Some externally visible human characteristics, such as eye color and hair color, can already be inferred from a DNA sample with practically useful accuracies.
In a recent candidate gene study carried out in two independent European population samples, we investigated a potential association between risk alleles for non-syndromic cleft lip with or without cleft palate (NSCL/P) and nose width and facial width in the normal population. Two NSCL/P-associated single nucleotide polymorphisms (SNPs) showed association with different facial phenotypes in different populations. However, facial landmarks derived from 3-dimensional (3D) magnetic resonance images (MRI) in one population and 2-dimensional (2D) portrait images in the other population were not completely comparable, posing a challenge for combining phenotype data. In the present study, we focus on the MRI-based approach for capturing facial morphology, since previous facial imaging studies by some of us have demonstrated that MRI-derived soft tissue landmarks represent a reliable data source.
In geometric morphometrics, there are different ways to deal with the confounders of position and orientation of the landmark configurations, such as (1) superimposition, which places the landmarks into a consensus reference frame; (2) deformation, where shape differences are described in terms of deformation fields of one object onto another; and (3) linear distances, where Euclidean distances between landmarks, rather than their coordinates, are measured. The rationale and efficacy of these approaches have been reviewed and compared elsewhere. We briefly compared these methods in the context of our genome-wide association study (GWAS) (see Methods section) and applied them when appropriate.
We extracted facial landmarks from 3D head MRI in 5,388 individuals of European origin from the Netherlands, Australia, and Germany, and used partial Procrustes superimposition (PS) to superimpose different sets of facial landmarks onto a consensus 3D Euclidean space. We derived 48 facial shape features from the superimposed landmarks and estimated their heritability in 79 MZ and 90 DZ Australian twin pairs. Subsequently, we conducted a series of GWAS separately for these facial shape dimensions, and attempted to replicate the identified associations in 568 Canadians of European (French) ancestry with similar 3D head MRI phenotypes, and additionally sought supporting evidence in a further 1,530 individuals from the UK and 2,337 from Australia for whom facial phenotypes were derived from 2D portrait images.
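For readers unfamiliar with the superimposition step, here is a minimal sketch of aligning one landmark configuration onto another: center both, scale to unit centroid size, then solve for the optimal rotation. This illustrates the idea only; the study’s generalized procedure iterates such alignments against a consensus configuration across all individuals.

```python
import numpy as np

def procrustes_align(X, Y):
    """Superimpose landmark set Y (k x 3) onto X (k x 3): remove
    translation, scale to unit centroid size, then find the optimal
    rotation via SVD, avoiding reflections. A minimal sketch of the
    superimposition idea, not the study's full generalized procedure."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Xc = Xc / np.linalg.norm(Xc)          # unit centroid size
    Yc = Yc / np.linalg.norm(Yc)
    U, _, Vt = np.linalg.svd(Yc.T @ Xc)   # optimal rotation (Kabsch)
    if np.linalg.det(U @ Vt) < 0:         # flip to avoid a reflection
        U[:, -1] *= -1
    return Yc @ (U @ Vt)                  # Y expressed in X's frame

# Example: 10 random 3D landmarks and a rotated, translated copy.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0,              0,             1]])
Y = X @ Rz.T + 5.0
aligned = procrustes_align(X, Y)          # now matches the unit-scaled X
```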
Tuesday, September 18. 2012
Wearable computing is all the rage this year as Google pulls back the curtain on their Glass technology, but some scientists want to take the idea a stage further. The emerging field of stretchable electronics is taking advantage of new polymers that allow you to not just wear your computer but actually become a part of the circuitry. By embedding the wiring into a stretchable polymer, these cutting edge devices resemble human skin more than they do circuit boards. And with a whole host of possible medical uses, that’s kind of the point.
A Cambridge, Massachusetts startup called MC10 is leading the way in stretchable electronics. So far, their products are fairly simple. There’s a patch, installed right on the skin like a temporary tattoo, that can sense whether or not the user is hydrated, as well as an inflatable balloon catheter that can measure the electrical signals of the user’s heartbeat to search for irregularities like arrhythmias. Later this year, they’re launching a mysterious product with Reebok that’s expected to take advantage of the technology’s ability to detect not only heartbeat but also respiration, body temperature, blood oxygenation and so forth.
The joy of stretchable electronics is that the manufacturing process is not unlike that of regular electronics. Just as with a normal microchip, gold electrodes and wires are deposited onto thin silicon wafers, but here they’re also embedded in the stretchable polymer substrate. When everything’s in place, the polymer substrate with embedded circuitry can be peeled off and later installed on a new surface. The components that can be added to the stretchable surface include sensors, LEDs, transistors, wireless antennas and solar cells for power.
For now, the technology is still in its nascent stages, but scientists have high hopes. In the future, you could wear a temporary tattoo that would monitor your vital signs, or doctors might install stretchable electronics on your organs to keep track of their behavior. Stretchable electronics could also be integrated into clothing or paired with a smartphone. Of course, if all else fails, it’ll probably make for some great children’s toys.
Tuesday, August 28. 2012
Via Daily Mail
Facedeals is a new camera system that can recognise shoppers from their Facebook pictures as they enter a shop, and then offer them discounts.
A promotional video for the concept shows drinkers entering a bar and then being offered cheap drinks as they are recognised.
'Facebook check-ins are a powerful mechanism for businesses to deliver discounts to loyal customers, yet few businesses—and fewer customers—have realized it,' said Nashville-based advertising agency Redpepper.
They are already trialling the scheme in firms close to their office.
'A search for businesses with active deals in our area turned up a measly six offers. The odds we’ll ever be at one of those six spots are low (a strip club and photography studio among them), and the incentives for a check-in are not nearly enticing enough for us to take the time.
'So we set out to evolve the check-in and sweeten the deal, making both irresistible.
'We call it Facedeals.'
The Facedeals camera can identify faces as people walk in by comparing them with the Facebook pictures of people who have signed up to the service.
Facebook recently hit the headlines when it bought face.com, an Israeli firm that pioneered the use of face recognition technology online.
The social networking giant uses the software to recognise people in uploaded pictures, allowing it to accurately spot friends.
The software uses a complex algorithm to find the correct person from their Facebook pictures.
The Facedeals camera requires people to have authorised the Facedeals app through their Facebook account.
This verifies your most recent photo tags and maps the biometric data of your face.
The system then learns what a user looks like as more pictures are approved.
This data is then used to identify you in the real world.
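Redpepper hasn’t published its implementation, but the flow described above — authorize the app, build face templates from approved photo tags, then match faces at the door — maps onto a standard enroll-and-match pipeline. A hypothetical sketch (all names here are our own, and embed() stands in for whatever face-embedding model such a system would use):

```python
# Hypothetical sketch of the enrollment/matching flow described above.
# This is not Redpepper's code; embed() is a placeholder for any model
# that maps a face image to a numeric feature vector.
import numpy as np

def embed(face_image) -> np.ndarray:
    """Placeholder: a real system would run a face-recognition model."""
    raise NotImplementedError

class FaceRegistry:
    def __init__(self, threshold=0.6):
        self.templates = {}          # user_id -> list of embeddings
        self.threshold = threshold   # assumed similarity cutoff

    def enroll(self, user_id, approved_photos):
        """User authorizes the app; approved photo tags become
        biometric templates. More approved photos, better model."""
        self.templates.setdefault(user_id, []).extend(
            embed(photo) for photo in approved_photos)

    def identify(self, camera_frame):
        """Match a face at the door against all enrolled users."""
        probe = embed(camera_frame)
        best_user, best_sim = None, self.threshold
        for user_id, embeddings in self.templates.items():
            for e in embeddings:
                sim = e @ probe / (np.linalg.norm(e) * np.linalg.norm(probe))
                if sim > best_sim:
                    best_user, best_sim = user_id, sim
        return best_user             # None if nobody clears the cutoff
```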
In a demonstration video, the firm behind the camera showed it being used to offer free drinks to customers if they signed up to the system.
Friday, June 29. 2012
The techno-wizards over at Google X, the company's R&D laboratory working on its self-driving cars and Project Glass, linked 16,000 processors together to form a neural network and then had it go forth and try to learn on its own. Turns out, massive digital networks are a lot like bored humans poking at iPads.
The pretty amazing takeaway here is that this 16,000-processor neural network, spread out over 1,000 linked computers, was not told to look for any one thing; instead, it discovered on its own that a pattern in the data revolved around cat pictures.
This happened after Google presented the network with image stills from 10 million random YouTube videos. The images were small thumbnails, and Google's network was sorting through them to try and learn something about them. What it found — and we have ourselves to blame for this — was that there were a hell of a lot of cat faces.
"We never told it during the training, 'This is a cat,'" Jeff Dean, a Google fellow working on the project, told the New York Times. "It basically invented the concept of a cat. We probably have other ones that are side views of cats."
The network itself does not know what a cat is the way you and I do. (It wouldn't, for instance, feel embarrassed being caught watching something like this in the presence of other neural networks.) What it does realize, however, is that there is something it can recognize as being the same thing, and if we gave it the word, it might very well refer to it as "cat."
So, what's the big deal? Your computer at home is more than powerful enough to sort images. Where Google's neural network differs is that it looked at these 10 million images, recognized a pattern of cat faces, and then grafted together the idea that it was looking at something specific and distinct. It had a digital thought.
Andrew Ng, a computer scientist at Stanford University who is co-leading the study with Dean, spoke to the benefit of something like a self-teaching neural network: "The idea is that instead of having teams of researchers trying to find out how to find edges, you instead throw a ton of data at the algorithm and you let the data speak and have the software automatically learn from the data." The size of the network is important, too, and the human brain is "a million times larger in terms of the number of neurons and synapses" than Google X's simulated mind, according to the researchers.
"It'd be fantastic if it turns out that all we need to do is take current algorithms and run them bigger," Ng added, "but my gut feeling is that we still don't quite have the right algorithm yet."