The iPhone 5 is the latest smartphone to hop aboard the LTE (Long Term Evolution)
bandwagon, and for good reason: The mobile broadband standard is fast,
flexible, and designed for the future. Yet LTE is still a young
technology, full of growing pains. Here’s an overview of where it came
from, where it is now, and where it might go from here.
The evolution of ‘Long Term Evolution’
LTE is a mobile broadband standard developed by the 3GPP (3rd Generation Partnership Project),
a group that has developed all GSM standards since 1999. (Though GSM
and CDMA—the network technology that Verizon and Sprint use in the United States—were at
one time close competitors, GSM has emerged as the dominant worldwide
mobile standard.)
Cell networks began as analog, circuit-switched systems that worked much
like the public switched telephone network (PSTN): every call occupied a
dedicated circuit, so each cell could carry only a fixed number of calls
at once.
The second generation went digital, and its GPRS overlay
added data (at dial-up modem speed). GPRS led to EDGE, and then to 3G,
which treated both voice and data as bits passing simultaneously over
the same network (allowing you to surf the web and talk on the phone at
the same time).
GSM-evolved 3G (which brought faster speeds) started with UMTS, and then
accelerated into ever-faster variants of 3G, 3G+, and “4G”
networks (HSPA, HSDPA, HSUPA, HSPA+, and DC-HSPA).
Until now, the term “evolution” meant that no new standard broke
compatibility with the older ones. GSM, GPRS, UMTS, and so on all work
simultaneously over the same frequency bands: they’re interoperable,
which made it easier for carriers to roll out new technology without
stranding customers on older equipment. But that same backward
compatibility was holding these networks back.
That’s where LTE comes in. The “long term” part means: “Hey, it’s time
to make a big, big change that will break things for the better.”
LTE needs its own space, man
LTE has “evolved” beyond 3G networks by incorporating new radio
technology and adopting new spectrum. It allows much higher speeds than
GSM-compatible standards through better encoding and wider channels.
(It’s more “spectrally efficient,” in the jargon.)
LTE is more flexible than earlier GSM-evolved flavors, too: Where GSM’s
3G variants use 5 megahertz (MHz) channels, LTE can use a channel size
from 1.4 MHz to 20 MHz; this lets it work in markets where spectrum is
scarce and sliced into tiny pieces, or take advantage of wide swaths of
unused or reassigned frequencies. In short, all else being equal, the
wider the channel, the higher the throughput.
Speeds are also boosted through MIMO (multiple input, multiple output),
just as in 802.11n Wi-Fi. Multiple antennas provide two separate
benefits: better reception, and multiple data streams carried over the
same spectrum.
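To put rough numbers on both points, here’s a back-of-the-envelope sketch in Python. The resource-block counts per channel width come from the LTE specification; the 64-QAM modulation and the clean doubling per MIMO stream are simplifying assumptions, and the result ignores coding and control overhead, which is why real-world peaks come in lower.

    # Rough raw LTE downlink rate. Assumptions: standard resource-block
    # counts per channel width, normal cyclic prefix (14 OFDM symbols
    # per 1 ms subframe), 64-QAM (6 bits per symbol), and no deduction
    # for coding rate or control overhead.
    RESOURCE_BLOCKS = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

    def raw_downlink_mbps(channel_mhz, mimo_streams=2):
        subcarriers = RESOURCE_BLOCKS[channel_mhz] * 12  # 12 per block
        symbols_per_second = 14 * 1000                   # per subcarrier
        bits_per_symbol = 6                              # 64-QAM
        return (subcarriers * symbols_per_second * bits_per_symbol
                * mimo_streams / 1e6)

    print(raw_downlink_mbps(20))   # ~201.6 Mbps raw for a 20 MHz channel
    print(raw_downlink_mbps(1.4))  # ~12.1 Mbps for the narrowest channel

The point of the sketch is the scaling: doubling the channel width, or the number of MIMO streams, doubles the raw rate.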
LTE complications
A world map maintained on Wikipedia shows countries in varying states of
LTE readiness: some already have commercial service, others have
networks planned or deploying, still others are merely investigating
LTE, and many have no LTE service at all.
Unfortunately, LTE implementation gets sticky in practice: there are 33
potential bands for LTE, depending on a carrier’s local regulatory
domain. By contrast, GSM has just 14 bands, and only five of those are
widely used. (In broad usage, a band is a set of paired frequencies, one
range devoted to upstream traffic and the other committed to downstream;
the two ranges can be a few MHz apart or hundreds of MHz apart.)
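To make the pairing concrete, here’s a minimal sketch in Python. The frequency ranges for bands 13 and 17 are the published 700 MHz allocations, but treat the sketch itself as illustrative rather than a complete band database.

    from dataclasses import dataclass

    @dataclass
    class FddBand:
        number: int
        uplink_mhz: tuple    # (low, high), handset to tower
        downlink_mhz: tuple  # (low, high), tower to handset

    BAND_17 = FddBand(17, (704, 716), (734, 746))  # AT&T 700 MHz
    BAND_13 = FddBand(13, (777, 787), (746, 756))  # Verizon 700 MHz

    def duplex_separation_mhz(band):
        # Gap between where the uplink and downlink ranges begin.
        return abs(band.downlink_mhz[0] - band.uplink_mhz[0])

    print(duplex_separation_mhz(BAND_17))  # 30 MHz apart
    print(duplex_separation_mhz(BAND_13))  # 31 MHz; downlink sits below uplink

Note that bands 13 and 17 occupy the same 700 MHz neighborhood yet don’t overlap at all, which is exactly why a handset must support each band explicitly.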
And while LTE allows voice, no standard has yet been agreed upon;
different carriers could ultimately choose different approaches, leaving
it to handset makers to build multiple methods into a single phone,
though they’re trying to avoid that. In the meantime, in the U.S.,
Verizon and AT&T fall back on their older CDMA and GSM networks,
respectively, for voice calls, and use LTE for data.
LTE in the United States
Of the four major U.S. carriers, AT&T, Verizon, and Sprint have LTE networks, with T-Mobile set to start supporting LTE
in the next year. But that doesn’t mean they’re set to play nice. We
said earlier that current LTE frequencies are divided up into 33
spectrum bands: With the exception of AT&T and T-Mobile, which share
frequencies in band 4, each of the major U.S. carriers has its own
band. Verizon uses band 13; Sprint has spectrum in band 26; and AT&T
holds band 17 in addition to some crossover in band 4.
In addition, smaller U.S. carriers, like C Spire, U.S. Cellular, and Clearwire, all have their own separate piece of the spectrum pie: C Spire and U.S. Cellular use band 12, while Clearwire uses band 41.
As such, a manufacturer wanting to support LTE networks in the United
States alone would need to build a receiver that could tune into seven
different LTE bands, to say nothing of the various flavors of
GSM-evolved and CDMA networks.
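Here’s a toy sketch of that arithmetic, using only the band assignments quoted in this article (the seven-band figure includes spectrum holdings beyond those itemized here):

    # LTE band assignments as quoted above; carriers hold other
    # spectrum too, so this is illustrative, not exhaustive.
    US_LTE_BANDS = {
        "Verizon": {13},
        "AT&T": {17, 4},
        "T-Mobile": {4},
        "Sprint": {26},
        "C Spire": {12},
        "U.S. Cellular": {12},
        "Clearwire": {41},
    }

    # The union is the radio shopping list for a universal U.S. phone.
    print(sorted(set().union(*US_LTE_BANDS.values())))

    def works_on(device_bands, carrier):
        # Does the device share at least one band with the carrier?
        return bool(device_bands & US_LTE_BANDS[carrier])

    print(works_on({4, 17}, "Verizon"))  # False: band 13 not supported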
With the iPhone, Apple tried to cut the current Gordian knot by
releasing two separate models, the A1428 and A1429, which cover a
limited number of different frequencies depending on where they’re
activated. (Apple has kindly released a list of countries that support
its three iPhone 5 models; the count is three because the A1429 ships in
separate GSM and CDMA configurations.) Other companies have chosen to
restrict devices to certain frequencies, or to make numerous models of
the same phone.
Banded together
Other solutions are coming. Qualcomm made a regulatory filing in June
regarding a seven-band LTE chip, which could be in shipping devices
before the end of 2012 and could allow a single future iPhone model to
be activated on different carriers’ networks. Within a year or so, we should see
most-of-the-world phones, tablets, and other LTE mobile devices that
work on the majority of large-scale LTE networks.
That will be just in time for the next big thing: LTE-Advanced, the true
fulfillment of what was once called 4G networking, with rates that
could hit 1 Gbps in the best possible cases of wide channels and short
distances. By then, perhaps the chip, handset, and carrier worlds will
have converged to make it all work neatly together.
Abstract
Inter-individual variation in facial shape is one of the most noticeable
phenotypes in humans, and it is clearly under genetic regulation;
however, almost nothing is known about the genetic basis of normal human
facial morphology. We therefore conducted a genome-wide association
study for facial shape phenotypes in multiple discovery and replication
cohorts, considering almost ten thousand individuals of European descent
from several countries. Phenotyping of facial shape features was based
on landmark data obtained from three-dimensional head magnetic resonance
images (MRIs) and two-dimensional portrait images. We identified five
independent genetic loci associated with different facial phenotypes,
suggesting the involvement of five candidate genes—PRDM16, PAX3, TP63, C5orf50, and COL17A1—in
the determination of the human face. Three of them have been implicated
previously in vertebrate craniofacial development and disease, and the
remaining two genes potentially represent novel players in the molecular
networks governing facial development. Our finding that PAX3
influences the position of the nasion replicates a recent GWAS of
facial features. In addition to these GWA findings, we used a candidate
gene approach to establish links between common DNA variants previously
associated with NSCL/P at 2p21, 8q24, 13q31, and 17q22 and normal
facial-shape variation. Overall, our study implies
that DNA variants in genes essential for craniofacial development
contribute with relatively small effect sizes to the spectrum of normal
variation in human facial morphology. This observation has important
consequences for future studies aiming to identify more genes involved
in human facial morphology, as well as for potential applications such
as DNA-based prediction of facial shape in forensic casework.
Introduction
The morphogenesis and
patterning of the face is one of the most complex events in mammalian
embryogenesis. Signaling cascades initiated from both facial and
neighboring tissues mediate transcriptional networks that act to direct
fundamental cellular processes such as migration, proliferation,
differentiation, and controlled cell death. The complexity of human
facial development is reflected in the high incidence of congenital
craniofacial anomalies, and almost certainly underlies the vast spectrum
of subtle variation that characterizes facial appearance in the human
population.
Facial appearance has
a strong genetic component; monozygotic (MZ) twins look more similar
than dizygotic (DZ) twins or unrelated individuals. The heritability of
craniofacial morphology is as high as 0.8 in twins and families [1], [2], [3]. Some craniofacial traits, such as facial height and position of the lower jaw, appear to be more heritable than others [1], [2], [3].
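(For context on where such heritability figures come from: the classical Falconer estimate derives heritability from the gap between twin-pair correlations,

    h^2 = 2 (r_MZ − r_DZ),

so hypothetical intraclass correlations of 0.90 for MZ and 0.50 for DZ pairs would give h^2 = 2 × (0.90 − 0.50) = 0.8, on the order of the values cited above.)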
The general morphology of craniofacial bones is largely genetically
determined and partly attributable to environmental factors [4]–[11]. Although genes have been mapped for various rare craniofacial syndromes largely inherited in Mendelian form [12],
the genetic basis of normal variation in human facial shape is still
poorly understood. An appreciation of the genetic basis of facial shape
variation has far reaching implications for understanding the etiology
of facial pathologies, the origin of major sensory organ systems, and
even the evolution of vertebrates [13], [14].
In addition, it is reasonable to speculate that once the majority of
genetic determinants of facial morphology are understood, predicting
facial appearance from DNA found at a crime scene will become useful as
an investigative tool in forensic casework [15]. Some externally visible human characteristics, such as eye color [16]–[18] and hair color [19], can already be inferred from a DNA sample with practically useful accuracy.
In a recent candidate
gene study carried out in two independent European population samples,
we investigated a potential association between risk alleles for
non-syndromic cleft lip with or without cleft palate (NSCL/P) and nose
width and facial width in the normal population [20].
Two NSCL/P associated single nucleotide polymorphisms (SNPs) showed
association with different facial phenotypes in different populations.
However, facial landmarks derived from three-dimensional (3D) magnetic
resonance images (MRIs) in one population and two-dimensional (2D)
portrait images in the other population were not completely comparable, posing a
challenge for combining phenotype data. In the present study, we focus
on the MRI-based approach for capturing facial morphology since previous
facial imaging studies by some of us have demonstrated that MRI-derived
soft tissue landmarks represent a reliable data source [21], [22].
In geometric
morphometrics, there are different ways to deal with the confounders of
position and orientation of the landmark configurations, such as (1)
superimposition [23], [24] that places the landmarks into a consensus reference frame; (2) deformation [25]–[27], where shape differences are described in terms of deformation fields of one object onto another; and (3) linear distances [28], [29],
where Euclidean distances between landmarks instead of their
coordinates are measured. The rationale and efficacy of these approaches
have been reviewed and compared elsewhere [30]–[32].
We briefly compared these methods in the context of our genome-wide
association study (GWAS) (see Methods section) and applied them when
appropriate.
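For readers curious about the mechanics of approach (1), here is a minimal pairwise sketch in Python with NumPy. The study’s generalized partial Procrustes superimposition iterates a step like this against a running consensus configuration, so treat the code as an illustration of the idea rather than the actual pipeline.

    import numpy as np

    def partial_procrustes(ref, mov):
        # Superimpose landmark set `mov` (k x 3) onto `ref`: remove
        # translation, scale both to unit centroid size, then find the
        # least-squares rotation (Kabsch algorithm via SVD).
        ref_c = ref - ref.mean(axis=0)
        mov_c = mov - mov.mean(axis=0)
        ref_c = ref_c / np.linalg.norm(ref_c)   # unit centroid size
        mov_c = mov_c / np.linalg.norm(mov_c)
        u, _, vt = np.linalg.svd(mov_c.T @ ref_c)
        d = np.sign(np.linalg.det(u @ vt))      # guard against reflection
        rot = u @ np.diag([1.0, 1.0, d]) @ vt
        return ref_c, mov_c @ rot               # both now superimposed

Once all configurations live in the same coordinate frame, features such as inter-landmark distances can be measured and treated as quantitative phenotypes.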
We extracted facial
landmarks from 3D head MRI in 5,388 individuals of European origin from
the Netherlands, Australia, and Germany, and used partial Procrustes
superimposition (PS) [24], [30], [33]
to superimpose different sets of facial landmarks onto a consensus 3D
Euclidean space. We derived 48 facial shape features from the
superimposed landmarks and estimated their heritability in 79 MZ and 90
DZ Australian twin pairs. Subsequently, we conducted a series of GWAS
separately for these facial shape dimensions, and attempted to replicate
the identified associations in 568 Canadians of European (French)
ancestry with similar 3D head MRI phenotypes and additionally sought
supporting evidence in a further 1,530 individuals from the UK and 2,337
from Australia for whom facial phenotypes were derived from 2D portrait
images.
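Schematically, a GWAS over these features boils down to a per-SNP regression, repeated for each shape dimension. The Python sketch below is deliberately bare-bones; real analyses add covariates such as sex, age, and ancestry components, and apply a genome-wide significance threshold (conventionally 5 × 10^-8) before replication is attempted.

    import numpy as np
    from scipy import stats

    def gwas_scan(genotypes, feature):
        # genotypes: (individuals x SNPs) allele-dosage matrix (0/1/2);
        # feature: one superimposition-derived shape measurement per
        # individual. Returns one p-value per SNP.
        pvals = np.empty(genotypes.shape[1])
        for j in range(genotypes.shape[1]):
            pvals[j] = stats.linregress(genotypes[:, j], feature).pvalue
        return pvals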
Wearable computing is all the rage this year as Google pulls back the
curtain on its Glass technology, but some scientists want to take the
idea a stage further. The emerging field of stretchable electronics
takes advantage of new polymers that let you not just wear your computer
but practically become part of its circuitry. With the wiring embedded
in a stretchable polymer, these cutting-edge devices resemble human skin
more than they do circuit boards. And given the whole host of possible
medical uses, that’s kind of the point.
A Cambridge, Massachusetts startup called MC10 is leading the way
in stretchable electronics. So far, its products are fairly simple.
There’s a patch, applied right to the skin like a temporary tattoo, that
can sense whether or not the user is hydrated, as well as an inflatable
balloon catheter that can measure the electrical signals of the user’s
heartbeat to search for irregularities like arrhythmias. Later this
year, the company is launching a mysterious product with Reebok that’s
expected to take advantage of the technology’s ability to detect not
only heartbeat but also respiration, body temperature, blood
oxygenation, and so forth.
The joy of stretchable electronics is that the manufacturing process
is not unlike that of regular electronics. Just as with a normal
microchip, gold electrodes and wires are deposited onto thin silicon
wafers, but here they’re also embedded in a stretchable polymer
substrate. When everything’s in place, the polymer substrate with its
embedded circuitry can be peeled off and later installed on a new
surface. The components that can be added to a stretchable surface
include sensors, LEDs, transistors, wireless antennas, and solar cells
for power.
For now, the technology is still in its nascent stages, but scientists
have high hopes. In the future, you could wear a temporary tattoo that
would monitor your vital signs, or doctors might install stretchable
electronics on your organs to keep track of their behavior. Stretchable
electronics could also be integrated into clothing or paired with a
smartphone. Of course, if all else fails, it’ll probably make for some
great children’s toys.