Entries tagged as interface
Tuesday, March 19. 2013
At SXSW this afternoon, Google gave developers a first look at the Google Glass Mirror API, the main interface between Google Glass, Google’s servers and the apps that developers will write for them. In addition, Google showed off a first round of applications that work on Glass, including how Gmail works on the device, as well as integrations from companies like the New York Times, Evernote, Path and others.
The Mirror API is essentially a REST API, which should make developing for it easy for most developers. The Glass device talks to Google’s servers; developers’ applications then get data from there and also push data to Glass through Google’s APIs. All of this data is presented on Glass through what Google calls “timeline cards.” These cards can include text, images, rich HTML and video. Besides single cards, Google also lets developers use what it calls bundles, which are basically sets of cards that users can navigate using their voice or the touchpad on the side of Glass.
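As a rough illustration of the model described above, here is a hypothetical sketch of what pushing a card or bundle through a REST timeline API like this might look like. The endpoint URL, field names and token handling are assumptions for illustration, not documented API details.

```python
import json

# Assumed endpoint; at the time of writing Google has not published the API docs.
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def make_card(text, html=None, bundle_id=None):
    """Build the JSON body for a single timeline card (field names assumed)."""
    card = {"text": text}
    if html:
        card["html"] = html           # rich HTML rendering on the device
    if bundle_id:
        card["bundleId"] = bundle_id  # groups cards into a navigable bundle
    return card

def make_bundle(bundle_id, texts):
    """A bundle is simply several cards tagged with the same bundle id."""
    return [make_card(t, bundle_id=bundle_id) for t in texts]

# The actual push would then be a plain authenticated HTTP POST, e.g.:
#   requests.post(MIRROR_TIMELINE_URL,
#                 headers={"Authorization": "Bearer <token>"},
#                 data=json.dumps(make_card("Hello Glass")))

cards = make_bundle("nyt-headlines", ["Headline 1", "Headline 2"])
print(json.dumps(cards[0]))
```

The point is how little machinery a developer needs: a card is just a small JSON document, and a bundle is just several cards sharing an identifier.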
It looks like sharing to Google+ is a built-in feature of the Mirror API, but as Google’s Timothy Jordan noted in today’s presentation, developers can always add their own sharing options, as well. Other built-in features seem to include voice recognition, access to the camera and a text-to-speech engine.
Because Glass is a new and unique form factor, Jordan also noted, Google is setting a few rules for Glass apps. They shouldn’t, for example, show full news stories but only headlines, as everything else would be too distracting. For longer stories, developers can always just use Glass to read text to users.
Essentially, developers should make sure that they don’t annoy users with too many notifications, and the data they send to Glass should always be relevant. Developers should also make sure that everything that happens on Glass should be something the user expects, said Jordan. Glass isn’t the kind of device, he said, where a push notification about an update to your app makes sense.
Using Glass With Gmail, Evernote, Path and Others
As part of today’s presentation, Jordan also detailed some Glass apps Google has been working on itself, and apps that some of its partners have created. The New York Times app, for example, shows headlines and then lets you listen to a summary of the article by telling Glass to “read aloud.” Google’s own Gmail app uses voice recognition to answer emails (and it obviously shows you incoming mail, as well). Evernote’s Skitch can be used to take and share photos, and Jordan also showed a demo of social network Path running on Glass to share your location.
So far, there is no additional information about the Mirror API on any of Google’s usual sites, but we expect the company to release more information shortly and will update this post once we hear more.
Wednesday, October 10. 2012
Swipe, swipe, pinch-zoom. Fifth-grader Josephine Nguyen is researching the definition of an adverb on her iPad and her fingers are flying across the screen. Her 20 classmates are hunched over their own tablets doing the same.
Conspicuously absent from this modern scene of high-tech learning: a mouse.
Nguyen, who is 10, said she has used one before — once — but the clunky desktop computer/monitor/keyboard/mouse setup was too much for her.
“It was slow,” she recalled, “and there were too many pieces.”
Gilbert Vasquez, 6, is also baffled by the idea of an external pointing device named after a rodent.
“I don’t know what that is,” he said with a shrug.
Nguyen and Vasquez, who attend public schools here, are part of the first generation growing up with a computer interface that is vastly different from the one the world has gotten used to since the dawn of the personal-computer era in the 1980s.
This fall, for the first time, sales of iPads are cannibalizing sales of PCs in schools, according to Charles Wolf, an analyst for the investment research firm Needham & Co. And a growing number of even more sophisticated technologies for communicating with your computer — such as the Leap Motion boxes and Sony Vaio laptops that read hand motions, as well as voice recognition services such as Apple’s Siri — are beginning to make headway in the commercial market.
John Underkoffler, a former MIT researcher who was the adviser for the high-tech wizardry that Tom Cruise used in “Minority Report,” says that the transition is inevitable and that it will happen in as soon as a few years.
Underkoffler, chief scientist for Oblong, a Los Angeles-based company that has created a gesture-controlled interface for computer systems, said that for decades the mouse was the primary bridge to the virtual world — and that it was not always optimal.
“Human hands and voice, if you use them in the digital world in the same way as the physical world, are incredibly expressive,” he said. “If you let the plastic chunk that is a mouse drop away, you will be able to transmit information between you and machines in a very different, high-bandwidth way.”
This type of thinking is turning industrial product design on its head. Instead of focusing on a single device to access technology, innovators are expanding their horizons to gizmos that respond to body motions, the voice, fingers, eyes and even thoughts. Some devices can be accessed by multiple people at the same time.
Keyboards might still be used for writing a letter, but designing, say, a landscaped garden might be more easily done with a digital pen, as would studying a map of Lisbon by hand gestures, or searching the Internet for Rihanna’s latest hits by voice. And the mouse — which many agree was a genius creation in its time — may end up as a relic in a museum.
The mouse is born
The first computer mouse, built at the Stanford Research Institute in Menlo Park, Calif., by Douglas Engelbart and Bill English in 1963, was just a block of wood fashioned with two wheels. It was one of a number of interfaces the team experimented with; there were also foot pedals, head-pointing devices and knee-mounted joysticks.
But the mouse proved to be the fastest and most accurate, and with the backing of Apple founder Steve Jobs — who bundled it with shipments of Lisa, the predecessor to the Macintosh, in the 1980s — the device suddenly became a mainstream phenomenon.
Engelbart’s daughter, Christina, a cultural anthropologist, said that her father was able to predict many trends in technology over the years, but that the one thing that has surprised him is how long the mouse has lasted.
“He never assumed the mouse would be it,” said the younger Engelbart, who wrote her father’s biography. “He always figured there would be newer ways of exploring a computer.”
She was 8 years old when her father invented the mouse. Now 57, she says she is finally seeing glimpses of the next stage of computing with the surging popularity of the iPad. These days her two children, 20 and 23, do not use a mouse anymore.
San Antonio and LUCHA elementary schools in eastern San Jose, just 17 miles south of where Engelbart conducted his research, provide a glimpse at the future. The schools, which share a campus, have integrated iPod Touches and iPads into the curriculum for all 700 students. The teachers all get MacBook Airs with touch pads.
“Most children here have never seen a computer mouse,” said Hannah Tenpas, 24, a kindergarten teacher at San Antonio.
Kindergartners, as young as 4, use the iPod Touch to learn letter sounds. The older students use iPads to research historical information and prepare multimedia slide-show presentations about school rules. The intuitive touch-screen interface has allowed the school to introduce children to computers at an age that would have been impossible in the past, said San Antonio Elementary’s principal, Jason Sorich.
Even toddlers are able to manipulate a touch screen. A popular YouTube video shows a baby trying to swipe the pages of a fashion magazine that she assumes is a broken iPad.
“For my one-year-old daughter, a magazine is an iPad that does not work. It will remain so for her whole life,” the creator of the video says in a slide at the end of the clip.
The iPad side of the brain
“The popularity of iPads and other tablets is changing how society interacts with information,” said Aniket Kittur, an assistant professor at the Human-Computer Interaction Institute at Carnegie Mellon University. “... Direct manipulation with our fingers, rather than mediated through a keyboard/mouse, is intuitive and easy for children to grasp.”
Underkoffler said that while desktop computers helped activate the language and abstract-thinking parts of a child’s brain, new interfaces are helping open the spatial part.
“Once our user interface can start to talk to us that way ... we sort of blow the barn doors off how we learn,” he said.
That may explain why iPads are becoming so popular in schools. Apple said in July that the iPad outsold the Mac 2 to 1 for the second consecutive quarter in the education market. In all, the company sold 17 million iPads in the April-to-June quarter; at the same time, mouse sales in the United States are down, some manufacturers say.
“The adoption rate of iPad in education is something I’d never seen from any technology product in history,” Apple chief executive Tim Cook said in July.
At San Antonio Elementary and LUCHA, which started their $300,000 iPad and iPod experiment last school year, the school board president, Esau Herrera, said he is thrilled by the results. Test scores have gone up (although officials say they cannot directly correlate that to the new technology), and the level of engagement has increased.
The schools are now debating what to do with the handful of legacy desktop PCs, each with its own keyboard and mouse, and whether they should bother teaching students to move a pointer around a monitor.
“Things are moving so fast,” said LUCHA Principal Kristin Burt, “that we’re not sure the computer and mouse will even be around when they get old enough to really use them.”
Monday, May 14. 2012
Via Phys Org
A paper-based touch pad on an alarmed cardboard box detects the change in capacitance associated with the touch of a finger to one of its buttons.
The keypad requires the appropriate sequence of touches to disarm the system. Image credit: Mazzeo, et al.
The touch pads are made of metallized paper, which is paper coated in aluminum and transparent polymer. The paper can function as a capacitor, and a laser can be used to cut several individual capacitors in the paper, each corresponding to a key on the touch pad. When a person touches a key, the key’s capacitance is increased. Once the keys are linked to external circuitry and a power source, the system can detect when a key is touched by detecting the increased capacitance.
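The detection logic the researchers describe is simple in principle: each key has a baseline capacitance, and a touch is registered when a reading rises sufficiently above that baseline. Here is a minimal sketch of that logic, with simulated readings; the numbers, units and the 20% threshold are illustrative assumptions, not values from the paper.

```python
def detect_touches(readings, baseline, threshold=1.2):
    """Return indices of keys whose capacitance rose past baseline * threshold."""
    return [i for i, (r, b) in enumerate(zip(readings, baseline))
            if r > b * threshold]

baseline = [10.0, 10.0, 10.0, 10.0]  # per-key resting capacitance (illustrative)
readings = [10.1, 15.0, 9.9, 10.2]   # a finger on key 1 raises its capacitance
print(detect_touches(readings, baseline))  # -> [1]

# The alarmed box then just compares the sequence of touched keys to a code:
CODE = [1, 3, 2]         # required disarm sequence (made up for illustration)
entered = [1, 3, 2]
print(entered == CODE)   # -> True
```

The external circuitry's job is only to supply the `readings`; everything else is bookkeeping.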
According to lead researcher Aaron Mazzeo of Harvard University, the next steps will be finding a power source and electronics that are cheap, flexible, and disposable.
Among the applications, inexpensive touch pads could be used for security purposes. The researchers have already developed a box with an alarm and keypad that requires a code to allow authorized access. Disposable touch pads could also be useful in sterile or contaminated medical environments.
Thursday, December 15. 2011
How a little-known 1971 machine launched an industry.
Forty years ago, Nutting Associates released the world’s first mass-produced and commercially sold video game, Computer Space. It was the brainchild of Nolan Bushnell, a charismatic engineer with a creative vision matched only by his skill at self-promotion. With the help of his business partner Ted Dabney and the staff of Nutting Associates, Bushnell pushed the game from nothing into reality only two short years after conceiving the idea.
Computer Space pitted a player-controlled rocket ship against two machine-controlled flying saucers in a space simulation set before a two-dimensional star field. The player controlled the rocket with four buttons: one for fire, which shot a missile from the front of the rocket ship; two directional rotation buttons (to rotate the ship orientation clockwise or counterclockwise); and one for thrust, which propelled the ship in whichever direction it happened to be pointing. Think of Asteroids without the asteroids, and you should get the picture.
During play, two saucers would appear on the screen and shoot at the player while flying in a zig-zag formation. The player’s goal was to dodge the saucer fire and shoot the saucers.
Considering a game of this complexity playing out on a TV set, you might think that it was created as a sophisticated piece of software running on a computer. You’d think it, but you’d be wrong, and Bushnell wouldn’t blame you for the mistake. How he and Dabney managed to pull it off is a story of audacity, tenacity, and sheer force of will worthy of tech legend. This is how it happened.
Thursday, September 01. 2011
Microsoft Windows chief Steven Sinofsky has taken to the Building Windows 8 blog to explain the company’s decision to keep two interfaces: the traditional desktop UI and the more tablet-friendly Metro UI. His explanation seemed to be in response to criticism and confusion after the latest details were revealed on the new Windows 8 Explorer interface.
On Monday, details on the Windows 8 Explorer file manager interface were revealed showing what looked to be a very traditional Windows UI without any Metro elements. Reactions were mixed with many confused as to what direction Microsoft was heading with its Windows 8 interface. Well, Sinofsky is attempting to answer that and says that it is a “balancing act” of trying to get both interfaces working together harmoniously.
In his post, Sinofsky addresses each of these concerns, saying that the fluid and intuitive Metro interface is great on the tablet form factor, but that when it comes down to getting serious work done, precision mouse and keyboard tools are still needed, as is the ability to run traditional applications. Hence, he explains, in the end Microsoft decided to bring the best of both worlds together for Windows 8.
With Windows 8 on a tablet, users can fully immerse themselves in the Metro UI and never see the desktop interface. In fact, the code for the desktop interface won’t even load. But, if the user needs to use the desktop interface, they can do so without needing to switch over to a laptop or other secondary device just for business or work.
A more detailed preview of Windows 8 is expected to take place during Microsoft’s Build developer conference in September. It’s been rumored that the first betas may be distributed to developers then along with a Windows 8 compatible hardware giveaway.
To round out the so-called 'Desktop Crisis' discussion, here is the point of view of Microsoft, which has decided to avoid mixing desktop-GUI and tablet-GUI functionality.
Wednesday, August 24. 2011
Via ars technica
At the Google I/O conference earlier this year, Google revealed that the Android Market would come to the Google TV set-top platform. Some evidence of the Honeycomb-based Google TV refresh surfaced in June when screenshots from developer hardware were leaked. Google TV development is now being opened to a broader audience.
In a post on the official Google TV blog, the search giant has announced the availability of a Google TV add-on for the Android SDK. The add-on is an early preview that will give third-party developers an opportunity to start porting their applications to Google TV.
The SDK add-on will currently only work on Linux desktop systems because it relies on Linux's native KVM virtualization system to provide a Google TV emulator. Google says that other environments will be supported in the future. Unlike the conventional phone and tablet versions of Android, which are largely designed to run on ARM devices, the Google TV reference hardware uses x86 hardware. The architecture difference might account for the lack of support in Android's traditional emulator.
We are planning to put the SDK add-on to the test later this week so we can report some hands-on findings. We suspect that the KVM-based emulator will offer better performance than the conventional Honeycomb emulator that Google's SDK currently provides for tablet development.
In addition to the SDK add-on, Google has also published a detailed user interface design guideline document that offers insight into best practices for building a 10-foot interface that will work well on Google TV hardware. The document addresses a wide range of issues, including D-pad navigation and television color variance.
The first iteration of Google TV flopped in the market and didn't see much consumer adoption. Introducing support for third-party applications could make Google TV significantly more compelling to consumers. The ability to trivially run applications like Plex could make Google TV a lot more useful. It's also worth noting that Android's recently added support for game controllers and other similar input devices could make Google TV hardware serve as a casual gaming console.
Tuesday, August 09. 2011
Via Slash Gear
Apple released its new OS X Lion for Mac computers recently, and there was one controversial change that had the technorati chatting nonstop. In the new Lion OS, Apple changed the direction of scrolling. I use a MacBook Pro (among other machines, I’m OS agnostic). On my MacBook, I scroll by placing two fingers on the trackpad and moving them up or down. On the old system, moving my fingers down meant the object on the screen moved up. My fingers are controlling the scroll bars. Moving down means I am pulling the scroll bars down, revealing more of the page below what is visible. So, the object moves upwards. On the new system, moving my fingers down meant the object on screen moves down. My fingers are now controlling the object. If I want the object to move up, and reveal more of what is beneath, I move my fingers up, and content rises on screen.
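The two conventions the author describes differ only in the sign relating finger movement to content movement. As a toy model (the function and its parameters are made up for illustration):

```python
def scrolled_offset(offset, finger_dy, natural=True):
    """offset: how far down the page the view currently is.
    finger_dy > 0 means the fingers move down the trackpad."""
    if natural:
        # Fingers drag the content itself: fingers down -> content moves
        # down -> the view ends up higher on the page.
        return offset - finger_dy
    # Classic: fingers drag the scroll bar: fingers down -> the view
    # moves further down the page.
    return offset + finger_dy

print(scrolled_offset(100, 10, natural=True))   # -> 90
print(scrolled_offset(100, 10, natural=False))  # -> 110
```

Everything else about scrolling is identical; the entire controversy is over that one sign flip.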
The scroll bars are still there, but Apple has, by default, hidden them in many apps. You can make them reappear by hunting through the settings menu and turning them back on, but when they do come back, they are much thinner than they used to be, without the arrows at the top and bottom. They are also a bit buggy at the moment. If I try to click and drag the scrolling indicator, the page often jumps around, as if I had missed and clicked on the empty space above or below the scroll bar instead of directly on it. This doesn’t always happen, but it happens often enough that I have trained myself to avoid using the scroll bars this way.
So, the scroll bars, for now, are simply a visual indicator of where my view is located on a long or wide page. Clearly Apple does not think this information is terribly important, or else scroll bars would be turned on by default. As with the scroll bars, you can also hunt through the settings menu to turn off the new, so-called “natural scrolling.” This will bring you back to the method preferred on older Apple OSes, and also on Windows machines.
Some disclosure: my day job is working for Samsung. We make Windows computers that compete with Macs. I work in the phones division, but my work machine is a Samsung laptop running Windows. My MacBook is a holdover from my days as a tech journalist. When you become a tech journalist, you are issued a MacBook by force and stripped of whatever you were using before.
I am not criticizing or endorsing Apple’s new natural scrolling in this column. In fact, in my own usage, there are times when I like it, and times when I don’t. Those emotions are usually found in direct proportion to the amount of NyQuil I took the night before and how hot it was outside when I walked my dog. I have found no other correlation.
The new natural scrolling method will probably seem familiar to those of you not frozen in an iceberg since World War II. It is the same direction you use for scrolling on most touchscreen phones and most tablets. Not all, of course. Some phones and tablets still use styli, and these phones often let you scroll by dragging scroll bars with the pointer. But if you have an Android or an iPhone or a Windows Phone, you’re familiar with the new method.
My real interest here is to examine how the user is placed in the conversation between your fingers and the object on screen. I have heard the argument that the new method tries, and perhaps fails, to emulate the touchscreen experience by manipulating objects as if they were physical. On touchscreen phones, this is certainly the case. When we touch something on screen, like an icon or a list, we expect it to react in a physical way. When I drag my finger to the right, I want the object beneath to move with my finger, just as a piece of paper would move with my finger when I drag it.
This argument postulates a problem with Apple’s natural scrolling because of the literal distance between your fingers and the objects on screen. Also, the angle has changed. The plane of your hands and the surface on which they rest are at an oblique angle of more than 90 degrees from the screen and the object at hand.
Think of a magic wand. When you wave a magic wand with the tip facing out before you, do you imagine the spell shooting forth parallel to the ground, or do you imagine the spell shooting directly upward? In our imagination, we do want a direct correlation between the position of our hands and the reaction on screen, this is true. However, is this what we were getting before? Not really.
The difference between classic scrolling and ‘natural’ scrolling seems to be the difference between manipulating a concept and manipulating an object. Scroll bars are not real, or at least they do not correspond to any real thing that we would experience in the physical world. When you read a tabloid, you do not scroll down to see the rest of the story. You move your eyes. If the paper will not fit comfortably in your hands, you fold it. But scrolling is not like folding. It is smoother. It is continuous. Folding is a way of breaking the object into two conceptual halves. Ask a print newspaper reporter (and I will refrain from old media mockery here) about the part of the story that falls “beneath the fold.” That part better not be as important as the top half, because it may never get read.
Natural scrolling correlates more strongly to moving an actual object. It is like reading a newspaper on a table. Some of the newspaper may extend over the edge of the table and bend downward, making it unreadable. When you want to read it, you move the paper upward. In the same way, when you want to read more of the NYTimes.com site, you move your fingers upward.
The argument should not be over whether one is more natural than the other. Let us not forget that we are using an electronic machine. This is not a natural object. The content onscreen is only real insofar as pixels light up and are arranged into a recognizable pattern. Those words are not real; they are the absence of light, in varying degrees if you have anti-aliasing cranked up, around recognizable patterns that our eyes and brain interpret as letters and words.
The argument should be over which is the more successful design for a laptop or desktop operating system. Is it better to create objects on screen that appropriate the form of their physical world counterparts? Should a page in Microsoft Word look like a piece of paper? Should an icon for a hard disk drive look like a hard disk? What percentage of people using a computer have actually seen a hard disk drive? What if your new ultraportable laptop uses a set of interconnected solid state memory chips instead? Does the drive icon still look like a drive?
Or is it better to create objects on screen that do not hew to the physical world? Certainly their form should suggest their function in order to be intuitive and useful, but they do not have to be photorealistic interpretations. They can suggest function through a more abstract image, or simply by their placement and arrangement.
In the former system, the computer interface becomes a part of the users world. The interface tries to fit in with symbols that are already familiar. I know what a printer looks like, so when I want to set up my new printer, I find the picture of the printer and I click on it. My email icon is a stamp. My music player icon is a CD. Wait, where did my CD go? I can’t find my CD?! What happened to my music!?!? Oh, there it is. Now it’s just a circle with a musical note. I guess that makes sense, since I hardly use CDs any more.
In the latter system, the user becomes part of the interface. I have to learn the language of the interface design. This may sound like it is automatically more difficult than the former method of photorealism, but that may not be true. After all, when I want to change the brightness of my display, will my instinct really be to search for a picture of a cog and gears? And how should we represent a Web browser, a feature that has no counterpart in real life? Are we wasting processing power and time trying to create objects that look three dimensional on a two dimensional screen in a 2D space?
I think the photorealistic approach, and Apple’s new natural scrolling, may be the more personal way to design an interface. Apple is clearly thinking of the intimate relationship between the user and the objects that we touch. It is literally a sensual relationship, in that we use a variety of our senses. We touch. We listen. We see.
But perhaps I do not need, nor do I want, to have this relationship with my work computer. I carry my phone with me everywhere. I keep my tablet very close to me when I am using it. With my laptop, I keep some distance. I am productive. We have a lot to get done.
Monday, August 01. 2011
By Ross Rubin
Last week's Switched On discussed how Lion's feature set could be perceived differently by new users or those coming from an iPad versus those who have used Macs for some time, while a previous Switched On discussed how Microsoft is preparing for a similar transition in Windows 8. Both OS X Lion and Windows 8 seek to mix elements of a tablet UI with elements of a desktop UI or -- putting it another way -- a finger-friendly touch interface with a mouse-driven interface. If Apple and Microsoft could wave a wand and magically have all apps adopt overnight so they could leave a keyboard and mouse behind, they probably would. Since they can't, though, inconsistency prevails.
Monday, June 27. 2011
Image: Interfaculty Initiative in Information Studies, The University of Tokyo
In an interesting meshing of robotics and prosthetics development, Japanese researchers from the University of Tokyo, working in conjunction with Sony Corporation, have created an external forearm device capable of causing independent finger and wrist movement. Introduced on the Rekimoto Lab website, the PossessedHand, as it’s called, can be strapped to the wrist like a blood pressure cuff and fine-tuned to the individual wearing it. The PossessedHand sends small doses of electricity to the muscles in the forearm that control movement, and can be "taught" to send preprogrammed signals that replicate normal wrist and finger movements, such as plucking the strings of a musical instrument.
Though the signals sent are too weak to actually cause string plucking, they are apparently strong enough to let the user understand which finger is supposed to move; thus, the device might be construed as more of a learning aid than an actual guitar accessory.
Currently, devices that do roughly the same thing rely on electrodes inserted into the skin, or work via gloves worn over the hand, both rather kludgy and perhaps somewhat painful approaches. This new approach, in contrast, is said to feel more like a gentle hand massage.
Though the original purpose of the PossessedHand seems to be as an aid to help people learn to play musical instruments (something that has drawn a bit of criticism from the musical community, since nothing is actually learned while using the device: the hand basically becomes an external part of the instrument while the brain remains passive), it seems clear the device could be used in many other ways. For example, it could be used by hearing people to assist in speaking with deaf sign-language users, to help people type who have never learned how, or, perhaps more importantly, to help paralyzed people or those recovering from a stroke.
In these instances it’s not always imperative that users actually learn anything new, just that they are able to communicate when they want to. If the device could be programmed by the user to work in real time, its value would increase greatly. For example, if a person could speak out loud into a microphone and those words were captured, translated into sign language and transferred directly to the wearer’s fingers, deaf people would instantly be able to communicate with anyone they meet who is willing to wear the cuff.
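The speech-to-fingerspelling pipeline imagined above can be sketched as a chain of three stages. Every function and the finger mapping here are stubs and assumptions for illustration: real speech recognition, a real sign-language mapping and the PossessedHand's actual stimulation interface would each be substantial systems in their own right.

```python
# Hypothetical letter -> stimulated-finger pattern (fingers indexed 0-4).
# A real fingerspelling alphabet involves hand shapes, not just finger sets.
FINGERSPELLING = {"a": [0], "b": [1, 2, 3, 4], "c": [0, 1]}

def recognize_speech(audio):
    """Stand-in for a real speech recognizer."""
    return "ab"

def to_stimulation_sequence(text):
    """Map each recognized letter to the set of fingers the cuff would
    stimulate, skipping letters outside the (toy) alphabet."""
    return [FINGERSPELLING[ch] for ch in text if ch in FINGERSPELLING]

print(to_stimulation_sequence(recognize_speech(None)))  # -> [[0], [1, 2, 3, 4]]
```

The hard problems live inside the stubs; the pipeline itself is just plumbing, which is what makes the real-time scenario plausible in principle.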