Thursday, May 02. 2013

Driving Miss dAIsy: What Google’s self-driving cars see on the road
Via Slash Gear -----
We’ve been hearing a lot about Google‘s self-driving car lately, and we’re all probably wondering how exactly the search giant is able to build a vehicle that drives itself without hitting anything or anyone. A new photo has surfaced that demonstrates what Google’s self-driving vehicles see while they’re out on the town, and it looks rather frightening.
The image was tweeted by Idealab founder Bill Gross, along with a claim that the self-driving car collects almost 1GB of data every second (yes, every second). This data includes imagery of the car’s surroundings in order to effectively and safely navigate roads. The image shows that the car sees its surroundings through an infrared-like camera sensor, and it can even pick out people walking on the sidewalk.

Of course, 1GB of data every second isn’t too surprising when you consider that the car has to maintain a 360-degree image of its surroundings at all times. The image we see above even distinguishes different objects by color and shape: pedestrians are in bright green, cars are shaped like boxes, and the road is in dark blue. However, we’re not sure where this photo came from, so it could simply be a rendering of someone’s idea of what Google’s self-driving car sees.

Either way, Google says that we could see self-driving cars make their way to public roads in the next five years or so, which actually isn’t that far off, and Tesla Motors CEO Elon Musk is even interested in developing self-driving cars as well. However, they certainly don’t come without their problems, and we’re guessing that the first batch of self-driving cars probably won’t be in 100% tip-top shape.
Posted by Christian Babski in Data visualisation, Hardware, Programming, Software, Technology at 09:33
Defined tags for this entry: artificial intelligence, car, data visualisation, google, hardware, programming, sensors, software, technology
Cern re-creating first web page to revere early ideals
Via BBC -----
Lost to the world: The first website. At the time, few imagined how ubiquitous the technology would become

A team at the European Organisation for Nuclear Research (Cern) has launched a project to re-create the first web page. The aim is to preserve the original hardware and software associated with the birth of the web.

The world wide web was developed by Prof Sir Tim Berners-Lee while working at Cern. The initiative coincides with the 20th anniversary of the research centre giving the web to the world.

According to Dan Noyes, the web manager for Cern's communication group, re-creation of the world's first website will enable future generations to explore, examine and think about how the web is changing modern life. "I want my children to be able to understand the significance of this point in time: the web is already so ubiquitous - so, well, normal - that one risks failing to see how fundamentally it has changed," he told BBC News. "We are in a unique moment where we can still switch on the first web server and experience it. We want to document and preserve that."

The hope is that the restoration of the first web page and web site will serve as a reminder and inspiration of the web's fundamental values. At the heart of the original web is technology to decentralise control and make access to information freely available to all. It is this architecture that seems to imbue those that work with the web with a culture of free expression, a belief in universal access and a tendency toward decentralising information.

Subversive

It is the early technology's innate ability to subvert that makes re-creation of the first website especially interesting. While I was at Cern it was clear in speaking to those involved with the project that it means much more than refurbishing old computers and installing them with early software: it is about enshrining a powerful idea that they believe is gradually changing the world.
I went to Sir Tim's old office in Cern's IT department, where he worked on finding new ways to handle the vast amount of data the particle accelerators were producing. I was not allowed in because apparently the present incumbent is fed up with people wanting to go into the office. But waiting outside was someone who worked at Cern as a young researcher at the same time as Sir Tim. James Gillies has since risen to be Cern's head of communications. He is occasionally referred to as the organisation's half-spin doctor, a reference to one of the properties of some sub-atomic particles.

Amazing dream

Mr Gillies is among those involved in the project. I asked him why he wanted to restore the first website. "One of my dreams is to enable people to see what that early web experience was like," was the reply. "You might have thought that the first browser would be very primitive but it was not. It had graphical capabilities. You could edit into it straightaway. It was an amazing thing. It was a very sophisticated thing."

Those not heavily into web technology may be sceptical of the idea that using a 20-year-old machine and software to view text on a web page might be a thrilling experience. But Mr Gillies and Mr Noyes believe that the first web page and web site are worth resurrecting because embedded within the original systems developed by Sir Tim are the principles of universality and universal access that many enthusiasts at the time hoped would eventually make the world a fairer and more equal place. The first browser, for example, allowed users to edit and write directly into the content they were viewing, a feature not available in present-day browsers.

Ideals eroded

And early on in the world wide web's development, Nicola Pellow, who worked with Sir Tim at Cern on the www project, produced a simple browser to view content that did not require an expensive, powerful computer, and so made the technology available to anyone with a simple computer.
According to Mr Noyes, many of the values that went into that original vision have now been eroded. His aim, he says, is to "go back in time and somehow preserve that experience".

Soon to be refurbished: The NeXT computer that was home to the world's first website

"This universal access of information and flexibility of delivery is something that we are struggling to re-create and deal with now.

"Present-day browsers offer gorgeous experiences but when we go back and look at the early browsers I think we have lost some of the features that Tim Berners-Lee had in mind."

Mr Noyes is reaching out to those who were involved with the NeXT computers used by Sir Tim for advice on how to restore the original machines.

Awe

The machines were the most advanced of their time. Sir Tim used two of them to construct the web. One of them is on show in an out-of-the-way cabinet outside Mr Noyes's office. I told him that as I approached the sleek black machine I felt drawn towards it and compelled to pause, reflect and admire it in awe. "So just imagine the reaction of passers-by if it was possible to bring the machine back to life," he responded, with a twinkle in his eye.

The initiative coincides with the 20th anniversary of Cern giving the web away to the world for free. There was a serious discussion by Cern's management in 1993 about whether the organisation should remain the home of the web or whether it should focus on its core mission of basic research in physics. Sir Tim and his colleagues on the project argued that Cern should not claim ownership of the web.

Great giveaway

Management agreed and signed a legal document that made the web publicly available in such a way that no one could claim ownership of it and that would ensure it was a free and open standard for everyone to use. Mr Gillies believes that the document is "the single most valuable document in the history of the world wide web".
He says: "Without it you would have had web-like things but they would have belonged to Microsoft or Apple or Vodafone or whoever else. You would not have a single open standard for everyone."

The web has not brought about the degree of social change some had envisaged 20 years ago. Most web sites, including this one, still tend towards one-way communication. The web space is still dominated by a handful of powerful online companies.

A screen shot from the first browser: Those who saw it say it was "amazing and sophisticated". It allowed people to write directly into content, a feature that modern-day browsers no longer have

But those who study the world wide web, such as Prof Nigel Shadbolt, of Southampton University, believe the principles on which it was built are worth preserving and there is no better monument to them than the first website. "We have to defend the principle of universality and universal access," he told BBC News. "That it does not fall into a special set of standards that certain organisations and corporations control. So keeping the web free and freely available is almost a human right."
Friday, April 19. 2013

A new way to report data centers' Power and Water Usage Effectiveness (PUE and WUE)
-----
Today (18.04.2013) Facebook launched two public dashboards that report continuous, near-real-time data for key efficiency metrics – specifically, PUE and WUE – for our data centers in Prineville, OR and Forest City, NC. These dashboards include both a granular look at the past 24 hours of data and a historical view of the past year’s values. In the historical view, trends within each data set and correlations between different metrics become visible. Once our data center in Luleå, Sweden, comes online, we’ll begin publishing for that site as well.

We began sharing PUE for our Prineville data center at the end of Q2 2011 and released our first Prineville WUE in the summer of 2012. Now we’re pulling back the curtain to share some of the same information that our data center technicians view every day. We’ll continue updating our annualized averages as we have in the past, and you’ll be able to find them on the Prineville and Forest City dashboards, right below the real-time data.

Why are we doing this? Well, we’re proud of our data center efficiency, and we think it’s important to demystify data centers and share more about what our operations really look like. Through the Open Compute Project (OCP), we’ve shared the building and hardware designs for our data centers. These dashboards are the natural next step, since they answer the question, “What really happens when those servers are installed and the power’s turned on?”

Creating these dashboards wasn’t a straightforward task. Our data centers aren’t completed yet; we’re still in the process of building out suites and finalizing the parameters for our building management systems. All our data centers are literally still construction sites, with new data halls coming online at different points throughout the year. Since we’ve created dashboards that visualize an environment with so many shifting variables, you’ll probably see some weird numbers from time to time. That’s OK.
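As a reminder of what these dashboards actually plot: PUE is the ratio of total facility energy to IT equipment energy (a value of 1.0 would mean every watt entering the building reaches the servers), and WUE is water consumed per kWh of IT energy. A quick sketch of the arithmetic in JavaScript; the function names are ours for illustration, not part of Facebook's dashboard code:

```javascript
// Power Usage Effectiveness: total facility energy divided by IT equipment
// energy. 1.0 would mean every watt entering the building reaches the servers.
function computePUE(totalFacilityKW, itEquipmentKW) {
  if (itEquipmentKW <= 0) {
    throw new Error("IT load must be positive");
  }
  return totalFacilityKW / itEquipmentKW;
}

// Water Usage Effectiveness: liters of water consumed per kWh of IT energy.
function computeWUE(waterLiters, itEnergyKWh) {
  if (itEnergyKWh <= 0) {
    throw new Error("IT energy must be positive");
  }
  return waterLiters / itEnergyKWh;
}
```

So a facility drawing 1,380 kW in total to power 1,200 kW of servers is running at a PUE of 1.15.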
These dashboards are about surfacing raw data – and sometimes, raw data looks messy. But we believe in iteration, in getting projects out the door and improving them over time. So we welcome you behind the curtain, wonky numbers and all. As our data centers near completion and our load evens out, we expect these inevitable fluctuations to decrease correspondingly.

We’re excited about sharing this data, and we encourage others to do the same. Working together with AREA 17, the company that designed these visualizations, we’ve decided to open-source the front-end code for these dashboards so that any organization interested in sharing PUE, WUE, temperature, and humidity at its data center sites can use these dashboards to get started. Sometime in the coming weeks we’ll publish the code on the Open Compute Project’s GitHub repository. All you have to do is connect your own CSV files to get started. And in the spirit of all other technologies shared via OCP, we encourage you to poke through the code and make updates to it. Do you have an idea to make these visuals even more compelling? Great! We encourage you to treat this as a starting point and use these dashboards to make everyone’s ability to share this data even more interesting and robust.

Lyrica McTiernan is a program manager for Facebook’s sustainability team.

Wednesday, April 17. 2013

Google Mirror API now available

Tuesday, April 16. 2013

Oculus Rift finally gets the reaction virtual reality always wanted
Via Slash Gear -----
We’ve already heard plenty about the Oculus Rift virtual reality headset, and while we youngsters are pretty amazed by the technology, nobody has their mind blown more than the elderly, who could only dream about such technology back in their younger days. Recently, a 90-year-old grandmother ended up trying out the Oculus Rift for herself, and she was quite amazed.
Imagimind Studio developer Paul Rivot ended up grabbing an Oculus Rift in order to play around with it and develop some games, but he took a break from that and decided to give his grandmother a little treat, by strapping the Oculus Rift to her head in order to experience a bit of virtual reality herself.
The video is quite entertaining to watch, and we can’t imagine what’s going on inside her head, knowing that she never grew up with technology like the Oculus Rift, let alone 3D video games. She even got to the point where she thought the images being displayed were actual images taken on location, when in fact it’s all 3D-rendered on a computer.

The Oculus Rift is currently available to developers only, and there’s no announced release date for the device, although the company has noted that it should arrive for the general public before the 2014 holiday season. In the meantime, it’s videos like this that only excite us even more.
Tuesday, April 09. 2013

A Problem Google Has Created for Itself
Via The Atlantic -----

Over the eons I've been a fan of, and sucker for, each latest automated system to "simplify" and "bring order to" my life. Very early on this led me to the beautiful-and-doomed Lotus Agenda for my DOS computers, and Actioneer for the early Palm. For the last few years Evernote has been my favorite, and I really like it. Still I always have the roving eye.

So naturally I have already downloaded the Android version of Google's new app for collecting notes, photos, and info, called Google Keep. This early version has nothing like Evernote's power or polish, but you can see where Google is headed.

Here's the problem: Google now has a
clear enough track record of trying out, and then canceling,
"interesting" new software that I have no idea how long Keep will be
around. When Google launched its Google Health service five years ago,
it had an allure like Keep's: here was the one place you could store
your prescription info, test results, immunization records, and so on
and know that you could get at them as time went on. That's how I used
it -- until Google cancelled this "experiment" last year. Same with
Google Reader, and all the other products in the Google Graveyard that Slate produced last week.
After Reader's demise, many people noted
the danger of ever relying on a company's free offerings. When a
company is charging money for a product -- as Evernote does for all
above its most basic service, and same for Dropbox and SugarSync -- you
understand its incentive for sticking with that product. The company
itself might fail, but as long as it's in business it's unlikely just to
get bored and walk away, as Google has from so many experiments. These
include one called Google Notebook, which had some similarities to Keep,
and which I also liked, and which Google abandoned recently.
So: I trust Google for search, the core of how it stays in business. Similarly for Maps and Earth, which have tremendous public-good side effects but also are integral to Google's business. Plus Gmail and Drive, which keep you in the Google ecosystem. But do I trust Google with Keep? No.
The idea looks promising, and you can see how it could end up as an
integral part of the Google Drive strategy. But you could also imagine
that two or three years from now this will be one more "interesting"
experiment Google has gotten tired of.
Until I know a reason that it's in Google's long-term interest to keep Keep going, I'm
not going to invest time in it or lodge info there. The info could of
course be extracted or ported somewhere else -- Google has been very
good about helping people rescue data from products it has killed -- but
why bother getting used to a system that might go away? And I don't
understand how Google can get anyone to rely on its experimental
products unless it has a convincing answer for the "how do we know you
won't kill this?" question.
Thursday, April 04. 2013

Writing Open Source Software? Make Sure You Know Your Copyright Rights
Via SmartBear -----
Open source is all fine and dandy, but before throwing yourself – and untold lines of code – into a project, make sure you understand exactly what’s going to happen to your code’s copyrights. And to your career.

I know. If you wanted to be a lawyer, you would have gone to law school instead of spending your nights poring over K&R. Tough. In 2013, if you're an open source programmer you need to know a few things about copyright law. If you don't, bad things can happen. Really bad things.

Before launching into this topic, I must point out that I Am Not A Lawyer (IANAL). If you have a specific, real-world question, talk to someone who is a lawyer. Better still, talk to an attorney who specializes in intellectual property (IP) law.

Every time you write code, you're creating copyrighted work. As Simon Phipps, president of the Open Source Initiative (OSI), said when I asked him about programmer copyright gotchas, "The biggest one is the tendency for headstrong younger developers to shun copyright licensing altogether. When they do that, they put all their collaborators at risk and themselves face potential liability claims.”

Developers need to know that copyright is automatic under the Berne Convention, Phipps explained. Since that convention was put into place, all software development involves copying and derivatives, Phipps said; all programming without a license potentially infringes on copyright. “It may not pose a problem today, but without a perpetual copyright license collaborators are permanently at risk."

“You can pretend copyright doesn't exist all you want, but one day it will bite you,” Phipps continued. “That's why [if you want to start a new project] you need to apply an open source license.” If you want public domain, use the MIT license; it's very simple and protects you and your collaborators from these risks. If you really care, use a modern patent-protecting license like Apache, MPLv2, or GPLv3. Just make sure you get one.
Who Owns That Work-for-Hire Code? It Might Be You

You should know when you own the copyright and when your employer or consulting client does. If the code you wrote belongs to the boss, after all, it isn’t yours. And if it isn’t yours, you don’t have the right to assign the copyright to an open source project.

So let’s look, first, at the assumption that employment or freelance work is automatically work for hire. For example, that little project of yours that you've been working on during your off-hours at work? It's probably yours but... as Daniel A. Tysver, a partner at Beck & Tysver, wrote on BitLaw:
What if you're a freelance programmer and you're writing code under a "work for hire" contract? Does your client then own the copyright to the code you wrote – whether or not it’s part of an open source project as well? Well... actually maybe they do, maybe they don't. Tysver continued:
Is he saying that the work-for-hire contract you signed that didn't spell out who gets the copyright for the code means you may still hold the copyright? Well, yes, actually he is.

If you take a close look at U.S. Copyright law (PDF), you'll find that there are nine classes of work that are described as “work made for hire” (WMFH). None of them are programming code. So, as an author wrote on Law-Forums.org under the nom de plume of morcan, “Computer programs do not generally fall into any of the statutory categories for commissioned WMFH and therefore, simply calling it that still won't conform to the statute." He or she continued, "Therefore, you can certainly have a written WMFH agreement (for what it's worth) that expressly outlines the intent of the parties that you be the 'author and owner of the copyright' of the commissioned work, but you still need a (separate) transfer and assignment of all right, title and interest of the contractor's copyright of any and all portions of the works created under the project, which naturally arises from his or her being the author of the WMFH."

In other words, without a “transfer of copyright ownership” clause in your contract, you the programmer, not the company that gave you the contract, may still have the copyright.
Rich Santalesa, senior counsel at InformationLawGroup, agreed with morcan. “What tends to happen is that cautious (read: solid) software/copyright attorneys use a belt and suspenders approach, adding into the development agreement that it’s 'to the full extent applicable' a 'Work for Hire' — in the event, practically, that the IRS or some other taxing entity says 'no that person is an employee and not an independent contractor,’” said Santalesa. They also include a transfer and assignment provision that is effective immediately upon execution.

“Whenever and wherever possible we [copyright attorneys representing the contracting party for the work] attempt to apply a Work for Hire situation,” explained Santalesa. “So the writer/programmer is, for copyright purposes, never the 'legal author.' It can get tricky, and as always the specific facts matter, with the proof ultimately in the contractual pudding that comes out of the oven.”

What I take all this to mean is you should make darn sure that both you and the company that contracted you have a legal contract spelling out exactly what happens to the copyright of any custom code. Simply saying something is a work for hire doesn't cut the mustard.

Now, Add in Open Source Contributions

These same concerns also apply to open source projects. Most projects have some kind of copyright assignment agreements (CAAs) or copyright licensing agreements (CLAs) you must sign before the code you write is committed to the project. In CAAs, you assign your copyright to a company or organization; in CLAs you give the group a broad license to work with your code.

While some open source figures, such as Bradley Kuhn of the Software Freedom Conservancy, don't want either kind of copyright agreement in open source software, almost all projects have them. And they can often cause headaches. Take, for example, the recent copyright fuss in the GnuTLS project, a free software implementation of the SSL (Secure Socket Layer) protocol.
The project's founder, and one of its two main authors, Nikos Mavrogiannopoulos, announced in December 2012 that he was moving the project outside the infrastructure of the GNU project because of a major disagreement with the Free Software Foundation’s (FSF) decisions and practices. “I no longer consider GnuTLS a GNU project,” he wrote, “and future contributions are not required to be under the copyright of FSF.”

Richard M. Stallman, founder of GNU and the FSF, wasn't having any of that. In an e-mail entitled GNUTLS is not going anywhere, Stallman, a.k.a. RMS, replied, "You cannot take GNUTLS out of the GNU Project. You cannot designate a non-GNU program as a replacement for a GNU package. We will continue the development of GNUTLS."

You see, while you don't have to assign your copyright to the FSF when you create a GNU project, the FSF won't protect the project's IP under the GPL unless you do make that assignment. And, back when the project started, Mavrogiannopoulos had transferred the copyrights. In addition, no matter where you are in the world, as RMS noted, if you do elect this path, the copyright goes to the U.S. FSF, not to one of its sister organizations.

After many heated words, this particular conflict calmed down. Mavrogiannopoulos now wishes he had made a different decision. “I pretty much regret transferring all rights to FSF, but it seems there is nothing I can do to change that.” He can fork the code, but he can't take the project's name with him, since that's part of the copyright.

That may sound as though it’s getting far afield of The Least I Need to Know About Copyright as an Open Source Developer, but bear with me for a moment, because it raises several troubling issues. As Michael Kerrisk, an LWN.net author, put it: "The first of these problems has already been shown above: Who owns the project? The GnuTLS project was initiated in good faith by Nikos as a GNU project.
Over the lifetime of the project, the vast majority of the code contributed to the project has been written by two individuals, both of whom (presumably) now want to leave the GNU project. If the project had been independently developed, then clearly Nikos and Simon would be considered to own the project code and name. However, in assigning copyright to the FSF, they have given up the rights of owners.”

However, there's more. As Kerrisk pointed out, “The ability of the FSF—as the sole copyright holder—to sue license violators is touted as one of the major advantages of copyright assignment. However, what if, for one reason or another, the FSF chooses not to exercise its rights?" What advantage does the programmer get then from assigning his or her copyright?

Finally, Kerrisk added, there's a problem that occurs with assignment both to companies and to non-profits. “The requirement to sign a copyright assignment agreement imposes a barrier on participation. Some individuals and companies simply won't bother with doing the paperwork. Others may have no problem contributing code under a free software license, but they (or their lawyers) balk at giving away all rights in the code.”
    -(void)sampleMethod
    {
        UIButton* myButton = [UIButton buttonWithType:UIButtonTypeRoundedRect];

        [myButton setTitle:text forState:UIControlStateNormal];

        myButton.showsTouchWhenHighlighted = YES;
    }
If the property setter for showsTouchWhenHighligthed is not supported by the tool, it will generate the following placeholder for you to provide its implementation:
    APT.Button.prototype.setShowsTouchWhenHighligthed = function (arg1) {
        // ================================================================
        // REFERENCES TO THIS FUNCTION:
        //   line(108): SampleCode.m
        //   In scope: Test.sampleMethod
        //   Actual arguments types: [boolean]
        //   Expected return type: [unknown type]
        // ================================================================
        if (APT.Global.THROW_IF_NOT_IMPLEMENTED)
        {
            // TODO remove exception handling when implementing this method
            throw "Not implemented function: APT.Button.setShowsTouchWhenHighligthed";
        }
    };
These placeholders are created for methods, constants, and types that the tool does not support. Additionally, these placeholders may be generated for APIs other than the iOS* SDK APIs. If some files from the original application (containing class or function definitions) are not included in the translation process, the tool may also generate placeholders for the definitions in those missing files.
In each TODO file, you will find details about where those types, methods, or constants are used in the original code. Moreover, for each function or method, the TODO file includes information about the types of the arguments that were inferred by the tool. Using these TODO files, you can complete the translation process by filling in the placeholders with your own implementation of that API.
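To illustrate how such a placeholder might be completed, here is a hypothetical sketch. The APT.Button definition below is our own minimal stub standing in for the tool's runtime class, not the actual generated library:

```javascript
// Minimal stand-ins for the tool's runtime objects (assumptions, for
// illustration only).
var APT = {};
APT.Global = { THROW_IF_NOT_IMPLEMENTED: true };
APT.Button = function () {};

// Instead of throwing, record the flag on the translated button object so
// other translated code can read it back; a fuller port might also toggle
// a CSS highlight class on the button's DOM element.
APT.Button.prototype.setShowsTouchWhenHighligthed = function (arg1) {
  this.showsTouchWhenHighligthed = !!arg1;
};

APT.Button.prototype.getShowsTouchWhenHighligthed = function () {
  return this.showsTouchWhenHighligthed === true;
};
```

The point is simply that a completed placeholder replaces the generated throw statement with behavior that matches what the original Objective-C property did.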
The Intel® HTML5 App Porter Tool - BETA translates most of the definitions in the Xcode* Interface Builder files (i.e., XIB files) into equivalent HTML/CSS code. These HTML files use JQuery* markup to define layouts equivalent to the views in the original XIB files. That markup is defined based on the translated version of the view classes and can be accessed programmatically.
Moreover, most of the events that are linked with handlers in the original application code are also linked with their respective handlers in the translated version. All the view controller objects, connection logic between objects, and event handlers from all translated XIB files are included in the XibBoilerplateCode.js file. Only one XibBoilerplateCode.js file is created per application.
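As a rough illustration of what that generated connection logic amounts to, the boilerplate essentially instantiates objects and routes DOM events to translated action methods. The names below (wireUp, buttonTapped_) are our assumptions, not actual generated identifiers:

```javascript
// Sketch of XIB-style connection logic: route a DOM click event to the
// translated action method, passing the sender the way Objective-C action
// methods expect.
function wireUp(controller, buttonElement) {
  buttonElement.onclick = function () {
    controller.buttonTapped_(buttonElement);
  };
}
```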
The figure below shows how the different components of each XIB file are translated.
This is a summary of the artifacts generated from XIB files:

- An HTML file and a CSS file for each translated XIB file.
- Boilerplate code that instantiates the view and view controller objects, included in the XibBoilerplateCode.js file.
- Connection logic between objects and event handlers, also included in the XibBoilerplateCode.js file.

For further information on supported widgets and properties, refer to the Supported .XIB file features section.
The translated application keeps the very same high level structure as the original one. Constructs such as Objective-C* interfaces, categories, C structs, functions, variables, and statements are kept without significant changes in the translated code but expressed in JavaScript.
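For instance, a small Objective-C class with one instance variable and one method might plausibly come out as a JavaScript constructor plus a prototype method. This is our own sketch of the idea, not verified tool output, and the Counter class is invented for illustration:

```javascript
// Hypothetical translation: an Objective-C class "Counter" with an ivar
// "count" and a method "increment", re-expressed in JavaScript.
function Counter() {
  this.count = 0; // the ivar becomes an instance field
}

// -(void)increment; becomes a same-named prototype method.
Counter.prototype.increment = function () {
  this.count = this.count + 1;
};
```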
The execution of the Intel® HTML5 App Porter Tool – BETA produces a set of files that can be divided in four groups:

- Translated source files: for each Objective-C* source file (.m file) there should be a .js file with a matching name; these .js files are included in the translated application.
- Library files: a \lib folder that corresponds to some 3rd party libraries and the Intel® HTML5 App Porter Tool – BETA library, which implements most of the functionality that is not available in HTML5.
- Files translated from .xib files (if any): for each translated .xib file there should be .html and .css files with matching names. These files correspond to their HTML5 version.
- TODO files: placeholders for unsupported APIs. For example, placeholders for the NSData class would be located in a file named something like todo_api_js_apt_data.js or todo_js_nsdata.js.
The generated JavaScript files have names which are practically the same as the original ones. For example, if you have a file called AppDelegate.m in the original application, you will end up with a file called AppDelegate.js in the translated output. Likewise, the names of interfaces, functions, fields, and variables are not changed, unless the differences between Objective-C* and JavaScript require the tool to do so.
In short, the high level structure of the translated application is practically the same as the original one. Therefore, the design and structure of the original application will remain the same in the translated version.
The Intel® HTML5 App Porter Tool - BETA both translates the syntax and semantics of the source language (Objective-C*) into JavaScript and maps the iOS* SDK API calls into equivalent functionality in HTML5. In order to map iOS* API types and calls into HTML5, the tool uses a set of libraries and APIs distributed in the lib folder.

You should expect that future versions of the tool will incrementally add more support for API mapping, based on further statistical analysis and user feedback.
In Objective-C*, method names can be composed of several parts separated by colons (:), and method calls interleave these parts with the actual arguments. Since that syntactic construct is not available in JavaScript, such method names are translated by combining all of their parts and replacing the colons (:) with underscores (_). For example, a method called initWithColor:AndBackground: is translated to use the name initWithColor_AndBackground.
Identifier names, in general, may also be changed in the translation if there are conflicts in JavaScript scope; for example, if an interface and a protocol have the same name, or if an instance method and a class method in the same interface share a name. Because identifier scoping rules are different in JavaScript, you cannot share names between fields, methods, and interfaces. In any of those cases, the tool renames one of the clashing identifiers by prepending an underscore (_) to the original name.
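The selector-renaming rule described above can be sketched as a small helper; this function is only an illustration of the documented convention, not part of the tool:

```javascript
// Sketch of the selector-renaming rule: colon-separated Objective-C
// selector parts are joined with underscores, with no trailing
// underscore left over from the final colon.
function selectorToJsName(selector) {
    // "initWithColor:AndBackground:" -> "initWithColor_AndBackground"
    return selector.replace(/:/g, "_").replace(/_+$/, "");
}

// Selectors without colon parts pass through unchanged.
var simple = selectorToJsName("count");        // "count"
var multi = selectorToJsName("initWithColor:AndBackground:");
```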
Here is a list of recommendations to make the most of the tool.
In conclusion, having a well-designed application in the first place will make your life a lot easier when porting your code, even in a completely manual process.
This section provides additional information for developers; it is not required reading for using the Intel® HTML5 App Porter Tool - BETA effectively. You can skip it if you are not interested in the implementation details of the tool.
Here, you can find some high level details of how the different processing steps of the Intel® HTML5 App Porter Tool - BETA are implemented.
To parse Objective-C* files, the tool uses a modified version of the Clang parser. A custom version of the parser is needed because:
The following picture shows the actual detailed process for parsing .m and .c files:
Missing iOS* SDK headers are inferred as part of the parsing process. The header inference process is heuristic, so you may get parsing errors, in some cases. Thus, you can help the front-end of the tool by providing forward declaration of types or other definitions in header files that are accessible to the tool.
Also, you can try the "Header Generator" module on individual files by using the command line. In the binary folder of the tool, you will find an executable headergenerator.exe that runs that process.
The translation of the Objective-C* language into JavaScript involves a number of steps. We can divide the process into what happens in the front-end and what happens in the back-end.
Steps in the front-end:
The output of the front-end is a Zoe program. Zoe is an intermediate abstract language used by the LayerD framework, the engine that is used to apply most of the transformations.
The back-end is fully implemented in LayerD by using compile time classes of Zoe language that apply a number of transformations in the AST.
Steps in the back-end:
The tool supports a limited subset of the iOS* API. That subset was chosen based on statistical information about the usage of each API, and each release of the tool will include support for more APIs. If a particular API you need is missing, your feedback will be very valuable in our assessment of API support.
For some APIs, such as arrays and strings, the tool provides direct mappings onto native HTML5 objects and methods. The following table summarizes the approach followed for each kind of currently supported API.
Framework | Mapping design guideline
--------- | ------------------------
Foundation | Direct mapping to JavaScript when possible. If direct mapping is not possible, use a new class built over standard JavaScript.
Core Graphics | Direct mapping to Canvas and related HTML5 APIs when possible. If direct mapping is not possible, use a new class built over standard JavaScript.
UIKit Views | Provide a similar class in the APT package, such as APT.View for UIView, APT.Label for UILabel, etc. All views are implemented using jQuery Mobile markup and library. When there is no equivalent jQuery widget, a new one is built in the APT library.
UIKit Controllers and Delegates | Because HTML5 does not natively provide controller or delegate objects, the tool provides implementations of base classes for controllers and delegates in the APT package.
Direct mapping implies that the original code is transformed into plain JavaScript without any additional layer. For example,

    NSArray* anArray = [NSArray arrayWithObjects:@"One", @"Two", @"Three", nil];
    // Is translated to the JavaScript code:
    var anArray = ["One", "Two", "Three"];
The entire API mapping happens in the back-end of the tool. This process is implemented using compile time classes and other infrastructure provided by the LayerD framework.
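For the cases where direct mapping is not possible, here is a hypothetical illustration of the "new class built over standard JavaScript" approach from the table above. The names used (APT.MutableArray, addObject, count) are illustrative assumptions, not the tool's actual library API:

```javascript
// Illustrative assumption: a wrapper class backed by a plain JS array,
// of the kind a porting library might provide when no native HTML5
// equivalent exists. Not the tool's real APT library.
var APT = {};

APT.MutableArray = function () {
    this._items = [];                // backed by a plain JavaScript array
};

APT.MutableArray.prototype.addObject = function (obj) {
    this._items.push(obj);           // mirrors NSMutableArray's addObject:
};

APT.MutableArray.prototype.count = function () {
    return this._items.length;       // mirrors NSArray's count
};

var a = new APT.MutableArray();
a.addObject("One");
a.addObject("Two");
```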
XIB files are converted in two steps:
The first step generates one XML file - with extension .gld - for each view inside the XIB file, plus one additional XML file with information about other objects in the XIB file and the connections between objects and views, such as outlets and event handlers.
The second stage runs inside the Zoe compiler of LayerD and converts the intermediate XML files into the final HTML/CSS and JavaScript boilerplate code that duplicates the functionality the XIB files provide in the original project.
The generated HTML code is as similar as possible to the static markup used by the jQuery Mobile library or standard HTML5 markup. For widgets that have no jQuery Mobile or HTML5 equivalent, or that behave differently, simple markup is generated and handled by classes in the APT library.
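As a hedged sketch of what such generated boilerplate could look like for two simple widgets: the tag choices, the apt-label class name, and the specific attribute usage below are assumptions for illustration, not the tool's actual output.

```javascript
// Illustrative sketch only: possible generated markup for simple widgets.
function labelToHtml(id, text) {
    // a UILabel could become a plain <span> managed by a library class
    return '<span id="' + id + '" class="apt-label">' + text + '</span>';
}

function buttonToHtml(id, title) {
    // a UIButton could map onto jQuery Mobile's anchor-based button markup
    return '<a href="#" id="' + id + '" data-role="button">' + title + '</a>';
}
```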
The following table details the APIs supported by the current version of the tool.
Notes:
'Types' refers to Interfaces, Protocols, Structs, Typedefs, or Enums
'C global' means that the entry is not a type, but a supported global C function or constant
Colons in Objective-C names are replaced by underscores
Objective-C properties are detailed as a pair of getter/setter method names such as 'title' and 'setTitle'
Objective-C static members appear with a prefixed underscore like in '_dictionaryWithObjectsAndKeys'
Inherited members are not listed, but are supported. For example, NSArray supports the 'count' method. The method 'count' is not listed in NSMutableArray, but it is supported because it inherits from NSArray