Entries tagged as programming
Wednesday, October 30. 2013
15.10.13 - Two EPFL spin-offs, senseFly and Pix4D, have modeled the Matterhorn in 3D, at a level of detail never before achieved. It took senseFly’s ultralight drones just six hours to snap the high altitude photographs that were needed to build the model.
They weigh less than a kilo each, but they’re as agile as eagles in the high mountain air. These “eBee” flying robots, developed by senseFly, a spin-off of EPFL’s Intelligent Systems Laboratory (LIS), took off in September to photograph the Matterhorn from every conceivable angle. The drones are completely autonomous, requiring nothing more than a computer-generated flight plan before being launched by hand into the air to complete their mission.
Three of them were launched from a 3,000m “base camp,” and the fourth made the final assault from the summit of the iconic Swiss landmark, at 4,478m above sea level. In their six-hour flights, the flying machines took more than 2,000 high-resolution photographs. The only remaining task was for software developed by Pix4D, another EPFL spin-off, this one from the Computer Vision Lab (CVLab), to assemble them into an impressive 300-million-point 3D model. The model was presented last weekend to participants of the Drone and Aerial Robots Conference (DARC) in New York by Henri Seydoux, CEO of the French company Parrot, majority shareholder in senseFly.
All-terrain and even in swarms
Last week the dynamic Swiss company – which has just moved into new, larger quarters in Cheseaux-sur-Lausanne – also announced that it had made software improvements enabling drones to avoid colliding with each other in flight; now a swarm of drones can be launched simultaneously to undertake even more rapid and precise mapping missions.
Thursday, May 02. 2013
Via Slash Gear
We’ve been hearing a lot about Google’s self-driving car lately, and we’re all probably wondering how exactly the search giant is able to construct such a thing and have it drive itself without hitting anything or anyone. A new photo has surfaced that demonstrates what Google’s self-driving vehicles see while they’re out on the town, and it looks rather frightening.
The image was tweeted by Idealab founder Bill Gross, along with a claim that the self-driving car collects almost 1GB of data every second (yes, every second). This data includes imagery of the car’s surroundings in order to effectively and safely navigate roads. The image shows that the car sees its surroundings through an infrared-like camera sensor, and it can even pick out people walking on the sidewalk.
Of course, 1GB of data every second isn’t too surprising when you consider that the car has to get a 360-degree image of its surroundings at all times. The image we see above even distinguishes different objects by color and shape. For instance, pedestrians are in bright green, cars are shaped like boxes, and the road is in dark blue.
However, we’re not sure where this photo came from, so it could simply be a rendering of someone’s idea of what Google’s self-driving car sees. Either way, Google says that we could see self-driving cars make their way to public roads in the next five years or so, which actually isn’t that far off, and Tesla Motors CEO Elon Musk is even interested in developing self-driving cars as well. However, they certainly don’t come without their problems, and we’re guessing that the first batch of self-driving cars probably won’t be in 100% tip-top shape.
Tuesday, March 19. 2013
At SXSW this afternoon, Google provided developers with a first glance at the Google Glass Mirror API, the main interface between Google Glass, Google’s servers and the apps that developers will write for them. In addition, Google showed off a first round of applications that work on Glass, including how Gmail works on the device, as well as integrations from companies like the New York Times, Evernote, Path and others.
The Mirror API is essentially a REST API, which should make developing for it very easy for most developers. The Glass device essentially talks to Google’s servers and the developers’ applications then get the data from there and also push it to Glass through Google’s APIs. All of this data is then presented on Glass through what Google calls “timeline cards.” These cards can include text, images, rich HTML and video. Besides single cards, Google also lets developers use what it calls bundles, which are basically sets of cards that users can navigate using their voice or the touchpad on the side of Glass.
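Going by that description, a timeline-card payload pushed through the Mirror API might look something like the sketch below. The field names follow the Mirror API as Google later documented it (`text`, `html`, `speakableText`, `menuItems`, `bundleId`), so treat the details here as illustrative rather than definitive:

```javascript
// Sketch of a Mirror API timeline-card payload. Field names are based on
// Google's later Mirror API documentation; the values are made up.
function makeTimelineCard(headline, summary) {
  return {
    text: headline,                       // plain-text fallback for the card
    html: '<article><p>' + headline + '</p></article>', // rich HTML rendering
    speakableText: summary,               // what "read aloud" speaks
    menuItems: [
      { action: 'READ_ALOUD' },           // built-in text-to-speech menu item
      { action: 'SHARE' }                 // built-in sharing menu item
    ],
    bundleId: 'news-briefing'             // groups related cards into a bundle
  };
}

var card = makeTimelineCard(
  'Matterhorn mapped in 3D',
  'Drones photographed the Matterhorn to build a 300-million-point model.'
);
console.log(card.text); // "Matterhorn mapped in 3D"
```

A real app would POST this JSON to Google’s servers, which then push the card onto the user’s Glass timeline.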
It looks like sharing to Google+ is a built-in feature of the Mirror API, but as Google’s Timothy Jordan noted in today’s presentation, developers can always add their own sharing options, as well. Other built-in features seem to include voice recognition, access to the camera and a text-to-speech engine.
Because Glass is a new and unique form factor, Jordan also noted, Google is setting a few rules for Glass apps. They shouldn’t, for example, show full news stories but only headlines, as everything else would be too distracting. For longer stories, developers can always just use Glass to read text to users.
Essentially, developers should make sure that they don’t annoy users with too many notifications, and the data they send to Glass should always be relevant. Developers should also make sure that everything that happens on Glass should be something the user expects, said Jordan. Glass isn’t the kind of device, he said, where a push notification about an update to your app makes sense.
Using Glass With Gmail, Evernote, Path and Others
As part of today’s presentation, Jordan also detailed some Glass apps Google has been working on itself, and apps that some of its partners have created. The New York Times app, for example, shows headlines and then lets you listen to a summary of the article by telling Glass to “read aloud.” Google’s own Gmail app uses voice recognition to answer emails (and it obviously shows you incoming mail, as well). Evernote’s Skitch can be used to take and share photos, and Jordan also showed a demo of social network Path running on Glass to share your location.
So far, there is no additional information about the Mirror API on any of Google’s usual sites, but we expect the company to release more information shortly and will update this post once we hear more.
Wednesday, January 09. 2013
At Velocity 2011, Nicole Sullivan and I introduced CSS Lint, the first code-quality tool for CSS. We had spent the previous two weeks coding like crazy, trying to create an application that was both useful for end users and easy to modify. Neither of us had any experience launching an open-source project like this, and we learned a lot through the process.
After some initial missteps, the project finally hit a groove, and it now regularly gets compliments from people using and contributing to CSS Lint. It’s actually not that hard to create a successful open-source project when you stop to think about your goals.
What Are Your Goals?
These days, it seems that anybody who writes a piece of code ends up pushing it to GitHub with an open-source license and says, “I’ve open-sourced it.” Creating an open-source project isn’t just about making your code freely available to others. So, before announcing to the world that you have open-sourced something that hasn’t been used by anyone other than you in your spare time, stop to ask yourself what your goals are for the project.
The first goal is always to create something useful. For CSS Lint, our goal was to create an extensible tool for CSS code quality that could easily fit into a developer’s workflow, whether the workflow is automated or not. Make sure that what you’re offering is useful by looking for others who are doing similar projects and figuring out how large of a user base you’re looking at.
After that, decide why you are open-sourcing the project in the first place. Is it because you just want to share what you’ve done? Do you intend to continue developing the code, or is this just a snapshot that you’re putting out into the world? If you have no intention of continuing to develop the code, then the rest of this article doesn’t apply to you. Make sure that the readme file in your repository states this clearly so that anyone who finds your project isn’t confused.
If you are going to continue developing the code, do you want to accept contributions from others? If not, then, once again, this article doesn’t apply to you. If yes, then you have some work to do. Creating an open-source project that’s conducive to outside contributions is more work than it seems. You have to create environments in which those who are unfamiliar with the project can get up to speed and be productive reasonably quickly, and that takes some planning.
This article is about starting an open-source project with those goals in mind: continuing to develop the code and accepting contributions from others.
Choosing A License
Before you share your code, the most important decision to make is what license to apply. The open-source license that you choose could greatly help or hurt your chances of attracting contributors. All licenses allow you to retain the copyright of the code you produce. While the concept of licensing is quite complex, there are a few common licenses and a few basic things you should know about each. (If you are open-sourcing a project on behalf of a company, please speak to your legal counsel before choosing a license.)
There are many other open-source licenses, but these four (GPL, LGPL, MIT and BSD3) tend to be the most commonly discussed and used.
One thing to keep in mind is that Creative Commons licenses are not designed to be used with software. All of the Creative Commons licenses are intended for “creative work,” including audio, images, video and text. The Creative Commons organization itself recommends not using Creative Commons licenses for software and instead to use licenses that have been specifically formulated to cover software, as is the case with the four options discussed above.
So, which license should you choose? It largely depends on how you intend your code to be used. Because LGPL, MIT and BSD3 are all compatible with GPL, that’s not a major concern. If you want any modified versions of your code to be used only in open-source software, then GPL is the way to go. If your code is designed to be a standalone component that may be included in other applications without modification, then you might want to consider LGPL. Otherwise, the MIT or BSD3 licenses are popular choices. Individuals tend to favor the MIT license, while businesses tend to favor BSD3 to ensure that their company name can’t be used without permission.
To help you decide, look at how popular open-source projects similar to yours are licensed.
Another option is to release your code into the public domain. Public-domain code has no copyright owner and may be used in absolutely any way. If you have no intention of maintaining control over your code or you just want to share your code with the world and not continue to develop it, then consider releasing it into the public domain.
To learn more about licenses, their associated legal issues and how licensing actually works, please read David Bushell’s “Understanding Copyright and Licenses.”
After deciding how to license your open-source project, it’s almost time to push your code out into the world. But before doing that, look at how the code is organized. Not all code invites contributions. If a potential contributor can’t figure out how to read through the code, then it’s highly unlikely any contribution will emerge. The way you lay out the code, in terms of file and directory structure as well as coding style, is a key aspect to consider before sharing it publicly. Don’t just throw out whatever code you have been writing into the wild; spend some time figuring out how others will view your code and what questions they might have.
For CSS Lint, we decided on a basic top-level directory structure that keeps the source code, external dependencies and tests in clearly separated directories.
One of the biggest complaints about open-source projects is the lack of documentation. Documentation isn’t as fun or exciting as writing executable code, but it is critical to the success of an open-source project. The best way to discourage use of and contributions to your software is to have no documentation. This was an early mistake we made with CSS Lint. When the project launched, we had no documentation, and everyone was confused about how to use it. Don’t make the same mistake: get some documentation ready before pushing the project live.
The documentation should be easy to update and shouldn’t require a code push, because it will need to be changed very quickly in response to user feedback. This means that the best place for documentation isn’t in the same repository as the code. If your code is hosted on GitHub, then make use of the built-in wiki for each project. All of the CSS Lint documentation is in a GitHub wiki. If your code is hosted elsewhere, consider setting up your own wiki or a similar system that enables you to update the documentation in real time. A good documentation system must be easy to update, or else you will never update it.
Whether you’re creating a command-line program, an application framework, a utility library or anything else, keep the end user in mind. The end user is not the person who will be modifying the code; it’s the one who will be using the code. People were initially confused about CSS Lint’s purpose and how to use it effectively because of the lack of documentation. Your project will never gain contributors without first gaining end users. Satisfied end users are the ones who will end up becoming contributors because they see the value in what you’ve created.
Even if you’ve laid out the code in a logical manner and have decent end-user documentation, contributions are not guaranteed to start flowing. You’ll need a developer guide to help get contributors up and running as quickly as possible. A good developer guide covers how to get and build the source code, how the code is organized, how to run the tests and what the contribution guidelines are.
I spent a lot of time refining the “CSS Lint Developer Guide” based on conversations I had had with contributors and questions that others would ask. As with all documentation, the developer guide should be a living document that continues to grow as the project grows.
Use A Mailing List
All good open-source projects give people a place to go to ask questions, and the easiest way to achieve that is by setting up a mailing list. When we first launched CSS Lint, Nicole and I were inundated with questions. The problem is that those questions were coming in through many different channels. People were asking questions on Twitter as well as emailing each of us directly. That’s exactly the situation you don’t want.
Setting up a mailing list with Yahoo Groups or Google Groups is easy and free. Make sure to do that before announcing the project’s availability, and actively encourage people to use the mailing list to ask questions. Link to the mailing list prominently on the website (if you have one) or in the documentation.
The other important part of running a mailing list is to actively monitor it. Nothing is more frustrating for end users or contributors than being ignored. If you’ve set up a mailing list, take the time to monitor it and respond to people who ask questions. This is the best way to foster a community of developers around the project. Getting a decent amount of traffic onto the mailing list can take some time, but it’s worth the effort. Offer advice to people who want to contribute; suggest to people to file tickets when appropriate (don’t let the mailing list turn into a bug tracker!); and use the feedback you get from the mailing list to improve documentation.
Use Version Numbers
One common mistake made with open-source projects is neglecting to use version numbers. Version numbers are incredibly important for the long-term stability and maintenance of your project. CSS Lint didn’t use version numbers when it was first released, and I quickly realized the mistake. When bugs came in, I had no idea whether people were using the most recent version because there was no way for them to tell when the code was released. Bugs were being reported that had already been fixed, but there was no way for the end user to figure that out.
Stamping each release with an official version number puts a stake in the ground. When somebody files a bug, you can ask what version they are using and then check whether that bug has already been fixed. This greatly reduced the amount of time I spent with bug reports because I was able to quickly determine whether someone was using the most recent version.
Unless your project has been previously used and vetted, start the version number at 0.1.0 and go up incrementally with each release. With CSS Lint, we increased the second number for planned releases; so, 0.2.0 was the second planned release, 0.3.0 was the third and so on. If we needed to release a version in between planned releases in order to fix bugs, then we increased the third number. So, the second unplanned release after 0.2.0 was 0.2.2. Don’t get me wrong: there are no set rules on how to increase version numbers in a project, though there are a couple of resources worth looking at: Apache APR Versioning and Semantic Versioning. Just pick something and stick with it.
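The numbering scheme described above can be sketched as a small helper function. This is a hypothetical illustration of the convention, not actual CSS Lint code:

```javascript
// Sketch of the versioning convention described above: planned releases
// bump the second number, unplanned bug-fix releases bump the third.
function bumpVersion(version, planned) {
  var parts = version.split('.').map(Number);
  if (planned) {
    parts[1] += 1;  // planned release: 0.2.0 -> 0.3.0
    parts[2] = 0;   // reset the bug-fix counter
  } else {
    parts[2] += 1;  // unplanned bug-fix release: 0.2.1 -> 0.2.2
  }
  return parts.join('.');
}

console.log(bumpVersion('0.2.0', true));  // "0.3.0"
console.log(bumpVersion('0.2.1', false)); // "0.2.2"
```

Whatever rules you adopt, encoding them in the build this way keeps releases consistent.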
In addition to helping with tracking, version numbers do a number of other great things for your project.
Tag Versions in Source Control
When you decide on a new release, use a source-control tag to mark the state of the code for that release. I started doing this for CSS Lint as soon as we started using version numbers. I didn’t think much of it until the first time I forgot to tag a release and a bug was filed by someone looking for that tag. It turns out that developers like to check out particular versions of code.
Tie the tag obviously to the version number by including the version number directly in the tag’s name. With CSS Lint, our tags are in the format of “v0.9.9.” This will make it very easy for anyone looking through tags to figure out what those tags mean — including you, because you’ll be able to better keep track of what changes have been made in each release.
Another benefit of versioning is in producing change logs. Change logs are important for communicating version differences to both end users and contributors. The added benefit of tagging versions and source control is that you can automatically generate change logs based on those tags. CSS Lint’s build system automatically creates a change log for each release that includes not just the commit message but also the contributor. In this way, the change log becomes a record not only of code changes, but also of contributions from the community.
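A change-log generator of this kind might look like the sketch below. The commit objects are hypothetical stand-ins for what a build script would read from the commit range between two tags in source control:

```javascript
// Sketch of generating a change log from the commits between two tags.
// A real build script would obtain these commit records from source
// control (e.g. the range between the previous tag and the new one).
function buildChangeLog(version, commits) {
  var lines = commits.map(function (c) {
    // record the contributor alongside the message so the change log
    // doubles as a record of community contributions
    return '* ' + c.message + ' (' + c.author + ')';
  });
  return 'Version ' + version + '\n' + lines.join('\n');
}

var log = buildChangeLog('0.9.9', [
  { message: 'Fixed parsing of @media rules', author: 'Jane Doe' },
  { message: 'Added a new rule for empty selectors', author: 'John Roe' }
]);
console.log(log);
```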
Whenever a new version of the project is available, announce its availability somewhere. Whether you do this on a blog or on the mailing list (or in both places), formally announcing that a new version is available is very important. The announcement should include any major changes made in the code, as well as who has contributed those changes. Contributors tend to contribute more if they get some recognition for their work, so reward the people who have taken the time to contribute to your project by acknowledging their contribution.
Once you have everything set up, the next step is to figure out how you will accept contributions. Your contribution model can be as informal or formal as you’d like, depending on your goals. For a personal project, you might not have any formal contribution process. The developer guide would lay out what is necessary in order for the code to be merged into the repository and would state that as long as a submission follows that format, then the contribution will be accepted. For a larger project, you might want to have a more formal policy.
The first thing to look into is whether you will require a contributor license agreement (CLA). CLAs are used in many large open-source projects to protect the legal rights of the project. Every person who submits code to the project would need to fill out a CLA certifying that any contributed code is original work and that the copyright for that work is being turned over to the project. CLAs also give the project owner license to use the contribution as part of the project, and it warrants that the contributor isn’t knowingly including code to which someone else has a copyright, patent or other right. jQuery, YUI and Dojo all require CLAs for code submissions. If you are considering requiring a CLA from contributors, then getting legal advice regarding it and your licensing structure would be worthwhile.
Next, you may want to establish a hierarchy of people working on the project. Open-source projects typically have three primary designations, ranging from occasional contributors up to committers with direct access to the repository.
If you’re going to have a formal hierarchy such as this, you’ll need to draft a document that describes the role of each type of contributor and how one may be promoted through the ranks. YUI has just created a formal “Contributor Model,” along with excellent documentation on roles and responsibilities.
At the moment, CSS Lint doesn’t require a CLA and doesn’t have a formal contribution model in place, but both are worth considering as an open-source project grows.
It probably took us about six months from its initial release to get CSS Lint to the point of being a fully functioning open-source project. Since then, over a dozen contributors have submitted code that is now included in the project. While that number might seem small by the standard of a large open-source project, we take great pride in it. Getting one external contribution is easy; getting contributions over an extended period of time is difficult.
And we know that we’ve been doing something right because of the feedback we receive. Jonathan Klein recently took to the mailing list to ask some questions and ended up submitting a pull request that was accepted into the project. He emailed me afterwards to say how easy it had been to get involved and contribute.
Getting emails like this has become commonplace for CSS Lint, and it can become the norm for your project, too, if you take the time to set up a sustainable ecosystem. We all want our projects to succeed, and we want people to want to contribute to them. As Jonathan said, make the barrier to entry as low as possible and developers will find ways to help out.
Monday, December 10. 2012
The Museum of Modern Art in New York (MoMA) last week announced that it is bolstering its collection of work with 14 videogames, and plans to acquire a further 26 over the next few years. And that’s just for starters. The games will join the likes of Hector Guimard’s Paris Metro entrances, the Rubik’s Cube, M&Ms and Apple’s first iPod in the museum’s Architecture & Design department.
The move recognises the design achievements behind each creation, of course, but despite MoMA’s savvy curatorial decision, the institution risks becoming a catalyst for yet another wave of awkward ‘are games art?’ blog posts. And it doesn’t exactly go out of its way to avoid that particular quagmire in the official announcement.
“Are video games art? They sure are,” it begins, worryingly, before switching to a more considered tack, “but they are also design, and a design approach is what we chose for this new foray into this universe. The games are selected as outstanding examples of interaction design — a field that MoMA has already explored and collected extensively, and one of the most important and oft-discussed expressions of contemporary design creativity.”
MoMA worked with scholars, digital conservation and legal experts, historians and critics to come up with its criteria and final list of games, and among the yardsticks the museum looked at for inclusion are the visual quality and aesthetic experience of each game, the ways in which the game manipulates or stimulates player behaviour, and even the elegance of its code.
That initial list of 14 games makes for convincing reading, too: Pac-Man, Tetris, Another World, Myst, SimCity 2000, Vib-Ribbon, The Sims, Katamari Damacy, Eve Online, Dwarf Fortress, Portal, flOw, Passage and Canabalt.
But the wishlist also extends to Spacewar!, a selection of Magnavox Odyssey games, Pong, Snake, Space Invaders, Asteroids, Zork, Tempest, Donkey Kong, Yars’ Revenge, M.U.L.E, Core War, Marble Madness, Super Mario Bros, The Legend Of Zelda, NetHack, Street Fighter II, Chrono Trigger, Super Mario 64, Grim Fandango, Animal Crossing, and, of course, Minecraft.
Art, design or otherwise, MoMA’s focused collection is an uncommonly informed and well-considered list. And their inclusion within MoMA’s hallowed walls, and the recognition of their cultural and historical relevance that is implied, is certainly a boon for videogames on the whole. But reactions to the move have been mixed. The Guardian’s Jonathan Jones posted a blog last week titled Sorry MoMA, Videogames Are Not Art, in which he suggests that exhibiting Pac-Man and Tetris alongside work by Picasso and Van Gogh will mean “game over for any real understanding of art”.
“The worlds created by electronic games are more like playgrounds where experience is created by the interaction between a player and a programme,” he writes. “The player cannot claim to impose a personal vision of life on the game, while the creator of the game has ceded that responsibility. No one ‘owns’ the game, so there is no artist, and therefore no work of art.”
While he clearly misunderstands the capacity of a game to manifest the personal – and singular – vision of its creator, he nonetheless raises valid fears that the creative motivations behind many videogames – predominantly commercially driven entertainment – are incompatible with those of serious art, and that their inclusion in established museums risks muddying art’s definition. But while many commentators have fallen into the same trap of invoking comparisons with cubist and impressionist painters, MoMA has drawn no such parallels.
“We have to keep in mind it’s the design collection that is snapping up video games,” Passage creator Jason Rohrer tells us when we put the question to him. “This is the same collection that houses Lego, teapots and barstools. I’m happy with that, because I primarily think of myself as a designer. But sadly, even the mightiest games in this acquisition look silly when stood up next to serious works of art. I mean, what’s the artistic payload of Passage? ‘You’re gonna die someday.’ You can’t find a sentiment that’s more artistically worn out than that.”
But while he doesn’t see these games’ inclusion as a significant landmark – in fact, he even raises concerns over bandwagon-hopping – he’s still elated to have been included.
“I’m shocked to see my little game standing there next to landmarks like Pac-Man, Tetris, Another World, and… all of them really, all the way up to Canabalt,” he says. “The most pleasing aspect of it, for me, is that something I have made will be preserved and maintained into the future, after I croak. The ephemeral nature of digital-download video games has always worried me. Heck, the Mac version of Passage has already been broken by Apple’s updates, and it’s only been five years!”
Talking of Canabalt, creator Adam Saltsman echoes Rohrer’s sentiment: “Obviously it is a pretty huge honour, but I think it’s also important to note that these selections are part of the design wing of the museum, so Tetris won’t exactly be right next to Dali or Picasso! That doesn’t really diminish the excitement for me though. The MoMA is an incredible institution, and to have my work selected for archival alongside obvious masterpieces like Tetris is pretty overwhelming.”
MoMA’s not the only art institution with an interest in videogames, of course. The Smithsonian American Art Museum ran an exhibition titled The Art of Video Games earlier this year, while the Barbican has put its weight behind all manner of events, including 2002’s The History, Culture and Future of Computer Games, Ear Candy: Video Game Music, and the touring Game On exhibition.
Chris Melissinos, who was one of the guest curators who put the Smithsonian exhibition together and subsequently acted as an adviser to MoMA as it selected its list, doesn’t think such interest is damaging to art, or indeed a sign of out-of-step institutions jumping on the bandwagon. It’s simply, he believes, a reaction to today’s culture.
“This decision indicates that videogames have become an important cultural, artistic form of expression in society,” he told the Independent. “It could become one of the most important forms of artistic expression. People who apply themselves to the craft view themselves as [artists], because they absolutely are. This is an amalgam of many traditional forms of art.”
Of the initial selection, Eve is arguably the most ambitious, and potentially divisive, selection, but perhaps also the best placed to challenge Jones’ predispositions on experiential ownership and creative limitation. It is, after all, renowned for its vociferous, self-governing player community.
“Eve’s been around for close to a decade, is still growing, and through its lifetime has won several awards and achievements, but being acquired into the permanent collection of a world leading contemporary art and design museum is a tremendous honour for us,” Eve Online creative director Torfi Frans Ólafsson tells us. “Eve is born out of a strong ideology of player empowerment and sandbox openness, which especially in our earlier days was often at the cost of accessibility and mainstream appeal.
“Sitting up there along with industrial design like the original iPod, and fancy, unergonomic lemon presses tells us that we were right to stand by our convictions, so in that sense, it’s somewhat of a vindication of our efforts.”
But how do you present an entire universe to an audience that is likely to spend a few short minutes looking at each exhibit? Developer CCP is turning to its many players for help.
“We’ve decided to capture a single day of Eve: Sunday the 9th of December,” explains Ólafsson, “through a variety of player-made videos, CCP videos, massive data analysis and infographics.”
In presenting Eve in this way, CCP and the game’s players are collaborating on a strong, coherent vision of the alternative reality they’ve collectively helped to build and, more importantly, reinforcing and redefining the notion of authorship. It doesn’t matter whether you’re an apologist for videogames’ entitlement to the status of art, or someone who appreciates the aesthetics of their design; the important thing here is that their cultural importance is recognised. Sure, the notion of a game exhibit that doesn’t include gameplay might stick in the craw of some, but MoMA’s interest is clearly broader. Ólafsson isn’t too worried, either.
“Even if we don’t fully succeed in making the 3.5 million people that visit the MoMA every year visually grok the entire universe in those few minutes they might spend checking Eve out, I can promise you it sure will look pretty there on the wall.”
Monday, November 26. 2012
Via Addy Osmani
In this post, I will review some of the features I'm personally looking forward to landing and being used in 2013 and beyond.
ES.next implementation status
Be sure to look at Juriy Zaytsev's ECMAScript 6 compatibility table, Mozilla's ES6 status page, as well as the bleeding-edge versions of modern browsers (e.g. Chrome Canary, Firefox Aurora) to find out what ES.next features are available to play with right now.
Alternatively, many ES.next features can be experimented with using Google's Traceur transpiler (useful unit tests with examples here) and there are shims available for other features via projects such as ES6-Shim and Harmony Collections.
Finally, in Node.js (V8), a number of these features can be enabled by starting node with the `--harmony` flag.
We're used to separating our code into manageable blocks of functionality. In ES.next, a module is a unit of code contained within its own file or a `module` declaration.
A module instance is a module which has been evaluated, is linked to other modules or has lexically encapsulated data; in other words, it is what the loader hands back once a module's source has been fetched and evaluated.
Modules make names public with the `export` keyword, and we can then selectively choose what we wish to `import` elsewhere: a single exported name, several names at once, or the module as a whole.
Earlier we mentioned the concept of a Module Loader API. The module loader allows us to dynamically load scripts for consumption at runtime. While basic usage is fairly trivial, the Loader API is there to provide a way to load modules in controlled contexts, and it actually supports a number of different configuration options.
What about classes?
Classes in ES.next are there to provide a declarative surface for the semantics we're already used to (e.g. functions and prototypes) so that developer intent is expressed instead of the underlying imperative mechanics.
Today's de-sugared approach ignores those semantic improvements and instead emphasises our reliance on function variants: a constructor function with methods hung off its prototype.
All the ES.next version does is make the code easier to read; under the hood it is the same prototypal machinery.
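The original listings didn't survive extraction, so here is a minimal sketch of the two forms with an illustrative `Widget` (the names are mine, not the post's):

```javascript
// ES.next class syntax: declarative surface over the same semantics.
class Widget {
  constructor(element) {
    this.element = element;
  }
  render() {
    return '<div>' + this.element + '</div>';
  }
}

// Today's de-sugared equivalent: a constructor function plus methods
// attached to its prototype by hand.
function WidgetLegacy(element) {
  this.element = element;
}
WidgetLegacy.prototype.render = function () {
  return '<div>' + this.element + '</div>';
};

console.log(new Widget('a').render());       // '<div>a</div>'
console.log(new WidgetLegacy('a').render()); // '<div>a</div>'
```

Both versions produce the same prototype chain; only the surface syntax differs.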
Where do these modules fit in with AMD?
If anything, the landscape for modularization and loading of code on the front-end has seen a wealth of hacks, abuse and experimentation, but we've been able to get by so far.
Are ES.next modules a step in the right direction? Perhaps. My own take on them is that reading their specs is one thing and actually using them is another. Playing with the newer module syntax in Harmonizr, Require HM and Traceur, you actually get used to the syntax and semantics very quickly; it feels like using a cleaner module pattern, but with access to a native loader API for any dynamic module loading required at runtime. That said, the syntax might feel a little too much like Python for some people's tastes (e.g. the `import x from "y"` form).
I'm part of the camp that believes that if there's functionality developers are using broadly enough (e.g. better modules), the platform (i.e. the browser) should be trying to offer some of it natively, and I'm not alone in feeling this way. James Burke, who was instrumental in bringing us AMD and RequireJS, has previously expressed much the same view.
James has however questioned whether ES.next modules are a sufficient solution. He covered some more of his thoughts on ES.next modules back in June in ES6 Modules: Suggestions for improvement and later in Why not AMD? for anyone interested in reading more about how these modules fit in with RequireJS and AMD.
Isaac Schlueter has also previously written up thoughts on where ES6 modules fall short that are worth noting. Try them out yourself using some of the options below and see what you think.
Use it today
The idea behind `Object.observe` is that we gain the ability to observe plain JavaScript objects and be asynchronously notified, via change records, when their properties are added, updated or deleted.
This is a fundamentally important addition to JS as it could both offer performance improvements over a framework's custom implementations and allow easier observation of plain native objects.
Availability: Object.observe will be available in Chrome Canary behind the "Enable Experimental JS APIs" flag. If you don't feel like getting that set up, you can also check out this video by Rafael Weinstein discussing the proposal.
Use it today
Default Parameter Values
Default parameter values allow us to initialise parameters that were not explicitly supplied, meaning we no longer have to write boilerplate like `x = x || defaultValue`. The syntax allows an (optional) initialiser after each parameter name, and only trailing parameters may have default values.
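A small sketch of the two styles (the `wait` parameter and its 200ms default are illustrative):

```javascript
// ES.next: the default applies only when the argument is omitted
// (or explicitly undefined).
function debounceDelay(wait = 200) {
  return wait;
}

console.log(debounceDelay());   // 200
console.log(debounceDelay(50)); // 50

// Pre-ES.next, the fallback had to be written by hand:
function debounceDelayOld(wait) {
  wait = (wait === undefined) ? 200 : wait;
  return wait;
}

console.log(debounceDelayOld()); // 200
```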
Block scoping introduces new declaration forms for defining variables scoped to a single block. This includes `let`, a block-scoped sibling of `var`, and `const`, for block-scoped bindings that cannot be reassigned.
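A quick sketch of the difference between function scope and block scope (the function names are illustrative):

```javascript
// `var` is function-scoped, so the declaration leaks out of the block.
function varExample() {
  if (true) {
    var x = 1;
  }
  return x; // still visible here
}

// `let` is scoped to the enclosing block: the inner `y` shadows the
// outer one instead of overwriting it.
function letExample() {
  let y = 'outer';
  if (true) {
    let y = 'inner';
  }
  return y;
}

console.log(varExample()); // 1
console.log(letExample()); // 'outer'
```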
Maps and sets
With Maps, we can associate a value with a unique key of any type, including an object, and retrieve it later; with Sets, we can store an unordered collection of unique values.
Use it today
One possible use for sets is reducing the complexity of filtering operations. Tracking already-seen items in a Set results in O(n) for filtering uniques in an array, whereas almost all other array-unique methods that work with objects are O(n^2) (credit goes to Brandon Benvie for this suggestion).
Availability: Firefox 18, Chrome 24+
Use it today
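A sketch of both structures, including the linear-time unique filter described above (the variable names are illustrative):

```javascript
// Map: any value, including an object, can serve as a key.
const el = { id: 'header' };
const cache = new Map();
cache.set(el, 'rendered');
console.log(cache.get(el)); // 'rendered'

// Set: tracking seen items makes de-duplication a single O(n) pass
// instead of nested O(n^2) comparisons.
function unique(array) {
  const seen = new Set();
  return array.filter(function (item) {
    if (seen.has(item)) return false;
    seen.add(item);
    return true;
  });
}

console.log(unique([1, 2, 2, 3, 1])); // [1, 2, 3]
```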
The Proxy API will allow us to create objects whose properties may be computed at run-time dynamically. It will also support hooking into other objects for tasks such as logging or auditing.
Also check out Zakas' Stack implementation using ES6 proxies experiment.
Availability: FF18, Chrome 24
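As a sketch of the auditing use case, using the direct-proxy form that eventually shipped (the target object and trap body are illustrative):

```javascript
// A logging proxy: every property read on `target` is recorded by the
// `get` trap before being forwarded unchanged.
const target = { name: 'widget' };
const reads = [];

const audited = new Proxy(target, {
  get: function (obj, prop) {
    reads.push(prop);
    return obj[prop];
  }
});

console.log(audited.name); // 'widget'
console.log(reads);        // ['name']
```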
WeakMaps help developers avoid memory leaks by holding references to their properties weakly, meaning that if a WeakMap is the only object with a reference to another object, the garbage collector may collect the referenced object. This behavior differs from all variable references in ES5.
A key property of WeakMaps is the inability to enumerate their keys.
So again, the main difference between WeakMaps and Maps is that WeakMaps are not enumerable.
Use it today
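A common illustrative use is storing per-instance private data, keyed weakly on the instance itself, so the data can be collected once the instance is gone:

```javascript
// The WeakMap holds its keys weakly and exposes no `size` or
// iteration methods: it cannot be enumerated.
const privateData = new WeakMap();

function Counter() {
  privateData.set(this, { count: 0 });
}
Counter.prototype.increment = function () {
  const data = privateData.get(this);
  data.count += 1;
  return data.count;
};

const c = new Counter();
console.log(c.increment());      // 1
console.log(c.increment());      // 2
console.log(privateData.size);   // undefined: WeakMaps have no size
```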
Introduces a function for comparison called `Object.is`. It behaves like `===`, except that it considers `NaN` equal to itself and distinguishes `+0` from `-0`.
Availability: Chrome 24+
Use it today
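Assuming the function in question is `Object.is`, its behaviour differs from `===` in exactly two cases:

```javascript
console.log(Object.is(NaN, NaN)); // true  (NaN === NaN is false)
console.log(Object.is(0, -0));    // false (0 === -0 is true)
console.log(Object.is('a', 'a')); // true, same as ===
```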
`Array.from` converts any array-like object (anything with indexed elements and a `length`, such as `arguments` or a DOM `NodeList`) into a true array. Common DOM use cases include turning the result of `querySelectorAll` into an array so the full set of array methods becomes available.
Use it today
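Assuming the method in question is `Array.from`, here is a sketch using `arguments` and a NodeList-like stand-in (there is no DOM available in this snippet):

```javascript
// `arguments` is array-like but has no array methods; Array.from
// gives us a real array we can reduce over.
function sum() {
  return Array.from(arguments).reduce(function (a, b) { return a + b; }, 0);
}

console.log(sum(1, 2, 3)); // 6

// A NodeList-like object, standing in for document.querySelectorAll(...):
const fakeNodeList = { 0: 'a', 1: 'b', length: 2 };
console.log(Array.from(fakeNodeList)); // ['a', 'b']
```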
In the meantime, we can use (some) modern browsers, transpilers, shims and in some cases custom builds to experiment with features before ES6 is fully here.
Exciting times are most definitely ahead.
Thursday, November 08. 2012
Wednesday, June 20. 2012
We've been writing a lot recently about how the private space industry is poised to make space cheaper and more accessible. But in general, this is for outfits such as NASA, not people like you and me.
Today, a company called NanoSatisfi is launching a Kickstarter project to send an Arduino-powered satellite into space, and you can send an experiment along with it.
Whether it's private industry or NASA or the ESA or anyone else, sending stuff into space is expensive. It also tends to take approximately forever to go from having an idea to getting funding to designing the hardware to building it to actually launching something. NanoSatisfi, a tech startup based out of NASA's Ames Research Center here in Silicon Valley, is trying to change all of that (all of it) by designing a satellite made almost entirely of off-the-shelf (or slightly modified) hobby-grade hardware, launching it quickly, and then using Kickstarter to give you a way to get directly involved.
ArduSat is based on the CubeSat platform, a standardized satellite framework that measures about four inches on a side and weighs under three pounds. It's just about as small and cheap as you can get when it comes to launching something into orbit, and while it seems like a very small package, NanoSatisfi is going to cram as much science into that little cube as it possibly can.
Here's the plan: ArduSat, as its name implies, will run on Arduino boards, which are open-source microcontrollers that have become wildly popular with hobbyists. They're inexpensive, reliable, and packed with features. ArduSat will be packing between five and ten individual Arduino boards, but more on that later. Along with the boards, there will be sensors. Lots of sensors, probably 25 (or more), all compatible with the Arduinos and all very tiny and inexpensive. Here's a sampling:
Yeah, so that's a lot of potential for science, but the entire Arduino sensor suite is only going to cost about $1,500. The rest of the satellite (the power system, control system, communications system, solar panels, antennae, etc.) will run about $50,000, with the launch itself costing about $35,000. This is where you come in.
NanoSatisfi is looking for Kickstarter funding to pay for just the launch of the satellite itself: the funding goal is $35,000. Thanks to some outside investment, it's able to cover the rest of the cost itself. And in return for your help, NanoSatisfi is offering you a chance to use ArduSat for your own experiments in space, which has to be one of the coolest Kickstarter rewards ever.
Now, NanoSatisfi itself doesn't really expect to get involved with a lot of the actual experiments that the ArduSat does: rather, it's saying "here's this hardware platform we've got up in space, it's got all these sensors, go do cool stuff." And if the stuff that you can do with the existing sensor package isn't cool enough for you, backers of the project will be able to suggest new sensors and new configurations, even for the very first generation ArduSat.
To make sure you don't brick the satellite with buggy code, NanoSatisfi will have a duplicate satellite in a space-like environment here on Earth that it'll use to test out your experiment first. If everything checks out, your code gets uploaded to the satellite, runs in whatever timeslot you've picked, and then the results get sent back to you after your experiment is completed. Basically, you're renting time and hardware on this satellite up in space, and you can do (almost) whatever you want with that.
ArduSat has a lifetime of anywhere from six months to two years. None of this payload stuff (neither the sensors nor the Arduinos) are specifically space-rated or radiation-hardened or anything like that, and some of them will be exposed directly to space. There will be some backups and redundancy, but partly, this will be a learning experience to see what works and what doesn't. The next generation of ArduSat will take all of this knowledge and put it to good use making a more capable and more reliable satellite.
This, really, is part of the appeal of ArduSat: with a fast, efficient, and (relatively) inexpensive crowd-sourced model, there's a huge potential for improvement and growth. For example, if this Kickstarter goes bananas and NanoSatisfi runs out of room for people to get involved on ArduSat, no problem: it can just build and launch another ArduSat along with the first, jammed full of (say) fifty more Arduinos so that fifty more experiments can be run at the same time. Or it can launch five more ArduSats. Or ten more. From the decision to start developing a new ArduSat to the actual launch of that ArduSat is a period of just a few months. If enough of them get up there at the same time, there's potential for networking multiple ArduSats together up in space and even creating a cheap and accessible global satellite array.
Longer term, there's also potential for making larger ArduSats with more complex and specialized instrumentation. Take ArduSat's camera: being a little tiny satellite, it only has a little tiny camera, meaning that you won't get much more detail than a few kilometers per pixel. In the future, though, NanoSatisfi hopes to boost that to 50 meters (or better) per pixel using a double or triple-sized satellite that it'll call OptiSat. OptiSat will just have a giant camera or two, and in addition to taking high resolution pictures of Earth, it'll also be able to be turned around to take pictures of other stuff out in space. It's not going to be the next Hubble, but remember, it'll be under your control.
Assuming the Kickstarter campaign goes well, NanoSatisfi hopes to complete construction and integration of ArduSat by about the end of the year, and launch it during the first half of 2013. If you don't manage to get in on the Kickstarter, don't worry: NanoSatisfi hopes that there will be many more ArduSats with many more opportunities for people to participate in the idea. Having said that, you should totally get involved right now: there's no cheaper or better way to start doing a little bit of space exploration of your very own.
Check out the ArduSat Kickstarter video below, and head on through the link to reserve your spot on the satellite.
Wednesday, April 18. 2012
Via Christian Babski
Here (@American Scientist) is an interesting and rather complete article on programming language [evolution, war], with some infographics on programming history and methods. It is not that technical, making the reading accessible to anyone.
Tuesday, April 17. 2012
Develop looks at the platform with ambitions to challenge Adobe's ubiquitous Flash web player
Initially heralded as the future of browser gaming and the next step beyond the monopolised world of Flash, HTML5 has since faced criticism for being tough to code with and possessing a string of broken features.
The coding platform, the fifth iteration of the HTML standard, was supposed to be a one-stop shop for developers looking to create and distribute their game to a multitude of platforms and browsers, but things haven't been plain sailing.
And whilst this has worked to a certain degree, and a number of companies such as Microsoft, Apple, Google and Mozilla under the W3C have collaborated to bring together a single open standard, the problems it possesses cannot be ignored.