Monday, April 27. 2015

Windows 10 can run reworked Android and iOS apps

Via The Verge -----
After months of rumors, Microsoft is revealing its plans to get mobile apps on Windows 10 today. While the company has been investigating emulating Android apps, it has settled on a different solution, or set of solutions, that will allow developers to bring their existing code to Windows 10. iOS and Android developers will be able to port their apps and games directly to Windows universal apps, and Microsoft is enabling this with two new software development kits. On the Android side, Microsoft is enabling developers to use Java and C++ code on Windows 10, while iOS developers will be able to take advantage of their existing Objective-C code.

"We want to enable developers to leverage their current code and current skills to start building those Windows applications in the Store, and to be able to extend those applications," explained Microsoft's Terry Myerson during an interview with The Verge this morning. The idea is simple: get apps on Windows 10 without developers having to rebuild them fully for Windows. While it sounds simple, the actual process will be a little more complicated than just pushing a few buttons to recompile apps. "Initially it will be analogous to what Amazon offers," notes Myerson, referring to the Android work Microsoft is doing. "If they're using some Google API… we have created Microsoft replacements for those APIs."

Microsoft's pitch to developers is to bring their code across without many changes, and then eventually leverage Windows capabilities like Cortana, Xbox Live, Holograms, Live Tiles, and more. Microsoft has been testing its new tools with some key developers like King, the maker of Candy Crush Saga, to get games ported across to Windows. Candy Crush Saga as it exists today on Windows Phone has been converted from iOS code using Microsoft's tools without many modifications.

During Microsoft's planning for bringing iOS and Android apps to Windows, Myerson admits it wasn't always an obvious choice to support both. "At times we've thought, let's just do iOS," Myerson explains. "But when we think of Windows we really think of everyone on the planet. There's countries where iOS devices aren't available." Supporting both Android and iOS developers allows Microsoft to capture everyone who is developing for mobile platforms right now, even if most companies still target iOS first and port their apps to Android at the same time or shortly afterward. By supporting iOS developers, Microsoft wants to be third in line for these ported apps, and that's a better situation than it faces today.

Alongside the iOS and Android SDKs, Microsoft is also revealing ways for websites and Windows desktop apps to make their way over to Windows universal apps. Microsoft has created a way for websites to run inside a Windows universal app and use system services like notifications and in-app purchases. This should allow website owners to create web apps without much effort, and to list those apps in the Windows Store. It's not the best alternative to a native app in a lot of scenarios, but for simple websites it offers a new way to create an app without developers having to learn new languages.

Microsoft is also looking toward existing Windows desktop app developers with Windows 10. Developers will be able to leverage their .NET and Win32 work and bring it to Windows universal apps.
"Sixteen million .NET and Win32 apps are still being used every month on Windows 7 and Windows 8," explains Myerson, so it's clear Microsoft needs to get these into Windows 10. Microsoft is using some of its Hyper-V work to virtualize these existing desktop apps on Windows 10. Adobe is one key test case: Microsoft has been working closely with the firm to package its apps for Windows 10, and Adobe Photoshop Elements is coming to the Windows Store as a universal app using this virtualization technology. Performance is key for many desktop apps, so it will be interesting to see whether Microsoft has managed to maintain a fluid app experience with this virtualization.

Collectively, Microsoft is referring to these four new SDKs as bridges or ramps to get developers interested in Windows 10. It's a key moment for the company to win back developers and prove that Windows is still relevant in a world that continues to be dominated by Android and iOS. The aim, as Myerson puts it, is to get Windows 10 on 1 billion devices within the next two to three years. That's a big goal, and the company will need the support of developers and apps to help it get there.

These SDKs will generate questions among Microsoft's core development community, especially those who invested heavily in the company's Metro-style design and the unique features of Windows apps in the past. The end result for consumers is, hopefully, more apps; for developers, it's a question of whether to simply port their existing iOS and Android work across and leave it at that, or to extend those apps to use Windows features or even some design elements. "We want to structure the platform so it's not an all or nothing," says Myerson. "If you use everything together it's beautiful, but that's not required to get started." Microsoft still has the tricky mix of ported apps to contend with, and that could result in an app store similar to Amazon's, or even one where developers still aren't interested in porting. This is just the beginning, and Windows universal apps, while promising, still face a rocky and uncertain future.
Friday, April 24. 2015

Analyst Watch: Ten reasons why open-source software will eat the world

Via SD Times -----
I recently attended Facebook's F8 developer conference in San Francisco, where I had a revelation about why it is going to be impossible to succeed as a technology vendor in the long run without deeply embracing open source. Of the many great presentations I listened to, I was most captivated by the ones that explained how Facebook develops software internally. I was impressed by how quickly the company is turning such important IP back over to the community. To be sure, many major Web companies like Google and Yahoo have been aggressively leveraging open-source dynamics and contributing back to the community. My aim is not to single out Facebook; it is simply that it was during the F8 conference that I had the opportunity to reflect on the drivers behind Facebook's actions, and on why other technology providers may be wise to learn from them. Here are my 10 reasons why open-source software is effectively becoming inevitable for infrastructure and application platform companies:
Monday, April 13. 2015

BitTorrent launches its Maelstrom P2P Web Browser in a public beta

Via TheNextWeb -----

Back in December, we reported on the alpha for BitTorrent's Maelstrom, a browser that uses BitTorrent's P2P technology in order to place some control of the Web back in users' hands by eliminating the need for centralized servers.
Along with the beta comes the first set of developer tools for the browser, helping publishers and programmers build their websites around Maelstrom's P2P technology. And they need to – Maelstrom can't decentralize the Internet if there isn't any native content for the platform. It's only available on Windows at the moment, but if you're interested and on Microsoft's OS, you can download the beta from BitTorrent now.

Project Maelstrom

Wednesday, April 01. 2015

'Largest DDoS attack' in GitHub's history targets anticensorship projects

Via Network World -----
GitHub has been hammered by a continuous DDoS attack for three days. It's the "largest DDoS attack in github.com's history." The attack is aimed at the anti-censorship GreatFire and CN-NYTimes projects, but has affected all of GitHub. The traffic is reportedly coming from China, as attackers are using the Chinese search engine Baidu for the purpose of "HTTP hijacking."

According to tweeted GitHub status messages, GitHub has been the victim of a Distributed Denial of Service (DDoS) attack since Thursday, March 26. Twenty-four hours later, GitHub said it had "all hands on deck" working to mitigate the continuous attack. After GitHub later deployed "volumetric attack defenses," the attack morphed to include GitHub Pages and then "pages and assets." Today, GitHub said it was 71 hours into defending against the attack.

Monday, March 23. 2015

This Engineer Turned Radiowaves Into Fashion During the 1930s

-----

For today's edition of There's Nothing New Under The Sun™, we have a radio engineer who experimented with creating high-tech fashion that would be right at home amongst the 21st century's glitch art and wifi visualizations. Except that these patterns were made in 1938.
The July 1938 issue of Radio-Craft magazine featured photos of RCA engineer C.E. Burnett turning radio waves into patterns that could be used on clothes and furniture. Burnett was a radio and TV engineer who was inspired to turn the frequencies he saw around him every day into practical textiles. In an article titled "Radio Creates Amazing Fashion Patterns," we learn about this new kind of art being created with a "radio kaleidoscope." By photographing a cathode ray tube (the same kind that would eventually fill American living rooms in the form of TV after WWII) and fiddling with the voltages and frequencies, the textile designer is able to create an "electronic snakeskin" pattern.
The patterns could be used for any number of products from hats and shoes to bags and lampshades. The possibilities were endless, and imperfections were welcome, as the article noted. But it also wasn't just a guessing game. Through smart manipulation, you could get the kind of pattern you wanted.
From the July 1938 issue of Radio-Craft: This finding of patterns through electronic means is by no means an entirely hit-or-miss affair. Research has proven that given frequencies may be relied upon to produce patterns of a certain sort. If, for example, a chain effect of linked lines is wanted, the electronic designer looks at his frequency chart, sets the controls and — presto! — a pattern similar to that used on the bracelet illustrated appears.
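What the article describes (fixed frequency settings reliably producing a given figure) is essentially what happens when two sine waves of different frequencies drive a cathode ray tube's horizontal and vertical deflection: a Lissajous figure. As a rough modern analogue, here is a minimal Python sketch of that idea; it assumes NumPy and Matplotlib are installed, and the frequency ratios are purely illustrative, not taken from the 1938 article.

```python
import numpy as np
import matplotlib.pyplot as plt

def lissajous(freq_x, freq_y, phase=np.pi / 2, cycles=20, samples=20000):
    """Trace the curve drawn when two sine waves drive a CRT's X and Y deflection."""
    t = np.linspace(0, 2 * np.pi * cycles, samples)
    x = np.sin(freq_x * t)
    y = np.sin(freq_y * t + phase)
    return x, y

# An integer frequency ratio (e.g. 3:4) gives a closed, repeating figure;
# detuning one frequency slightly makes the trace drift and fill in,
# which is roughly the knob-fiddling the article describes.
for fx, fy in [(3, 4), (3, 4.02), (5, 6)]:
    x, y = lissajous(fx, fy)
    plt.figure()
    plt.plot(x, y, linewidth=0.3)
    plt.axis("off")
    plt.show()
```

The integer ratio fixes the basic figure, while phase and slight detuning control how densely the trace fills in, which fits the article's claim that the process is not a guessing game.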
I have yet to find any photos of Burnett's designs out in the real world, and it's unclear from the article what became of this new high-tech-inspired design technique. The article notes that surrealist artists would be "frantic with envy" over this method of creating visual art, but that it would find plenty of practical applications.
Whether anyone actually took up the torch for these "electronic patterns" in 1938 is unclear, but whether he knew it or not, Burnett would prove to be decades ahead of his time.

Images: Scanned from the July 1938 issue of Radio-Craft magazine
Friday, March 20. 2015

The Internet of Things invasion: Has it gone too far?

Via PC World -----
Remember when the Internet was just that thing you accessed on your computer? Today, connectivity is popping up in some surprising places: kitchen appliances, bathroom scales, door locks, clothes, water bottles… even toothbrushes. That's right, toothbrushes. The Oral-B SmartSeries is billed as the world's first "interactive electric toothbrush" with built-in Bluetooth.

Whatever your feelings on this level of connectivity, it's undeniable that it's a new frontier for data. And let's face it, we're figuring it out as we go. Consequently, it's a good idea to keep your devices secure - and that means leveraging a product like Norton Security, which protects mobile devices and can help keep intruders out of your home network. Because the last thing you want is a toothbrush that turns on you.

Welcome to the age of the Internet of Things (IoT for short), the idea that everyday objects - and everyday users - can benefit from integrated network connectivity, whether it's a washing machine that notifies you when it's done or a collar-powered tracker that helps you locate your runaway pet. Some of these innovations are downright brilliant. Others veer into the impractical or even the unbelievable. And some can present risks that we've never had to consider before.

Consider the smart lock. A variety of companies offer deadbolt-style door locks you can control from your smartphone. One of them, the August Smart Lock, will automatically sense when you approach your door and unlock it for you, even going so far as to lock it again once you've passed through. And the August app not only logs who has entered and exited, but also lets you provide a temporary virtual key to friends, family members, a maid service, and the like.

That's pretty cool, but what happens in the event of a dead battery - either in the user's smartphone or the lock itself? If your phone gets lost or stolen, is there a risk a thief can now enter your home? Could a hacker "pick" your digital lock? Smart-lock makers promise safeguards against all these contingencies, but it raises the question of whether the conveniences outweigh the risks.

Do we really need the Internet in all our things? The latest that-can't-possibly-be-a-real-product example made its debut at this year's Consumer Electronics Show: the Belty, an automated belt/buckle combo that automatically loosens when you sit and tightens when you stand. A smartphone app lets you control the degree of each. Yep.

Then there's the water bottle that reminds you to drink more water. The smart exercise shirt your trainer can use to keep tabs on your activity (or lack thereof). And who can forget the HAPIfork, the "smart" utensil that aims to steer you toward healthier eating by reminding you to eat more slowly? Stop the Internet (of Things), I want to get off.

Okay, I shouldn't judge. And it's not all bad. There is real value in - and good reason to be excited about - a smart basketball that helps you perfect your jump shot, or a system of smart light bulbs designed to deter break-ins. Ultimately, the free market will decide which ones are useful and which ones are ludicrous.

The important thing to remember is that with the IoT, we're venturing into new territory. We're linking more devices than ever to our home networks. We're installing phone and tablet apps that have a direct line not just to our data, but also to our very domiciles.
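To make the smart-lock discussion above a little more concrete, here is a toy Python sketch of the general idea described for the August Smart Lock: temporary virtual keys with an expiry, a proximity check before unlocking, and a log of who came and went. Every name and threshold here is hypothetical; this is a conceptual illustration and has nothing to do with August's actual app or API.

```python
import time
from dataclasses import dataclass, field

@dataclass
class VirtualKey:
    holder: str
    expires_at: float  # Unix timestamp after which the key stops working

@dataclass
class SmartLock:
    keys: dict = field(default_factory=dict)
    access_log: list = field(default_factory=list)
    locked: bool = True

    def issue_key(self, holder: str, valid_for_hours: float) -> None:
        # Hand out a temporary virtual key, e.g. to a maid service for one visit.
        self.keys[holder] = VirtualKey(holder, time.time() + valid_for_hours * 3600)

    def request_unlock(self, holder: str, distance_m: float) -> bool:
        # Unlock only for a valid, unexpired key holder standing near the door,
        # and record every attempt so the owner can review who entered and exited.
        key = self.keys.get(holder)
        allowed = key is not None and time.time() < key.expires_at and distance_m < 2.0
        self.access_log.append((time.time(), holder, "unlocked" if allowed else "denied"))
        if allowed:
            self.locked = False
        return allowed

lock = SmartLock()
lock.issue_key("maid-service", valid_for_hours=4)
print(lock.request_unlock("maid-service", distance_m=1.2))  # True: valid key, close to the door
print(lock.request_unlock("stranger", distance_m=0.5))      # False: no key issued
```

Even in this toy form, the failure modes the article worries about are visible: the proximity reading, the key store, and the clock are all inputs an attacker or a dead battery could interfere with.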
Wednesday, March 18. 2015

First Ubuntu Phone Landing In Europe Shortly

Via TechCrunch -----

A year after it revealed another attempt to muscle in on the smartphone market, Canonical's first Ubuntu-based smartphone is due to go on sale in Europe in the "coming days", it said today. The device will be sold for €169.90 (~$190) unlocked to any carrier network, although some regional European carriers will be offering SIM bundles at the point of purchase.

The hardware is an existing mid-tier device, the Aquaris E4.5, made by Spain's BQ — with the Ubuntu version of the device known as the 'Aquaris E4.5 Ubuntu Edition'. So the only difference here is that it will be pre-loaded with Ubuntu's mobile software rather than Google's Android platform.

Canonical has been trying to get into the mobile space for a while now. Back in 2013, the open source software maker failed to crowdfund a high-end converged smartphone-cum-desktop-computer, called the Ubuntu Edge — a smartphone-sized device that would have been powerful enough to transform from a pocket computer into a fully fledged desktop when plugged into a keyboard and monitor, running Ubuntu's full-fat desktop OS. Canonical had sought to raise a hefty $32 million in crowdfunding to make that project fly. Hence its more modest, mid-tier smartphone debut now.

On the hardware side, Ubuntu's first smartphone offers pretty bog-standard mid-range specs: a 4.5-inch screen, 1GB of RAM, a quad-core A7 chip running at "up to 1.3GHz", 8GB of on-board storage, an 8MP rear camera and a 5MP front-facing lens, plus a dual-SIM slot. But it's the mobile software that's the novelty here (demoed in action in Canonical's walkthrough video, embedded below).
Canonical has created a gesture-based smartphone interface called Scopes, which puts the homescreen focus on a series of themed cards that aggregate content and which the user swipes between to navigate around the functions of the phone, while app icons are tucked away to the side of the screen or gathered together on a single Scope card. Examples include a contextual 'Today' card which contains info like weather and calendar, a 'Nearby' card for location-specific local services, a card for accessing 'Music' content on the device, or 'News' for accessing various articles in one place.

It's certainly a different approach to the default grid of apps found on iOS and Android, but it has some overlap with other, alternative platforms such as Palm's WebOS, the rebooted BlackBerry OS, or Jolla's Sailfish. The problem is, as with all such smaller OSes, it will be an uphill battle for Canonical to attract developers to build content for its platform to make it really live and breathe. (It's got a few third parties offering content at launch — including Songkick, The Weather Channel and TimeOut.) And, crucially, it will be a huge challenge to convince consumers to try something different which requires they learn new mobile tricks. Especially given that people can't try before they buy — as the device will be sold online only.

Canonical said the Aquaris E4.5 Ubuntu Edition will be made available in a series of flash sales over the coming weeks, via BQ.com. Sales will be announced through Ubuntu and BQ's social media channels — perhaps taking a leaf out of the retail strategy of Android smartphone maker Xiaomi, which uses online flash sales to hype device launches and shift inventory quickly. Building similar hype in a mature smartphone market like Europe — for mid-tier hardware — is going to be a Sisyphean task. But Canonical claims to be in it for the long, uphill haul. "We are going for the mass market," Cristian Parrino, its VP of Mobile, told Engadget. "But that's a gradual process and a thoughtful process. That's something we're going to be doing intelligently over time — but we'll get there."

Thursday, February 26. 2015

Researchers produce Global Risks report, AI and other technologies included in it

Via betanews -----
Let's face it, we're always at risk, and I speak for human kind, not just the personal risks we take each time we leave our homes. Some of these potential terrors are unavoidable -- we can't control the asteroid we find hurtling towards us or the next super volcano that may erupt as the Siberian Traps once did. Some risks however, are well within our control, yet we continue down paths that are both exciting and potentially dangerous. In his book Demon Haunted World, the great astronomer, teacher and TV personality Carl Sagan wrote "Avoidable human misery is more often caused not so much by stupidity as by ignorance, particularly our ignorance about ourselves". Now researchers have published a list of the risks we face and several of them are self-created. Perhaps the most prominent is artificial intelligence, or AI as it generally referred to. The technology has been fairly prominent in the news recently as both Elon Musk and Bill Gates have warned of its dangers. Musk went as far as to invest in some of the companies so that he could keep an eye on things. The new report states "extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime), and would probably act to boost their own intelligence and acquire maximal resources for almost all initial AI motivations". Stephen Hawking, perhaps the world's most famous scientist, told the BBC "The development of full artificial intelligence could spell the end of the human race". That's three obviously intelligent men telling us it's a bad idea, but of course that will not deter those who wish to develop it and if it is controlled correctly then it may not be the huge danger we worry about. What else is on the list of doom and gloom? Several more man-made problems, including nuclear war, global system collapse, synthetic biology, and nanotechnology. There is also the usual array of asteroids, super volcanoes and global pandemics. For good measure, the scientists even added in bad global governance. If you would like to read the report for yourself it can be found at the Global Challenges Foundation website. It may keep you awake at night -- even better than a good horror movie could. Tuesday, February 24. 2015If software looks like a brain and acts like a brain—will we treat it like one?Via ars technica ----- Long the domain of science fiction, researchers are now working to create software that perfectly models human and animal brains. With an approach known as whole brain emulation (WBE), the idea is that if we can perfectly copy the functional structure of the brain, we will create software perfectly analogous to one. The upshot here is simple yet mind-boggling. Scientists hope to create software that could theoretically experience everything we experience: emotion, addiction, ambition, consciousness, and suffering. “Right now in computer science, we make computer simulations of neural networks to figure out how the brain works," Anders Sandberg, a computational neuroscientist and research fellow at the Future of Humanity Institute at Oxford University, told Ars. "It seems possible that in a few decades we will take entire brains, scan them, turn them into computer code, and make simulations of everything going on in our brain.” Everything. Of course, a perfect copy does not necessarily mean equivalent. Software is so… different. It's a tool that performs because we tell it to perform. 
It's difficult to imagine that we could imbue it with those same abilities that we believe make us human. To imagine our computers loving, hungering, and suffering probably feels a bit ridiculous. And some scientists would agree. But there are others—scientists, futurists, the director of engineering at Google—who are working very seriously to make this happen. For now, let’s set aside all the questions of if or when. Pretend that our understanding of the brain has expanded so much and our technology has become so great that this is our new reality: we, humans, have created conscious software. The question then becomes how to deal with it. And while success in this endeavor of fantasy turning fact is by no means guaranteed, there has been quite a bit of debate among those who think about these things whether WBEs will mean immortality for humans or the end of us. There is far less discussion about how, exactly, we should react to this kind of artificial intelligence should it appear. Will we show a WBE human kindness or human cruelty—and does that even matter? The ethics of pulling the plug on an AIIn a recent article in the Journal of Experimental and Theoretical Artificial Intelligence, Sandberg dives into some of the ethical questions that would (or at least should) arise from successful whole brain emulation. The focus of his paper, he explained, is “What are we allowed to do to these simulated brains?” If we create a WBE that perfectly models a brain, can it suffer? Should we care? Again, discounting if and when, it's likely that an early successful software brain will mirror an animal’s. Animal brains are simply much smaller, less complex, and more available. So would a computer program that perfectly models an animal receive the same consideration an actual animal would? In practice, this might not be an issue. If a software animal brain emulates a worm or insect, for instance, there will be little worry about the software’s legal and moral status. After all, even the strictest laboratory standards today place few restrictions on what researchers do with invertebrates. When wrapping our minds around the ethics of how to treat an AI, the real question is what happens when we program a mammal? “If you imagine that I am in a lab, I reach into a cage and pinch the tail of a little lab rat, the rat is going to squeal, it is going to run off in pain, and it’s not going to be a very happy rat. And actually, the regulations for animal research take a very stern view of that kind of behavior," Sandberg says. "Then what if I go into the computer lab, put on virtual reality gloves, and reach into my simulated cage where I have a little rat simulation and pinch its tail? Is this as bad as doing this to a real rat?” As Sandberg alluded to, there are ethical codes for the treatment of mammals, and animals are protected by laws designed to reduce suffering. Would digital lab animals be protected under the same rules? Well, according to Sandberg, one of the purposes of developing this software is to avoid the many ethical problems with using carbon-based animals. To get at these issues, Sandberg’s article takes the reader on a tour of how philosophers define animal morality and our relationships with animals as sentient beings. These are not easy ideas to summarize. “Philosophers have been bickering about these issues for decades," Sandberg says. 
"I think they will continue to bicker until we upload a philosopher into a computer and ask him how he feels."

While many people might choose to respond, "Oh, it's just software," this seems much too simplistic for Sandberg. "We have no experience with not being flesh and blood, so the fact that we have no experience of software suffering, that might just be that we haven't had a chance to experience it. Maybe there is something like suffering, or something even worse than suffering software could experience," he says.

Ultimately, Sandberg argues that it's better to be safe than sorry. He concludes a cautious approach would be best, that WBEs "should be treated as the corresponding animal system absent countervailing evidence." When asked what this evidence would look like—that is, software designed to model an animal brain without the consciousness of one—he considered that, too.

"A simple case would be when the internal electrical activity did not look like what happens in the real animal. That would suggest the simulation is not close at all. If there is the counterpart of an epileptic seizure, then we might also conclude there is likely no consciousness, but now we are getting closer to something that might be worrisome," he says. So the evidence that the software animal's brain is not conscious looks… exactly like evidence that a biological animal's brain is not conscious.

Virtual pain

Despite his pleas for caution, Sandberg doesn't advocate eliminating emulation experimentation entirely. He thinks that if we stop and think about it, compassion for digital test animals could arise relatively easily. After all, if we know enough to create a digital brain capable of suffering, we should know enough to bypass its pain centers. "It might be possible to run virtual painkillers which are way better than real painkillers," he says. "You literally leave out the signals that would correspond to pain. And while I'm not worried about any simulation right now… in a few years I think that is going to change."

This, of course, assumes that animals' only source of suffering is pain. In that regard, worrying whether a software animal may suffer in the future probably seems pointless when we accept so much suffering in biological animals today. If you find a rat in your house, you are free to dispose of it how you see fit. We kill animals for food and fashion. Why worry about a software rat?

One answer—beyond basic compassion—is that we'll need the practice. If we can successfully emulate the brains of other mammals, then emulating a human is inevitable. And the ethics of hosting human-like consciousness becomes much more complicated. Beyond pain and suffering, Sandberg considers a long list of possible ethical issues in this scenario: a blindingly monotonous environment, damaged or disabled emulations, perpetual hibernation, the tricky subject of copies, communications between beings who think at vastly different speeds (software brains could easily run a million times faster than ours), privacy, and matters of self-ownership and intellectual property.

All of these may be sticky issues, Sandberg predicts, but if we can resolve them, human brain emulations could achieve some remarkable feats. They are ideally suited for extreme tasks like space exploration, where we could potentially beam them through the cosmos. And if it came down to it, the digital versions of ourselves might be the only survivors in a biological die-off.
Monday, February 23. 2015

Android to Become 'Workhorse' of Cybercrime

Via EE Times -----

PARIS — As of the end of 2014, 16 million mobile devices worldwide have been infected by malicious software, estimated Alcatel-Lucent's security arm, Motive Security Labs, in its latest security report, released Thursday (Feb. 12). Such malware is used by "cybercriminals for corporate and personal espionage, information theft, denial of service attacks on business and governments and banking and advertising scams," the report warned.

Some of the key facts revealed in the report -- released two weeks in advance of the Mobile World Congress 2015 -- could dampen the mobile industry's renewed enthusiasm for mobile payment systems such as Google Wallet and Apple Pay. Privacy is also at risk. How safe is your mobile device? Consumers have gotten used to trusting their smartphones, expecting their devices to know them well enough to accommodate their habits and preferences. So the last thing consumers expect them to do is channel spyware into their lives, letting others monitor calls and track web browsing.

Cyber attacks

Declaring that 2014 "will be remembered as the year of cyber-attacks," Kevin McNamee, director of Alcatel-Lucent Motive Security Labs, noted in his latest blog post other cases of hackers stealing millions of credit and debit card account numbers at retail points of sale. They include the security breach at Target in 2013 and similar breaches repeated in 2014 at Staples, Home Depot, Sally Beauty Supply, Neiman Marcus, United Parcel Service, Michaels Stores and Albertsons, as well as the food chains Dairy Queen and P.F. Chang's.

"But the combined number of these attacks pales in comparison to the malware attacks on mobile and residential devices," McNamee insists. In his blog, he wrote, "Stealing personal information and data minutes from individual device users doesn't tend to make the news, but it's happening with increased frequency. And the consequences of losing one's financial information, privacy, and personal identity to cyber criminals are no less important when it's you."

'Workhorse of cybercrime'

According to the report, in mobile networks, "Android devices have now caught up to Windows laptops as the primary workhorse of cybercrime." Infection rates split 50/50 between Android and Windows devices in 2014, said the report. This may be hardly a surprise to those familiar with Android security. There are three issues. First, the volume of Android devices shipped in 2014 is so huge that it makes a juicy target for cybercriminals. Second, Android is based on an open platform. Third, Android allows users to download apps from third-party stores where apps are not consistently verified and controlled.

In contrast, the report said that less than 1% of infections come from iPhone and BlackBerry smartphones. The report, however, quickly added that this data doesn't prove that iPhones are immune to malware. The Motive Security Labs report cited findings by Palo Alto Networks from early November on the WireLurker vulnerability, which allows an infected Mac OS X computer to install applications on any iPhone that connects to it via a USB connection. User permission is not required and the iPhone need not be jailbroken. News stories reported the source of the infected Mac OS X apps as an app store in China that apparently affected some 350,000 users through apps disguised as popular games.
These infected the Mac computer, which in turn infected the iPhone. Once infected, the iPhone contacted a remote C&C server. According to the Motive Security Labs report, a couple of weeks later FireEye revealed the Masque Attack vulnerability, which allows third-party apps to be replaced with a malicious app that can access all the data of the original app. In a demo, FireEye replaced the Gmail app on an iPhone, allowing the attacker complete access to the victim's email and text messages.

Spyware on the rise

Impact on mobile payment

The rise of mobile malware threats isn't unexpected. But as Google Wallet, Apple Pay and others rush to bring us mobile payment systems, security has to be a top focus. And malware concerns become even more acute in the workplace, where more than 90% of workers admit to using their personal smartphones for work purposes.

Fixed broadband networks

Why is this all happening? Noting that a recent Motive Security Labs survey found that 65 percent of subscribers expect their service provider to protect both their mobile and home devices, the report seems to suggest that the onus is on operators. "They are expected to take a proactive approach to this problem by providing services that alert subscribers to malware on their devices along with self-help instructions for removing it," said Patrick Tan, General Manager of Network Intelligence at Alcatel-Lucent, in a statement.

Due to the large market share it holds within communication networks, Alcatel-Lucent says that it's in a unique position to measure the impact of mobile and home device traffic moving over those networks to identify malicious and cyber-security threats. Motive Security Labs is the analytics arm of Motive Customer Experience Management. According to Alcatel-Lucent, Motive Security Labs (formerly Kindsight Security Labs) processes more than 120,000 new malware samples per day and maintains a library of 30 million active samples. In the following pages, we will share the highlights of the data collected by Motive Security Labs.
Mobile infection rate since December 2012
Alcatel-Lucent's Motive Security Labs found that 0.68% of mobile devices are infected with malware. Using the ITU's figure of 2.3 billion mobile broadband subscriptions, Motive Security Labs estimated that 16 million mobile devices had some sort of malware infection in December 2014. The report called this global estimate "likely to be on the conservative side." Motive Security Labs' sensors do not have complete coverage in areas such as China and Russia, where mobile infection rates are known to be higher.
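The 16 million headline figure is simply the quoted infection rate applied to the ITU subscription estimate; a quick back-of-the-envelope check in Python reproduces it:

```python
# Back-of-the-envelope check of the report's estimate:
# 0.68% infection rate applied to 2.3 billion mobile broadband subscriptions.
infection_rate = 0.0068
subscriptions = 2.3e9

infected = infection_rate * subscriptions
print(f"Estimated infected devices: {infected / 1e6:.1f} million")  # ~15.6 million, reported as "16 million"
```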
Mobile malware samples since June 2012

Motive Security Labs used the increase in the number of samples in its malware database to show Android malware growth. The chart above shows numbers since June 2012. The number of samples grew by 161% in 2014. In addition to the increase in raw numbers, the sophistication of Android malware also increased, according to Motive Security Labs. Researchers in 2014 started to see malware applications that had originally been developed for the Windows/PC platform migrate to the mobile space, bringing with them more sophisticated command-and-control and rootkit technologies.

Infected device types in 2013 and 2014
The chart shows the breakdown of infected device types observed in 2013 and 2014; Android is shown in red and Windows in blue. While the involvement of such a high proportion of Windows/PC devices may be a surprise to some, these Windows PCs are connected to the mobile network via dongles and mobile Wi-Fi devices or are simply tethered through smartphones. They're responsible for about 50% of the malware infections observed. The report said, "This is because these devices are still the favorite of hardcore professional cybercriminals who have a huge investment in the Windows malware ecosystem. As the mobile network becomes the access network of choice for many Windows PCs, the malware moves with them." Android phones and tablets are responsible for the other roughly 50% of the malware infections observed. Currently, most mobile malware is distributed as "Trojanized" apps. Android offers the easiest target for this because of its open app environment, noted the report.