2013 FORMULA 1 UBS CHINESE GRAND PRIX

Shanghai

Race Date: 14 Apr 2013
Circuit Name: Shanghai International Circuit
First Grand Prix: 2004
Number of Laps: 56
Circuit Length: 5.451 km
Race Distance: 305.066 km
Lap Record: 1:32.238 - M Schumacher (2004)

Source:
http://www.formula1.com/races/in_detail/china_895/circuit_diagram.html

Numerical Imagination

Watch:
http://www.charlierose.com/view/interview/12840

Comparing genomes to computer operating systems in terms of the topology and evolution of their regulatory control networks

Abstract

The genome has often been called the operating system (OS) for a living organism. A computer OS is described by a regulatory control network termed the call graph, which is analogous to the transcriptional regulatory network in a cell. To apply our firsthand knowledge of the architecture of software systems to understand cellular design principles, we present a comparison between the transcriptional regulatory network of a well-studied bacterium (Escherichia coli) and the call graph of a canonical OS (Linux) in terms of topology and evolution. We show that both networks have a fundamentally hierarchical layout, but there is a key difference: The transcriptional regulatory network possesses a few global regulators at the top and many targets at the bottom; conversely, the call graph has many regulators controlling a small set of generic functions. This top-heavy organization leads to highly overlapping functional modules in the call graph, in contrast to the relatively independent modules in the regulatory network. We further develop a way to measure evolutionary rates comparably between the two networks and explain this difference in terms of network evolution. The process of biological evolution via random mutation and subsequent selection tightly constrains the evolution of regulatory network hubs. The call graph, however, exhibits rapid evolution of its highly connected generic components, made possible by designers’ continual fine-tuning. These findings stem from the design principles of the two systems: robustness for biological systems and cost effectiveness (reuse) for software systems.
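To make the topological contrast concrete, here is a minimal sketch in Python (using the networkx library; the node names are invented toy examples, not the paper's actual E. coli or Linux datasets). It shows a regulatory-style network with a few high-out-degree regulators at the top and many targets at the bottom, versus a call-graph-style network where many callers funnel into a few heavily reused generic routines:

    # Toy comparison of a "bottom-heavy" regulatory hierarchy and a
    # "top-heavy" call graph, using simple degree statistics.
    # Node names are illustrative placeholders only.
    import networkx as nx

    # E. coli-style: a few master regulators each control many targets.
    regnet = nx.DiGraph()
    for regulator in ["crp", "fnr", "ihf"]:
        for i in range(20):
            regnet.add_edge(regulator, f"target_{regulator}_{i}")

    # Linux-style: many callers all reuse a few generic routines.
    callgraph = nx.DiGraph()
    for i in range(60):
        for util in ["memcpy", "kmalloc", "printk"]:
            callgraph.add_edge(f"driver_fn_{i}", util)

    def top_vs_bottom(g):
        # "Top" nodes regulate/call others; "bottom" nodes are only
        # regulated/called.
        tops = [n for n in g if g.out_degree(n) > 0 and g.in_degree(n) == 0]
        bottoms = [n for n in g if g.in_degree(n) > 0 and g.out_degree(n) == 0]
        return len(tops), len(bottoms)

    print("regulatory net (top, bottom):", top_vs_bottom(regnet))    # (3, 60)
    print("call graph (top, bottom):", top_vs_bottom(callgraph))     # (60, 3)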

Authors: Koon-Kiu Yan, Gang Fang, Nitin Bhardwaj, Roger P. Alexander and Mark Gerstein

Source and full article:
http://www.pnas.org/content/107/20/9186.abstract?sid=57e90864-5098-4e73-b1cb-b64eef2eb2f1

Intelligent Content: Soon your media will know you better than you know yourself

With the introduction of analytics into the visual design of written content, we are on the cusp of an era of incredible evolution: one where the design of information changes in real time in response to data about the readers consuming it. New technologies from Amazon, Apple, Google, WordPress and Tumblr already provide a preview of Intelligent Content. In essence, it won’t be long before the media we consume knows us better than we know ourselves.

Content that reacts to being read

Around 1952, computer scientist Grace Hopper introduced new thinking about compilers – machine-independent software that would translate code written in human-readable language into computer-friendly binary instructions. John von Neumann took Dr. Hopper’s work to a new level in his unfinished masterpiece “The Computer and the Brain,” which theorized that massive versions of compilers would eventually result in computers so intelligent that no human mind could keep up with them.
In a way, books and magazines of the future will act as a sort of human compiler, translating your reading desires into pure machine language that tells the publisher how to present the material for faster and more pleasurable absorption. It’s difficult to comprehend what these experiences will be like once machines themselves begin creating material for humans. The content itself will be designed to gather information about the reader, mash it up with data about others interested in related subjects, authors, or publishers, then decide what content to present to you next. This is what we mean by Intelligent Content.

Curation will guide content

Some argue that readers no longer want curated content; we believe, however, that people always have and always will look to trusted sources for guidance, and that’s where books and magazines will continue to add value. In a world where people are already inundated with information, it’s only going to get worse as we get more and more smothered by everyone else’s stream of consciousness, courtesy of Twitter, Instagram, Pinterest, Facebook and whatever is next.
So the short-term impact of the Intelligent Content movement will feel something like what the music industry has experienced since 2000. MP3s meant the end of curated CDs. Now, playlists are compiled and shared with the help of Pandora, Spotify or Songza. Thus, magazines and books could soon become the Pandora of dynamic content, with artificial intelligence applets that choose and adapt content, then tailor it to the reader’s context and taste. We see the beginnings of this with Flipboard, but it will only get more advanced.

Experimenting outside the print paradigm

Massive waves of disruption always bring opportunity. Publishers like Hearst and Conde Nast continue to experiment with and push the boundaries of enhanced reading experiences on tablets, but many other publishers still obey the rules of printed media, requiring you to “flip” through virtual pages as the primary mode of navigation. WordPress and Tumblr appear to be closest to offering an always-on and continuously updated experience based on analytics about the reader. The flexibility and customization they offer provide a glimpse into how written and visual content will eventually be continuously reconfigured and redesigned by the moment to accommodate data gathered about what you like to read.
Our future might be filled with mash-ups of video, audio, real-time updates, new navigation interfaces and even content that interacts with a reader’s environment (such as augmented reality). Digital publishers can experiment with new hyper-responsive designs as well as back-end databases that mine your other web activities to determine what you’ll like. For example, Quartz (qz.com), a digital-only news site launched in September last year, uses WordPress and responsive design to customize the reader’s experience on a device level. Companies such as Gravity, Contextly and Sailthru offer digital publishers new tools to create more personalized experiences based on a visitor’s profile and previous reading behavior.

The algorithm will be the new editor

In the long term, the algorithm will likely replace the editor and curator. Quick and automatic branding and positioning of the book or magazine on a glowing electronic slab will become more important than the most sage human editor. For focused, long-form content, algorithms will sort out content discovery, delivery and presentation. Google already conquered discovery with algorithms, and now content aggregators such as Zite and Prismatic offer readers an elegant, gated magazine-like design using data from the reader’s social networking profiles, past reading habits and current location.

Using big data to create content on demand

Intelligent Content can also help publishers create content in a more cost-efficient way. One of the main challenges publishers face is predicting which content will be popular. Analyzing the big data that comes from reading and search behavior will help them predict which articles will bring in a much-needed audience.
Recently, researchers at MIT developed an algorithm that can predict topics that will be trending on Twitter hours in advance. Similarly, startups such as Content Fleet and Parse.ly use algorithms to identify emerging popular topics on search engines. This way, a publisher will be able to create content with an almost certain return on investment.
Publishers who recognize the design- and data-driven future of Intelligent Content will have a head start. They can experiment now with new ways to deliver content and measure how their readers engage with it. That data in turn can help them deliver even more engaging content experiences, ultimately preparing them for a future of Intelligent Content.

Source:
http://paidcontent.org/2013/03/31/intelligent-content-soon-your-media-will-know-you-better-than-you-know-yourself/

Newspapers, Delivered by Drone

A province in France is piloting a non-piloted system for distributing the news.

Add one more to the list of career paths that are being obviated by robots: news delivery.
In Auvergne, a province in central France, residents get their daily news the old-fashioned way: through newspapers. But the delivery of said newspapers, apparently, will soon get a high-tech upgrade -- because it'll be done with the help of drones.
Auvergne's local postal service, La Poste Group, announced on its blog that it is partnering with the drone-maker Parrot to explore the wacky world of high-flying news delivery. The service will be called "Parrot Air Drone Postal," and it will make use of Parrot's quadricopter drones. To test its general feasibility, the delivery service is already being, er, piloted in Auvergne, Silicon Alley Insider reports, with a team of 20 postal workers and 20 drones. (The postal workers control the drones via a specialized app, which they can use on iOS or Android devices.)
This may, oui, seem like an April Fool's joke with a French twist. But the idea itself is no joke at all: there are hopes that drones will replace humans in package delivery in places far beyond France. The FAA has been studiously streamlining the process for public agencies to safely fly drones in U.S. airspace, with the goal of allowing for "the safe integration" of all kinds of drones -- including for commercial purposes -- by September 2015. And FedEx founder Fred Smith has been a vocal proponent of converting FedEx's fleet to unmanned vehicles, on grounds of cost, efficiency, and safety. The paper boys and girls of France may be some of the first package-deliverers to have their jobs transformed by drones. But they won't be the last.

By Megan Garber
Source:
http://www.theatlantic.com/technology/archive/2013/03/newspapers-delivered-by-drone/274506/

Happy World Backup Day!

Remember to ensure all your important files are backed up on March 31st.

Source and read more:
http://www.worldbackupday.com/

Google Code Jam 2013

Registration for Google Code Jam 2013 is now open!
We're back this year for our 10th anniversary! Mark your calendar - the Google Code Jam Qualification Round will start on Friday, April 12, 2013.
Ten years after its inception, Code Jam continues to bring together professional and student programmers from all over the world to solve tough algorithmic puzzles. Last year over 35,000 coders competed, but it was Poland's Jakub Pachocki (meret) who earned the title of Code Jam Champion, and the $10,000 reward.
The competition consists of four online rounds, culminating in the world finals held at Google's office in London, United Kingdom this August. The winner of Code Jam 2013 will walk away with a $15,000 prize, the coveted title of Code Jam Champion, and automatic qualification for the Code Jam 2014 finals to defend his or her title.
Think you have what it takes? Register now, then start practicing!

Source and other details:
https://code.google.com/codejam/

Round Up: All of Google’s jokes for April Fools’ 2013, from Google Maps treasure hunting to YouTube closing

Every year, Google goes all out for April Fools’ Day. The company not only pulls together more jokes than all the other tech giants, but it also makes a point of outdoing itself. It honestly gets difficult to keep track of everything Google thinks up, so like last year we’re putting together a roundup.
First up, Google has created a new treasure map mode on Google Maps. Last year, the company showed off an 8-bit version, but this year it wants you to go out and explore 2D hand-drawn landmarks, find hidden treasure chests, and “Beaware of pirates!” (yep, that’s a typo).
Google’s announcement talks of the Google Maps Street View team finding a treasure map belonging to the infamous pirate, William “Captain” Kidd, on a recent expedition in the Indian Ocean to expand its underwater Street View collection. The map contains a variety of encrypted symbols which you are tasked with deciphering:

Google’s headquarters naturally gets a special flag:
Next up is the closing of YouTube. Apparently all this time the site has been an eight-year contest, the goal of which was to find the best video. Google is getting ready to pick the winner, and when it does, it will be shutting down the site (and then relaunching it with just the winning video).

Google says it has 30,000 technicians working on narrowing down the list. It has even managed to get YouTube celebrities, commenters, and film reviewers on board to participate in the prank.
When it comes to its marketing budget, Google seems to have a bottomless pit on April 1st. We’ll keep updating this post as Google launches more jokes (there’s a lot more coming: we still have over 12 hours to go before it’s April Fools’ Day on Google’s side of the world).

See also – A Round Up of ALL of Google’s April Fools’ Jokes. Fair play, they really make an effort… and April Fools’ Day should be about jokes, not lame attempts at gaining publicity.

By Emil Protalinski
Source:
http://thenextweb.com/google/2013/03/31/round-up-all-of-googles-jokes-for-april-fools-2013-from-google-maps-treasure-hunting-to-youtube-closing/?fromcat=all

ISO/IEC 27018 — Information technology — Security techniques — Code of practice for data protection controls for public cloud computing services (DRAFT)

This standard will provide guidance on the privacy elements/aspects of public clouds. It will be accompanied by ISO/IEC 27017 covering the wider information security angles.

The standard is not intended to duplicate or modify ISO/IEC 27002 in relation to cloud computing but will presumably add control objectives and controls relevant to the protection of privacy and personal data in the cloud.

The project has widespread support from national bodies plus the Cloud Security Alliance.

Content


The first Working Draft of this standard is similar in style to ISO/IEC 27015 (the information security management guidelines for financial services) in that it builds on ISO/IEC 27002, expanding on its advice in particular areas.

Status of the standard


The 1st WD is available to members of SC27 for review and contributions.

Publication is possible in 2013, especially if it turns out that the revised version of ISO/IEC 27002 covers most of the applicable security controls adequately without further elaboration.
 
Source:

ISO/IEC 27017 — Information technology — Security techniques — Security in cloud computing (DRAFT)

This standard will provide guidance on the information security elements/aspects of cloud computing. It will be accompanied by ISO/IEC 27018 covering the privacy aspects of cloud computing.

The standard will recommend, in addition to the information security controls recommended in ISO/IEC 27002, cloud-specific security controls.

The project has widespread support from national bodies plus the Cloud Security Alliance.

 

Scope and purpose

The standard is expected to be a guideline or code of practice recommending relevant information security controls for cloud computing.

The working title is “Guidelines on Information security controls for the use of cloud computing services based on ISO/IEC 27002”.

The decision to progress a cloud privacy standard in parallel naturally implies that this standard will exclude privacy and the protection of personal data.



Status of the standard


The standard will build on the revised version of ISO/IEC 27002 (work in progress).

The 3rd WD is available to SC27. It mainly provides implementation advice in the cloud computing context for many of the security controls recommended by 27002.



Note: SC27 decided NOT to progress a separate cloud security management system specification standard, judging that ISO/IEC 27001 is sufficient. Therefore, there are no plans to certify the security of cloud suppliers specifically.
 
Source:

Referencing and Applying WCAG 2.0 in Different Contexts

W3C Workshop on 23 May 2013 in Brussels, Belgium

W3C Web Accessibility Initiative (WAI) invites you to share your experiences using Web Content Accessibility Guidelines (WCAG) 2.0 and its supporting resources in different policy settings and contexts. In this W3C Workshop we will explore approaches for:
  • Developing and updating policies for harmonized uptake of the WCAG 2.0 international standard
  • Applicability of WCAG 2.0 to dynamic content, web applications, mobile web, and other areas
  • Addressing HTML5 and other emerging technologies in web accessibility policies

Invitation

This Workshop is open to policy-makers, users, developers, accessibility experts, researchers, and others interested in adopting, referencing, and applying WCAG 2.0. We invite you to:
  • Discuss approaches for referencing and adopting WCAG 2.0
  • Exchange experiences with implementing policies that reference WCAG 2.0
  • Share resources that support the implementation of WCAG 2.0
  • Identify priorities for further developing resources and support material

Background

The UN Convention on the Rights of Persons with Disabilities (CRPD) and many regional and national policies on accessibility of information technologies have increased people's interest in accessibility of the Web. In turn, many organizations and governments are adopting the W3C/WAI guidelines in accessibility policies for the Web, in order to ensure that people can access the Web regardless of disability. This includes the Web Content Accessibility Guidelines (WCAG), accessibility guidelines for web browsers and media players (UAAG), and for web authoring tools (ATAG).
As organizations and governments migrate to WCAG 2.0, questions arise on how to best implement the guidelines in practice. This includes questions on how to best take advantage of the framework provided in WCAG 2.0 to address the continual evolution of web technologies such as WAI-ARIA and HTML5, the convergence of the Web with mobile and digital television, and how to utilize current and planned supporting resources such as Techniques for WCAG 2.0, and Understanding WCAG 2.0 documents. Some of these questions include:
  • How can WCAG 2.0, which is also ISO/IEC 40500, be freely adopted and referenced in policies on web accessibility?
  • How applicable is WCAG 2.0 to web applications, mobile web, social media, digital publishing, and other contexts?
  • What is the role of WCAG 2.0 Techniques, how are they developed, and when will techniques for HTML5 be ready?
  • What is the role of WAI-ARIA, IndieUI, and other accessibility specifications, and how do they relate to HTML5?
  • How does one determine accessibility support in web technologies, dynamic content, or mobile web applications?
  • What are the supporting resources, guidance, and tools around WCAG 2.0, and what translations exist for these?
This workshop provides an opportunity to exchange best practices on these and other questions, and to share strategies on promoting harmonized uptake of WCAG 2.0 internationally.

Source and other details:
http://www.w3.org/WAI/ACT/workshop

MIT Files Court Papers “Partially” Opposing Release Of Documents About Aaron Swartz Investigation

The Massachusetts Institute of Technology (MIT) is “partially” opposing a request by the estate of Aaron Swartz for the release of documents related to the investigation that led to Swartz’s arrest and prosecution in federal court.
In court papers filed today, MIT counsel states that its opposition stems from two factors: its concerns about people in the MIT community named in the documents and the security of its computer networks.
MIT has previously stated that it would release the documents with redactions of names and other information. MIT President L. Rafael Reif said in an email to the MIT community earlier this month:
On Friday, the lawyers for Aaron Swartz’s estate filed a legal request with the Boston federal court where the Swartz case would have gone to trial. They demanded that the court release to the public information related to the case, including many MIT documents. Some of these documents contain information about vulnerabilities in MIT’s network. Some contain the names of individual MIT employees involved. In fact, the lawyers’ request argues that those names cannot be excluded (“redacted”) from the documents and urges that they be released in the public domain and delivered to Congress.
The paper filed today reiterates this position, basing it on threats already made to MIT staff and three separate hacking incidents at the university.
The information includes “email, the names, job titles, departments, telephone numbers, email addresses, business addresses, and other identifying information of many members of the MIT community.”
Swartz has become a symbol in the Internet community since his suicide. His supporters have led the debate about the role MIT played in Swartz’s prosecution and the vigilance of the U.S. Attorney General in the case.
MIT claims it is fully cooperating in the investigation that has come since Swartz’s suicide.

By Alex Williams
Source:
http://techcrunch.com/2013/03/29/mit-files-court-papers-partially-opposing-release-of-documents-about-aaron-swartz-investigation/

The Dire Wolf Project



The Dire Wolf Project was started in 1988 in order to bring back the look of the large prehistoric Dire Wolf in a domesticated dog breed. The National American Alsatian Breeder's Club governs the project and standardizes breeding practices for this unique large companion dog. Health and temperament remain the highest priority over the look of the Dire Wolf, so this project is slow and methodical. Join us on a historical journey of Dire Wolf memories and watch as we domesticate history one generation at a time.

Source and read more:
http://theamericanalsatian.tripod.com/direwolfproject/

An American Quilt of Privacy Laws, Incomplete

We don’t need a new platform. We just need to rebrand.

That was the message of a report from the Republican Party a few weeks ago on how to win future presidential elections.
It’s also the strategy that Peter Fleischer, the global privacy counsel at Google, recently proposed for the United States to win converts abroad to its legal model of data privacy protection. In a post on his personal blog, titled “We Need a Better, Simpler Narrative of U.S. Privacy Laws,” he describes the divergent legal frameworks in the United States and Europe.
The American system involves a patchwork of federal and state privacy laws that separately govern the use of personal details in spheres like patient billing, motor vehicle records, education and video rental records. The European Union, on the other hand, has one blanket data protection directive that lays out principles for how information about its citizens may be collected and used, no matter the industry.
Mr. Fleischer — whose blog notes that it reflects his personal views, not his employer’s — is a proponent of the patchwork system because, he writes, it offers multilayered protection for Americans. The problem with it, he argues, is that it doesn’t lend itself to simple storytelling.
“Europe’s privacy narrative is simple and appealing,” Mr. Fleischer wrote in mid-March. If the United States wants to foster trust in American companies operating abroad, he added, it “has to figure out how to explain its privacy laws on a global stage."
Other technology experts, however, view the patchwork quilt of American privacy laws as more of a macramé arrangement — with serious gaps in consumer protection, particularly when it comes to data collection online. Congress should enact a baseline consumer privacy law, says Leslie Harris, the president of the Center for Democracy and Technology, a public policy group that promotes Internet freedom.
"I don’t think this fight is about branding," Ms. Harris says. “We’ve been trying to get a comprehensive privacy law for over a decade, a law that would work for today and for technologies that we have not yet envisioned."
Many Americans are aware that stores, Web sites, apps, ad networks, loyalty card programs and so on collect and analyze details about their purchases, activities and interests — online and off. Last year, both the United States and the European Union proposed to give their citizens greater control over such commercial data-mining.
If the American side now appears to be losing the public relations battle, as Mr. Fleischer suggested, it may be because Europe has forged ahead with its project to modernize data protection. When officials of the United States and the European Union start work on a free trade agreement in the coming months, the trans-Atlantic privacy regulation divide is likely to be one of the sticking points, analysts say.
“We really are an outlier,” says Christopher Calabrese, legislative counsel for privacy-related issues at the American Civil Liberties Union in Washington.
For the moment, officials on either side of the Atlantic seem to be operating at different speeds.
In January 2012, the European Commission proposed a new regulation that could give citizens in the E.U.’s 27 member states some legal powers that Americans now lack. These include the right to transfer text, photo and video files in usable formats from one online service provider to another. American consumers do not have such a national right to data portability, and have to depend on the largesse of companies like Google, which permits them to download their own YouTube videos or Picasa photo albums.
A month after Europe proposed to update its data protections, the Obama administration called on Congress to enact a “consumer privacy bill of rights” that would apply to industries not already covered by sectoral privacy laws. These could include data brokers, companies that collect details on an individual’s likes, leisure pursuits, shopping habits, financial status, health interests and more.
The White House’s blueprint for legislation, for example, would give Americans the right to some control over how their personal data is used, as well as the right to see and correct records that companies hold about them. The White House initiative broadened the historical American view of privacy as “the right to be let alone” — a definition put forward by Louis Brandeis and Samuel Warren in 1890 — to a more modern concept of privacy as the right to commercial data control.
"We can’t wait," a post on the White House blog effused at the time.
A year later, the data protection regulation proposed by the European Commission has been vetted by a number of regulators and committees of the European Parliament. The document now has several thousand amendments, some developed in response to American trade groups that had complained that certain provisions could hinder innovation and impede digital free trade. Peter Hustinx, the European data protection supervisor, said last Wednesday that European officials hoped to enact the law by next spring.
In the United States, by contrast, a year after the Obama administration introduced the notion of a consumer privacy bill of rights, a draft has yet to be completed, let alone made public.
Cameron F. Kerry, the general counsel of the Commerce Department and the official overseeing the privacy effort, was not available to comment last week. In a phone interview in January, however, Mr. Kerry said that the agency was working on legislative language to carry out the White House’s plan.
“The idea is to have baseline privacy protections for those areas not covered today by sectoral regimes,” Mr. Kerry said. He added: “We think it is important to do it in a way that allows for flexibility, that allows for innovation, and is not overly prescriptive.”
Chris Gaither, a Google spokesman, said his company was “engaging on important issues” like security breach notification and declined to comment on consumer privacy legislation. But at least some American technology companies suggest that a baseline privacy law could benefit both consumers and companies. In a statement last year, Microsoft said national privacy legislation could help ensure “that all businesses are using, storing and sharing data in responsible ways.”
With stronger European data rights and trade negotiations pending, Ms. Harris, of the Center for Democracy and Technology, says Congress may feel pressure to pass privacy legislation. That would represent a big change for American consumers as well as a better privacy sound bite abroad.
“We either have to enact our own law or we are going to have to comply with other countries’ laws,” Ms. Harris says. “But doing nothing may no longer be the answer.”
 
By Natasha Singer
Source:

Government Fights for Use of Spy Tool That Spoofs Cell Towers

The government’s use of a secret spy tool was on trial on Thursday in a showdown between an accused identity thief and more than a dozen federal lawyers and law enforcement agents who were fighting to ensure that evidence obtained via a location-tracking tool would be admissible in court.
At issue is whether law enforcement agents invaded Daniel David Rigmaiden’s privacy in 2008 when they used a so-called stingray to track his location through a Verizon Wireless air card that he used to connect his computer to the internet. Also at issue is whether a warrant the government obtained from a judge covered the use of the stingray and whether the government made it sufficiently clear to the judge how the technology it planned to use worked.

Over the course of a three-hour hearing in the U.S. District Court in Arizona, Rigmaiden, 31, asserted that the warrant the government obtained only authorized Verizon Wireless to provide agents with data about the air card but did not authorize agents to use the invasive stingray device. He also asserted that Verizon Wireless “reprogrammed” his air card to make it interact with the FBI’s stingray, something that he says was outside the bounds of the judge’s order.

Rigmaiden and civil liberties groups who have filed amicus briefs in the case also maintain that the government failed its duty to disclose to the judge who issued the warrant that the device they planned to use not only collected data from the target of an investigation but from anyone else in the vicinity who was using an air card or other wireless communication device.

Linda Lye, staff attorney for the American Civil Liberties Union of Northern California, told the judge on Thursday that this was the equivalent of rummaging through ten or twelve apartments to find the correct one where the defendant resided, something that would never be allowed under a normal warranted search.

By withholding information about the stingray from the judge and providing only “scant information” about the data they planned to collect, the FBI had “failed to live up to its duty of candor…. The government should have been clear about what information it wanted to obtain and what information it was going to obtain in using the technology,” she said.

The ACLU recently uncovered emails that show a pattern of agents routinely withholding information from judges about their use of stingrays in applying for warrants for electronic surveillance.

The Rigmaiden case is shining a spotlight on the secretive technology, generically known as a stingray or IMSI catcher, that allows law enforcement agents to spoof a legitimate cell tower in order to trick nearby mobile phones and other wireless communication devices into connecting to the stingray instead of a phone carrier’s tower.

When devices connect, stingrays can see and record their unique ID numbers and traffic data, as well as information that points to the device’s location. To prevent detection by suspects, the stingray is supposed to send the data along to a real tower so that traffic continues to flow.

By gathering the wireless device’s signal strength from various locations, authorities can pinpoint where the device is being used with much more precision than they can get through data obtained from a mobile network provider’s fixed tower location.
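To illustrate the general principle (a textbook-style sketch under a standard log-distance path-loss model, not the stingray's actual, non-public algorithm or parameters), here is how signal-strength readings taken at several known points can be combined into a position estimate:

    # Hypothetical RSSI-based localization sketch: pick the grid point
    # whose predicted signal strengths best match the observations.
    # All numbers below are made-up example values.
    import numpy as np

    TX_POWER = -40.0   # assumed RSSI at 1 m, in dBm (calibration constant)
    PATH_LOSS_N = 2.7  # assumed path-loss exponent for the environment

    def predicted_rssi(pos, sensor):
        d = max(np.linalg.norm(pos - sensor), 1e-6)  # distance in meters
        return TX_POWER - 10 * PATH_LOSS_N * np.log10(d)

    # Known measurement points (meters) and the RSSI observed at each (dBm).
    sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
    observed = np.array([-78.0, -70.0, -85.0, -80.0])

    # Brute-force grid search over candidate transmitter positions.
    best, best_err = None, float("inf")
    for x in np.linspace(0, 100, 101):
        for y in np.linspace(0, 100, 101):
            pos = np.array([x, y])
            predictions = np.array([predicted_rssi(pos, s) for s in sensors])
            err = np.sum((observed - predictions) ** 2)
            if err < best_err:
                best, best_err = pos, err

    print("estimated position (m):", best)

More measurement points shrink the error region, which is why readings gathered from several locations beat an estimate derived from a single fixed tower.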

Although there are stingray devices that can capture and record the content of phone calls and text messages, the U.S. government has insisted in court documents for the Rigmaiden case that the stingray used in this case could only capture the equivalent of header information — such as the phone or account number assigned to the aircard as well as dialing, routing and address information involved in the communication. As such, the government has maintained that the device is the equivalent of devices designed to capture routing and header data on e-mail and other internet communications, and therefore does not require a search warrant.

The device, however, doesn’t just capture information related to a targeted phone. It captures data from “all wireless devices in the immediate area of the FBI device that subscribe to a particular provider,” according to government documents — including data of innocent people who are not the target of the investigation. FBI policy requires agents to then purge collateral data collected by the surveillance tool.
 
By Kim Zetter
Source and read more:

Tumblr undergoes 3-hour outage due to ‘network issues’

Tumblr’s blogging service went down for roughly three hours on Friday, before quietly coming back online.
“We’re experiencing networking issues and are working quickly to remedy the situation,” a spokesperson told The Next Web.
The downtime is utterly embarrassing for the company, which recently passed the milestone of hosting 100 million blogs and 44.6 billion posts, up from 50 million blogs last April. Much as Twitter had to overcome the “fail whale” issues of its early days, Tumblr needs to work through these growing pains if it wants to be taken seriously.

By Josh Ong
Source:
http://thenextweb.com/insider/2013/03/30/tumblr-confirms-site-wide-outage-says-it-is-working-quickly-to-remedy-the-situation/?fromcat=all

Provocateur Comes Into View After Cyberattack

Sven Olaf Kamphuis calls himself the “minister of telecommunications and foreign affairs for the Republic of CyberBunker.” Others see him as the Prince of Spam.

Mr. Kamphuis, who is actually Dutch, is at the heart of an international investigation into one of the biggest cyberattacks identified by authorities. He has not been charged with any crime and he denies direct involvement. But because of his outspoken position in a loose federation of hackers, authorities in the Netherlands and several other countries are examining what role he or the Internet companies he runs played in snarling traffic on the Web this week.
He describes himself in his own Web postings as an Internet freedom fighter, along the lines of Julian Assange of WikiLeaks, with political views that range from eccentric to offensive. His likes: German heavy metal music, “Beavis and Butt-head” and the campaign to legalize medicinal marijuana. His dislikes: Jews, Luddites and authority.
Dutch computer security experts and former associates describe Mr. Kamphuis as a loner with brilliant programming skills. He did not respond to various requests for interviews, but he has communicated with the public through his Facebook page, which includes photos of himself, a thin, angular man with close-cropped hair and dark, bushy eyebrows, often wearing a hoodie sweatshirt.
“He’s like a loose cannon,” said Erik Bais, the owner of A2B-Internet, an Internet service provider that used to work with Mr. Kamphuis’s company, but severed ties two years ago. “He has no regard for repercussions or collateral damage.”
Mr. Kamphuis’s current nemesis is Spamhaus, a group based in Geneva that fights Internet spam by publishing blacklists of alleged offenders. Clients of Spamhaus use the information to block annoying e-mails offering discount Viagra or financial windfalls. But Mr. Kamphuis and other critics call Spamhaus a censor that judges what is or isn’t spam. Spamhaus acted, he wrote, “without any court verdict, just by blackmail of suppliers and Jew lies.”
The spat that rocked the Internet escalated in mid-March when Spamhaus blacklisted two companies that Mr. Kamphuis runs, CB3ROB, an Internet service provider, and CyberBunker, a Web hosting service. Spamhaus contended that CyberBunker was a conduit for vast amounts of spam. CyberBunker says it accepts business from any site as long as it does not deal in “child porn nor anything related to terrorism.”
Mr. Kamphuis responded by soliciting support for a hackers’ campaign to snarl Spamhaus’s Internet operations. “Yo anons, we could use a little help in shutting down illegal slander and blackmail censorship project ‘spamhaus.org,’ which thinks it can dictate its views on what should and should not be on the Internet,” he wrote on Facebook on March 23.
Mr. Kamphuis later disavowed any direct role in the so-called distributed denial of service, or DDoS, attack, which spilled over from Spamhaus to affect other sites. He took to Facebook to inform the world that the flood of Internet traffic that threatened to cripple parts of the Web emanated from Stophaus, an ad-hoc, amorphous group set up in January with the aim to thwart Spamhaus, a company it claims uses its “tiny business to attempt to control the Internet through underhanded extortion tactics.” Stophaus, which lists no contact or location for the group, claims to have members in the United States, Canada, Russia, Ukraine, China and Western Europe.
Mr. Kamphuis said Stophaus was not a front for him; he is merely acting as a spokesman.
Nonetheless, the authorities are curious. The Dutch national prosecutor’s office said on Thursday that it had opened an investigation. Wim de Bruin, a spokesman for the agency, which is based in Rotterdam, said prosecutors were first trying to determine whether the DDoS attacks had originated in the Netherlands. Authorities in Britain and several other European countries are also looking into the matter.
Mr. Kamphuis, who is believed to be about 35, is singled out because of his vocal role. “For the Dutch Internet community, it’s very clear that he has a big role in this, even if there isn’t 100 percent airtight proof that he is behind it,” said J. P. Velders, a security specialist at the University of Amsterdam. “He could not be not involved. How much is he involved — that is for law enforcement to figure out and to act upon.”
Greenhost, a Dutch Internet hosting service, said in a detailed blog post that it had found the digital fingerprints of CB3ROB when it examined the rogue traffic that had been directed at Spamhaus.
Mr. Kamphuis created CB3ROB in 1996 and helped set up CyberBunker in 1999. From 1999 to 2001, he worked on the help desk at a Dutch Internet service provider, XS4ALL, according to one senior manager at the company who declined to be named, citing company policy. One co-worker said Mr. Kamphuis was constantly being reprimanded for hacking into his employer’s computer system. He was known for eccentric behavior; during a company trip to Berlin, the former co-worker said, Mr. Kamphuis refused to travel with his colleagues and rode alone in a bus.
“Sven absolutely hates authority in any form,” this person said. “He was very smart. Too smart for customers, by the way. Oftentimes they couldn’t understand his technobabble when he tried to help them.”
After leaving XS4ALL, he continued to run his Web hosting business, which was based for a time in a former army bunker in Goes, the Netherlands. Photos on Mr. Kamphuis’s Facebook page show him holding a flag in front of the bunker, like a freedom fighter defending his redoubt.
CyberBunker still lists its address as the bunker. But Joost Verboom, a Dutch businessman, says the address is occupied by his own company, BunkerInfra Datacenters, which is building a subterranean Web hosting center at the site. Mr. Verboom said CyberBunker and Mr. Kamphuis left the site a decade ago. It is not clear where the servers of CyberBunker and CB3ROB are now.
Associates say Mr. Kamphuis moved to Berlin in about 2006, and his Facebook page displays photos indicating his interest in the Pirate Party, a small political movement focusing on Internet issues that holds some opposition seats in Berlin’s city-state government assembly, and in the Chaos Computer Club, a group that discusses computer issues.
For a time, CyberBunker’s clients included WikiLeaks and The Pirate Bay, a Web site whose founders were convicted by a Swedish court in 2009 of abetting movie and music piracy. In May 2010, six American entertainment companies obtained a preliminary injunction in a German court ordering CB3ROB and CyberBunker to stop providing bandwidth to The Pirate Bay.
Since the attacks, Mr. Kamphuis has given television interviews from what appeared to be an empty Internet cafe or office. In a Russian television interview, he suggested that the people responsible for the attacks were in countries where there were no laws against cyberattacks or no serious enforcement.
Mr. Kamphuis also continued to provoke people in Facebook postings. “The Internet is puking out a cancer, please stand by while it is being removed,” he wrote.
 
By Eric Pfanner, Kevin J. O'Brien
Source:
 
 

Smelling screen: development and evaluation of an olfactory display system for presenting a virtual odor source

Abstract

We propose a new olfactory display system that can generate an odor distribution on a two-dimensional display screen. The proposed system has four fans on the four corners of the screen. The airflows that are generated by these fans collide multiple times to create an airflow that is directed towards the user from a certain position on the screen. By introducing odor vapor into the airflows, the odor distribution is as if an odor source had been placed onto the screen. The generated odor distribution leads the user to perceive the odor as emanating from a specific region of the screen. The position of this virtual odor source can be shifted to an arbitrary position on the screen by adjusting the balance of the airflows from the four fans. Most users do not immediately notice the odor presentation mechanism of the proposed olfactory display system because the airflow and perceived odor come from the display screen rather than the fans. The airflow velocity can even be set below the threshold for airflow sensation, such that the odor alone is perceived by the user. We present experimental results that show the airflow field and odor distribution that are generated by the proposed system. We also report sensory test results to show how the generated odor distribution is perceived by the user and the issues that must be considered in odor presentation.
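One way to picture the fan-balancing idea is with bilinear weights, the same scheme used for panning audio among four speakers. The sketch below is my own toy parameterization, not the authors' actual control law; whether these weights should drive the fans nearer to or farther from the target point is a physical calibration detail the paper resolves experimentally:

    # Toy model: relative strengths for four corner fans as a function of
    # the normalized target position (x, y) of the virtual odor source,
    # with (0, 0) = bottom-left of the screen and (1, 1) = top-right.
    def fan_weights(x, y):
        return {
            "bottom_left":  (1 - x) * (1 - y),
            "bottom_right": x * (1 - y),
            "top_left":     (1 - x) * y,
            "top_right":    x * y,
        }

    # Virtual odor source at the upper-left quarter point of the screen;
    # the four weights always sum to 1.
    print(fan_weights(0.25, 0.75))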

Authors: Matsukura H, Yoneda T, Ishida H

Source:
http://www.ncbi.nlm.nih.gov/pubmed/23428445

Evidence Lost: We're Not Likely to See Editing Like Proust's in the Future

One page from the notebooks of Marcel Proust shows the extreme work that went into writing his masterpiece In Search of Lost Time

This image comes from the notebooks of Marcel Proust, one page among the thousands that would eventually become In Search of Lost Time. Though there are a few sections in his manuscripts that seem to have come out more or less as the author had hoped (see here for example), many, many more display whole passages discarded or rewritten, like the one you see on the page above.
At first, the aggressive self-editing gives you pause: Man, Proust was hard on himself! We are not used to seeing the trail of the hard work that goes into making a beautiful book or essay; computers, like word processors before them, have hidden the physical evidence of this process. In some places you can still catch glimpses of it -- the history tab of a Wikipedia page -- but mostly, if this trail exists at all, it exists in a private file, the track changes of a Microsoft Word document or the revision history of a Google Doc.
Efforts like Etherpad, which promised to allow real-time collaboration while recording every keystroke of change to a document, show something else too: In our age of networked writing, a tool that records editing history exists for collaboration. This is true of all of the examples I just gave -- Wikipedia, Google Docs, Word's track changes, and Etherpad. Can you imagine tracking the changes of your own edits, just for yourself? Who would do that? If any of these records make it to the future for scholars to examine, they will be the records of our collaborations. The work of an individual's self-edits will have been scrubbed.
Proust may have been writing In Search of Lost Time, but in the act of doing so he was creating an object that preserved, in a sense, the time he had lost ("lost") while writing the books.

By Rebecca J. Rosen
Source:
http://www.theatlantic.com/technology/archive/2013/03/evidence-lost-were-not-likely-to-see-editing-like-prousts-in-the-future/274495/

FCC keeps cellphone RF exposure limits the same, but decides outer ears are 'extremities'

In new rules published today, the FCC has responded to guidelines published by the Government Accountability Office last year asking that it review its policies on RF testing — the core element of FCC hardware certification that helps ensure devices don't emit too much radiation and are generally safe to use. The FCC isn't changing the amount of radiation permitted by SAR testing — the procedure that measures how much radiation is actually absorbed by the human body — but it is making a key change: the outer ear is now identified as an "extremity," which means it can absorb considerably more radiation without running afoul of FCC guidelines.
The precise effects of RF radiation on humans have never been decisively determined, and the issue has become far bigger with the commoditization of cellphones (and the change in their usage patterns) in the last decade. To that end, while the FCC is keeping its absorption limits in place, it's basically putting out an appeal to the scientific community for comments and research that might help it make a more informed decision — of course, it's been using similar language in rulings since the 1970s, so there's no guarantee the limits will change any time soon.

By Chris Ziegler
Source:
http://www.theverge.com/2013/3/29/4162774/fcc-keeps-cellphone-rf-exposure-limits-the-same

iMessage denial of service ‘prank’ spams users rapidly with messages, crashes iOS Messages app

Over the last couple of days, a group of iOS developers has been targeted with a series of rapid-fire texts sent over Apple’s iMessage system. The messages, likely transmitted via the OS X Messages app using a simple AppleScript, rapidly fill up the Messages app on iOS or the Mac with text, forcing a user to constantly clear both notifications and messages.
In some instances, the messages can be so large that they completely lock up the Messages app on iOS, constituting a ‘denial of service’ (DoS) attack of sorts, even though in this case they appear to be a prank. Obviously, even if the messages arrive in annoyingly large volumes but don’t actually crash the app, they still limit the use you’ll get out of the service. But if a string that’s complex enough to crash the app is sent through, that’s a more serious issue.
The attacks hit at least a half-dozen iOS developer and hacker community members that we know of now, and appear to have originated with a Twitter account involved in selling UDIDs, provisioning profiles and more that facilitate the installation of pirated App Store apps, which are re-signed and distributed. The information about the source of the attacks was shared by one of the victims, iOS jailbreak tool and app developer iH8sn0w.

By Matthew Panzarino

Source and read more:
http://thenextweb.com/apple/2013/03/29/imessage-denial-of-service-prank-spams-users-rapidly-with-messages-crashes-ios-messages-app/?fromcat=all

On Security Awareness Training

The focus on training obscures the failures of security design
 
Should companies spend money on security awareness training for their employees? It's a contentious topic, with respected experts on both sides of the debate. I personally believe that training users in security is generally a waste of time and that the money can be spent better elsewhere. Moreover, I believe that our industry's focus on training serves to obscure greater failings in security design.

In order to understand my argument, it's useful to look at training's successes and failures. One area where it doesn't work very well is health. We are forever trying to train people to have healthier lifestyles: eat better, exercise more, whatever. And people are forever ignoring the lessons. One basic reason is psychological: We just aren't very good at trading off immediate gratification for long-term benefit. A healthier you is an abstract eventuality; sitting in front of the television all afternoon with a McDonald's Super Monster Meal sounds really good right now. Similarly, computer security is an abstract benefit that gets in the way of enjoying the Internet. Good practices might protect me from a theoretical attack at some time in the future, but they're a bother right now, and I have more fun things to think about. This is the same trick Facebook uses to get people to give away their privacy. No one reads through new privacy policies; it's much easier to just click "OK" and start chatting with your friends. In short: Security is never salient.

Another reason health training works poorly is that it's hard to link behaviors with benefits. We can train anyone -- even laboratory rats -- with a simple reward mechanism: Push the button, get a food pellet. But with health, the connection is more abstract. If you're unhealthy, then what caused it? It might have been something you did or didn't do years ago. It might have been one of the dozen things you have been doing and not doing for months. Or it might have been the genes you were born with. Computer security is a lot like this, too.

Training laypeople in pharmacology also isn't very effective. We expect people to make all sorts of medical decisions at the drugstore, and they're not very good at it. Turns out that it's hard to teach expertise. We can't expect every mother to have the knowledge of a doctor, pharmacist, or RN, and we certainly can't expect her to become an expert when most of the advice she's exposed to comes from manufacturers' advertising. In computer security, too, a lot of advice comes from companies with products and services to sell.

One area of health that is a training success is HIV prevention. HIV may be very complicated, but the rules for preventing it are pretty simple. And aside from certain sub-Saharan countries, we have taught people a new model of their health and have dramatically changed their behavior. This is important: Most lay medical expertise stems from folk models of health. Similarly, people have folk models of computer security (PDF). Maybe they're right, and maybe they're wrong, but they're how people organize their thinking. This points to a possible way that computer security training can succeed. We should stop trying to teach expertise, pick a few simple metaphors of security, and train people to make decisions using those metaphors. On the other hand, we still have trouble teaching people to wash their hands -- even though it's easy, fairly effective, and simple to explain. Notice the difference, though. The risks of catching HIV are huge, and the cause of the security failure is obvious. The risks of not washing your hands are low, and it's not easy to tie the resultant disease to a particular not-washing decision. Computer security is more like hand washing than HIV.

Another area where training works is driving. We trained, either through formal courses or one-on-one tutoring, and passed a government test to be allowed to drive a car. One reason that works is because driving is a near-term, really cool, obtainable goal. Another reason is that even though the technology of driving has changed dramatically over the past century, that complexity has been largely hidden behind a fairly static interface. You might have learned to drive 30 years ago, but that knowledge is still relevant today. On the other hand, password advice from 10 years ago isn't relevant today (PDF). Can I bank from my browser? Are PDFs safe? Are untrusted networks OK? Is JavaScript good or bad? Are my photos more secure in the cloud or on my own hard drive? The "interface" we use to interact with computers and the Internet changes all the time, along with best practices for computer security. This makes training a lot harder.

Food safety is my final example. We have a bunch of simple rules -- cooking temperatures for meat, expiration dates on refrigerated goods, the three-second rule for food being dropped on the floor -- that are mostly right, but often ignored. If we can't get people to follow these rules, then what hope do we have for computer security training?

To those who think that training users in security is a good idea, I want to ask: "Have you ever met an actual user?" They're not experts, and we can't expect them to become experts. The threats change constantly, the likelihood of failure is low, and there is enough complexity that it's hard for people to understand how to connect their behaviors to eventual outcomes. So they turn to folk remedies that, while simple, don't really address the threats.

Even if we could invent an effective computer security training program, there's one last problem. HIV prevention training works because affecting what the average person does is valuable. Even if only half of the population practices safe sex, those actions dramatically reduce the spread of HIV. But computer security is often only as strong as the weakest link. If four-fifths of company employees learn to choose better passwords, or not to click on dodgy links, one-fifth still get it wrong and the bad guys still get in. As long as we build systems that are vulnerable to the worst case, raising the average case won't make them more secure.

The whole concept of security awareness training demonstrates how the computer industry has failed. We should be designing systems that won't let users choose lousy passwords and don't care what links a user clicks on. We should be designing systems that conform to their folk beliefs of security, rather than forcing them to learn new ones. Microsoft has a great rule about system messages that require the user to make a decision. They should be NEAT: necessary, explained, actionable, and tested. That's how we should be designing security interfaces. And we should be spending money on security training for developers. These are people who can be taught expertise in a fast-changing environment, and this is a situation where raising the average behavior increases the security of the overall system.

If we security engineers do our job right, then users will get their awareness training informally and organically from their colleagues and friends. People will learn the correct folk models of security and be able to make decisions using them. Then maybe an organization can spend an hour a year reminding their employees what good security means at that organization, both on the computer and off. That makes a whole lot more sense.

Source:
http://www.darkreading.com/blog/240151108/on-security-awareness-training.html
 

On the Security of RC4 in TLS

Introduction


This page is about the security of RC4 encryption in TLS. For details of the Lucky 13 attack on CBC-mode encryption in TLS, click here.
The Transport Layer Security (TLS) protocol aims to provide confidentiality and integrity of data in transit across untrusted networks like the Internet. It is widely used to secure web traffic and e-commerce transactions on the Internet. Around 50% of all TLS traffic is currently protected using the RC4 algorithm, which has become increasingly popular because of recent attacks on CBC-mode encryption in TLS and is now recommended by many commentators.
We have found a new attack against TLS that allows an attacker to recover a limited amount of plaintext from a TLS connection when RC4 encryption is used. The attacks arise from statistical flaws in the keystream generated by the RC4 algorithm which become apparent in TLS ciphertexts when the same plaintext is repeatedly encrypted at a fixed location across many TLS sessions.
We have carried out experiments to demonstrate the feasibility of the attacks.
The most effective countermeasure against our attack is to stop using RC4 in TLS. There are other, less-effective countermeasures against our attacks and we are working with a number of TLS software developers to prepare patches and security advisories.

Who are we?

The team behind this research comprises Nadhem AlFardan, Dan Bernstein, Kenny Paterson, Bertram Poettering and Jacob Schuldt. Nadhem is a PhD student in the Information Security Group at Royal Holloway, University of London. Dan is a Research Professor at the University of Illinois at Chicago and a Professor at the Eindhoven University of Technology. Kenny is a Professor of Information Security and an EPSRC Leadership Fellow in the Information Security Group at Royal Holloway, University of London. Bertram and Jacob are postdocs in the Information Security Group.

What is affected?

Which versions of SSL and TLS are affected?

The attack applies to all versions of SSL and TLS that support the RC4 algorithm.

Which TLS ciphersuites are affected?

All TLS ciphersuites which include RC4 encryption are vulnerable to our attack.

Which TLS implementations are affected?

All TLS implementations which support RC4 are affected.

How severe are the attacks?

The attack is a multi-session attack, which means that we require a target plaintext to be repeatedly sent in the same position in the plaintext stream in multiple TLS sessions. The attack currently targets only the first 256 bytes of the plaintext stream in each session. Since the first 36 bytes of plaintext are formed from an unpredictable Finished message when SHA-1 is the selected hashing algorithm in the TLS Record Protocol, these first 36 bytes cannot be recovered. This means that the attack can recover up to 220 bytes of TLS-encrypted plaintext.
The number of sessions needed to reliably recover these plaintext bytes is around 2^30, but already with only 2^24 sessions, certain bytes can be recovered reliably. In contrast to the recent Lucky 13 attack, there is no need for sophisticated timing of error messages, and the attacker can be located anywhere on the network path between client and server.
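To make the statistics concrete, here is a toy Python simulation (a sketch of my own, not the researchers' code or their actual statistical procedure): the same secret byte is encrypted at a fixed position under many independent random RC4 keys, and the attacker recovers it by majority vote, exploiting the well-known bias of the second keystream byte toward 0x00. The key size, session count and helper names are illustrative assumptions.

    import os
    from collections import Counter

    def rc4_keystream(key: bytes, n: int) -> bytes:
        """First n RC4 keystream bytes for the given key (KSA + PRGA)."""
        S = list(range(256))
        j = 0
        for i in range(256):                      # key scheduling
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        out, i, j = bytearray(), 0, 0
        for _ in range(n):                        # keystream generation
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(S[(S[i] + S[j]) % 256])
        return bytes(out)

    POSITION = 1        # second keystream byte, biased toward 0x00
    SESSIONS = 500_000  # toy scale; the real attack needs ~2^24 to 2^30

    def recover_byte(secret: int) -> int:
        """Recover `secret` from its ciphertexts across many sessions."""
        counts = Counter()
        for _ in range(SESSIONS):
            z = rc4_keystream(os.urandom(16), POSITION + 1)[POSITION]
            counts[secret ^ z] += 1   # the ciphertext byte an attacker sees
        # The modal ciphertext value pairs with the most likely keystream
        # value at this position (0x00), so un-XOR with 0x00 to guess.
        return counts.most_common(1)[0][0] ^ 0x00

    print(recover_byte(ord("S")))     # prints 83 with high probability

The published attack generalizes this vote: it uses the full measured distribution of each of the first 256 keystream bytes rather than a single dominant bias.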
The sessions needed for our attack can be generated in various ways. The attacker could cause the TLS session to be terminated, and some applications running over TLS then automatically reconnect and retransmit a cookie or password. In a web environment, the sessions may also be generated by client-side malware, in a similar way to the BEAST attack.

How does this work relate to known attacks, like BEAST, CRIME and Lucky 13?

TLS in CBC-mode has been the subject of several attacks over the years, most notably padding oracle attacks, the BEAST attack and the recent Lucky 13 attack. For more details of prior attacks, see the Lucky 13 research paper. There are now countermeasures for the BEAST and Lucky 13 attacks, and TLS in CBC-mode is believed to be secure against them once these countermeasures are applied. By contrast, the new attack targets the RC4 algorithm in TLS.

But isn't RC4 already broken?

There have been many attacks on RC4 over the years, most notably against RC4 in the WEP protocol. There, the known attacks crucially exploit the way in which the algorithm's secret key is combined with public information (the WEP IV) during the algorithm's initialisation step. These attacks do not apply to RC4 in TLS, and new attack ideas are needed. Certain bytes of the RC4 keystream are already known to have biases that assist cryptanalysis; in our work, we identify the complete set of biases in the first 256 keystream bytes and combine these using a particular statistical procedure to extract plaintext from ciphertext.
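As a rough illustration of how such keystream biases are observed empirically, the sketch below (my own; it reuses the rc4_keystream helper defined in the earlier simulation) tallies one keystream position over many random keys. Deviations from the uniform 1/256 rate, such as the classic second-byte bias toward 0x00, stand out clearly.

    import os
    from collections import Counter

    def measure_position(position: int, trials: int = 200_000) -> Counter:
        """Histogram of the keystream byte at `position` over random keys."""
        counts = Counter()
        for _ in range(trials):
            counts[rc4_keystream(os.urandom(16), position + 1)[position]] += 1
        return counts

    counts = measure_position(1)        # position 1 = second keystream byte
    print("uniform expectation per value:", 200_000 / 256)
    print("most common values:", counts.most_common(3))
    # 0x00 typically appears at roughly twice the uniform rate here, the
    # single-byte bias first reported by Mantin and Shamir.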

How do the attacks relate to BEAST, CRIME and Lucky 13?

The attacks are quite different from BEAST, CRIME and Lucky 13. BEAST exploits the inadvisable use of chained IVs in CBC-mode in SSL and TLS 1.0. CRIME cleverly exploits the use of compression in TLS. Lucky 13 defeats existing RFC-recommended countermeasures for padding oracle attacks against CBC-mode. Our attacks are against the RC4 algorithm and are based on analysing statistical weaknesses in the RC4 keystream. However, our attacks can be mounted using BEAST-style techniques.

Why doesn't the attack have a cool name?

In Western culture, naming one's attacks after obscure Neil Young albums is now considered passé.

What are the countermeasures?

There are several possible countermeasures against our attacks, some of which are more effective than others:
  • Switch to using CBC-mode ciphersuites. This is a suitable countermeasure provided previous CBC-mode attacks such as BEAST and Lucky 13 have been patched. Many implementations of TLS 1.0 and 1.1 now do have patches against these attacks.
  • Switch to using AEAD ciphersuites, such as AES-GCM. Support for AEAD ciphersuites was specified in TLS 1.2, but this version of TLS is not yet widely supported. We hope that our research will continue to spur support for TLS 1.2 in client and server implementations.
  • Patch TLS's use of RC4. For example, one could discard the first output bytes of the RC4 keystream before commencing encryption/decryption (a minimal sketch of this idea follows this list). However, this would need to be carried out in every client and server implementation of TLS in a consistent manner. This solution is not practically deployable given the large base of legacy implementations and the lack of a facility to negotiate such a byte-discarding procedure. Furthermore, this will not provide security against potential future improvements to our attack. Our recommendation for the long term is to avoid using RC4 in TLS and to switch to using AEAD algorithms.
  • Modify browser behaviour. There are ways to modify the manner in which a browser using TLS handles HTTP GET requests to make the attack less effective. However, care is needed to avoid potential future improvements to our attack. Our recommendation for the long term is to avoid using RC4 in TLS and to switch to using AEAD algorithms.
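For concreteness, the byte-discarding patch mentioned in the list above (often called RC4-drop[n] in the literature) might look like the following sketch; the function name and the drop count of 768 bytes are illustrative assumptions, not drawn from any TLS implementation, and rc4_keystream is the helper from the earlier sketch.

    def rc4_drop_encrypt(key: bytes, plaintext: bytes, drop: int = 768) -> bytes:
        """Encrypt with RC4, skipping the first `drop` (biased) keystream bytes."""
        ks = rc4_keystream(key, drop + len(plaintext))[drop:]
        return bytes(p ^ z for p, z in zip(plaintext, ks))

Both endpoints would have to agree on the same drop count, which is precisely the negotiation facility the page notes TLS lacks.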

Patches, advisories and press

We are working with the IETF TLS Working Group and affected vendors to prepare and test patches. We will update the information here as this process continues.
CVE:
The US NIST National Vulnerability Database has assigned this attack the identifier CVE-2013-2566.

Source:
http://www.isg.rhul.ac.uk/tls/

The Dangers of Surveillance

Abstract:
From the Fourth Amendment to George Orwell’s Nineteen Eighty-Four, our law and literature are full of warnings about state scrutiny of our lives. These warnings are commonplace, but they are rarely very specific. Other than the vague threat of an Orwellian dystopia, as a society we don’t really know why surveillance is bad, and why we should be wary of it. To the extent the answer has something to do with “privacy,” we lack an understanding of what “privacy” means in this context, and why it matters. Developments in government and corporate practices, however, have made this problem more urgent. Although we have laws that protect us against government surveillance, secret government programs cannot be challenged until they are discovered. And even when they are, courts frequently dismiss challenges to such programs for lack of standing, under the theory that mere surveillance creates no tangible harms, as the Supreme Court did recently in the case of Clapper v. Amnesty International. We need a better account of the dangers of surveillance.

This article offers such an account. Drawing on law, history, literature, and the work of scholars in the emerging interdisciplinary field of “surveillance studies,” I explain what those harms are and why they matter. At the level of theory, I explain when surveillance is particularly dangerous, and when it is not. Surveillance is harmful because it can chill the exercise of our civil liberties, especially our intellectual privacy. It also gives the watcher power over the watched, creating the risk of a variety of other harms, such as discrimination, coercion, and the threat of selective enforcement, where critics of the government can be prosecuted or blackmailed for wrongdoing unrelated to the purpose of the surveillance.

At a practical level, I propose a set of four principles that should guide the future development of surveillance law, allowing for a more appropriate balance between the costs and benefits of government surveillance. First, we must recognize that surveillance transcends the public-private divide. Even if we are ultimately more concerned with government surveillance, any solution must grapple with the complex relationships between government and corporate watchers. Second, we must recognize that secret surveillance is illegitimate, and prohibit the creation of any domestic surveillance programs whose existence is secret. Third, we should recognize that total surveillance is illegitimate and reject the idea that it is acceptable for the government to record all Internet activity without authorization. Fourth, we must recognize that surveillance is harmful. Surveillance menaces intellectual privacy and increases the risk of blackmail, coercion, and discrimination; accordingly, we must recognize surveillance as a harm in constitutional standing doctrine.


Author: Neil M. Richards (March 25, 2013). Harvard Law Review, 2013

Source and download: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2239412##

Observation of quantum state collapse and revival due to the single-photon Kerr effect

Abstract:

To create and manipulate non-classical states of light for quantum information protocols, a strong, nonlinear interaction at the single-photon level is required. One approach to the generation of suitable interactions is to couple photons to atoms, as in the strong coupling regime of cavity quantum electrodynamic systems [1, 2]. In these systems, however, the quantum state of the light is only indirectly controlled by manipulating the atoms [3]. A direct photon–photon interaction occurs in so-called Kerr media, which typically induce only weak nonlinearity at the cost of significant loss. So far, it has not been possible to reach the single-photon Kerr regime, in which the interaction strength between individual photons exceeds the loss rate. Here, using a three-dimensional circuit quantum electrodynamic architecture [4], we engineer an artificial Kerr medium that enters this regime and allows the observation of new quantum effects. We realize a gedanken experiment [5] in which the collapse and revival of a coherent state can be observed. This time evolution is a consequence of the quantization of the light field in the cavity and the nonlinear interaction between individual photons. During the evolution, non-classical superpositions of coherent states (that is, multi-component ‘Schrödinger cat’ states) are formed. We visualize this evolution by measuring the Husimi Q function and confirm the non-classical properties of these transient states by cavity state tomography. The ability to create and manipulate superpositions of coherent states in such a high-quality-factor photon mode opens perspectives for combining the physics of continuous variables [6] with superconducting circuits. The single-photon Kerr effect could be used in quantum non-demolition measurement of photons [7], single-photon generation [8], autonomous quantum feedback schemes [9] and quantum logic operations [10].
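For orientation, the collapse-and-revival dynamics described in the abstract follow from the standard single-mode Kerr Hamiltonian; the equations below are the textbook form (notation and sign convention are mine, not taken from the paper):

    % Single-mode Kerr Hamiltonian: \hat{a} annihilates a photon,
    % K is the Kerr shift per photon.
    \[
      \hat{H}/\hbar \;=\; \omega\,\hat{a}^{\dagger}\hat{a}
        \;-\; \frac{K}{2}\,\hat{a}^{\dagger}\hat{a}^{\dagger}\hat{a}\hat{a}
    \]
    % In a frame rotating at \omega, an initial coherent state evolves as
    \[
      |\psi(t)\rangle \;=\; e^{-|\alpha|^{2}/2} \sum_{n=0}^{\infty}
        \frac{\alpha^{n}}{\sqrt{n!}}\, e^{\,i K n(n-1) t / 2}\, |n\rangle .
    \]
    % Since n(n-1) is always even, every Fock-state phase returns to unity
    % at t = 2*pi/K (full revival); at t = pi/K the phases split the state
    % into a two-component Schroedinger-cat superposition.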

Authors: Gerhard Kirchmair, Brian Vlastakis, Zaki Leghtas, Simon E. Nigg, Hanhee Paik, Eran Ginossar, Mazyar Mirrahimi, Luigi Frunzio, S. M. Girvin & R. J. Schoelkopf

Source and full article:
http://www.nature.com/nature/journal/v495/n7440/full/nature11902.html

SoundHound partners with Rdio to launch new Android tablet app with improved UI and music discovery


Music search company SoundHound has released a new tablet-optimized version of its Android app with a redesigned layout, improved music discovery and Rdio as a launch partner.
SoundHound, which has over 130 million users worldwide, says the app has been designed specifically with Google’s Nexus 7 and Nexus 10 tablets, as well as the Kindle Fire and Kindle Fire HD, in mind. Amazon tablet owners will need to get the app from the Amazon Appstore for Android, though the latest version doesn’t appear to have arrived in the store yet.
“By utilizing the tablet’s larger screen space and leveraging new tools from Google, users can much more fluidly navigate within the app and have access to more content in one location,” VP James Hom said in a statement.
The service employs music recognition technology, which can also process humming and singing, to let users quickly identify and discover new music.
SoundHound also highlighted Rdio as a launch partner for the updated app. The pair teamed up last December to release a new music mapping feature. Over the past few months, it has risen to become one of the app’s most popular features, according to SoundHound.
The full version 5.3 change log is as follows:
  • Optimized for medium and large tablets
  • Visually stunning music discovery
  • See LiveLyrics, the newest way to experience lyrics as they magically move in sync with the song
  • Explore charts, albums and artists with graphic imagery
  • Streamlined sharing to Facebook, Twitter, and more
  • Easily purchase the songs you love
Phone updates:
  • Find your songs quickly with SoundHound’s even faster music recognition!
  • Scroll through history more easily
By Josh Ong
Source:
http://thenextweb.com/apps/2013/03/29/soundhound-partners-with-rdio-to-launch-new-android-tablet-app-with-improved-ui-and-music-discovery/?fromcat=all

Cyberattacks Seem Meant to Destroy, Not Just Disrupt

American Express customers trying to gain access to their online accounts Thursday were met with blank screens or an ominous ancient typeface. The company confirmed that its Web site had come under attack.
The assault, which took American Express offline for two hours, was the latest in an intensifying campaign of unusually powerful attacks on American financial institutions that began last September and have taken dozens of them offline intermittently, costing millions of dollars.
JPMorgan Chase was taken offline by a similar attack this month. And last week, a separate, aggressive attack incapacitated 32,000 computers at South Korea’s banks and television networks.
The culprits of these attacks, officials and experts say, appear intent on disabling financial transactions and operations.
Corporate leaders have long feared online attacks aimed at financial fraud or economic espionage, but now a new threat has taken hold: attackers, possibly with state backing, who seem bent on destruction.
“The attacks have changed from espionage to destruction,” said Alan Paller, director of research at the SANS Institute, a cybersecurity training organization. “Nations are actively testing how far they can go before we will respond.”
Security experts who studied the attack said it was part of the same campaign that took down the Web sites of JPMorgan Chase, Wells Fargo, Bank of America and others over the last six months. A group that calls itself the Izz ad-Din al-Qassam Cyber Fighters has claimed responsibility for those attacks.
The group says it is retaliating for an anti-Islamic video posted on YouTube last fall. But American intelligence officials and industry investigators say they believe the group is a convenient cover for Iran. Just how tight the connection is — or whether the group is acting on direct orders from the Iranian government — is unclear. Government officials and bank executives have failed to produce a smoking gun.
North Korea is considered the most likely source of the attacks on South Korea, though investigators are struggling to follow the digital trail, a process that could take months. The North Korean government of Kim Jong-un has openly declared that it is seeking online targets in its neighbor to the south to exact economic damage.
Representatives of American Express confirmed that the company was under attack Thursday, but said that there was no evidence that customer data had been compromised. A representative of the Federal Bureau of Investigation did not respond to a request for comment on the American Express attack.
Spokesmen for JPMorgan Chase said they would not talk about the recent attack there, its origins or its consequences. JPMorgan has openly acknowledged previous denial of service attacks. But the size and severity of the most recent one apparently led it to reconsider.
The Obama administration has publicly urged companies to be more transparent about attacks, but often security experts and lawyers give the opposite advice.
The largest contingent of instigators of attacks in the private sector, government officials and researchers say, remains Chinese hackers intent on stealing corporate secrets.
The American and South Korean attacks underscore a growing fear that the two countries most worrisome to banks, oil producers and governments may be Iran and North Korea, not because of their skill but because of their brazenness. Neither country is considered a superstar in this area. The appeal of digital weapons is similar to that of nuclear capability: it is a way for an outgunned, outfinanced nation to even the playing field. “These countries are pursuing cyberweapons the same way they are pursuing nuclear weapons,” said James A. Lewis, a computer security expert at the Center for Strategic and International Studies in Washington. “It’s primitive; it’s not top of the line, but it’s good enough and they are committed to getting it.”
American officials are currently weighing their response options, but the issues involved are complex. At a meeting of banking executives, regulators and representatives from the departments of Homeland Security and Treasury last December, some pressed the United States to hit back at the hackers, while others argued that doing so would only lead to more aggressive attacks, according to two people who attended the meeting.
 
By Nicole Perlroth and David E. Sanger
Source and read more: