On the Fastrack


Unlocking the potential for Big Data in 2014

A popular internet meme goes something like this: ‘It’s 2013: where’s my flying car?’ The question, asked on t-shirts and coffee mugs around the world, debated on technology forums, or bandied about with friends at a café, essentially asks: where exactly are the benefits of this ‘better future’ we all signed up for?
The answer is simple: the benefits are in the data.

The age of data

Flying cars notwithstanding, 2013 was the year that data and its potential became widely acknowledged. It’s also the year that data was demonised—everything from government leaks to questions surrounding privacy and personal security has turned our idea of data from a neutral concept of tiny bits of information to a murky shadow that follows us around whether we want it to or not.
There’s a bit of irony here: the term itself comes from the Latin for ‘something given,’ yet more and more people are beginning to question where, how, and to whom they want to give their data. In a post-PRISM world, lines are drawn between those who see data as a threat and those who envision it as a way to solve the biggest challenges of our lifetime.
In truth, data presents an opportunity to make remarkable positive changes in our lives: the ability to improve the way we live, how we discover and solve problems, and most importantly how we approach and implement solutions that change everything from the way business is conducted to how natural disasters are predicted, prevented, and responded to.

Changing our lives

Using data to change our lives isn’t some unrealistic dream for the future—in fact, in some ways the idea has been around for quite a while. Meteorological data has been collected in various parts of the world since 3000 BCE; the 19th century introduced a range of library classification methods.
And with the holiday season in full swing, let’s not forget the ancient Roman census—another example of how data collection and interpretation have enjoyed a long history of impacting every facet of our lives.
The difference now is simply one of scale. We are at a turning point for humanity: for the first time in civilisation, there is enough information being collected to start applying mathematics to many of our greatest, as yet unsolved, problems.
Forget flying cars: data is already enabling scientists to cure disease, predict when we’ll get sick, and generate higher crop yields to feed our expanding population. A bit closer to home, you know data is working in your favour when your mobile suggests a better route for the evening drive after a traffic app picks up accidents or delays near your neighbourhood.

A new view of the future

But the proper, safe use of data is all about intentions, and the intelligence behind those intentions is crucial. Are we determined to use the goldmine at our fingertips in ways that fundamentally improve our future?
Imagine: with data analytic tools stepping in to automate the process of understanding vast amounts of information, humans are free to apply our energies to more creative tasks. How much more productive would you be if data were to automate the repetitive, auto-pilot moments of your day?
Connected, driverless cars, for example, would take away the stress of the morning rush hour, letting you log on and get ahead of the day, kick back and have a coffee, or work in a little extra playtime with the children.
It is no longer a question of whether there is enough data yet, or if technology can process it. For the first time, the answer to both is a resounding ‘yes’. Given that data is the key to unlocking solutions for the modern world we’ve dreamed of, we must be able to see past the Orwellian scaremongering and tap into the huge potential for progress and enhancement that understanding data provides.

Unlocking the potential of data

To understand this in the real world, look at retailers. On the whole, retailers have embraced data, using it to build a pattern of customers’ online shopping habits to improve communications and loyalty.
That means retail organisations no longer have to take a stab in the dark at our preferences and needs based on age and gender—they can now predict our purchasing patterns before we even decide, giving us personalised offers or product recommendations to encourage good spending habits, and eliminating the need for guesswork and assumption.
Making decisions based on assumptions is how most problems have historically been approached. We tend to decide on solutions that appeal to what a problem looks like, not necessarily what it is.
Coming at a solution this way means that the nuances—the human element—tend to get lost: again, people become simply numbers on a page. But it’s the nuances that are all-important, and although you may not have been looking for them, what you don’t know often represents the biggest opportunity.

Insight beyond insight

A method that flips the traditional approach on its head and gives an approximate answer to the exact problem—like Adaptive Behavioural Analytics—gives you exactly that insight into the unknown and unexpected. This means that things you weren’t necessarily looking for can now be discovered and acted upon. Let’s not forget that Columbus wasn’t setting out for America!
Adaptive Behavioural Analytics is clearing the way for us to approach data safely, with good intentions, and most importantly armed with the right tools to tease out nuances and solve exact problems. Its automated and adaptive characteristics level the playing field for businesses who want the most out of their data: there’s no longer a need for a team of scientists to support and maintain complicated analytics.
As a result, small companies can reap the benefits of this insight without spending precious resources on maintaining a data science team—organisations can spot road bumps before they appear, seize new opportunities before they arise, predict customer behaviour and habits to drive profits, and identify fraud before it happens.
And the possibilities that data—coupled with the right analytics—can deliver are limitless. Only our imagination draws the boundaries of what we can accomplish, and new applications are becoming feasible every day.
Why restrain data’s future potential with fear, when the right tools and intentions are driving incredible solutions to global problems? That sounds much more exciting than a jetpack or a flying car—and those who are poised to step into the forefront of this revolution agree.

By Martina King

N.S.A. Phone Surveillance Is Lawful, Federal Judge Rules

A federal judge in New York on Friday ruled that the National Security Agency’s program that is systematically keeping phone records of all Americans is lawful, creating a conflict among lower courts and increasing the likelihood that the issue will be resolved by the Supreme Court.
In the ruling, Judge William H. Pauley III, of the United States District Court for the Southern District of New York, granted a motion filed by the federal government to dismiss a challenge to the program brought by the American Civil Liberties Union, which had tried to halt the program.
Judge Pauley said that protections under the Fourth Amendment do not apply to records held by third parties, like phone companies.
“This blunt tool only works because it collects everything,” Judge Pauley said in the ruling.
“While robust discussions are underway across the nation, in Congress and at the White House, the question for this court is whether the government’s bulk telephony metadata program is lawful. This court finds it is,” he added.
A spokesman for the Justice Department said, “We are pleased the court found the N.S.A.'s bulk telephony metadata collection program to be lawful.” He declined to comment further.
Jameel Jaffer, the A.C.L.U. deputy legal director, said the group intended to appeal. “We are extremely disappointed with this decision, which misinterprets the relevant statutes, understates the privacy implications of the government’s surveillance and misapplies a narrow and outdated precedent to read away core constitutional protections,” he said.
The ruling comes nearly two weeks after Judge Richard J. Leon of Federal District Court for the District of Columbia said the program most likely violated the Fourth Amendment. As part of that ruling, Judge Leon ordered the government to stop collecting data on two plaintiffs who brought the case against the government.
In his ruling, Judge Leon said that the program “infringes on ‘that degree of privacy’ that the founders enshrined in the Fourth Amendment,” which prohibits unreasonable searches and seizures.
While Judge Leon ordered the government to stop collecting data on the two plaintiffs, he stayed the ruling, giving the government time to appeal the decision.
Judge Pauley, whose courtroom is just blocks from where the World Trade Center towers stood, endorsed arguments made in recent months by senior government officials — including the former F.B.I. director Robert S. Mueller III — that the program may have caught the Sept. 11, 2001, hijackers had it been in place before the attacks.
In the months before Sept. 11, the N.S.A. had intercepted several calls made to an Al Qaeda safe house in Yemen. But because the N.S.A. was not tracking all phone calls made from the United States, it did not detect that the calls were coming from one of the hijackers who was living in San Diego.
“Telephony metadata would have furnished the missing information and might have permitted the N.S.A. to notify the Federal Bureau of Investigation of the fact that al-Mihdhar was calling the Yemeni safe house from inside the United States,” Judge Pauley said, referring to the hijacker, Khalid al-Mihdhar.
Judge Pauley said that the “government learned from its mistake and adapted to confront a new enemy: a terror network capable of orchestrating attacks across the world.”
The government, he added, “launched a number of countermeasures, including a bulk telephony metadata collection program — a wide net that could find and isolate gossamer contacts among suspected terrorists in an ocean of seemingly disconnected data.”
The main dispute between Judge Pauley and Judge Leon was over how to interpret a 1979 Supreme Court decision, Smith v. Maryland, in which the court said a robbery suspect had no reasonable expectation that his right to privacy extended to the numbers dialed from his phone.
“Smith’s bedrock holding is that an individual has no legitimate expectation of privacy in information provided to third parties,” Judge Pauley wrote.
But Judge Leon said in his ruling that advances in technology and suggestions in concurring opinions in later Supreme Court decisions had undermined Smith. The government’s ability to construct a mosaic of information from countless records, he said, called for a new analysis of how to apply the Fourth Amendment’s prohibition of unreasonable government searches.
Judge Pauley disagreed. “The collection of breathtaking amounts of information unprotected by the Fourth Amendment does not transform that sweep into a Fourth Amendment search,” he wrote.
He acknowledged that “five justices appeared to be grappling with how the Fourth Amendment applies to technological advances” in a pair of 2012 concurrences in United States v. Jones. In that decision, the court unanimously rejected the use of a GPS device to track the movements of a drug suspect over a month. The majority in the 2012 case said that attaching the device violated the defendant’s property rights.
In one of the concurrences, Justice Sonia Sotomayor wrote that “it may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties.”
But Judge Pauley wrote that the 2012 decision did not overrule the one from 1979. “The Supreme Court,” he said, “has instructed lower courts not to predict whether it would overrule a precedent even if its reasoning has been supplanted by later cases.”
As for changes in technology, he wrote, customers’ “relationship with their telecommunications providers has not changed and is just as frustrating.”

NIST Seeks Input in Advance of Request for Proposals to Support National Cybersecurity Center of Excellence

The National Cybersecurity Center of Excellence (NCCoE) is inviting comments on a Partial Draft Request for Proposals (RFP) for a contractor to operate a Federally Funded Research and Development Center (FFRDC) to support the mission of the NCCoE. The FFRDC will be the first solely dedicated to enhancing the security of the nation's information systems.
The NCCoE was established in partnership with the state of Maryland and Montgomery County in February 2012. The center is a public-private entity that helps businesses secure their data and digital infrastructure by bringing together experts from industry, government and academia to find practical solutions for today's most pressing cybersecurity needs.
Following three Federal Register Notices announcing its intention to establish an FFRDC to support the cybersecurity center, NIST issued the Partial Draft RFP to give potential contractors a better understanding of the government's requirements. This process should also increase efficiency in proposal preparation and evaluation, negotiation and contract award.
FFRDCs are operated by a university or consortium of universities, other not-for-profit or nonprofit organization or an industrial firm, as an autonomous organization or as an identifiable separate operating unit of a parent organization. The centers work in the public interest and provide a highly efficient way to leverage and rapidly assemble resources and scientific and engineering talent, both public and private. By design, they have greater access to government and supplier data, and are required to be free from organizational conflicts of interest as well as bias toward any particular company, technology or product—key attributes, given the NCCoE's collaborative nature.
FFRDCs can have a number of structures that reflect various balances of contractor/government control and ownership. In the case of the NCCoE, federal staff will provide overall management of the center, and the FFRDC will support its mission through three major task areas: research, development, engineering and technical support; program/project management; and facilities management.
The Partial Draft RFP outlines NIST's plan to award a single Indefinite-Delivery/Indefinite-Quantity type contract with firm-fixed price, labor-hour or cost-reimbursement task orders. Specific work to be performed will be detailed in task orders. The proposed base period for the contract is 5 years, with a maximum amount of $400 million for that period.
Access the Partial Draft RFP at https://www.fbo.gov/spg/DOC/NIST/AcAsD/DRAFT_SB1341-14-RP-0005/listing.html. Interested parties have until 5 p.m. Eastern Time, Jan. 17, 2014, to submit their comments. NIST will hold an industry day Jan. 8, 2014, that will include discussion of the acquisition process and a question and answer session. Register for the industry day at https://www.ibbr.umd.edu/NCCoEFFRDCIndustry by Jan. 6, 2014.

Stanford Researchers: It Is Trivially Easy to Match Metadata to Real People

True, the telephony metadata that the NSA collects does not include customer names, but it's really no trouble to figure them out.

In defending the NSA's telephony metadata collection efforts, government officials have repeatedly resorted to one seemingly significant detail: This is just metadata—numbers dialed, lengths of calls. "There are no names, there’s no content in that database," President Barack Obama told Charlie Rose in June.
No names; just metadata.
New research from Stanford demonstrates the silliness of that distinction. Armed with very sparse metadata, Jonathan Mayer and Patrick Mutchler found it easy—trivially so—to figure out the identity of a caller.
Mayer and Mutchler are running an experiment in which volunteers agree to use an Android app, MetaPhone, that gives the researchers access to their metadata. Now, using that data, Mayer and Mutchler say it was hardly any trouble at all to figure out who the phone numbers belonged to—and they did it in just a few hours.
They write:
We randomly sampled 5,000 numbers from our crowdsourced MetaPhone dataset and queried the Yelp, Google Places, and Facebook directories. With little marginal effort and just those three sources—all free and public—we matched 1,356 (27.1%) of the numbers. Specifically, there were 378 hits (7.6%) on Yelp, 684 (13.7%) on Google Places, and 618 (12.3%) on Facebook.
What about if an organization were willing to put in some manpower? To conservatively approximate human analysis, we randomly sampled 100 numbers from our dataset, then ran Google searches on each. In under an hour, we were able to associate an individual or a business with 60 of the 100 numbers. When we added in our three initial sources, we were up to 73.
How about if money were no object? We don’t have the budget or credentials to access a premium data aggregator, so we ran our 100 numbers with Intelius, a cheap consumer-oriented service. 74 matched. Between Intelius, Google search, and our three initial sources, we associated a name with 91 of the 100 numbers.
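The percentages in the quoted results follow directly from the raw counts, and the figures also show why the per-source hits sum to more than the combined total. A minimal sketch of the arithmetic, using only the numbers reported above:

```python
# Figures reported in the MetaPhone experiment: hits per public
# directory out of a 5,000-number random sample, plus the combined
# (union) count across all three sources.
sample_size = 5000
hits = {"Yelp": 378, "Google Places": 684, "Facebook": 618}
combined_hits = 1356  # a number can match in more than one directory

# Per-directory and combined hit rates, in percent.
per_source_rate = {src: round(100 * n / sample_size, 1) for src, n in hits.items()}
combined_rate = round(100 * combined_hits / sample_size, 1)

# The per-source counts overlap: their plain sum exceeds the union,
# so some numbers appeared in more than one directory.
overlap_memberships = sum(hits.values()) - combined_hits

print(per_source_rate)
print(combined_rate)
print(overlap_memberships)
```

The combined rate works out to 27.1 percent, matching the researchers’ figure, with 324 duplicate directory memberships accounting for the gap between the summed and combined counts.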
Their results weren't perfect (and they note that the Intelius data was particularly spotty), but they didn't even try all that hard. "If a few academic researchers can get this far this quickly, it’s difficult to believe the NSA would have any trouble identifying the overwhelming majority of American phone numbers," they conclude.
It's also difficult to believe they wouldn't try. As federal district judge Richard Leon wrote in his decision last week, "There is also nothing stopping the Government from skipping the [National Security Letter] step altogether and using public databases or any of its other vast resources to match phone numbers with subscribers."

By Rebecca J. Rosen

Bitcoin exchanges in India suspend services as authorities crack down on the virtual currency

A number of Bitcoin exchanges in India have suspended their services days after the Reserve Bank of India (RBI) warned that use of the cryptocurrency could violate money laundering and financial terrorism laws.
Among the services that are pausing, INRBTC.com says “the only option left now is [to] suspend the services until further arrangements can be made,” while Buysellbitco.in will remain offline until “we can outline a clearer framework with which to work.”
A report from DNA today suggests that Mahim Gupta, who runs Buysellbitco.in, could be arrested if police “are able to establish money laundering” charges against him.
India’s crackdown comes a week after China banned local exchanges from dealing in its national currency, sending the value of Bitcoin plummeting.

By Jon Russell

Cardholder Authentication for the PIV Digital Signature Key

Draft NISTIR 7863

Familiarity Breeds Contempt: The Honeymoon Effect and the Role of Legacy Code in Zero-Day Vulnerabilities


Work on security vulnerabilities in software has primarily focused on three points in the software life-cycle: (1) finding and removing software defects, (2) patching or hardening software after vulnerabilities have been discovered, and (3) measuring the rate of vulnerability exploitation. This paper examines an earlier period in the software vulnerability lifecycle, starting from the release date of a version through to the disclosure of the fourth vulnerability, with a particular focus on the time from release until the very first disclosed vulnerability.

Analysis of software vulnerability data, including up to a decade of data for several versions of the most popular operating systems, server applications and user applications (both open and closed source), shows that properties extrinsic to the software play a much greater role in the rate of vulnerability discovery than do intrinsic properties such as software quality. This leads us to the observation that (at least in the first phase of a product’s existence), software vulnerabilities have different properties from software defects.

We show that the length of the period after the release of a software product (or version) and before the discovery of the first vulnerability (the ‘Honeymoon’ period) is primarily a function of familiarity with the system. In addition, we demonstrate that legacy code resulting from code re-use is a major contributor to both the rate of vulnerability discovery and the numbers of vulnerabilities found; this has significant implications for software engineering principles and practice.
By Sandy Clark, Stefan Frei, Matt Blaze, Jonathan Smith

What’s the future of mobile messaging apps? E-commerce, among other things

Mobile messaging apps have been one of the biggest success stories of 2013. The growth of Snapchat, WhatsApp, Kik, Line, WeChat and others has inspired Twitter to refocus on its near-jettisoned private messaging service, and Instagram to introduce ‘Direct’ messaging.
But what of the future? What’s next for messaging apps?
The answer is long and complicated (we’ll have some lengthier thoughts on that soon), but essentially the potential is near endless, since messaging apps are platforms and thus capable of delivering any kind of service that is in demand—be that gaming, music, marketing or, for this example, e-commerce. We’ve already seen Tencent’s WeChat messaging app used to sell Xiaomi smartphones in China—selling 150,000 units in under 10 minutes, no less—and there’s further proof of the potential today from Japanese chat app Line.
Line, which has over 300 million registered users worldwide, recently began flash sales in Thailand — one of its strongest overseas markets, where it has 20 million registered users — and the results have been impressive.
While we don’t know how many units were picked up, its first high-profile flash sale (with L’Oreal’s Maybelline makeup brand) today sold out in 13 minutes.
Similarly, a week earlier, a lower-profile flash sale of iPhone cases sold out in 25 minutes, as Tech In Asia reports. (Again, though, we have no details on how many units were sold, how many customers tried to buy them, etc.)
How is Line doing this? The company isn’t spamming users: they must opt in by following the flash sales account, which provides the details. It’s a brilliant model since, as we said about the sticker sales business, if flash sales don’t interest you, you need never know about or come into contact with them in your use of Line.
The company — which makes money selling stickers, providing a platform for opt-in company marketing and via a ‘connected’ games service — is quickly developing into an Internet juggernaut.
Line’s messaging service grossed $99 million in its last quarter of business, a big increase on the $58 million it recorded just one quarter previously. (For reference, Twitter’s most recent quarter of business brought in $168.6 million.)
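Those revenue figures imply quarter-over-quarter growth of roughly 70 percent. A quick, illustrative calculation using only the numbers quoted above:

```python
# Line's reported quarterly revenue, in millions of US dollars.
previous_quarter = 58
latest_quarter = 99

# Quarter-over-quarter growth, in percent.
growth_pct = round(100 * (latest_quarter - previous_quarter) / previous_quarter, 1)
print(growth_pct)
```

That works out to about 70.7 percent growth in a single quarter.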

By Jon Russell

Caltech Announces Open Access Policy

On January 1, 2014, a new open-access policy for faculty's scholarly writings will take effect at the California Institute of Technology (Caltech). According to this policy, approved by the faculty at their June 10 meeting, all faculty members will automatically grant nonexclusive rights to the Institute to disseminate their scholarly papers, making wider distribution of their work possible and eliminating confusion about copyright when posting research results on Caltech's websites.
Open-access policies were pioneered by Harvard University in 2008, and since then many research universities have adopted similar policies, including Duke University, Princeton University, the University of California, and the Massachusetts Institute of Technology. The principal rationale for open-access policies is to ensure maximum availability of scholarly work in a timely fashion. As John Dabiri, chair of the Caltech faculty and professor of aeronautics and bioengineering, explains, "The decision of our faculty to make their papers freely accessible online will ensure that the global community of researchers, students, and casual followers of science and engineering will learn about our work at earlier stages, enabling them to put it to use for the benefit of society."
Caltech faculty produce more than 2,000 papers each year spread out over hundreds of scholarly journals, all of which have their own policies regarding the author's copyright in the material, how and when it can be exercised, and by whom. This has led to substantial confusion regarding exactly when a particular paper can be posted to a faculty member's website or to CaltechAUTHORS, the Institute's online repository. Juggling different publishers' demands is not only cumbersome; the consequences of failing to do so may be significant. Indeed, some publishers, seeking to protect their own investment in scholarly work, have authorized third-party agencies to find articles posted in violation of their contractual rights and to issue Digital Millennium Copyright Act takedown notices that threaten legal action if articles are not removed from the web.
Caltech's new open-access policy will simplify the copyright status of scholarly papers and, as University Librarian Kimberly Douglas says, put faculty "in the driver's seat, to empower them to do what makes sense for them." Faculty may still grant exclusive rights to their publishers, either permanently or for an embargoed period, but to do so, they must request a waiver from the open-access policy. At other institutions with open-access policies, such as MIT and Harvard, faculty have requested waivers for about 5 percent of the total number of papers produced, usually to comply with the requirement of a few publishers that want a formal waiver in order to even consider manuscripts for publication.
Several research groups at Caltech, including the Institute for Quantum Information and Matter (IQIM), the Kavli Neuroscience Institute (KNI), and Theoretical Astrophysics Including Relativity and Cosmology (TAPIR), already point to the Institute's online repositories for relevant faculty papers. Under the open-access policy, Douglas explains, other research groups will likely take the same step, posting their papers to their own research websites to facilitate public access. Articles will also be available globally through the CaltechAUTHORS database. Indeed, Caltech's new open-access policy is partly motivated by a February 2013 directive from the United States Office of Science and Technology Policy requiring federal agencies to develop plans to make the final results of federally funded research freely available within a year of publication.
Caltech's new policy "continues the opening of access to Caltech research results, which began in 2003 with making PhD theses available to the whole world, both new ones that are produced in electronic form and old ones that have been scanned and made available online," says Rick Flagan, chair of the Caltech faculty library committee and Irma and Ross McCollum-William H. Corcoran Professor of Chemical Engineering and Environmental Science and Engineering. "Open access means that papers that would previously have been locked behind a publisher's gatekeeper will now be freely accessible to all."
"Ideas are most powerful when they are free to move, not held behind a screen until they are purchased from a vendor," concurs Brent Fultz, a professor of materials science and applied physics and member of the faculty board. "The new open-access policy at Caltech increases the impact of our ideas by better connecting them to the information society around us."
Of course, Caltech faculty will continue to publish in peer-reviewed academic journals, and publishers will continue to own the rights to their own formatted versions, but final manuscripts in the author's format will be available through Caltech. And, says Douglas, "If they want to, authors can make their personal versions as available as possible, either independently or through Caltech."
Whatever the future holds for scholarly communication and publishing, the motivations underlying Caltech's new open-access policy are essentially traditional scholarly ones.
"Our goal at Caltech is to discover things that can transform our world," says Morteza Gharib, vice provost for research and the Hans Liepmann Professor of Aeronautics and Bioinspired Engineering. "This objective also means that we take responsibility for ensuring that people everywhere have a reasonable chance of learning about our work."


2014 Cybersecurity Forum to Focus on Trusted Computing, Security Automation and Information Sharing

The 2014 Cybersecurity Innovation Forum, to be held January 28-30, 2014, at the Baltimore Convention Center in Baltimore, Md., will focus on the existing threat landscape and provide presentations and keynotes on current and emerging practices, technologies and standards to protect the nation’s infrastructure, citizens and economic interests from cyberattack.

The goal of the forum—sponsored by the National Institute of Standards and Technology’s (NIST) National Cybersecurity Center of Excellence—is to identify a roadmap for active cyber defense through integrating trusted computing, information sharing and security automation technologies. Meeting organizers are bringing together expertise from the Trusted Computing and Security Automation conferences and discussions on information sharing into a single event. Merging several cybersecurity conferences takes advantage of the synergy of a broader audience of public- and private-sector cybersecurity employees.

Keynote speakers include Goldman Sachs Managing Director and Chief Information Risk Officer Phil Venables, Special Assistant to the President and Cybersecurity Coordinator Michael Daniel, and Chief Information Security Officer for the County of Los Angeles Robert Pittman. Other keynotes will cover industry views of the security threat, the Presidential Policy Directive on Critical Infrastructure Security and Resilience (PPD 21), impacts of PPD 21 and Executive Order 13636 on improving the cybersecurity of critical infrastructure, and the U.S. government’s collaboration with industry to secure our nation’s cybersecurity.

The forum offers four tracks—Trusted Computing, Security Automation, Information Sharing, and Research—where attendees will hear from government and industry experts and have opportunities for collaboration and networking.

The Department of Homeland Security, the National Security Agency and NIST are organizing the event. More information about the event and how to register can be found at www.nist.gov/itl/csd/2014-cybersecurity-innovation-forum.cfm.


Who Owns the World’s Biggest Bitcoin Wallet? The FBI

Who owns the single largest Bitcoin wallet on the internet? The U.S. government.
In September, the FBI shut down the Silk Road online drug marketplace, and it started seizing bitcoins belonging to the Dread Pirate Roberts — the operator of the illicit online marketplace, who they say is an American man named Ross Ulbricht.
The seizure sparked an ongoing public discussion about the future of Bitcoin, the world’s most popular digital currency, but it had an unforeseen side-effect: It made the FBI the holder of the world’s biggest Bitcoin wallet.
The FBI now controls more than 144,000 bitcoins that reside at a bitcoin address that consolidates much of the seized Silk Road bitcoins. Those 144,000 bitcoins are worth close to $100 million at Tuesday’s exchange rates. Another address, containing Silk Road funds seized earlier by the FBI, contains nearly 30,000 bitcoins ($20 million).
That doesn’t make the FBI the world’s largest bitcoin holder. This honor is thought to belong to bitcoin’s shadowy inventor Satoshi Nakamoto, who is estimated to have mined 1 million bitcoins in the currency’s early days. His stash is spread across many wallets. But it does put the federal agency ahead of Cameron and Tyler Winklevoss, who in July said that they’d cornered about 1 percent of all bitcoins (there are 12 million bitcoins in circulation).
In the fun house world of bitcoin tracking, it’s hard to say anything for certain. But it is safe to say that there are new players in the Bitcoin world — although not as many people are buying bitcoins as one might guess from all of the media attention.
Satoshi stores his wealth in a large number of bitcoin addresses, most of them holding just 50 bitcoins. It’s a bit of a logistical nightmare, but most savvy Bitcoin investors spread out their bitcoins across multiple wallets. That way if they lose the key to one of them or get hacked, all is not lost.
“It’s easier to keep track of one address, but it’s also most risky that way,” says Andrew Rennhack, the operator of the Bitcoin Rich List, a website that tracks the top addresses in the world of bitcoin.
According to Rennhack, the size of the bitcoin universe has expanded over the past year, but the total number of people on the planet who hold at least one bitcoin is actually pretty small — less than a quarter-million people. Today, there are 246,377 bitcoin addresses with at least one bitcoin in them, he says. And many people keep their bitcoins in more than one address. A year ago, that number was 159,916, he says.
Although some assume that the largest Bitcoin addresses are held by bitcoin dinosaurs — miners who got into the game early on, when it was easy to rack up thousands of bitcoins with a single general-purpose computer — almost all of the top 10 bitcoin addresses do not fit that profile, says Sarah Meiklejohn, a University of California, San Diego, graduate student.
She took a look at how many transactions in these wallets seemed to match the profile of early-day miners and found that only one of them really fit the bill.
The rest seem to belong to what Meiklejohn calls Bitcoin’s “nouveau riche”: People who are accumulating bitcoins from non-mining sources. “What you’re seeing is this influx of a different kind of wealth,” she says.
Because most bitcoin addresses haven’t been publicly identified the way the FBI’s has, it’s hard to say exactly what makes up the new Bitcoin top 10. Meiklejohn says they’re likely to include wallets created by up-and-coming Bitcoin exchanges or businesses. One of them is the wallet that’s thought to contain 96,000 bitcoins stolen from the Silk Road successor, Sheep Marketplace.

By Robert McMillan

Proposal, with its statement of reasons, for a Law Amending the Law on the Regulation of Publications on the Internet and Combating Crimes Committed by Means of Such Publications


NET EFFECTS: The Past, Present, and Future Impact of Our Networks

Tom Wheeler, FCC chairman, writes in his introduction:

Throughout my professional life I have been involved with the introduction of new technologies. And though my day job was to chase the future, history has been an abiding hobby. One of the ways I have tried to understand what lies beyond the next hill in the landscape of the communications revolution is to study the advent of similar periods in the past.

Over the last several years I have been investigating the network revolutions of history. I called the project “From Gutenberg to Google: The History of Our Future.” The goal was to assemble the work into a book. When President Obama nominated me to be Chairman of the Federal Communications Commission (FCC) the project was put on hold. Nevertheless, this review has taught me a lot about the present realities of our changing network environment.
Author: Tom Wheeler
Publication date: December 4, 2013.

Assessment of the National and International Situation Regarding the Protection of Personal Data, and Audit Activities Carried Out within the Scope of Information Security and Personal Data Protection

State Supervisory Council (Devlet Denetleme Kurulu), Audit Report


CopyrightX applications now open

After a successful first experience in 2013, Professor William Fisher will offer the networked course, CopyrightX, again this spring, under the auspices of Harvard Law School, the HarvardX distance-learning initiative, and the Berkman Center for Internet & Society. The course explores the current law of copyright and the ongoing debates concerning how that law should be reformed. For more information, please see http://copyx.org/.

To apply, please go to: brk.mn/applycx14

Like the inaugural offering, in 2014 CopyrightX will offer an online course to approximately 500 participants, divided into 20 “sections,” each taught by a Harvard Teaching Fellow. This group will constitute one of three layers within CopyrightX: the other two are the Harvard Law School course on copyright and “satellite” sections taught in countries other than the United States. Participants in each layer will have the opportunity to engage with and learn from the participants in the other layers.

The 500 students in the online component of CopyrightX will be selected through an open application process that runs from December 13 to December 23. We welcome diverse and international participation; please read about the admissions process here: http://copyx.org/logistics/admission/.

Internet Monitor 2013: Reflections on the Digital World

Internet Monitor 2013: Reflections on the Digital World, the Internet Monitor project's first-ever annual report, is a collection of essays from roughly two dozen experts around the world, including Ron Deibert, Malavika Jayaram, Viktor Mayer-Schönberger, Molly Sauter, Bruce Schneier, Ashkan Soltani, and Zeynep Tufekci. The report highlights key events and recent trends in the digital space.
To mirror the collaborative spirit of the initiative, we compile—based on an open invitation to the members of the extended Berkman community—nearly two dozen short essays from friends, colleagues, and collaborators in the United States and abroad.
The result is intended for a general interest audience and invites reflection and discussion of the past year’s notable events and trends in the digitally networked environment. Our goal is not to describe the “state of the Internet” in any definitive way, but rather to highlight and discuss some of the most fascinating developments and debates over the past year worthy of broader public conversation.
Our contributors canvass a broad range of topics and regions—from a critique of India’s Unique Identity project to a review of corporate transparency reporting to a first-person report from the Gezi Park protests. A common thread explores how actors within government, industry, and civil society are wrestling with the changing power dynamics of the digital realm.
Download report:

How the Bitcoin protocol actually works

Many thousands of articles have been written purporting to explain Bitcoin, the online, peer-to-peer currency. Most of those articles give a hand-wavy account of the underlying cryptographic protocol, omitting many details. Even those articles which delve deeper often gloss over crucial points. My aim in this post is to explain the major ideas behind the Bitcoin protocol in a clear, easily comprehensible way. We’ll start from first principles, build up to a broad theoretical understanding of how the protocol works, and then dig down into the nitty-gritty, examining the raw data in a Bitcoin transaction.
Understanding the protocol in this detailed way is hard work. It is tempting instead to take Bitcoin as given, and to engage in speculation about how to get rich with Bitcoin, whether Bitcoin is a bubble, whether Bitcoin might one day mean the end of taxation, and so on. That’s fun, but severely limits your understanding. Understanding the details of the Bitcoin protocol opens up otherwise inaccessible vistas. In particular, it’s the basis for understanding Bitcoin’s built-in scripting language, which makes it possible to use Bitcoin to create new types of financial instruments, such as smart contracts. New financial instruments can, in turn, be used to create new markets and to enable new forms of collective human behaviour. Talk about fun!
I’ll describe Bitcoin scripting and concepts such as smart contracts in future posts. This post concentrates on explaining the nuts-and-bolts of the Bitcoin protocol. To understand the post, you need to be comfortable with public key cryptography, and with the closely related idea of digital signatures. I’ll also assume you’re familiar with cryptographic hashing. None of this is especially difficult. The basic ideas can be taught in freshman university mathematics or computer science classes. The ideas are beautiful, so if you’re not familiar with them, I recommend taking a few hours to get familiar.
It may seem surprising that Bitcoin’s basis is cryptography. Isn’t Bitcoin a currency, not a way of sending secret messages? In fact, the problems Bitcoin needs to solve are largely about securing transactions — making sure people can’t steal from one another, or impersonate one another, and so on. In the world of atoms we achieve security with devices such as locks, safes, signatures, and bank vaults. In the world of bits we achieve this kind of security with cryptography. And that’s why Bitcoin is at heart a cryptographic protocol.
My strategy in the post is to build Bitcoin up in stages. I’ll begin by explaining a very simple digital currency, based on ideas that are almost obvious. We’ll call that currency Infocoin, to distinguish it from Bitcoin. Of course, our first version of Infocoin will have many deficiencies, and so we’ll go through several iterations of Infocoin, with each iteration introducing just one or two simple new ideas. After several such iterations, we’ll arrive at the full Bitcoin protocol. We will have reinvented Bitcoin!
This strategy is slower than if I explained the entire Bitcoin protocol in one shot. But while you can understand the mechanics of Bitcoin through such a one-shot explanation, it would be difficult to understand why Bitcoin is designed the way it is. The advantage of the slower iterative explanation is that it gives us a much sharper understanding of each element of Bitcoin.
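One of the core ideas those iterations build toward is proof-of-work: finding a nonce whose hash of the block data falls below a target. The sketch below is a toy version under stated simplifications (a string stands in for Bitcoin's 80-byte binary block header, and the difficulty is fixed rather than periodically adjusted), but the double-SHA-256 search loop is the same shape as the real thing:

```python
import hashlib

def mine(header: str, difficulty_bits: int) -> int:
    """Search for a nonce such that double-SHA-256(header + nonce)
    starts with `difficulty_bits` zero bits (a toy of Bitcoin mining)."""
    target = 2 ** (256 - difficulty_bits)  # hashes below this value win
    nonce = 0
    while True:
        payload = f"{header}{nonce}".encode()
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# With 16 difficulty bits, this takes about 2**16 hash attempts on average;
# real Bitcoin difficulty is vastly higher.
nonce = mine("toy-block-header", difficulty_bits=16)
print(nonce)
```

Note the asymmetry that makes the scheme work: finding the nonce takes many attempts, but anyone can verify it with a single hash.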
Finally, I should mention that I’m a relative newcomer to Bitcoin. I’ve been following it loosely since 2011 (and cryptocurrencies since the late 1990s), but only got seriously into the details of the Bitcoin protocol earlier this year. So I’d certainly appreciate corrections of any misapprehensions on my part. Also in the post I’ve included a number of “problems for the author” – notes to myself about questions that came up during the writing. You may find these interesting, but you can also skip them entirely without losing track of the main text.

By Michael Nielsen
Source and read the full article:

How to Use The Cloud for Better, More Efficient Healthcare

Discover how cloud computing is reinventing healthcare by providing patients with an electronic version of their medical records.

By Jane Munn
Source and read:

Photos in direct messages and swipe between timelines

Source and read:

Japan Tops World In Mobile Apps Revenue

Data Tracker Says Japan Spends 10% More Than U.S. on Apps.

Source and more:

YouTube reveals its top 10 videos for 2013 (What Does YouTube Say?)

There are more than 6 billion hours of video watched on YouTube each month, so it’s certainly a barometer of what may be viral or popular in the mainstream. With that, the Google-owned video site is once again releasing its annual list of the top videos and channels of 2013.
Perhaps unsurprisingly, Ylvis’s The Fox (What Does the Fox Say?) takes top billing as the top trending video, while Psy’s Gentleman ranks as the top trending music video of the year.
Source and more:


Primo - Teaching programming logic to children age 4 to 7

Primo is a playful physical programming interface that teaches children programming logic without the need for literacy.

Trusted Identities to Secure Critical Infrastructure

Every week seems to bring news of yet another website hacked, user accounts compromised, or personal data stolen or misused. Just recently, many Facebook users were required to change their passwords because of hacks at Adobe, a completely different company. Why? Because hackers know that users frequently re-use the same password at multiple websites. This is just one of many reasons that the system of passwords as it exists today is hopelessly broken. And while today it might be a social media website, tomorrow it could be your bank, health services providers, or even public utilities. Two complementary national initiatives aim to do better before the impacts of this problem grow even worse.
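To see why cross-site password re-use is so corrosive, consider how a well-run site stores credentials. The minimal sketch below (illustrative only, not part of the NSTIC itself) uses salted, slow hashing: per-site salts mean the hashes leaked in one breach are useless elsewhere, yet a re-used plaintext password, once cracked, still opens every account, which is exactly the gap trusted-identity solutions aim to close:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Derive a slow, salted hash (PBKDF2-HMAC-SHA256) suitable for storage."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    """Constant-time check of a login attempt against the stored hash."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

# The same password, salted differently at two sites, yields unrelated hashes.
salt_a, hash_a = hash_password("hunter2")
salt_b, hash_b = hash_password("hunter2")
assert hash_a != hash_b
# But the re-used plaintext still verifies at both sites, so one cracked
# password compromises every account that shares it.
assert verify("hunter2", salt_a, hash_a) and verify("hunter2", salt_b, hash_b)
```

In other words, even perfect password storage cannot protect a user who re-uses credentials; that failure mode is inherent to shared secrets, not to any one site's engineering.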

Developed in 2011, the National Strategy for Trusted Identities in Cyberspace (NSTIC) is a key Administration initiative to work collaboratively with the private sector, advocacy groups, public sector agencies, and other organizations to improve the privacy, security, and convenience of sensitive online transactions. NSTIC calls for the creation of an Identity Ecosystem – an online environment in which individuals can trust each other because they follow agreed-upon standards to authenticate their digital identities. What this means for individual users is that they will be able to choose from a variety of more secure, privacy-enhancing identity solutions that they can use in lieu of passwords for safer, more convenient experiences everywhere they go online.

The NSTIC also helps multiple sectors in the online marketplace, because trusted identities provide a variety of benefits: enhanced security, improved privacy, new types of transactions, reduced costs, and better customer service. The National Institute of Standards and Technology (NIST) is leading implementation of the NSTIC.

NIST is also leading the development of a voluntary framework for reducing cyber risks to critical infrastructure. This latter work is being done in response to Executive Order 13636, “Improving Critical Infrastructure Cybersecurity,” which President Obama issued in recognition of the fact that the national and economic security of the United States depends on the reliable functioning of critical infrastructure. On October 29, NIST released a preliminary version of the Cybersecurity Framework, developed using information collected through the Request for Information (RFI) that was published in the Federal Register on February 26, 2013, a series of public workshops, and other discussions.

How are these two national cybersecurity efforts related? While the Executive Order focuses on critical infrastructure, managing identities is a foundational enabler for cybersecurity efforts across all sectors. The NSTIC complements the goals and objectives of President Obama’s Executive Order by promoting the use of trusted identity solutions in lieu of passwords, which will help strengthen the cybersecurity of critical infrastructure. Trusted identities offer owners and operators of critical infrastructure more secure, privacy-enhancing, and easy-to-use solutions to help secure IT systems from potential attack.

A key NSTIC initiative is facilitating the work of a private sector-led Identity Ecosystem steering group, which is working to develop an Identity Ecosystem Framework in which different market sectors can implement convenient, interoperable, secure, and privacy-enhancing trusted solutions for digital identity, including within critical infrastructure. This group currently has more than 200 members, including many from critical infrastructure sectors; membership is currently free and we encourage all stakeholders to get involved. Like the NSTIC, the Cybersecurity Framework will result in flexible, voluntary guidelines for industry to implement better cybersecurity practices, with the private sector offering a marketplace of tools and technologies. A key element of success for both the NSTIC and the Cybersecurity Executive Order will be market adoption of their primary deliverables; accordingly, implementation activities around both initiatives include the development of mutually beneficial legal, economic, and other incentives to promote deployment.

To ensure that the Cybersecurity Framework takes full advantage of the trusted identity solutions marketplace, we strongly encourage input on the preliminary Cybersecurity Framework. On October 29, 2013, NIST announced a 45-day public comment period on the preliminary Framework in the Federal Register. Comments are due no later than 5pm EST on December 13, 2013. (Click here for more information on how to submit comments.)

We look forward to your valuable input on how trusted identities can help secure the nation’s critical infrastructure.

By Michael Daniel

This Is the MIT Surveillance Video That Undid Aaron Swartz

The door to the network closet pops open and a slender figure enters, a bicycle helmet hanging at his side. He sheds his backpack and pulls out a cardboard box containing a small hard drive, then kneels out of frame. After about five minutes, he stands, turns off the lights and furtively exits the closet.
This scene, captured by a video camera hidden in a wiring closet at MIT, was the beginning of a probe that led to federal charges against the late coder and activist Aaron Swartz. The video, along with dozens of other documents related to the case, has been released to the public for the first time through my Freedom of Information Act lawsuit against the U.S. Secret Service.
The video was made in January 2011, near the end of a months-long cat-and-mouse game between MIT personnel and a then-unknown user who’d been downloading millions of articles from a service called JSTOR, which provides searchable copies of academic journals online. MIT has a subscription that allows free access to students from MIT’s public network. Someone had been sporadically using that access to automatically download one article after another, at times so aggressively that JSTOR’s website was slowed.
On January 4, 2011, MIT technicians traced the downloads to the closet in the basement of Building 16. There they found an Acer laptop wired to MIT’s network and concealed under a box. They called the police, and after some discussion decided to leave the laptop in place so as not to alert the perpetrator. MIT technicians planted the IP camera to see who came back for it.
Those few minutes of glitchy video — capturing Swartz swapping hard drives on his stashed laptop — would prove fateful. After a second visit to the closet two days later, Swartz was arrested nearby and identified.
The JSTOR hack was not Swartz’s first experiment in liberating costly public documents. In 2008, the federal court system briefly allowed free access to its court records system, Pacer, which normally charged the public eight cents per page. Theoretically, the free access was only available from computers at 17 libraries across the country; Swartz used one of the library passwords to cycle sequentially through case numbers, requesting a new document from Pacer every three seconds, and uploading it to the cloud. Swartz pulled nearly 20 million pages of public court documents, which are now available for free on the Internet Archive.
The FBI investigated that hack, but in the end no charges were filed. Swartz wasn’t so lucky with the Secret Service, which handled the MIT investigation. With extensive cooperation from MIT, the case was pressed by federal prosecutors in Boston, who charged Swartz with computer and wire fraud. Swartz potentially faced seven years in prison if convicted at trial, though he rejected plea bargains of between four and six months in custody.
His jury trial was looming when Swartz took his own life in January, 2013.
MIT faced a firestorm of criticism in the wake of Swartz’s suicide. Critics, including Swartz’s family and prominent MIT alumni, said the institution betrayed its own principles by not advocating for less harsh treatment of Swartz.
Looking at the video, it’s easy to see what MIT and the Secret Service presumably saw — a furtive hacker going someplace he shouldn’t go, doing something he shouldn’t do.
But photos from the putative crime scene, also released by the Secret Service, add context missing from the video: a concrete support in the network closet is crammed with a jumble of Sharpie graffiti dating back to the early 1980s — earlier generations of hackers at the institution that invented hacking, going places they shouldn’t go, doing things they shouldn’t do, leaving their mark at the very spot where, on January 4, 2011, MIT lost its tolerance for such behavior.
The Secret Service has also released about 400 pages of documents about Swartz. All but 147 pages are copies of already-public court filings. The new material can be found here. Update, 12.06.13: You can find all nine of the newly-released videos here, and the 177 photos here.

By Kevin Poulsen

Why Cognition-as-a-Service is the next operating system battlefield

The Semantic Web may have failed, but higher intelligence is coming to applications anyway, in another form: Cognition-as-a-Service (CaaS). And this may just be the next evolution of the operating system.
CaaS will enable every app to become as smart as Siri in its own niche. CaaS-powered apps will be able to think and interact with consumers like intelligent virtual assistants — they will be “cognitive apps.” You will be able to converse with cognitive apps, ask them questions, give them commands — and they will be able to help you complete tasks and manage your work more efficiently.
For example, your calendar will become a cognitive app — it will be able to intelligently interact with you to help you manage your time and scheduling like a personal assistant would — but the actual artificial intelligence that powers it will come from a third-party, cloud-based cognitive platform.
Cognitive apps will not be as intelligent as humans anytime soon, and they probably will not be anything like the 20th century ideas of humanoid robots. But they’re going to be a lot smarter than the software of today.

Cognition in the clouds

But the key is that the intelligence that powers cognitive apps will come from cloud-based platforms that host their brains — the apps themselves won’t really have to be that smart on their own. This means that truly vast, ever-increasing intelligence will be available via APIs to all kinds of apps, and to the full range of consumer appliances, devices, and even the Internet of Things. All apps, and even things, will start to become cognitive.
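No public CaaS API exists yet in the form the article envisions, so the sketch below is purely hypothetical: the `CognitiveApp` class, the query format, and `fake_caas` are all invented names illustrating the delegation pattern, with a thin app forwarding every utterance to a remote brain (here stubbed locally in place of an HTTPS call to a vendor's cognition endpoint):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CognitiveApp:
    """A thin app holding no intelligence of its own; `ask` stands in
    for the transport to a cloud-hosted cognition service."""
    name: str
    ask: Callable[[str], str]

    def handle(self, utterance: str) -> str:
        # Domain context travels with the query, so a shared platform
        # can specialize its answer to this app's niche.
        return self.ask(f"[app={self.name}] {utterance}")

# A local stub standing in for the cloud platform during development.
def fake_caas(query: str) -> str:
    if "free slot" in query:
        return "Tomorrow 14:00-15:00 is open."
    return "I don't know yet."

calendar = CognitiveApp(name="calendar", ask=fake_caas)
print(calendar.handle("find me a free slot"))  # prints "Tomorrow 14:00-15:00 is open."
```

The design choice mirrors the article's point: swapping `fake_caas` for a different vendor's endpoint changes which CaaS ecosystem the app lives in without touching the app itself, which is why the platform layer, not the app layer, is the strategic high ground.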
Even in the last few months several interesting announcements were made that all signify this trend:
  • The startup Vicarious has developed a new form of AI that is capable of reading CAPTCHA images, the most widely used test for differentiating human and computer actions online.
  • Next IT announced the Alme platform for virtual healthcare assistants, furthering the development of intelligent virtual assistants with domain-specific expertise.
  • Google is finally starting to make search smarter by incorporating more contextual conversational capabilities for queries, and is even challenging Siri directly within iOS.
  • Stephen Wolfram announced the Wolfram Language, which models the world and combines both programs and data — what he calls a new “language for the global brain” — and will essentially be able to weave sophisticated computational knowledge into everything.
  • And finally, IBM announced it is going to allow third-party developers to build cognitive apps that leverage cognition hosted in the cloud on Watson.
All of these announcements foretell the development of platforms that will allow apps and services to all function more intelligently and intuitively. In fact, the coming competition between different CaaS platforms may be the 2015–2030 equivalent of the operating system wars of the 1980s and 1990s. This means CaaS platforms are the strategic high ground in the battle to own the future operating system of the Web and mobile applications — the operating system of the global, networked brain itself.

The new OS battle
We already see growing signs of heated competition between Apple and Google to make smarter mobile virtual assistants. How long before both companies open up APIs to their CaaS platforms to the rest of their ecosystems?
App developers will soon need to choose which CaaS ecosystem to build on — Google, Apple, Facebook, Microsoft, or maybe even Wolfram’s new Wolfram Language ecosystem.
In the long run, however, a more vendor-neutral cognition platform may emerge as the winner: one that is more like Amazon Web Services in that it just provides the underlying service and doesn’t compete with the third-party apps that use it. This could come from Amazon, or perhaps Wolfram. CaaS platforms may eventually even be open-sourced and made widely available — perhaps via a Linux equivalent for the cognitive operating system era that might borrow from many of the original ideas and standards of the Semantic Web.

By Nova Spivack, Bottlenose
Source and more:

Facebook, Google lead tech industry group demanding government surveillance reform

A group of the world’s most powerful Internet companies has come together to form the Reform Government Surveillance group, an organization pushing for wide-scale changes to US government surveillance in light of the NSA revelations made by whistleblower Edward Snowden.
Facebook, Google, Microsoft, Apple, AOL, LinkedIn, Twitter and Yahoo have formed the alliance to push their shared belief that “it is time for the world’s governments to address the practices and laws regulating government surveillance of individuals and access to their information.”
The organization is pledging its support to sweeping new reform proposed by Washington politicians, and its website includes five central principles for change:
  1. Limiting governments’ authority to collect users’ information
  2. Oversight and accountability
  3. Transparency about government demands
  4. Respecting the free flow of information
  5. Avoiding conflicts among governments
An open letter to governments urges the US to “take the lead and make reforms that ensure that government surveillance efforts are clearly restricted by law, proportionate to the risks, transparent and subject to independent oversight.”
The website includes quotes from each company’s CEO, but a blog post from Microsoft General Counsel and Executive Vice President, Legal & Corporate Affairs Brad Smith further explains that the NSA revelations have put the people’s trust in governments “at risk,” and the group is working to institute a range of principles to promote greater transparency and more ethical policies.
Reform Government Surveillance


NIST Cloud Computing Industry Day


The National Institute of Standards and Technology (NIST) will be hosting an Industry Day for Cloud Computing to engage vendors and federal employees in a discussion about the opportunities and challenges of Cloud Computing in the Federal Government.
The Industry Day will focus on procurement strategies around cloud services, as well as Infrastructure as a Service capabilities in the marketplace to solve various NIST scientific use cases.


United States Department of Commerce (DOC)
National Institute of Standards and Technology (NIST)
Cloud Computing Industry Day
December 16, 2013
7:30 am – 2:00 pm
NIST Gaithersburg, MD 20899
Red Auditorium
7:30 am - 8:30 am – Registration (check-in) and vendor exhibit set-up  
9:00 am – 9:30 am – Welcome and introductions            
Speaker: Del Brockett, CIO, NIST  
9:30 am – 10:15 am – Panel Session 1: Government Procurement Strategies for Cloud Services            
Moderator: Sherwin Mc Adam, Cloud Computing Program Manager, NIST
  • Divya Soni, Contract Specialist, NIST
  • Marcelo Olascoaga, Cloud Computing Services Program Management Office, GSA
  • Seth Rogier, Director of Strategic Sourcing, U.S. Department of Commerce
  • Mark Langstein, Contract Law Division, U.S. Department of Commerce
  • Daniel McCrae, Director of Service Delivery Division, NOAA
  • Doug Vandyke, GM Civilian Government, AWS  
10:30 am – 11:45 am – Panel Session 2: IaaS Solutions for Scientific Use Cases            
Moderator: Sherwin Mc Adam, Cloud Computing Program Manager, NIST
  • Craig Atkinson, Chief Technical Officer, JHC Technology
  • Tim Bixler, AWS Senior Manager Solution Architects
  • Jon Guyer, Scientific Advisor, NIST
  • Carolyn Rowland, NIST
  • Przemek Klosowski, NIST
  • Marc Salit, NIST  
12:00 pm – 2:00 pm – Vendor and Government Exhibits and Networking
  • Storage
  • Compute Resources
  • Orchestration
  • Authentication
  • Identity Management
  • Personal Identity Verification (PIV)  
2:00 pm Event Concludes
Source and more:

Facebook Considers Adding a 'Sympathize' Button

For when you've seen something ... but can't 'like' it.

Source and read:

Carry On: Sound Advice from Schneier on Security

Up-to-the-minute observations from a world-famous security expert. Bruce Schneier is known worldwide as the foremost authority and commentator on every security issue, from cyber-terrorism to airport surveillance. This groundbreaking book features more than 160 commentaries on recent events, including the Boston Marathon bombing, the NSA's ubiquitous surveillance programs, Chinese cyber-attacks, the privacy of cloud computing, and how to hack the Papal election. As timely as an Internet news report and always insightful, Schneier explains, debunks, and draws lessons from current events that are valuable for security experts and ordinary citizens alike.
  • Bruce Schneier's worldwide reputation as a security guru has earned him more than 250,000 loyal blog and newsletter readers
  • This anthology offers Schneier's observations on some of the most timely security issues of our day, including the Boston Marathon bombing, the NSA's Internet surveillance, ongoing aviation security issues, and Chinese cyber-attacks
  • It features the author's unique take on issues involving crime, terrorism, spying, privacy, voting, security policy and law, travel security, the psychology and economics of security, and much more
  • Previous Schneier books have sold over 500,000 copies
Carry On: Sound Advice from Schneier on Security is packed with information and ideas that are of interest to anyone living in today's insecure world.

Author: Bruce Schneier
Publication date: December 16, 2013

Secure Cloud 2014

1-2 April 2014, Amsterdam

Cloud Security Alliance (CSA), ENISA and Fraunhofer-FOKUS have joined forces to organize the third edition of the SecureCloud conference.
SecureCloud 2014 is an opportunity for government experts, industry experts, and corporate decision makers to discuss and exchange ideas about how to shape the future of cloud computing security. It is also a place to learn from cloud computing experts about cloud security and privacy, and to discuss practical case studies from industry and government.
SecureCloud 2014 focuses on
  • legal issues
  • cryptography
  • incident reporting
  • critical information infrastructures and
  • certification and compliance.

We invite thought-leaders and experts from industry, academia and government to submit proposals for presentations, discussion panels, or workshops. Please visit the “call for papers” section on this website for further details. We look forward to receiving your proposals.
The isits International School of IT Security is delighted to announce this conference and will be your contact concerning all aspects of the organization.
Source and more:

Mitigating attacks on Industrial Control Systems (ICS); the new Guide from EU Agency ENISA

The EU’s cyber security agency ENISA has published a new guide to better mitigate attacks on Industrial Control Systems (ICS), which support vital industrial processes, primarily in the area of critical information infrastructure (such as the energy and chemical transport industries), where sufficient security knowledge is often lacking. As ICS are now often connected to Internet platforms, extra security preparations have to be taken. This new guide provides the key considerations for a team charged with ICS Computer Emergency Response Capabilities (ICS-CERC).
Industrial Control Systems are indispensable for a number of industrial processes, including energy distribution, water treatment and transportation, as well as chemical, government, defence and food processes. ICS are lucrative targets for intruders, including criminal groups, foreign intelligence services, phishers, spammers and terrorists. Cyber-incidents affecting ICS can have disastrous effects on a country’s economy and on people’s lives: they can cause long power outages, paralyse transport and cause ecological catastrophes. The ability to respond to and mitigate the impact of ICS incidents is therefore crucial for protecting critical information infrastructure and enhancing cyber-security at the national, European and global level. Consequently, ENISA has prepared this guide on good practices for prevention and preparedness for bodies with ICS-CERC, and highlights the following conclusions:
  • While for traditional ICT systems the main priority is confidentiality, for ICS availability is the highest priority (inverting the usual “CIA” order: Confidentiality, Integrity, Availability). This reflects the fact that ICS are indispensable for the seamless operation of critical infrastructure.
  • The main ICS actors sometimes do not have sufficient cyber-security expertise. Likewise, the established CERTs do not necessarily understand sector-specific technical aspects of ICS.
  • Given the significant damage that ICS incidents can cause, staff hired for ICS-CERC teams must be vetted thoroughly, with consideration given to factors such as an individual’s ability to perform under pressure and willingness to respond outside working hours.
  • The importance of cooperation at both the domestic and international level must be recognised.
  • The unique challenges of ICS cyber-security services can be mitigated by using identified good practices for CERTs, existing global and European experiences, and better exchange of good practices.
The Executive Director of ENISA, Professor Udo Helmbrecht, stated: “Until a few decades ago, ICS functioned in discrete, separated environments, but nowadays they are often connected to the Internet. This enables streamlining and automation of industrial processes, but it also increases the risk of exposure to cyber-attacks.”

For the full report: Good practice guide for CERTs in the area of Industrial Control Systems