Blogs as an Alternative Public Sphere: The Role of Blogs, Mainstream Media, and TV in Russia's Media Ecology


Applying a combination of quantitative and qualitative methods, we investigate whether Russian blogs represent an alternative public sphere distinct from web-based Russian government information sources and the mainstream media. Based on data collected over a one-year period (December 2010 through December 2011) from thousands of Russian political blogs and other media sources, we compare the cosine similarity of the text from blogs, mainstream media, major TV channels, and official government websites. We find that, when discussing a selected set of major political and news topics popular during the year, blogs are consistently the least similar to government sources compared to TV and the mainstream media. We also find that the text of mainstream media outlets in Russia (primarily traditional and web-native newspapers) is more similar to government sources than one would expect given the greater editorial and financial independence of those media outlets, at least compared to largely state-controlled national TV stations. We conclude that blogs provide an alternative public sphere: a space for civic discussion and organization that differs significantly from that provided by the mainstream media, TV, and government.
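The comparison at the heart of the study can be illustrated with a toy bag-of-words sketch (a generic illustration of cosine similarity, not the authors' actual pipeline; the sample texts are invented):

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between two texts over word-count vectors."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)          # Counter returns 0 for missing words
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

blog = "protest rally opposition election fraud"
state_tv = "stability election order president support"
print(cosine_similarity(blog, state_tv))   # ~0.2 -- only "election" is shared
```

Averaging such scores across many documents and topics yields the kind of source-by-source similarity comparison the paper describes.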

By Bruce Etling, Hal Roberts, and Robert Farris
Source and read the paper:

INET Istanbul 2014 Conference

INET Istanbul

Location: InterContinental Istanbul
Date: 21 May 2014

Join us Wednesday 21 May for INET Istanbul. This unique event will explore privacy and digital content in a global context. With experts from across public policy, technology, and academia, our agenda will focus on key issues such as managing privacy and data protection in the face of massive government surveillance programs, and the complex interplay of intellectual property rights and innovation.

For Registration:

Opinion 04/2014 on surveillance of electronic communications for intelligence and national security purposes


Executive Summary

Since the summer of 2013, several international media outlets have reported widely on surveillance activities by intelligence services, both in the United States and in the European Union, based on documents primarily provided by Edward Snowden. The revelations have sparked an international debate on the consequences of such large-scale surveillance for citizens’ privacy. The way intelligence services make use of data on our day-to-day communications, as well as the content of those communications, underlines the need to set limits to the scale of surveillance.

The right to privacy and to the protection of personal data is a fundamental right enshrined in the International Covenant on Civil and Political Rights, the European Convention on Human Rights and the European Union Charter of Fundamental Rights. It follows that respecting the rule of law necessarily implies that this right is afforded the highest possible level of protection.

From its analysis, the Working Party concludes that secret, massive and indiscriminate surveillance programs are incompatible with our fundamental laws and cannot be justified by the fight against terrorism or other important threats to national security. Restrictions to the fundamental rights of all citizens could only be accepted if the measure is strictly necessary and proportionate in a democratic society.

This is why the Working Party recommends several measures in order for the rule of law to be guaranteed and respected.

First, the Working Party calls for more transparency on how surveillance programmes work. Transparency contributes to enhancing and restoring trust between citizens, governments, and private entities. Such transparency includes better information to individuals when access to their data has been given to intelligence services. In order to better inform individuals about the consequences that the use of online and offline electronic communication services may have, as well as how they can better protect themselves, the Working Party intends to organise a conference on surveillance in the second half of 2014, bringing together all relevant stakeholders.

In addition, the Working Party strongly advocates for more meaningful oversight of surveillance activities. Effective and independent supervision on the intelligence services, including on processing of personal data, is key to ensure that no abuse of these programmes will take place. Therefore, the Working Party considers that an effective and independent supervision of intelligence services implies a genuine involvement of the data protection authorities.

The Working Party further recommends enforcing the existing obligations of EU Member States and of Parties to the ECHR to protect the rights to respect for private life and to the protection of one's personal data. Moreover, the Working Party recalls that controllers subject to EU jurisdiction shall comply with existing applicable EU data protection legislation. The Working Party furthermore recalls that data protection authorities may suspend data flows and should decide according to their national competence whether sanctions are in order in a specific situation.

Neither Safe Harbor, nor Standard Contractual Clauses, nor BCRs could serve as a legal basis to justify the transfer of personal data to a third country authority for the purpose of massive and indiscriminate surveillance. In fact, the exceptions included in these instruments are limited in scope and should be interpreted restrictively. They should never be implemented to the detriment of the level of protection guaranteed by EU rules and instruments governing transfers.

The Working Party urges the EU institutions to finalise the negotiations on the data protection reform package. It welcomes in particular the proposal of the European Parliament for a new article 43a, providing for mandatory information to individuals when access to data has been given to a public authority in the last twelve months. Being transparent about these practices will greatly enhance trust.

Furthermore, the Working Party considers that the scope of the national security exemption should be clarified in order to give legal certainty regarding the scope of application of EU law. To date, no clear definition of the concept of national security has been adopted by the European legislator, nor is the case law of the European courts conclusive.

Finally, the Working Party recommends the quick start of negotiations on an international agreement to grant adequate data protection safeguards to individuals when intelligence activities are carried out. The Working Party also supports the development of a global instrument providing for enforceable, high level privacy and data protection principles.

Source and read more:

Coupling Functions Enable Secure Communications


Secure encryption is an essential feature of modern communications, but rapid progress in illicit decryption brings a continuing need for new schemes that are harder and harder to break. Inspired by the time-varying nature of the cardiorespiratory interaction, here we introduce a new class of secure communications that is highly resistant to conventional attacks. Unlike all earlier encryption procedures, this cipher makes use of the coupling functions between interacting dynamical systems. It results in an unbounded number of encryption key possibilities, allows the transmission or reception of more than one signal simultaneously, and is robust against external noise. Thus, the information signals are encrypted as the time variations of linearly independent coupling functions. Using predetermined forms of coupling function, we apply Bayesian inference on the receiver side to detect and separate the information signals while simultaneously eliminating the effect of external noise. The scheme is highly modular and is readily extendable to support different communications applications within the same general framework.

By Tomislav Stankovski, Peter V. E. McClintock, and Aneta Stefanovska

Source and read the Paper:

Answering the Critical Question: Can You Get Private SSL Keys Using Heartbleed?


The widely-used open source library OpenSSL revealed on Monday that it had a major bug, now known as “heartbleed”. By sending a specially crafted packet to a vulnerable server running an unpatched version of OpenSSL, an attacker can get up to 64kB of the server’s working memory. This is the result of a classic implementation bug known as a buffer over-read.
There has been speculation that this vulnerability could expose server certificate private keys, making those sites vulnerable to impersonation. This would be the disaster scenario, requiring virtually every service to reissue and revoke its SSL certificates. Note that simply reissuing certificates is not enough; you must revoke them as well.
Unfortunately, the certificate revocation process is far from perfect and was never built for revocation at mass scale. If every site revoked its certificates, it would impose a significant burden and performance penalty on the Internet. At CloudFlare's scale, the reissuance and revocation process could break the CA infrastructure. So, we’ve spent a significant amount of time talking to our CA partners in order to ensure that we can safely and successfully revoke and reissue our customers' certificates.
While the vulnerability seems likely to put private key data at risk, to date there have been no verified reports of actual private keys being exposed. At CloudFlare, we received early warning of the Heartbleed vulnerability and patched our systems 12 days ago. We’ve spent much of the time running extensive tests to figure out what can be exposed via Heartbleed and, specifically, to understand if private SSL key data was at risk.
Here’s the good news: after extensive testing on our software stack, we have been unable to successfully use Heartbleed on a vulnerable server to retrieve any private key data. Note that this is not the same as saying it is impossible to use Heartbleed to get private keys. We do not yet feel comfortable saying that. However, if it is possible, it is at a minimum very hard. And we have reason to believe, based on the data structures used by OpenSSL and the modified version of NGINX that we use, that it may in fact be impossible.
To get more eyes on the problem, we have created a site so the world can challenge this hypothesis:
CloudFlare Challenge: Heartbleed
This site was created by CloudFlare engineers to be intentionally vulnerable to heartbleed. It is not running behind CloudFlare’s network. We encourage everyone to attempt to get the private key from this website. If someone is able to steal the private key from this site using heartbleed, we will post the full details here.
While we believe it is unlikely that private key data was exposed, we are proceeding with an abundance of caution. We’ve begun the process of reissuing and revoking the keys CloudFlare manages on behalf of our customers. In order to ensure that we don’t overburden the certificate authority resources, we are staging this process. We expect that it will be complete by early next week.
In the meantime, we’re hopeful we can get more assurance that SSL keys are safe through our crowd-sourced effort to hack them. To get everyone started, we wanted to outline the process we’ve embarked on to date in order to attempt to hack them.

The bug

A heartbeat is a message that is sent to the server just so the server can send it back. This lets a client know that the server is still connected and listening. The heartbleed bug was a mistake in the implementation of the response to a heartbeat message.
Here is the offending code:

p = &s->s3->rrec.data[0];           /* start of the incoming heartbeat message */

hbtype = *p++;                      /* 1-byte message type */
n2s(p, payload);                    /* read the 2-byte claimed payload length */
pl = p;                             /* start of the payload data */

buffer = OPENSSL_malloc(1 + 2 + payload + padding);  /* allocate the response */
bp = buffer;

memcpy(bp, pl, payload);            /* copy 'payload' bytes -- unchecked! */
The incoming message is stored in a structure called rrec, which contains the incoming request data. The code reads the type (finding out that it's a heartbeat) from the first byte, then reads the next two bytes which indicate the length of the heartbeat payload. In a valid heartbeat request, this length matches the length of the payload sent in the heartbeat request.
The major problem (and cause of heartbleed) is that the code does not check that this length is the actual length sent in the heartbeat request, allowing the request to ask for more data than it should be able to retrieve. The code then copies the amount of data indicated by the length from the incoming message to the outgoing message. If the length is longer than the incoming message, the software just keeps copying data past the end of the message. Since the length variable is 16 bits, you can request up to 65,535 bytes from memory. The data that lives past the end of the incoming message is from a kind of no-man’s land that the program should not be accessing and may contain data left behind from other parts of OpenSSL.
When processing a request that contains a longer length than the request payload, some of this unknown data is copied into the response and sent back to the client. This extra data can contain sensitive information like session cookies and passwords, as we describe in the next section.
The fix for this bug is simple: check that the length of the message actually matches the length of the incoming request. If it is too long, return nothing. That’s exactly what the OpenSSL patch does.
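The whole failure mode, and the patched bounds check, can be modeled in a few lines (a deliberately simplified sketch: the memory layout and message framing here are invented for illustration and omit details such as the padding bytes):

```python
def heartbeat_response(memory, msg_start, msg_len, patched=False):
    """Model a heartbeat: byte 0 is the type, bytes 1-2 claim the payload length."""
    claimed = int.from_bytes(memory[msg_start + 1:msg_start + 3], "big")
    if patched and 3 + claimed > msg_len:   # the fix: compare against the real length
        return b""                          # silently discard bogus requests
    payload_start = msg_start + 3
    return memory[payload_start:payload_start + claimed]  # may read past the message

# A 4-byte message (type, 2-byte length, 1-byte payload) followed by leftover secrets
memory = bytes([0x01, 0x00, 0x10]) + b"A" + b"SECRET-SESSION-COOKIE"
leak = heartbeat_response(memory, 0, 4)          # claims 16 bytes, sent only 1
print(leak)                                      # the payload byte plus adjacent memory
print(heartbeat_response(memory, 0, 4, patched=True))  # b""
```

The unpatched call happily returns bytes from beyond the message; the patched call returns nothing, just as the OpenSSL fix does.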

Malloc and the Heap

So what sort of data can live past the end of the request? The technical answer is “heap data,” but the more realistic answer is that it’s platform dependent.
On most computer systems, each process has its own set of working memory. Typically this is split into two data structures: the stack and the heap. This is the case on Linux, the operating system that CloudFlare runs on its servers.
The memory address with the highest value is where the stack data lives. This includes local working variables and non-persistent data storage for running a program. The lowest portion of the address space typically contains the program’s code, followed by static data needed by the program. Right above that is the heap, where all dynamically allocated data lives.
Heap organization
Managing data on the heap is done with the library calls malloc (used to get memory) and free (used to give it back when no longer needed). When you call malloc, the program picks some unused space in the heap area and returns the address of the first part of it to you. Your program is then able to store data at that location. When you call free, memory space is marked as unused. In most cases, the data that was stored in that space is just left there unmodified.
Every new allocation takes some unused space from the heap, typically at the lowest possible address with enough room. As a result, the heap tends to grow upwards: blocks allocated early get low addresses and later allocations get higher ones, unless a large early block is freed and its space reused.
This is of direct relevance because both the incoming message request (s->s3->rrec) and the certificate private key are allocated on the heap with malloc. The exploit reads data from the address of the incoming message. For previous requests that were allocated and freed, their data (including passwords and cookies) may still be in memory. If they are stored less than 65,536 bytes higher in the address space than the current request, the details can be revealed to an attacker.
Requests come and go, recycling memory near the top of the heap, which makes extracting previous request data very likely with this attack. This is important in understanding what you can and cannot get at using the vulnerability. Previous requests could contain password data, cookies, or other exploitable data. Private keys are a different story: because of the way the heap is structured, they are allocated once, early on, and so sit at lower addresses than later request buffers, while the over-read only runs upwards from the current request. The good news is that this makes it much less likely that private SSL keys would be exposed.
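A toy allocator model makes the point concrete (purely illustrative; real heap behavior depends on the allocator and platform):

```python
class ToyHeap:
    """Lowest-address-first allocator; free() marks space reusable but doesn't erase it."""
    def __init__(self, size):
        self.mem = bytearray(size)
        self.free_blocks = [(0, size)]          # (offset, length), sorted by offset

    def malloc(self, n):
        for i, (off, length) in enumerate(self.free_blocks):
            if length >= n:                     # take the lowest block with room
                self.free_blocks[i] = (off + n, length - n)
                return off
        raise MemoryError

    def free(self, off, n):
        self.free_blocks.append((off, n))       # mem[off:off+n] is left untouched
        self.free_blocks.sort()

heap = ToyHeap(1024)
key = heap.malloc(64)                  # private key allocated early -> low address
heap.mem[key:key + 64] = b"K" * 64
req1 = heap.malloc(128)                # an earlier request carrying a password
heap.mem[req1:req1 + 128] = b"password=hunter2".ljust(128, b".")
heap.free(req1, 128)                   # freed, but the bytes remain
req2 = heap.malloc(32)                 # the attacker's heartbeat lands in that hole
leak = bytes(heap.mem[req2:req2 + 128])  # over-read upwards past the new request
print(b"hunter2" in leak)              # old request data is recoverable
print(b"K" * 64 in leak)               # the key, at a lower address, is not
```

The freed request's bytes sit right where the attacker's over-read lands, while the key, parked at a lower address, stays out of reach of an upward read.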

Source and more:

Cookies that give you away: The surveillance implications of web tracking

Over the past three months we’ve learnt that the NSA uses third-party tracking cookies for surveillance (1, 2). These cookies, provided by third-party advertising or analytics networks, are ubiquitous on the web and tag users’ browsers with unique pseudonymous IDs. In a new paper, we study just how big a privacy problem this is. We quantify what an observer can learn about a user’s web traffic purely by passively eavesdropping on the network, and arrive at surprising answers.
At first sight it doesn’t seem possible that eavesdropping alone can reveal much. First, the eavesdropper on the Internet backbone sees millions of HTTP requests and responses. How can he associate the third-party HTTP request containing a user’s cookie with the request for the first-party web page that the browser visited, which doesn’t contain the cookie? Second, how can visits to different first parties be linked to each other? And finally, even if all the web traffic for a single user can be linked together, how can the adversary go from a set of pseudonymous cookies to the user’s real-world identity?

The diagram illustrates how the eavesdropper can use multiple third-party cookies to link traffic. When a user visits exampleA, the response contains the embedded tracker X, with an ID cookie ‘xxx’. The visits to exampleA and to X are tied together by IP address, which typically doesn’t change within a single page visit [1]. Another page visited by the same user might embed tracker Y, bearing the pseudonymous cookie ‘yyy’. If the two page visits were made from different IP addresses, an eavesdropper seeing these cookies can’t tell that the same browser made both visits. But if a third page embeds both trackers X and Y, then the eavesdropper will know that IDs ‘xxx’ and ‘yyy’ belong to the same user. Applied iteratively, this method has the potential to tie together much of a single user’s traffic.
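The iterative linking step is essentially a transitive closure over cookies observed together, which can be sketched with a union-find structure (hypothetical data; not the paper's actual code):

```python
class DisjointSet:
    """Union-find with path halving, for clustering cookie IDs by co-occurrence."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# Each page visit: the set of tracker cookie IDs seen together on that page
page_visits = [
    {"xxx"},          # exampleA embeds tracker X
    {"yyy"},          # another page embeds tracker Y
    {"xxx", "yyy"},   # a third page embeds both -> links the two IDs
    {"zzz"},          # an unrelated browser
]

ds = DisjointSet()
for cookies in page_visits:
    first, *rest = list(cookies)
    for c in rest:
        ds.union(first, c)       # cookies seen together belong to one user

print(ds.find("xxx") == ds.find("yyy"))  # True: same user
print(ds.find("xxx") == ds.find("zzz"))  # False: never co-occurred
```

Each cluster of cookie IDs then stands in for one browser, even as its IP address changes.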

By Dillon Reisman
Source and read the full paper:

NIST Launches a New U.S. Time Standard: NIST-F2 Atomic Clock

The U.S. Department of Commerce's National Institute of Standards and Technology (NIST) has officially launched a new atomic clock, called NIST-F2, to serve as a new U.S. civilian time and frequency standard, along with the current NIST-F1 standard.
NIST-F2 would neither gain nor lose one second in about 300 million years, making it about three times as accurate as NIST-F1, which has served as the standard since 1999. Both clocks use a "fountain" of cesium atoms to determine the exact length of a second.
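That headline figure corresponds to a fractional frequency uncertainty of roughly one part in 10^16, as a quick back-of-the-envelope check shows (using a Julian year):

```python
seconds_per_year = 365.25 * 24 * 3600          # Julian year, about 3.16e7 s
uncertainty = 1 / (300e6 * seconds_per_year)   # one second lost in 300 million years
print(f"{uncertainty:.1e}")                    # ~1.1e-16
```

This is the figure of merit by which NIST-F2 is about three times better than NIST-F1.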
NIST scientists recently reported the first official performance data for NIST-F2,* which has been under development for a decade, to the International Bureau of Weights and Measures (BIPM), located near Paris, France. That agency collates data from atomic clocks around the world to produce Coordinated Universal Time (UTC), the international standard of time. According to BIPM data, NIST-F2 is now the world's most accurate time standard.**
NIST-F2 is the latest in a series of cesium-based atomic clocks developed by NIST since the 1950s. In its role as the U.S. measurement authority, NIST strives to advance atomic timekeeping, which is part of the basic infrastructure of modern society. Many everyday technologies, such as cellular telephones, Global Positioning System (GPS) satellite receivers, and the electric power grid, rely on the high accuracy of atomic clocks. Historically, improved timekeeping has consistently led to technology improvements and innovation.
"If we've learned anything in the last 60 years of building atomic clocks, we've learned that every time we build a better clock, somebody comes up with a use for it that you couldn't have foreseen," says NIST physicist Steven Jefferts, lead designer of NIST-F2.
For now, NIST plans to simultaneously operate both NIST-F1 and NIST-F2. Long-term comparisons of the two clocks will help NIST scientists continue to improve both clocks as they serve as U.S. standards for civilian time. The U.S. Naval Observatory maintains military time standards.
Both NIST-F1 and NIST-F2 measure the frequency of a particular transition in the cesium atom—which is 9,192,631,770 vibrations per second, and is used to define the second, the international (SI) unit of time. The key operational difference is that F1 operates near room temperature (about 27 ºC or 80 ºF) whereas the atoms in F2 are shielded within a much colder environment (at minus 193 ºC, or minus 316 ºF). This cooling dramatically lowers the background radiation and thus reduces some of the very small measurement errors that must be corrected in NIST-F1. (See backgrounder on clock operation and accompanying animation of NIST-F2.)
Primary standards such as NIST-F1 and NIST-F2 are operated for periods of a few weeks several times each year to calibrate NIST timescales, collections of stable commercial clocks such as hydrogen masers used to keep time and establish the official time of day. NIST clocks also contribute to UTC. Technically, both F1 and F2 are frequency standards, meaning they are used to measure the size of the SI second and calibrate the "ticks" of other clocks. (Time and frequency are inversely related.)
NIST provides a broad range of timing and synchronization measurement services to meet a wide variety of customer needs. NIST official time is used to time-stamp hundreds of billions of dollars in U.S. financial transactions each working day, for example. NIST time is also disseminated to industry and the public through the Internet Time Service, which as of early 2014 received about 8 billion automated requests per day to synchronize clocks in computers and network devices; and NIST radio broadcasts, which update an estimated 50 million watches and other clocks daily.
At the request of the Italian standards organization, NIST fabricated many duplicate components for a second version of NIST-F2, known as IT-CsF2, to be operated by Istituto Nazionale di Ricerca Metrologica (INRIM), NIST's counterpart in Turin, Italy. Two co-authors from Italy contributed to the new report on NIST-F2.
The cesium clock era officially dates back to 1967, when the second was defined based on vibrations of the cesium atom. Cesium clocks have improved substantially since that time and are likely to improve a bit more. But clocks that operate at microwave frequencies such as those based on cesium or other atoms are likely approaching their ultimate performance limits because of the relatively low frequencies of microwaves. In the future, better performance will likely be achieved with clocks based on atoms that switch energy levels at much higher frequencies in or near the visible part of the electromagnetic spectrum. These optical atomic clocks divide time into smaller units and could lead to time standards more than 100 times more accurate than today's cesium standards. Higher frequency is one of a variety of factors that enables improved precision and accuracy.
For the media: High-definition video b-roll available on request.
*T.P. Heavner, E.A. Donley, F. Levi, G. Costanzo, T.E. Parker, J.H. Shirley, N. Ashby, S.E. Barlow and S.R. Jefferts. First Accuracy Evaluation of NIST-F2. Metrologia. Forthcoming. See
**These data are reported monthly in BIPM's Circular T, available online. NIST-F2 is scheduled to be listed for the first time in the March 2014 edition. The value of interest is the Type B (systematic) uncertainty.

Turkish ISPs Hijacking Traffic: This is How an Internet Breaks

While we may be tired of hearing about blocked Internet access, the most recent move in Turkey should make us sit up and take notice again, as it represents an attack not just on the DNS infrastructure, but on the global Internet routing system itself.
I would argue that people in Turkey haven’t had real Internet service since mid-March when the Turkish government banned access to, and required the blocking of, Twitter and subsequently YouTube. As reported, in the most recent effort to comply with the Turkish government mandate, Turkish ISPs have taken aim at the open public DNS services provided by companies such as Google. This is fragmenting the Internet — destroying its very purpose — and the Internet Society has been clear in its position that it should be undone.
This latest move attempts to address a perceived problem: many Turkish Internet users were using the well-known IP address of the Google Public DNS service to circumvent the crippled DNS services offered by their ISP. And with that, they could again access Twitter and YouTube.
While the service that is being blocked is an actual DNS server, the blocking is being performed at a lower level, in the routing system itself. To block access to Google Public DNS servers, Turkish ISPs’ routers are announcing an erroneous and very specific Internet route that includes the well-known IP address. With this modification, the Turkish routers are now lying about how to get to the Google Public DNS service, and taking all the traffic to a different destination. They are lying about where the Google service resides — by hijacking the traffic. Apparently, the ISPs are not just null-routing it (sending it into oblivion) — but rather sending the traffic to their own DNS servers which then (wait for it) give out the wrong answers. So, these servers are masquerading as the Google Public DNS service.
Both (a) blocking Twitter and YouTube by returning false DNS results and (b) the use of false routing announcements are attacks on the integrity of the Internet’s infrastructure — DNS and routing. Both of these infrastructure services are imperative to have a global Internet, and they are operated by collective agreement to adhere to Internet protocols and best practices — that’s what puts the “inter” in inter-network.
In 2012, when the US government was contemplating laws that would require ISPs to falsify DNS results in an effort to curtail access to websites offering counterfeit goods (SOPA — “Stop Online Piracy Act” and PIPA — “Protect IP Act”), we put together a whitepaper outlining the pitfalls of such DNS filtering. Those concerns apply in the case of the DNS blocking of Twitter and YouTube in Turkey, and there are analogs for the route hijacking approach, too:
Easily circumvented
Users who wish to download filtered content can simply use IP addresses instead of DNS names. As users discover the many ways to work around DNS filtering, the effectiveness of filtering will be reduced. ISPs will be required to implement stronger controls, placing them in the middle of an unwelcome battle between Internet users and national governments.
Doesn’t solve the problem
Filtering DNS or blocking the name does not remove the illegal content. A different domain name pointing to the same Internet address could be established within minutes.
Incompatible with DNSSEC and impedes DNSSEC deployment
DNSSEC is a new technology designed to add confidence and trust to the Internet. DNSSEC ensures that DNS data are not modified by anyone between the data owner and the consumer. To DNSSEC, DNS filtering looks the same as a hacker trying to impersonate a legitimate web site to steal personal information—exactly the problem that DNSSEC is trying to solve.
DNSSEC cannot differentiate legally sanctioned filtering from cybercrime.
Causes collateral damage
When both legal and illegal content share the same domain name, DNS filtering blocks access to everything. For example, blocking access to a single Wikipedia article using DNS filtering would also block millions of other Wikipedia articles.
Puts users at risk
When local DNS service is not considered reliable and open, Internet users may use alternative and non-standard approaches, such as downloading software that redirects their traffic to avoid filters. These makeshift solutions subject users to additional security risks.
Encourages fragmentation
A coherent and consistent structure is important to the successful operation of the Internet. DNS filtering eliminates this consistency and fragments the DNS, which undermines the structure of the Internet.
Drives service underground
If DNS filtering becomes widespread, “underground” DNS services and alternative domain namespaces will be established, further fragmenting the Internet, and taking the content out of easy view of law enforcement.
Raises human rights and due process concerns
DNS filtering is a broad measure, unable to distinguish illegal and legitimate content on the same domain. Implemented carelessly or improperly, it has the potential to restrict free and open communications and could be used in ways that limit the rights of individuals or minority groups.
The kicker is that this sort of approach to blocking use of (parts of) the Internet just doesn’t work. There are always workarounds, although they are becoming increasingly tortuous (dare I say “byzantine”?) and impede the future growth of the Internet’s technology. If Internet technology is like building blocks, this is like sawing the corners off your whole set of blocks and then trying to build a model with them.
All that this escalation of Internet hostility achieves is: a broken Internet.
In 2010, the Internet Society published a paper based on a thought exercise about what would become of the Internet if different forces prevailed in the Internet’s evolution. We’re seeing escalations on all vectors of the quadrants we outlined in the 2010 scenarios and while we believed it was a thought-experiment at the time, it’s amazing to see how much of the then-barely-imaginable is becoming real in one way or another. Collectively, we should take heed of the outcomes that those scenarios paint — and work together to get beyond this.
In the immediate term, there are technologies available to provide better security of (and, therefore, confidence in) DNS and routing infrastructures — see our related post on the Deploy360 site: Turkish Hijacking of DNS Providers Shows Clear Need For Deploying BGP And DNS Security.

By Leslie Daigle

Myths and Facts on NTIA Announcement on Intent to Transition Key Internet Domain Name Functions

Myth: The United States Government controls the Internet through the Internet Assigned Numbers Authority (IANA) functions contract.
Fact: There is no one party – government or industry, including the United States Government – that controls the Internet. The Internet is a decentralized network of networks.
The IANA functions are a set of interdependent technical functions that enable the continued efficient operation of the Internet. The IANA functions include: (1) the coordination of the assignment of technical Internet protocol parameters; (2) the processing of change requests to the authoritative root zone file of the DNS and root key signing key (KSK) management; (3) the allocation of Internet numbering resources; and (4) other services related to the management of the .ARPA and .INT top-level domains (TLDs).
ICANN as the IANA functions operator processes changes to three different databases. First, ICANN distributes the protocol parameters or Internet standards developed by the Internet Engineering Task Force (IETF). Second, it allocates IP numbers to the Regional Internet Registries (RIR) who then distribute IP numbers to Internet Service Providers. Third, ICANN processes change requests or updates to the authoritative root zone file or “address book” of the DNS from top level domain name operators – those companies or institutions that manage .com, .org, .us, .uk, etc. In all three cases ICANN’s role is to implement the policies or requests at the direct instruction of the various IANA functions customers.
NTIA’s role in the IANA functions includes the clerical role of administering changes to the authoritative root zone file and, more generally, serving as the historic steward of the DNS via the administration of the IANA functions contract. NTIA has never substituted its judgment for that of the IANA customers.
Myth: The proposed transition has alarmed business leaders and others who rely on the smooth functioning of the Internet.
Fact: A broad group of U.S. and international stakeholders – such as the U.S. Chamber of Commerce, AT&T, Cisco, Verizon, Comcast, and Google – have expressed strong support and pledged cooperation in this process.
Myth: This transition is “giving the Internet to authoritarian regimes.”
Fact: The U.S. Government has made clear that it will not accept a proposal that replaces its role with a government-led or intergovernmental organization solution.
The criteria specified by the Administration firmly establish Internet governance as the province of multistakeholder institutions, rather than governments or intergovernmental institutions, and reaffirm the U.S. commitment to preserving the Internet as an engine for economic growth, innovation, and free expression.
The U.S. Government will only transition its role if and when it receives a satisfactory proposal to replace that role from the global Internet community — the same industry, technical, and civil society entities that have successfully managed the technical functions of Internet governance for nearly twenty years.
Myth: With the U.S. withdrawal from stewardship over the IANA functions, the U.N.’s International Telecommunication Union will take over the Internet – making it easier for repressive regimes to censor speech online.
Fact: The transition process that is underway will help prevent authoritarian countries from exerting too much influence over the Internet by putting control of key Internet domain name functions in the hands of the global community of Internet stakeholders — specifically industry, technical experts, and civil society — instead of an intergovernmental organization.
Myth: This transition of the Internet Domain Name System (DNS) to the global multistakeholder community is meant to quell international criticism following disclosure of National Security Agency surveillance practices.
Fact: This transition is part of a process set out sixteen years ago. The Administration believes the timing is right to start the transition process. ICANN as an organization has matured and taken steps in recent years to improve its accountability and transparency and its technical competence. At the same time, international support continues to grow for the multistakeholder model.
Myth: The United States has made an irreversible decision to transition NTIA’s role when the current IANA contract ends in September 2015.
Fact: Before any transition takes place, the businesses, civil society organizations, and technical experts of the global Internet community must agree on a plan that supports and enhances the multistakeholder community; maintains the security, stability, and resiliency of the Internet’s domain name system; meets the needs and expectations of the global customers and partners of these services; and maintains the openness of the Internet.
We have made clear that the transition proposal must have broad community support and reflect the four key principles we outlined in our announcement. If the global multistakeholder community does not develop a plan that meets these criteria by Sept. 30, 2015, we can extend the contract for up to four years.
Myth: ICANN is not up to the task of convening a process to develop a proposal to transition the current role.
Fact: As both the current IANA functions contractor and as the global policy coordinator for the DNS, ICANN is uniquely positioned to convene a multistakeholder process to develop a plan to transition the USG role to the global multistakeholder community based on the specified criteria. ICANN held a number of productive sessions at its meeting in Singapore March 23-27 to initiate discussions among stakeholders on a transition plan.
Myth: The Internet community is not up to the task of developing a proposal that will ensure the security and stability of the Internet.
Fact: That very community has been responsible for operational Internet governance for most of the World Wide Web’s existence. The highly resilient, distributed global system that we call the Internet is itself a testament to their technical skills and effectiveness in coordinating a decentralized network of networks.
Myth: The U.S. Government’s action immediately affects the Internet.
Fact: The U.S. role will remain unchanged until the global community develops a transition plan that incorporates the principles outlined in the U.S. Government’s announcement. The average Internet user will not notice this process or eventual transition.
Myth: The U.S. Government transition will lead to blocking of web sites.
Fact: The Internet is not controlled by any one government or entity. It is a network of networks. The U.S. Government’s role with respect to the Domain Name System is a technical one. Our work has been content neutral and free of policy judgment.
Free expression online exists and flourishes not because of U.S. Government oversight with respect to the Domain Name System, or because of any asserted special relationship that the U.S. has with ICANN. Instead, free expression is protected because of the open, decentralized nature of the Internet and the neutral manner in which the technical aspects of the Internet are managed.
We have made clear in our announcement of the transition that open, decentralized and non-governmental management of the Internet must continue.