ISTANBUL ARBITRATION CENTRE LAW (İSTANBUL TAHKİM MERKEZİ KANUNU)

Saturday, 29 November 2014
Official Gazette (Resmî Gazete)
Issue: 29190
LAW
ISTANBUL ARBITRATION CENTRE LAW
Law No. 6570                                                                                               Date of Adoption: 20/11/2014
Purpose and scope
ARTICLE 1 (1) The purpose of this Law is to regulate the establishment of the Istanbul Arbitration Centre and the procedures and principles governing its organisation and activities, in order to ensure that disputes, including those involving a foreign element, are resolved through arbitration or alternative dispute resolution methods.
Establishment
ARTICLE 2 (1) The Istanbul Arbitration Centre, which has legal personality and is subject to the provisions of private law, is hereby established to ensure the implementation of this Law and to perform the duties assigned to it by this Law.
 
Source:

Workshop on Cybersecurity in a Post-Quantum World

Purpose:

The advent of practical quantum computing will break all commonly used public key cryptographic algorithms. In response, NIST is researching cryptographic algorithms for public key-based key agreement and digital signatures that are not susceptible to cryptanalysis by quantum algorithms. NIST is holding this workshop to engage academic, industry, and government stakeholders. This workshop will be co-located with the 2015 International Conference on Practice and Theory of Public-Key Cryptography, which will be held March 30 - April 1, 2015. NIST seeks to discuss issues related to post-quantum cryptography and its potential future standardization.

See: http://www.nist.gov/itl/csd/ct/post-quantum-crypto-workshop-2015.cfm

Three Pilot Projects Receive Grants to Improve Online Security and Privacy

The U.S. Department of Commerce's National Institute of Standards and Technology (NIST) today announced nearly $3 million in grants that will support projects for online identity protection to improve privacy, security and convenience. The three recipients of the National Strategy for Trusted Identities in Cyberspace (NSTIC) grants will pilot solutions that make it easier to use mobile devices instead of passwords for online authentication, minimize loss from fraud and improve access to state services.
This is the third round of grants awarded through NSTIC, which was launched by the Obama administration in 2011 and is managed by NIST. The initiative supports collaboration between the private sector, advocacy groups and public-sector agencies to encourage the adoption of secure, efficient, easy-to-use and interoperable identity credentials to access online services in a way that promotes confidence, privacy, choice and innovation.
“The Commerce Department is committed to protecting a free and open Internet, while also working with the private sector to ensure consumers’ security and privacy,” said U.S. Deputy Secretary of Commerce Bruce Andrews. “The grants announced will help spur development of new initiatives that aim to protect people and businesses from online identity theft and fraud.”
The NSTIC pilots have made progress both in advancing the strategy and fostering collaborations that would not otherwise have happened. One consortium of firms that are normally rivals wrote in its proposal, “Even if individual vendors in the identity space could develop a framework, it would be very difficult to get buy-in from other vendors who are competitors. With the recognition and funding from NSTIC, the pilot activities gain the vendor neutrality, visibility and credibility needed to get the various identity vendors to work together to develop a common framework that they can adopt.”
“The pilots take the vision and principles embodied in NSTIC and translate them into real solutions,” said NIST's Jeremy Grant, senior executive advisor for identity management and head of the NSTIC National Program Office. “At a time when concerns about data breaches and identity theft are growing, these new NSTIC pilots can play an important role in fostering a marketplace of online identity solutions.”
The pilots will also inform the work of the Identity Ecosystem Steering Group (IDESG), a private sector-led organization created to help coordinate development of standards that enable more secure, user-friendly ways to give individuals and organizations confidence in their online interactions.
The grantees announced today are:
GSMA (Atlanta, Ga.: $821,948)
GSMA has partnered with America’s four major mobile network operators to pilot a common approach—interoperable across all four operators—that will enable consumers and businesses to use mobile devices for secure, privacy-enhancing identity and access management. GSMA’s global Mobile Connect Initiative is the foundation for the pilot; the initiative will be augmented in the United States to align with NSTIC. By allowing any organization to easily accept identity solutions from any of the four operators, the solution would reduce a significant barrier to online service providers accepting mobile-based credentials. GSMA also will tackle user interface, user experience, security and privacy challenges, with a focus on creating an easy-to-use solution for consumers.
Confyrm (San Francisco, Calif.: $1,235,376)
The Confyrm pilot will demonstrate ways to minimize loss when criminals create fake accounts or take over online accounts. A key barrier to federated identity (in which the identity provider of your choice “vouches” for you at other sites) is the concern that accounts used in identity solutions may not be legitimate, or in the control of their rightful owner. Account compromises and the subsequent misuse of identity result in destruction of personal information, damage to individual reputations and financial loss. Confyrm will demonstrate how a “shared signals” model can mitigate the impact of account takeovers and fake accounts through early fraud detection and notification, with special emphasis on consumer privacy. Aligning with the NSTIC guiding principles, this solution enables individuals and organizations to experience improved trust and confidence in identities online. Pilot partners include a major Internet email provider, a major mobile operator and multiple e-commerce sites.
MorphoTrust USA (Billerica, Mass.: $736,185)
MorphoTrust, in partnership with the North Carolina Departments of Transportation (DOT) and Health and Human Services (DHHS), will demonstrate how existing state-issued credentials such as driver’s licenses can be extended into the online world to enable new types of online citizen services. The pilot will leverage North Carolina’s state driver’s license solution to create a digital credential for those applying for the North Carolina (DHHS) Food and Nutrition Services (FNS) Program online. This solution will eliminate the need for people to appear in person to apply for FNS benefits, reducing costs to the state while providing applicants with faster, easier access to benefits.

Source:
http://www.nist.gov/itl/nstic-091714.cfm

DuckDuckNo: The privacy-focused search engine is blocked in China

DuckDuckGo, the privacy-focused search engine that lives in Google’s enormous shadow, has joined its big rival and plenty of other western tech firms in being blocked in China.

By Jon Russell
Source and read more:
http://thenextweb.com/asia/2014/09/22/duckduckno/

New Report: "Integrating Privacy Approaches across the Research Lifecycle: Long-term Longitudinal Studies"

Abstract:     

On September 24-25, 2013, the Privacy Tools for Sharing Research Data project at Harvard University held a workshop titled "Integrating Approaches to Privacy across the Research Data Lifecycle." Over forty leading experts in computer science, statistics, law, policy, and social science research convened to discuss the state of the art in data privacy research. The resulting conversations centered on the emerging tools and approaches from the participants’ various disciplines and how they should be integrated in the context of real-world use cases that involve the management of confidential research data.

This workshop report, the first in a series, provides an overview of the long-term longitudinal study use case. Long-term longitudinal studies collect, at multiple points over a long period of time, highly-specific and often sensitive data describing the health, socioeconomic, or behavioral characteristics of human subjects. The value of such studies lies in part in their ability to link a set of behaviors and changes to each individual, but these factors tend to make the combination of observable characteristics associated with each subject unique and potentially identifiable.

Using the research information lifecycle as a framework, this report discusses the defining features of long-term longitudinal studies and the associated challenges for researchers tasked with collecting and analyzing such data while protecting the privacy of human subjects. It also describes the disclosure risks and common legal and technical approaches currently used to manage confidentiality in longitudinal data. Finally, it identifies urgent problems and areas for future research to advance the integration of various methods for preserving confidentiality in research data.

Alexandra Wood, Harvard University - Berkman Center for Internet & Society
David O'Brien, Harvard University - Berkman Center for Internet & Society
Micah Altman, MIT Libraries; The Brookings Institution
Alan Karr, National Institute of Statistical Sciences
Urs Gasser, Harvard University - Berkman Center for Internet & Society; University of St. Gallen
Michael Bar-Sinai, Harvard University - Institute for Quantitative Social Science; Ben-Gurion University of the Negev
Kobbi Nissim, Ben-Gurion University of the Negev
Jonathan Ullman, Columbia University Department of Computer Science
Salil Vadhan, Harvard University - Center for Research on Computation and Society
Michael John Wojcik, Harvard University - Center for Research on Computation and Society
Source and read the full report:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2469848

INTERNET GOVERNANCE FORUM

İSTANBUL, 2-5 September 2014

"Connecting Continents for Enhanced Multi-StakeholderInternet Governance"


See:
http://www.igf2014.org.tr/index.html

Experimental evidence of massive-scale emotional contagion through social networks

Abstract

Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. Emotional contagion is well established in laboratory experiments, with people transferring positive and negative emotions to others. Data from a large real-world social network, collected over a 20-y period, suggests that longer-lasting moods (e.g., depression, happiness) can be transferred through networks [Fowler JH, Christakis NA (2008) BMJ 337:a2338], although the results are controversial. In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks. This work also suggests that, in contrast to prevailing assumptions, in-person interaction and nonverbal cues are not strictly necessary for emotional contagion, and that the observation of others’ positive experiences constitutes a positive experience for people.

By Adam D. I. Kramer, Jamie E. Guillory, and Jeffrey T. Hancock
Source and read the full paper:
http://www.pnas.org/content/111/24/8788.full 

Starting to test Google Domains

It's 2014 and it seems obvious, but across laptops, tablets and mobile devices, a website is one of the first places people go to find information about a business. But amazingly, our research shows that 55% of small businesses still don't have one. 

So as we explore ways to help small businesses succeed online (through tools like Google My Business [http://goo.gl/Ajvbn5] ), we thought it made sense to look more closely at the starting point of every business’s online presence - a website. And that starts with a domain name.

We’re beginning to invite a small number of people to kick the tires on Google Domains [http://goo.gl/pHvjoO], a domain registration service we’re in the process of building. Businesses will be able to search, find, purchase and transfer the best domain for their business - whether it’s .com, .biz, .org, or any of the wide range of new domains that are being released to the Web. 
 
Google Domains isn’t fully-featured yet, but we’re giving a small group of people the ability to buy and transfer domains through it and send feedback on their experience. (You currently need an invitation code to do so, sorry!) We want input on all the ways we can help make finding, buying, transferring and managing a domain a simple and transparent experience. We also want to make sure our customer support and infrastructure works flawlessly, and that we have the right additional services (like mobile website creation tools and hosting services from a range of providers, as well as domain management support). We're working with some of the top website building providers like +Shopify, +Squarespace, +Weebly, and +Wix.com to help make that happen.

While we’re still building out all of the features, our goal is to make Google Domains more widely available soon. You can check out the first cut of what we’re working on at www.google.com/domains.

Source:
https://plus.google.com/+GoogleBusiness/posts/Dkhw41XJigw

Net Neutrality Shouldn’t Be Up To The FCC, Republicans Say

House Republicans on Friday challenged the existing framework for how net neutrality rules are set and enforced. At a hearing held by the House Judiciary subcommittee on antitrust law, the Republican members said the Federal Trade Commission, not the Federal Communications Commission, should have authority when it comes to net neutrality.
Net neutrality, which requires ISPs to allow all legal content to move through networks uninhibited, has made waves on the Hill since the FCC proposed a rule that would create “fast lanes” for companies that can pay more.
House Judiciary Committee chairman Rep. Bob Goodlatte said the Internet has grown because it is deregulated, but noted companies should not be permitted to engage in “discriminatory or anticompetitive activities.”
“I believe that vigorous application of the antitrust laws can prevent dominant Internet service providers from discriminating against competitors’ content or engaging in anticompetitive pricing practices,” the Virginia Republican said.  “Furthermore, antitrust laws can be applied uniformly to all Internet market participants, not just to Internet service providers, to ensure that improper behavior is prevented and prosecuted.”
The Republicans’ call for a shift to the FTC is likely spurred by the FCC’s history of questionable commitment to net neutrality, with some criticizing the agency for backtracking on the issue in recent years. In May, Rep. Bob Latta introduced a bill that would “limit” the FCC’s ability to regulate the Internet.
During the hearing, both former FCC Commissioner Robert McDowell and FTC member Joshua Wright said that antitrust laws were better equipped to promote net neutrality than the FCC.
By Cat Zakrzewski
Source and read more:
http://techcrunch.com/2014/06/23/net-neutrality-shouldnt-be-up-to-the-fcc-republicans-say/

INET İstanbul Meeting

Internet: Privacy and Digital Content in a Global Context

İstanbul Bilgi University IT Law Institute and the Internet Society (ISOC) jointly held the INET İstanbul Meeting on 21 May.
Following the opening speeches by Frederic Donck (Regional Bureau Director - Europe, Internet Society) and Leyla Keser (Director, IT Law Institute), the first panel, on Privacy and Data Protection: Rebuilding Trust, began.

Introductory Keynote: Giovanni Buttarelli, Assistant European Data Protection Supervisor; Gonenc Gurkaynak, Esq., Managing Partner, ELIG Attorneys-at-Law; Sophie Kwasny, Head of Data Protection, Council of Europe; Mustafa Taşkın, Associate Professor; Nilgün Başalp, Assistant Professor, Bilgi University; Moderated by Ben Rooney, Journalist

The second panel was on: The Ever-Evolving Relationship Between IPR and Innovation
Wendy Seltzer, World Wide Web Consortium (W3C); Robin Gross, Executive Director, IP Justice; Dr. Emre Bayamlıoğlu, Assistant Professor, Koç University Law School; Ece Güner, Founder & Managing Partner, Güner Law Office; Konstantinos Komaitis, Policy Advisor, Internet Society; Moderated by Ben Rooney, Journalist
Watch the Meeting:
http://new.livestream.com/internetsociety/inetistanbul2014

Event on the Distributed, Collaborative Internet Governance Ecosystem


 
The Global Network of Interdisciplinary Internet & Society Research Centers (NoC) organized a panel on Multistakeholder Internet Governance on 22 May in İstanbul. The panel took place at the İstanbul Bilgi University Kuştepe Campus. Markus Kummer (ISOC) and Khaled Koubaa (Google) delivered keynote speeches. Wolfgang Schulz (NoC) moderated the panel, and Gönenç Gürkaynak (ELIG Law Firm), Mehmet Bedii Kaya (IT Law Institute), Marilia Maciel (NoC), David Olive (ICANN) and Yasin Beceni (TÜBİSAD) shared their opinions on the ongoing debate on multistakeholder internet governance.
 
Watch the Panel:

Photos: Khaled Koubaa (Google); Markus Kummer (ISOC); from left to right: Marilia Maciel, David Olive, Gönenç Gürkaynak, Mehmet Bedii Kaya, Yasin Beceni

At the closed-door meetings, as NoC, we determined our roadmap for possible academic contributions to shaping the multistakeholder internet governance model.

Legal and Economic Analysis of the Protection of Personal Data in Turkey (Türkiye’de Kişisel Verilerin Korunmasının Hukuki ve Ekonomik Analizi)



To access the draft version of our report, "Legal and Economic Analysis of the Protection of Personal Data in Turkey" (Türkiye’de Kişisel Verilerin Korunmasının Hukuki ve Ekonomik Analizi), prepared by the İstanbul Bilgi University IT Law Institute and the Economic Policy Research Foundation of Turkey (TEPAV), see:
http://www.nocistanbul.com/turkiyede-kisisel-verilerin-korunmasinin-hukuki-ve-ekonomik-analizi/

INet İstanbul 2014 Meeting & Multistakeholder Internet Governance Meeting


As the İstanbul Bilgi University IT Law Institute, we would be pleased to invite you to the conferences listed below, should they fit your schedules:

On 21 May, a conference titled "Internet: Privacy and Digital Content in a Global Context" will be held together with the Internet Society (ISOC). The 21 May conference will take place at the InterContinental Hotel İstanbul, and the programme is available at https://www.internetsociety.org/inet-istanbul/. The conference features two panels, on data protection and intellectual property. In the keynote speeches and the Data Protection Panel, the Draft Law on the Protection of Personal Data prepared by the Ministry of Justice will be assessed by our high-level guests from the EU. In addition, during the opening speech, the report prepared by our Institute and TEPAV will be launched, highlighting from different perspectives what the entry into force of the Draft Data Protection Law would bring to our country and our companies.


On 22 May, the Multistakeholder Internet Governance Conference, jointly organized by ICANN, ISOC, the Berkman Center and our Institute, will take place. The 22 May conference will be held in the hall named BS at the İstanbul Bilgi University Kuştepe Campus between 09:00 and 11:30. The agenda of this conference is available at www.nocistanbul.com. Multistakeholder internet governance, one of the important topics debated worldwide in recent years, refers to a model in which every stakeholder takes part, with an equal voice, in transparently run processes, and in which regulations concerning the internet, or information technology in general, are made collaboratively. Our 22 May conference, a follow-up to the NETmundial meeting held in Brazil in April to determine the framework and principles of this model, will seek to further clarify the framework of the multistakeholder internet governance model.

 
Registration is required to attend both conferences; you can register on the conference websites indicated above.

 

We Are in Mourning for All of Our Lost Lives!



We pray that God grants mercy to all of our people who lost their lives in Soma, Manisa, and we wish patience to their grieving families. This pain belongs to all of us; this sorrow is the sorrow of us all!

Blogs as an Alternative Public Sphere: The Role of Blogs, Mainstream Media, and TV in Russia's Media Ecology

Abstract:

Applying a combination of quantitative and qualitative methods, we investigate whether Russian blogs represent an alternative public sphere distinct from web-based Russian government information sources and the mainstream media. Based on data collected over a one-year period (December 2010 through December 2011) from thousands of Russian political blogs and other media sources, we compare the cosine similarity of the text from blogs, mainstream media, major TV channels, and official government websites. We find that, when discussing a selected set of major political and news topics popular during the year, blogs are consistently the least similar to government sources compared to TV and the mainstream media. We also find that the text of mainstream media outlets in Russia (primarily traditional and web-native newspapers) are more similar to government sources than one would expect given the greater editorial and financial independence of those media outlets, at least compared to largely state-controlled national TV stations. We conclude that blogs provide an alternative public sphere: a space for civic discussion and organization that differs significantly from that provided by the mainstream media, TV, and government.

By Bruce Etling, Hal Roberts and Robert Faris
Source and read the paper:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2427932

INET Istanbul 2014 Conference

INET Istanbul

Location: InterContinental Istanbul
Date: 21 May 2014

Join us Wednesday 21 May for INET Istanbul. This unique event will explore privacy and digital content in a global context. With experts from across public policy, technology, and academia, our agenda will focus on key issues such as managing privacy and data protection in the face of massive government surveillance programs, and the complex interplay of intellectual property rights and innovation.

For Registration:
https://www.internetsociety.org/form/inet-istanbul
Source:
https://www.internetsociety.org/inet-istanbul/home

Opinion 04/2014 on surveillance of electronic communications for intelligence and national security purposes


ARTICLE 29-DATA PROTECTION WORKING PARTY

Executive Summary

Since the summer of 2013, several international media outlets have reported widely on surveillance activities from intelligence services, both in the United States and in the European Union based on documents primarily provided by Edward Snowden. The revelations have sparked an international debate on the consequences of such large-scale surveillance for citizens’ privacy. The way intelligence services make use of data on our day-to-day communications as well as the content of those communications underlines the need to set limits to the scale of surveillance.

The right to privacy and to the protection of personal data is a fundamental right enshrined in the International Covenant on Civil and Political Rights, the European Convention on Human Rights and the European Union Charter on Fundamental Rights. It follows that respecting the rule of law necessarily implies that this right is afforded the highest possible level of protection.

From its analysis, the Working Party concludes that secret, massive and indiscriminate surveillance programs are incompatible with our fundamental laws and cannot be justified by the fight against terrorism or other important threats to national security. Restrictions to the fundamental rights of all citizens could only be accepted if the measure is strictly necessary and proportionate in a democratic society.

This is why the Working Party recommends several measures in order for the rule of law to be guaranteed and respected.

First, the Working Party calls for more transparency on how surveillance programmes work. Being transparent contributes to enhancing and restoring trust between citizens and governments and private entities. Such transparency includes better information to individuals when access to data has been given to intelligence services. In order to better inform individuals on the consequences the use of online and offline electronic communication services may have as well as how they can better protect themselves, the Working Party intends to organise a conference on surveillance in the second half of 2014 bringing together all relevant stakeholders.

In addition, the Working Party strongly advocates for more meaningful oversight of surveillance activities. Effective and independent supervision on the intelligence services, including on processing of personal data, is key to ensure that no abuse of these programmes will take place. Therefore, the Working Party considers that an effective and independent supervision of intelligence services implies a genuine involvement of the data protection authorities.

The Working Party further recommends enforcing the existing obligations of EU Member States and of Parties to the ECHR to protect the rights of respect for private life and to protection of one's personal data. Moreover the Working Party recalls that controllers subject to EU jurisdiction shall comply with existing applicable EU data protection legislation. The Working Party furthermore recalls that data protection authorities may suspend data flows and should decide according to their national competence if sanctions are in order in a specific situation.

Neither Safe Harbor, nor Standard Contractual Clauses, nor BCRs could serve as a legal basis to justify the transfer of personal data to a third country authority for the purpose of massive and indiscriminate surveillance. In fact, the exceptions included in these instruments are limited in scope and should be interpreted restrictively. They should never be implemented to the detriment of the level of protection guaranteed by EU rules and instruments governing transfers.

The Working Party urges the EU institutions to finalise the negotiations on the data protection reform package. It welcomes in particular the proposal of the European Parliament for a new article 43a, providing for mandatory information to individuals when access to data has been given to a public authority in the last twelve months. Being transparent about these practices will greatly enhance trust.

Furthermore, the Working Party considers that the scope of the national security exemption should be clarified in order to give legal certainty regarding the scope of application of EU law. To date, no clear definition of the concept of national security has been adopted by the European legislator, nor is the case law of the European courts conclusive.

Finally, the Working Party recommends the quick start of negotiations on an international agreement to grant adequate data protection safeguards to individuals when intelligence activities are carried out. The Working Party also supports the development of a global instrument providing for enforceable, high level privacy and data protection principles.

Source and read more:
http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp215_en.pdf

Coupling Functions Enable Secure Communications

Abstract

Secure encryption is an essential feature of modern communications, but rapid progress in illicit decryption brings a continuing need for new schemes that are harder and harder to break. Inspired by the time-varying nature of the cardiorespiratory interaction, here we introduce a new class of secure communications that is highly resistant to conventional attacks. Unlike all earlier encryption procedures, this cipher makes use of the coupling functions between interacting dynamical systems. It results in an unbounded number of encryption key possibilities, allows the transmission or reception of more than one signal simultaneously, and is robust against external noise. Thus, the information signals are encrypted as the time variations of linearly independent coupling functions. Using predetermined forms of coupling function, we apply Bayesian inference on the receiver side to detect and separate the information signals while simultaneously eliminating the effect of external noise. The scheme is highly modular and is readily extendable to support different communications applications within the same general framework.
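For readers who want a concrete picture, the toy C program below is an illustration only, not the authors' implementation: a binary information signal modulates the amplitude of a single sinusoidal coupling function between two phase oscillators, with all parameter values invented for the example. The paper's actual scheme uses several linearly independent coupling functions and recovers the signals with Bayesian inference on the receiver side, which is not shown here.

#include <stdio.h>
#include <math.h>

/* Toy illustration only (not the authors' scheme): a binary information
   signal modulates the amplitude of one sinusoidal coupling function
   between two phase oscillators. All parameter values are invented. */

int main(void) {
    const double PI = 3.141592653589793;
    double phi1 = 0.0, phi2 = 1.0;            /* oscillator phases */
    const double w1 = 2.0 * PI * 1.1;         /* natural frequencies (rad/s) */
    const double w2 = 2.0 * PI * 1.7;
    const double dt = 0.001;                  /* Euler integration step (s) */
    const int bits[] = { 1, 0, 1, 1, 0 };     /* the "information signal" */
    const int nbits = sizeof bits / sizeof bits[0];
    const int steps_per_bit = 2000;

    for (int b = 0; b < nbits; b++) {
        double a = bits[b] ? 0.8 : 0.2;       /* bit value sets coupling strength a(t) */
        for (int s = 0; s < steps_per_bit; s++) {
            double dphi1 = w1 + a * sin(phi2 - phi1);   /* coupled oscillator */
            double dphi2 = w2;                          /* autonomous driver  */
            phi1 += dt * dphi1;
            phi2 += dt * dphi2;
        }
        /* Only the phase time series would be transmitted; a receiver that
           knows the coupling form infers a(t), and hence each bit, from it
           (the paper does this with Bayesian inference, not shown here). */
        printf("bit %d encoded with coupling strength %.1f, phi1 = %.3f\n",
               bits[b], a, phi1);
    }
    return 0;
}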

By Tomislav Stankovski, Peter V. E. McClintock, and Aneta Stefanovska

Source and read the Paper:
http://journals.aps.org/prx/abstract/10.1103/PhysRevX.4.011026

Answering the Critical Question: Can You Get Private SSL Keys Using Heartbleed?


The widely-used open source library OpenSSL revealed on Monday it had a major bug, now known as "Heartbleed". By sending a specially crafted packet to a vulnerable server running an unpatched version of OpenSSL, an attacker can get up to 64 kB of the server's working memory. This is the result of a classic implementation bug known as a buffer over-read.
There has been speculation that this vulnerability could expose server certificate private keys, making those sites vulnerable to impersonation. This would be the disaster scenario, requiring virtually every service to reissue and revoke its SSL certificates. Note that simply reissuing certificates is not enough; you must revoke them as well.
Unfortunately, the certificate revocation process is far from perfect and was never built for revocation at mass scale. If every site revoked its certificates, it would impose a significant burden and performance penalty on the Internet. At CloudFlare's scale, the reissuance and revocation process could break the CA infrastructure. So, we’ve spent a significant amount of time talking to our CA partners in order to ensure that we can safely and successfully revoke and reissue our customers' certificates.
While the vulnerability seems likely to put private key data at risk, to date there have been no verified reports of actual private keys being exposed. At CloudFlare, we received early warning of the Heartbleed vulnerability and patched our systems 12 days ago. We’ve spent much of the time running extensive tests to figure out what can be exposed via Heartbleed and, specifically, to understand if private SSL key data was at risk.
Here’s the good news: after extensive testing on our software stack, we have been unable to successfully use Heartbleed on a vulnerable server to retrieve any private key data. Note that this is not the same as saying it is impossible to use Heartbleed to get private keys. We do not yet feel comfortable saying that. However, if it is possible, it is at a minimum very hard. And we have reason to believe, based on the data structures used by OpenSSL and the modified version of NGINX that we use, that it may in fact be impossible.
To get more eyes on the problem, we have created a site so the world can challenge this hypothesis:
CloudFlare Challenge: Heartbleed
This site was created by CloudFlare engineers to be intentionally vulnerable to heartbleed. It is not running behind CloudFlare’s network. We encourage everyone to attempt to get the private key from this website. If someone is able to steal the private key from this site using heartbleed, we will post the full details here.
While we believe it is unlikely that private key data was exposed, we are proceeding with an abundance of caution. We’ve begun the process of reissuing and revoking the keys CloudFlare manages on behalf of our customers. In order to ensure that we don’t overburden the certificate authority resources, we are staging this process. We expect that it will be complete by early next week.
In the meantime, we’re hopeful we can get more assurance that SSL keys are safe through our crowd-sourced effort to hack them. To get everyone started, we wanted to outline the process we’ve embarked on to date in order to attempt to hack them.

The bug

A heartbeat is a message that is sent to the server just so the server can send it back. This lets a client know that the server is still connected and listening. The heartbleed bug was a mistake in the implementation of the response to a heartbeat message.
Here is the offending code:
p = &s->s3->rrec.data[0];

[...]

hbtype = *p++;
n2s(p, payload); 
pl = p;

[...]

buffer = OPENSSL_malloc(1 + 2 + payload + padding);
bp = buffer;

[...]

memcpy(bp, pl, payload);
The incoming message is stored in a structure called rrec, which contains the incoming request data. The code reads the type (finding out that it's a heartbeat) from the first byte, then reads the next two bytes which indicate the length of the heartbeat payload. In a valid heartbeat request, this length matches the length of the payload sent in the heartbeat request.
The major problem (and cause of heartbleed) is that the code does not check that this length is the actual length sent in the heartbeat request, allowing the request to ask for more data than it should be able to retrieve. The code then copies the amount of data indicated by the length from the incoming message to the outgoing message. If the length is longer than the incoming message, the software just keeps copying data past the end of the message. Since the length variable is 16 bits, you can request up to 65,535 bytes from memory. The data that lives past the end of the incoming message is from a kind of no-man’s land that the program should not be accessing and may contain data left behind from other parts of OpenSSL.
When processing a request that contains a longer length than the request payload, some of this unknown data is copied into the response and sent back to the client. This extra data can contain sensitive information like session cookies and passwords, as we describe in the next section.
The fix for this bug is simple: check that the length of the message actually matches the length of the incoming request. If it is too long, return nothing. That’s exactly what the OpenSSL patch does.
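As a rough, self-contained illustration (toy structures and function names, not the actual OpenSSL source), the program below contrasts a handler that trusts the attacker-supplied length field with one that validates it against the number of bytes that actually arrived, which is the essence of the patch:

#include <stdio.h>
#include <string.h>

/* Toy stand-in for an incoming TLS record: the bytes that really arrived. */
struct record {
    unsigned char *data;
    size_t length;          /* how many bytes were actually received */
};

/* Vulnerable pattern: trust the 16-bit length field inside the message.
   p[0] is the heartbeat type; p[1] and p[2] form the claimed payload length. */
size_t reply_vulnerable(const struct record *r, unsigned char *out) {
    const unsigned char *p = r->data;
    size_t payload = (p[1] << 8) | p[2];   /* like n2s(): attacker-controlled */
    memcpy(out, p + 3, payload);           /* may read far past r->length */
    return payload;
}

/* Patched pattern: discard the request if the claimed payload cannot fit. */
size_t reply_fixed(const struct record *r, unsigned char *out) {
    const unsigned char *p = r->data;
    if (r->length < 3)
        return 0;                          /* silently discard */
    size_t payload = (p[1] << 8) | p[2];
    if (3 + payload > r->length)
        return 0;                          /* claimed length is a lie: discard */
    memcpy(out, p + 3, payload);
    return payload;
}

int main(void) {
    /* A 3-byte heartbeat header claiming a 40-byte payload, but only 8 bytes follow. */
    unsigned char msg[11] = { 0x01, 0x00, 0x28, 'h', 'i', 0, 0, 0, 0, 0, 0 };
    struct record r = { msg, sizeof msg };
    unsigned char out[1024];

    printf("fixed handler copied %zu bytes\n", reply_fixed(&r, out));  /* 0: discarded */
    /* reply_vulnerable(&r, out) would copy 40 bytes, spilling past msg[]. */
    return 0;
}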

Malloc and the Heap

So what sort of data can live past the end of the request? The technical answer is “heap data,” but the more realistic answer is that it’s platform dependent.
On most computer systems, each process has its own set of working memory. Typically this is split into two data structures: the stack and the heap. This is the case on Linux, the operating system that CloudFlare runs on its servers.
The memory address with the highest value is where the stack data lives. This includes local working variables and non-persistent data storage for running a program. The lowest portion of the address space typically contains the program’s code, followed by static data needed by the program. Right above that is the heap, where all dynamically allocated data lives.
Heap organization
Managing data on the heap is done with the library calls malloc (used to get memory) and free (used to give it back when no longer needed). When you call malloc, the program picks some unused space in the heap area and returns the address of the first part of it to you. Your program is then able to store data at that location. When you call free, memory space is marked as unused. In most cases, the data that was stored in that space is just left there unmodified.
Every new allocation needs some unused space from the heap. Typically this is chosen to be at the lowest possible address that has enough room for the new allocation. A heap typically grows upwards; later allocations get higher addresses. If a block of data is allocated early it gets a low address and later allocations will get higher addresses, unless a big early block is freed.
This is of direct relevance because both the incoming message request (s->s3->rrec.data) and the certificate private key are allocated on the heap with malloc. The exploit reads data from the address of the incoming message. For previous requests that were allocated and freed, their data (including passwords and cookies) may still be in memory. If they are stored less than 65,536 bytes higher in the address space than the current request, the details can be revealed to an attacker.
Requests come and go, recycling memory at around the top of the heap. This makes extracting previous request data very likely with this attack. This is important in understanding what you can and cannot get at using the vulnerability. Previous requests could contain password data, cookies or other exploitable data. Private keys are a different story, due to the way the heap is structured. The good news is that this means it is much less likely that private SSL keys would be exposed.
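The "data left behind" point can be made concrete with a small, self-contained simulation. The toy heap, block layout and sample strings below are invented for illustration and are not OpenSSL's allocator; they only show that recycled heap blocks sit next to leftover data from earlier requests, so an over-read walks straight into it:

#include <stdio.h>
#include <string.h>

/* Toy heap: a flat static array standing in for the process heap. Freeing a
   block here (as with many real allocators) does not erase its old contents. */
static unsigned char heap[4096];

int main(void) {
    /* Two neighbouring heap blocks. */
    unsigned char *block_low  = &heap[0];    /* will be recycled for the new request */
    unsigned char *block_high = &heap[64];   /* held data from an earlier request */

    /* An earlier request left sensitive data in the higher block and was freed;
       the bytes stay where they are. */
    strcpy((char *)block_high, "Cookie: session=9f2c...; user=alice");

    /* The allocator recycles the lower block for the incoming heartbeat. */
    unsigned char *cur_req = block_low;
    strcpy((char *)cur_req, "heartbeat");

    /* A Heartbleed-style over-read from cur_req walks past its 64 bytes and
       into the leftover data of the neighbouring block. */
    for (size_t i = 0; i < 128; i++) {
        unsigned char c = cur_req[i];
        putchar((c >= 32 && c < 127) ? c : '.');
    }
    putchar('\n');
    return 0;
}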

Source and more:
http://blog.cloudflare.com/answering-the-critical-question-can-you-get-private-ssl-keys-using-heartbleed

Cookies that give you away: The surveillance implications of web tracking

Over the past three months we’ve learnt that NSA uses third-party tracking cookies for surveillance (1, 2). These cookies, provided by a third-party advertising or analytics network (e.g. doubleclick.com, scorecardresearch.com), are ubiquitous on the web, and tag users’ browsers with unique pseudonymous IDs. In a new paper, we study just how big a privacy problem this is. We quantify what an observer can learn about a user’s web traffic by purely passively eavesdropping on the network, and arrive at surprising answers.
At first sight it doesn’t seem possible that eavesdropping alone can reveal much. First, the eavesdropper on the Internet backbone sees millions of HTTP requests and responses. How can he associate the third-party HTTP request containing a user’s cookie with the request to the first-party web page that the browser visited, which doesn’t contain the cookie? Second, how can visits to different first parties be linked to each other? And finally, even if all the web traffic for a single user can be linked together, how can the adversary go from a set of pseudonymous cookies to the user’s real-world identity?

The diagram illustrates how the eavesdropper can use multiple third-party cookies to link traffic. When a user visits ‘www.exampleA.com,’ the response contains the embedded tracker X, with an ID cookie ‘xxx’. The visits to exampleA and to X are tied together by IP address, which typically doesn’t change within a single page visit [1]. Another page visited by the same user might embed tracker Y bearing the pseudonymous cookie ‘yyy’. If the two page visits were made from different IP addresses, an eavesdropper seeing these cookies can’t tell that the same browser made both visits. But if a third page embeds both trackers X and Y, then the eavesdropper will know that IDs ‘xxx’ and ‘yyy’ belong to the same user. This method, applied iteratively, has the potential to tie together a lot of the traffic of a single user.
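As an illustration of how that iterative linking works (a sketch with invented sample data, not the paper's actual method or code), the short C program below merges pseudonymous tracker IDs into user clusters with union-find whenever two IDs are observed on the same page visit:

#include <stdio.h>

/* Illustrative sketch: link pseudonymous tracker IDs with union-find whenever
   two IDs are observed together on the same page visit (same page load, same
   IP address). IDs and page visits below are invented sample data. */

#define MAX_IDS 16
static int parent[MAX_IDS];

static int find(int x) { return parent[x] == x ? x : (parent[x] = find(parent[x])); }
static void unite(int a, int b) { parent[find(a)] = find(b); }

int main(void) {
    const char *ids[] = { "xxx (tracker X)", "yyy (tracker Y)", "zzz (tracker Z)" };
    const int n = 3;
    for (int i = 0; i < n; i++) parent[i] = i;

    /* Observed page visits: each row lists the tracker IDs seen together. */
    int visit1[] = { 0 };        /* exampleA.com embeds only tracker X  */
    int visit2[] = { 1 };        /* another page embeds only tracker Y  */
    int visit3[] = { 0, 1 };     /* a third page embeds both X and Y    */

    int *visits[] = { visit1, visit2, visit3 };
    int sizes[]   = { 1, 1, 2 };

    /* Any two IDs seen in the same visit are merged into one user cluster. */
    for (int v = 0; v < 3; v++)
        for (int i = 1; i < sizes[v]; i++)
            unite(visits[v][0], visits[v][i]);

    for (int i = 0; i < n; i++)
        printf("%s -> user cluster %d\n", ids[i], find(i));
    return 0;
}

Running it shows 'xxx' and 'yyy' falling into the same cluster because of the third page, while 'zzz', never co-observed with the others, stays separate.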

By Dillon Reisman
Source and read the full paper:
https://freedom-to-tinker.com/blog/dreisman/cookies-that-give-you-away-the-surveillance-implications-of-web-tracking/

NIST Launches a New U.S. Time Standard: NIST-F2 Atomic Clock

The U.S. Department of Commerce's National Institute of Standards and Technology (NIST) has officially launched a new atomic clock, called NIST-F2, to serve as a new U.S. civilian time and frequency standard, along with the current NIST-F1 standard.
NIST-F2 would neither gain nor lose one second in about 300 million years, making it about three times as accurate as NIST-F1, which has served as the standard since 1999. Both clocks use a "fountain" of cesium atoms to determine the exact length of a second.
NIST scientists recently reported the first official performance data for NIST-F2,* which has been under development for a decade, to the International Bureau of Weights and Measures (BIPM), located near Paris, France. That agency collates data from atomic clocks around the world to produce Coordinated Universal Time (UTC), the international standard of time. According to BIPM data, NIST-F2 is now the world's most accurate time standard.**
NIST-F2 is the latest in a series of cesium-based atomic clocks developed by NIST since the 1950s. In its role as the U.S. measurement authority, NIST strives to advance atomic timekeeping, which is part of the basic infrastructure of modern society. Many everyday technologies, such as cellular telephones, Global Positioning System (GPS) satellite receivers, and the electric power grid, rely on the high accuracy of atomic clocks. Historically, improved timekeeping has consistently led to technology improvements and innovation.
"If we've learned anything in the last 60 years of building atomic clocks, we've learned that every time we build a better clock, somebody comes up with a use for it that you couldn't have foreseen," says NIST physicist Steven Jefferts, lead designer of NIST-F2.
For now, NIST plans to simultaneously operate both NIST-F1 and NIST-F2. Long-term comparisons of the two clocks will help NIST scientists continue to improve both clocks as they serve as U.S. standards for civilian time. The U.S. Naval Observatory maintains military time standards.
Both NIST-F1 and NIST-F2 measure the frequency of a particular transition in the cesium atom—which is 9,192,631,770 vibrations per second, and is used to define the second, the international (SI) unit of time. The key operational difference is that F1 operates near room temperature (about 27 ºC or 80 ºF) whereas the atoms in F2 are shielded within a much colder environment (at minus 193 ºC, or minus 316 ºF). This cooling dramatically lowers the background radiation and thus reduces some of the very small measurement errors that must be corrected in NIST-F1. (See backgrounder on clock operation and accompanying animation of NIST-F2.)
Primary standards such as NIST-F1 and NIST-F2 are operated for periods of a few weeks several times each year to calibrate NIST timescales, collections of stable commercial clocks such as hydrogen masers used to keep time and establish the official time of day. NIST clocks also contribute to UTC. Technically, both F1 and F2 are frequency standards, meaning they are used to measure the size of the SI second and calibrate the "ticks" of other clocks. (Time and frequency are inversely related.)
NIST provides a broad range of timing and synchronization measurement services to meet a wide variety of customer needs. NIST official time is used to time-stamp hundreds of billions of dollars in U.S. financial transactions each working day, for example. NIST time is also disseminated to industry and the public through the Internet Time Service, which as of early 2014 received about 8 billion automated requests per day to synchronize clocks in computers and network devices; and NIST radio broadcasts, which update an estimated 50 million watches and other clocks daily.
At the request of the Italian standards organization, NIST fabricated many duplicate components for a second version of NIST-F2, known as IT-CsF2, to be operated by the Istituto Nazionale di Ricerca Metrologica (INRIM), NIST's counterpart in Turin, Italy. Two co-authors from Italy contributed to the new report on NIST-F2.
The cesium clock era officially dates back to 1967, when the second was defined based on vibrations of the cesium atom. Cesium clocks have improved substantially since that time and are likely to improve a bit more. But clocks that operate at microwave frequencies such as those based on cesium or other atoms are likely approaching their ultimate performance limits because of the relatively low frequencies of microwaves. In the future, better performance will likely be achieved with clocks based on atoms that switch energy levels at much higher frequencies in or near the visible part of the electromagnetic spectrum. These optical atomic clocks divide time into smaller units and could lead to time standards more than 100 times more accurate than today's cesium standards. Higher frequency is one of a variety of factors that enables improved precision and accuracy.
For the media: High-definition video b-roll available on request.
*T.P. Heavner, E.A. Donley, F. Levi, G. Costanzo, T.E. Parker, J.H. Shirley, N. Ashby, S.E. Barlow and S.R. Jefferts. First Accuracy Evaluation of NIST-F2. Metrologia. Forthcoming. See http://iopscience.iop.org/0026-1394/page/Forthcoming%20articles.
**These data are reported monthly in BIPM's Circular T, available online at http://www.bipm.org/jsp/en/TimeFtp.jsp?TypePub=publication#nohref. NIST-F2 is scheduled to be listed for the first time in the March 2014 edition. The value of interest is Type B (systematic) uncertainty.
Source:
http://www.nist.gov/pml/div688/nist-f2-atomic-clock-040314.cfm

Turkish ISPs Hijacking Traffic: This is How an Internet Breaks

While we may be tired of hearing about blocked Internet access, the most recent move in Turkey should make us sit up and take notice again, as it represents an attack not just on the DNS infrastructure, but on the global Internet routing system itself.
I would argue that people in Turkey haven’t had real Internet service since mid-March when the Turkish government banned access to, and required the blocking of, Twitter and subsequently YouTube. As reported, in the most recent effort to comply with the Turkish government mandate, Turkish ISPs have taken aim at the open public DNS services provided by companies such as Google. This is fragmenting the Internet — destroying its very purpose — and the Internet Society has been clear in its position that it should be undone.
This latest move attempts to address a perceived problem: many Turkish Internet users were using the well-known IP address of the Google Public DNS service to circumvent the crippled DNS services offered by their ISP. And with that, they could again access Twitter and YouTube.
While the service that is being blocked is an actual DNS server, the blocking is being performed at a lower level, in the routing system itself. To block access to Google Public DNS servers, Turkish ISPs’ routers are announcing an erroneous and very specific Internet route that includes the well-known IP address. With this modification, the Turkish routers are now lying about how to get to the Google Public DNS service, and taking all the traffic to a different destination. They are lying about where the Google service resides — by hijacking the traffic. Apparently, the ISPs are not just null-routing it (sending into oblivion) — but rather sending the traffic to their own DNS servers which then (wait for it) give out the wrong answers. So, these servers are masquerading as the Google Public DNS service.
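To see why a "very specific" bogus route wins, recall that routers forward along the longest matching prefix. The sketch below is illustrative only: the routing-table entries and next-hop labels are invented, with 8.8.8.8 used as the well-known Google Public DNS address; it simply shows a /32 hijack route overriding the legitimate covering prefix.

#include <stdio.h>
#include <stdint.h>

/* Illustrative sketch of longest-prefix matching: a hijacker's /32 route for
   a single address overrides the legitimate covering /24 prefix. */

struct route { uint32_t prefix; int len; const char *next_hop; };

static uint32_t ip(int a, int b, int c, int d) {
    return ((uint32_t)a << 24) | ((uint32_t)b << 16) | ((uint32_t)c << 8) | (uint32_t)d;
}

static int matches(uint32_t addr, struct route r) {
    uint32_t mask = r.len == 0 ? 0 : 0xFFFFFFFFu << (32 - r.len);
    return (addr & mask) == (r.prefix & mask);
}

int main(void) {
    struct route table[] = {
        { ip(8, 8, 8, 0), 24, "legitimate path toward Google" },
        { ip(8, 8, 8, 8), 32, "hijacker's impostor DNS server" },
    };
    uint32_t dst = ip(8, 8, 8, 8);

    /* Longest-prefix match: scan all matching routes, keep the most specific. */
    const char *best = "no route";
    int best_len = -1;
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
        if (matches(dst, table[i]) && table[i].len > best_len) {
            best_len = table[i].len;
            best = table[i].next_hop;
        }
    }
    printf("traffic for 8.8.8.8 is sent via: %s (/%d)\n", best, best_len);
    return 0;
}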
Both (a) blocking Twitter and YouTube by returning false DNS results and (b) the use of false routing announcements are attacks on the integrity of the Internet’s infrastructure — DNS and routing. Both of these infrastructure services are imperative to have a global Internet, and they are operated by collective agreement to adhere to Internet protocols and best practices — that’s what puts the “inter” in inter-network.
In 2012, when the US government was contemplating laws that would require ISPs to falsify DNS results in an effort to curtail access to websites offering counterfeit goods (SOPA — “Stop Online Piracy Act” and PIPA — “Protect IP Act”), we put together a whitepaper outlining the pitfalls of such DNS filtering. Those concerns apply in the case of the DNS blocking of Twitter and YouTube in Turkey, and there are analogs for the route hijacking approach, too:
Easily circumvented
Users who wish to download filtered content can simply use IP addresses instead of DNS names. As users discover the many ways to work around DNS filtering, the effectiveness of filtering will be reduced. ISPs will be required to implement stronger controls, placing them in the middle of an unwelcome battle between Internet users and national governments.
Doesn’t solve the problem
Filtering DNS or blocking the name does not remove the illegal content. A different domain name pointing to the same Internet address could be established within minutes.
Incompatible with DNSSEC and impedes DNSSEC deployment
DNSSEC is a new technology designed to add confidence and trust to the Internet. DNSSEC ensures that DNS data are not modified by anyone between the data owner and the consumer. To DNSSEC, DNS filtering looks the same as a hacker trying to impersonate a legitimate web site to steal personal information—exactly the problem that DNSSEC is trying to solve.
DNSSEC cannot differentiate legally sanctioned filtering from cybercrime.
Causes collateral damage
When both legal and illegal content share the same domain name, DNS filtering blocks access to everything. For example, blocking access to a single Wikipedia article using DNS filtering would also block millions of other Wikipedia articles.
Puts users at risk
When local DNS service is not considered reliable and open, Internet users may use alternative and non-standard approaches, such as downloading software that redirects their traffic to avoid filters. These makeshift solutions subject users to additional security risks.
Encourages fragmentation
A coherent and consistent structure is important to the successful operation of the Internet. DNS filtering eliminates this consistency and fragments the DNS, which undermines the structure of the Internet.
Drives service underground
If DNS filtering becomes widespread, “underground” DNS services and alternative domain namespaces will be established, further fragmenting the Internet, and taking the content out of easy view of law enforcement.
Raises human rights and due process concerns
DNS filtering is a broad measure, unable to distinguish illegal and legitimate content on the same domain. Implemented carelessly or improperly, it has the potential to restrict free and open communications and could be used in ways that limit the rights of individuals or minority groups.
The kicker is that this sort of approach to blocking use of (parts of) the Internet just doesn’t work. There are always workarounds, although they are becoming increasingly tortuous (dare I say “byzantine”?) and impede the future growth of the Internet’s technology. If Internet technology is like building blocks, this is like sawing the corners off your whole set of blocks and then trying to build a model with them.
All that this escalation of Internet hostility achieves is: a broken Internet.
In 2010, the Internet Society published a paper based on a thought exercise about what would become of the Internet if different forces prevailed in the Internet’s evolution. We’re seeing escalations on all vectors of the quadrants we outlined in the 2010 scenarios and while we believed it was a thought-experiment at the time, it’s amazing to see how much of the then-barely-imaginable is becoming real in one way or another. Collectively, we should take heed of the outcomes that those scenarios paint — and work together to get beyond this.
In the immediate term, there are technologies available to provide better security of (and, therefore, confidence in) DNS and routing infrastructures — see our related post on the Deploy360 site: Turkish Hijacking of DNS Providers Shows Clear Need For Deploying BGP And DNS Security.

By Leslie Daigle
Source:
http://www.internetsociety.org/blog/tech-matters/2014/04/turkish-isps-hijacking-traffic-how-internet-breaks