SecureCloud2014

Amsterdam 02.04.2014

The keynote speeches of the 2nd day of the event were delivered by Richard Clarke (Chairman & CEO of Good Harbor, member of President Obama’s Review Group on Intelligence and Communications Technology), Jim Reavis (CEO, Cloud Security Alliance) and Prof. Reinhard Posch (CIO, Austrian Federal Government).
Panels of day 2:
Certification and Cloud Security
  • Daniele Catteddu (Cloud Security Alliance)
  • Ken Ducatel (EC)
  • Hing-Yan LEE PhD (Infocomm Development Authority of Singapore)
  • Tom Nicholls (BSI)
Cloud Provider Panel
  • Ralph Salomon (SAP)
  • Peter Dickman (Google)
  • Paul Nicholas (Microsoft)
  • Peleus Uhley (Adobe)
Innovative solutions for cloud security
  • Raj Samani (McAfee)
  • Frank van Dam (Ministry of Economic Affairs of the Netherlands)
  • Michaela Iorga (NIST)
  • Alain Pannetrat (Cloud Security Alliance)
  • Richard Mogull (Securosis)

SecureCloud2014 in numbers:
  • 47 speakers,
  • 5 high-level keynotes,
  • 6 cloud computing-related panels,
  • 4 breakout sessions and
  • more than 100 attendees.
Less than one week is left until the biggest cloud event of 2014. Register here
Follow us on #SecureCloud2014


Background: SecureCloud2014 countdown – Day 1 panels and keynotes
Richard Clarke (former US Cybersecurity Coordinator) at SecureCloud2014

Source:
http://www.enisa.europa.eu/media/news-items/securecloud2014-final-countdown-day-2-panels-and-key-notes

Instagram now has more mobile users in the US than Twitter, according to a new report

In a new report that will make painful reading for those at Twitter, eMarketer claims that Instagram, the Facebook-owned photo app, now has more smartphone-based users in the US than Twitter.
According to the figures — reported by the Financial Times — Instagram has 35 million US mobile users, while Twitter has 30.8 million. More generally, Twitter claims 240 million active users worldwide each month — across both desktop and mobile devices — which keeps it ahead of Instagram, which counts 200 million, for now at least.
The research comes at an interesting time for Twitter. The company is pushing its multimedia credentials after it added multiple photo uploads and image tagging to its mobile app, a change that has received a mixed reaction from users.

By Jon Russell
Source:
http://thenextweb.com/twitter/2014/03/27/instagram-now-mobile-users-us-twitter-according-new-report/?fromcat=all

Understanding Online Threats with ThreatData

Helping keep the Internet free of threats is a huge challenge that has never been more important. For us to do our part effectively, we must continually search for new types of attacks and deeply understand existing ones. Given the pace of criminals today, one of the hard parts is actually keeping track of all the data related to malware, phishing, and other risks. We wanted an easier way to organize our work and incorporate new threat information we receive so that we can do more to protect people.

When we began sketching out a system to solve this problem, we encountered issues others have faced: every company or vendor uses their own data formats, a consistent vocabulary is rare, and each threat type can look very different from the next. With that in mind, we set about building what we now call ThreatData, a framework for importing information about badness on the Internet in arbitrary formats, storing it efficiently, and making it accessible for both real-time defensive systems and long-term analysis.



Design, Starting With Feeds



The ThreatData framework comprises three high-level parts: feeds, data storage, and real-time response. Feeds collect data from a specific source and are implemented via a lightweight interface. The data can be in nearly any format and is transformed by the feed into a simple schema we call a ThreatDatum. The datum is capable of storing not only the basics of the threat (e.g., evil-malware-domain.biz) but also the context in which it was bad. The added context is used in other parts of the framework to make more informed, automatic decisions.


Here are some examples of feeds we have implemented:
  • Malware file hashes from VirusTotal [0];
  • Malicious URLs from multiple open source blogs and malware tracking sites;
  • Vendor-generated threat intelligence we purchase;
  • Facebook's internal sources of threat intelligence; and
  • Browser extensions for importing data as a Facebook security team member reads an article, blog, or
    other content.
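
The post describes the feed interface only in prose. As an illustration, here is a minimal Python sketch of the pattern: each feed parses its own source format and emits records in one shared schema. All class and field names are invented for this sketch; the post does not describe Facebook's actual interfaces.

    from dataclasses import dataclass, field
    from typing import Iterable
    import csv
    import io

    @dataclass
    class ThreatDatum:
        """One normalized threat record (hypothetical schema)."""
        indicator: str     # e.g. "evil-malware-domain.biz" or a file hash
        threat_type: str   # e.g. "MALICIOUS_URL" or "MALWARE_HASH"
        source: str        # which feed produced the record
        context: dict = field(default_factory=dict)  # extra detail for later decisions

    class Feed:
        """Lightweight feed interface: subclasses turn raw data into ThreatDatum records."""
        name = "base"

        def fetch(self) -> str:
            raise NotImplementedError

        def parse(self, raw: str) -> Iterable[ThreatDatum]:
            raise NotImplementedError

    class CsvUrlFeed(Feed):
        """Example feed for a vendor that ships malicious URLs as CSV rows."""
        name = "vendor-csv-urls"

        def __init__(self, raw_csv: str):
            self.raw_csv = raw_csv

        def fetch(self) -> str:
            return self.raw_csv  # in practice: an HTTP download, a mailbox, etc.

        def parse(self, raw: str) -> Iterable[ThreatDatum]:
            for row in csv.DictReader(io.StringIO(raw)):
                yield ThreatDatum(
                    indicator=row["url"],
                    threat_type="MALICIOUS_URL",
                    source=self.name,
                    context={"first_seen": row.get("seen")},
                )

    feed = CsvUrlFeed("url,seen\nhttp://evil-malware-domain.biz/x,2014-03-01\n")
    for datum in feed.parse(feed.fetch()):
        print(datum)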

Data Storage


Once a feed has transformed the raw data, it is fed into two of our existing data repository technologies: Hive [1] and Scuba [2].

We use Hive storage to answer questions based on long-term data:
  • Have we ever seen this threat before?
  • What type of threat is more prevalent from our perspective: malware or phishing?

Scuba gives us the opposite end of the analysis spectrum:
  • What new malware are we seeing today?
  • Where are most of the new phishing sites?
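
A minimal sketch of the dual-write pattern this implies, assuming every parsed datum goes to both stores. The real Hive and Scuba ingestion paths are internal to Facebook, so the two logging functions below are stand-ins:

    # Stand-in writers for the two repositories described above.
    def hive_log(table: str, record: dict) -> None:
        # long-term warehouse: answers "have we ever seen this threat before?"
        print(f"hive[{table}] <- {record}")

    def scuba_log(dataset: str, record: dict) -> None:
        # real-time store: answers "what new malware are we seeing today?"
        print(f"scuba[{dataset}] <- {record}")

    def store(datum: dict) -> None:
        hive_log("threat_data_long_term", datum)
        scuba_log("threat_data_realtime", datum)

    store({"indicator": "evil-malware-domain.biz", "threat_type": "MALICIOUS_URL"})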

Real-time Response

Maintaining accurate threat databases is great and can help answer challenging questions, but that's only part of the challenge in protecting the graph. We also need to quickly and consistently address threats that come to our attention. To help us, we built a processor that examines each ThreatDatum at the time of logging and acts on new threats. Here are some examples we've implemented so far:
  • All malicious URLs collected from any feed are sent to the same blacklist used to protect people on facebook.com;
  • Interesting malware file hashes are automatically downloaded from known malware repositories, stored, and sent for automated analysis; and
  • Threat data is propagated to our homegrown security event management system, which is used to protect Facebook's corporate networks.
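
A hedged sketch of what such a logging-time processor could look like, with the three routing rules above expressed as code. Every function name here is a hypothetical stand-in for the internal systems the post mentions:

    def add_to_blacklist(url: str) -> None:
        print("blacklist <-", url)  # same blacklist that protects facebook.com

    def is_interesting(datum: dict) -> bool:
        return datum.get("context", {}).get("novel", False)

    def fetch_from_malware_repos(file_hash: str) -> bytes:
        return b"..."  # download the sample from a known malware repository

    def submit_for_automated_analysis(sample: bytes) -> None:
        print("analysis <-", len(sample), "bytes")

    def notify_siem(datum: dict) -> None:
        print("siem <-", datum["indicator"])  # homegrown security event management

    def process(datum: dict) -> None:
        if datum["threat_type"] == "MALICIOUS_URL":
            add_to_blacklist(datum["indicator"])
        elif datum["threat_type"] == "MALWARE_HASH" and is_interesting(datum):
            submit_for_automated_analysis(fetch_from_malware_repos(datum["indicator"]))
        notify_siem(datum)

    process({"indicator": "http://evil-malware-domain.biz/x", "threat_type": "MALICIOUS_URL"})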


Discoveries

Now that the ThreatData framework is in place, we continue to iterate on it: more Facebook engineers are hacking on it, and we are bringing in new types of threats. Along the way, we've had some interesting discoveries.


Feature Phone Malware

In the summer of 2013, we noticed a spike in malware samples containing the string 'J2ME' in the anti-virus signature. Further investigation revealed a spam campaign using fake Facebook accounts to send links to malware designed for feature phones. The malware, specifically the Trojan:J2ME/Boxer family [3], was capable of stealing a victim's address book, sending premium SMS spam, and using the phone's camera to take pictures. With this discovery, we were able to analyze the malware, disrupt the spam campaign, and work with partners to disrupt the botnet's infrastructure. Below is a chart of a similar campaign attempted in December 2013.



Chart: December 2013 spam campaign attempting to spread Trojan.J2ME.Boxer malware (blue: unique URLs; red: unique binaries)


'Super' Anti-Virus

In a typical corporate environment, a single anti-virus product is deployed to all devices and used as a core defense. In reality, however, no single anti-virus product will detect all threats. Some vendors are great at detecting certain types of malware, while others can detect a wide array of threats but are more likely to mislabel them. We decided to employ our framework to construct a lightweight set of hashes expressly not detected by our chosen anti-virus product and feed those hashes directly into our custom security event management system. The results have been impressive: we've detected both adware and malware installed on visiting vendor computers that no single anti-virus product could have found for us.
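
The underlying logic is simple set arithmetic. Here is a small illustrative sketch (the hash values are placeholders):

    # Sketch of the 'super AV' set logic: alert on known-bad hashes that the
    # deployed anti-virus product would miss. All three sets are placeholders.
    known_bad = {"a1...", "b2...", "c3..."}        # hashes from ThreatData feeds
    detected_by_our_av = {"a1..."}                 # covered by the deployed product
    observed_on_network = {"c3...", "d4..."}       # seen on corporate endpoints

    watchlist = known_bad - detected_by_our_av     # what our AV will not catch
    alerts = watchlist & observed_on_network
    print(alerts)                                  # -> {'c3...'}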

Understanding the source of threats

As part of the ThreatData framework, we have growing capabilities to decorate the data with additional context at logging time. For example, we add Autonomous System, ISP, and country-level geocoding to every malicious or victimized IP address logged to the repository. As a result, we can understand where threats are coming from, arranged by type of attack, time, and frequency. The map below is a heat map of one month's worth of data with the ASN/ISP/country decoration; the shading reflects the combined volume of malicious and victimized IP addresses. The inset pie chart breaks out U.S. IP addresses by ISP. Charts like this, which an analyst can build in under a minute, are used by Facebook's security teams to drive the relationships we build with other companies and daily remediation actions.



World map of malicious and victimized IP addresses, with inset of United States IP addresses broken out by Internet service provider
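
Decoration at logging time might look like the following sketch; the lookup functions are stand-ins for whatever GeoIP/ASN databases are in use (the post does not name them):

    # Hypothetical enrichment step: add ASN, ISP, and country to each logged IP.
    def lookup_asn(ip: str) -> int: return 64512         # stand-in lookup
    def lookup_isp(ip: str) -> str: return "ExampleNet"  # stand-in lookup
    def lookup_country(ip: str) -> str: return "US"      # stand-in lookup

    def decorate(datum: dict) -> dict:
        ip = datum["indicator"]
        datum["context"].update(
            asn=lookup_asn(ip), isp=lookup_isp(ip), country=lookup_country(ip)
        )
        return datum

    print(decorate({"indicator": "192.0.2.1", "threat_type": "VICTIM_IP", "context": {}}))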
Discoveries and detection capabilities like these are just the tip of the iceberg. We're constantly finding new ways to improve and extend the ThreatData framework to encompass new threats and make smarter decisions with the ones we've already identified. We realize that not all aspects of this approach are entirely novel, but we wanted to share what has worked for us to help spark new ideas. We've found that the framework lets us easily incorporate fresh types of data and quickly hook into new and existing internal systems, regardless of their technology stack or how they conceptualize threats.


[0] http://www.virustotal.com/
[1] https://www.facebook.com/note.php?note_id=89508453919
[2] https://www.facebook.com/publications/148418812023978
[3] https://www.virustotal.com/en/file/f7a4de7f70740fbb806d130b2ad19f6cb5a3c737e659e6f38aec119b201a0d2d/analysis/

By Mark Hammell
Source:
https://www.facebook.com/notes/protect-the-graph/understanding-online-threats-with-threatdata/1438165199756960

Microsoft makes source code for MS-DOS and Word for Windows available to public

On Tuesday, we dusted off the source code for early versions of MS-DOS and Word for Windows. With the help of the Computer History Museum, we are making this code available to the public for the first time.
The museum has done an excellent job of curating some of the most significant historical software programs in computing history. As part of this ongoing project, the museum will make available two of the most widely used software programs of the 1980s, MS-DOS 1.1 and 2.0 and Microsoft Word for Windows 1.1a, to help future generations of technologists better understand the roots of personal computing.
In 1980, IBM approached Microsoft to work on a project code-named “Chess.” What followed was a significant milestone in the history of the personal computer. At the time, Microsoft provided the BASIC language interpreter for IBM. IBM, however, had bigger plans and asked Microsoft to create an operating system as well. Without one of its own on hand, Microsoft licensed an operating system from Seattle Computer Products, which would become the foundation for PC-DOS and MS-DOS.
IBM and Microsoft developed a unique relationship that paved the way for advancements in the nascent personal computer industry, and subsequent advancements in personal computing.
Bill Gates was interviewed by David Bunnell just after the launch of the IBM PC in the early 1980s for PC Magazine’s inaugural issue, and provided the backstory: “For more than a year, 35 of Microsoft's staff of 100 worked fulltime (and plenty of overtime) on the IBM project. Bulky packages containing computer gear and other goodies were air-expressed almost daily between the Boca Raton [IBM] laboratory and Seattle [Microsoft]. An electronic message system was established and there was almost always someone flying the arduous 4,000 mile commute.”
Following closely on the heels of MS-DOS, Microsoft released the first DOS-based version of Microsoft Word in 1983, which was designed to be used with a mouse. However, it was the 1989 release of Word for Windows that became a blockbuster for the company; within four years it was generating over half the revenue of the worldwide word-processing market. Word for Windows was a remarkable engineering and marketing achievement, and we are happy to provide its source code to the museum.
It’s mind-boggling to think of the growth from those days when Microsoft had under 100 employees and a Microsoft product (MS-DOS) had less than 300KB (yes, kilobytes) of source code. From those roots we’ve grown in a few short decades to become a company that has sold more than 200 million licenses of Windows 8 and has over 1 billion people using Microsoft Office. Great things come from modest beginnings, and the great Microsoft devices and services of the future will probably start small, just as MS-DOS and Word for Windows did.
Thanks to the Computer History Museum, these important pieces of source code will be preserved and made available to the community for historical and technical scholarship.

By Roy Levin
Source:
http://blogs.technet.com/b/microsoft_blog/archive/2014/03/25/microsoft-makes-source-code-for-ms-dos-and-word-for-windows-available-to-public.aspx

Governments and Cloud Computing: Roles, Approaches, and Policy Considerations

Abstract:

Governments from Bogotá to Beijing are engaging with emerging cloud computing technologies and the industry around them in a variety of overlapping contexts. Based on a review of a representative number of advanced cloud computing strategies developed by governments from around the world, including the United States, the United Kingdom, the European Union, and Japan, we observed that these governments – mostly implicitly – have taken on several different “roles” with respect to their approaches to cloud computing. In particular, we identify six distinguishable but overlapping roles assumed by governments: users, regulators, coordinators, promoters, researchers, and service providers. In this paper, we describe and discuss each of these roles in detail using examples from our review of cloud strategies, and share high-level observations about the roles as well as the contexts in which they arise. The paper concludes with a set of considerations for policymakers to take into account when developing approaches to the rapidly evolving cloud computing technologies and industry.


By Urs Gasser and David O'Brien
Source and download:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2410270

ICANN Seeks Public Comment on 2013 RAA Data Retention Specification Data Elements and Legitimate Purposes for Collection and Retention

ICANN has been in discussions with a number of Registrars regarding data retention waiver requests (“Waiver Requests”) submitted under the 2013 Registrar Accreditation Agreement (the "2013 RAA"). Some Registrars are seeking an exemption from certain collection and/or retention requirements under the Data Retention Specification (the "Specification") of the 2013 RAA. Section 2 of the Data Retention Specification sets forth requirements regarding the written materials a Registrar must submit in support of its good faith determination that the collection and/or retention of any data element specified in the Specification violates applicable law, and provides that following notice to ICANN of the Waiver Request, ICANN and the applicable Registrar shall discuss the matter in good faith in an effort to reach a mutually acceptable resolution of the matter. An update on the 2013 RAA and the data retention waiver process can be found here: http://blog.icann.org/2014/02/update-on-2013-raa-and-data-retention-waiver-process/
ICANN understands that personal data should be treated in accordance with applicable data protection laws, which generally permit gathering and retention of personal data for legitimate purpose(s). ICANN also understands that the law may vary from country to country as to (i) what is considered a legitimate purpose, (ii) whether the personal data is adequate, relevant and not excessive in relation to the legitimate purpose for which they are collected and (iii) for how long certain data elements may be retained. In other words, what is considered a legitimate purpose for collection of certain data in one country may not be considered a legitimate purpose in another country.
During ICANN’s discussions in an effort to reach a mutually acceptable resolution of the matter, some Registrars have requested that ICANN (a) clarify and better define certain data elements described in the Data Retention Specification that the Registrars maintain are not clearly defined; and (b) describe potentially legitimate purposes for collection and retention of each data element that would help provide guidance for Registrars both as to whether such elements may be lawfully collected, and, if so, for how long such elements might lawfully be retained.
In response to these requests from some Registrars, ICANN is posting for public comment a document seeking to clarify what is meant by certain data elements described in the Data Retention Specification and describing potentially legitimate purposes for collection and retention of those data elements. That document can be found here [PDF, 116 KB]. The document will be posted for a period of thirty (30) days to seek feedback and input from the community on (i) whether the data elements are appropriately described, (ii) whether the cited purposes for collection and retention are appropriate and legitimate, and (iii) whether there are other potentially legitimate purposes for collection and retention of such data elements. After the thirty (30) day period following this posting has expired, ICANN will consider all feedback and input received in connection with ICANN’s ongoing discussions to reach a mutually acceptable resolution of Waiver Requests. In the interim, ICANN will continue its ongoing discussions to reach a mutually acceptable resolution of Waiver Requests with individual Registrars with the goal of granting additional Waiver Requests as and when appropriate.
A public comment period will remain open until 23:59 PDT (California time) on 21 April 2014. Public comments will be available for consideration by ICANN staff and the ICANN Board.
Source:
https://www.icann.org/en/news/announcements/announcement-3-21mar14-en.htm

“Cloud Innovation and the Law: Issues, Approaches, and Interplay”

“Cloud Innovation and the Law: Issues, Approaches, and Interplay,” authored by Berkman Center Executive Director and Harvard Law School Professor of Practice Urs Gasser, draws from and builds upon previous contributions by the author and his collaborators from the Berkman Center’s cloud computing initiative.
Using cloud computing as a lens, the paper seeks to distill higher-level insights about regulation of emerging technologies in digitally networked environments. It starts by introducing and framing cloud computing as both a technological innovation and innovation-enabling technology – in short: cloud innovation. The paper then explores how the legal and regulatory system interacts with cloud computing by identifying, clustering, and analyzing reactions from the legal and regulatory systems in response to the emergence of cloud computing. It ends with general observations regarding the design of interfaces - technical, organizational, and human - between innovative and innovation-enabling technologies, like cloud computing, and the legal and regulatory system.

Download paper:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2410271
Source:
http://cyber.law.harvard.edu/node/9070

Staying at the forefront of email security and reliability: HTTPS-only and 99.978 percent availability

Your email is important to you, and making sure it stays safe and always available is important to us. As you go about your day reading, writing and checking messages, there are tons of security measures running behind the scenes to keep your email safe, secure, and there whenever you need it.

Starting today, Gmail will always use an encrypted HTTPS connection when you check or send email. Gmail has supported HTTPS since the day it launched, and in 2010 we made HTTPS the default. Today's change means that no one can listen in on your messages as they go back and forth between you and Gmail’s servers—no matter if you're using public WiFi or logging in from your computer, phone or tablet.

In addition, every single email message you send or receive—100 percent of them—is encrypted while moving internally. This ensures that your messages are safe not only when they move between you and Gmail's servers, but also as they move between Google's data centers—something we made a top priority after last summer’s revelations.

Of course, being able to access your email is just as important as keeping it safe and secure. In 2013, Gmail was available 99.978 percent of the time, which averages to less than two hours of disruption for a user for the entire year. Our engineering experts look after Google's services 24x7 and if a problem ever arises, they're on the case immediately. We keep you informed by posting updates on the Apps Status Dashboard until the issue is fixed, and we always conduct a full analysis on the problem to prevent it from happening again.
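
As a quick sanity check on that availability figure, 99.978 percent uptime over a calendar year works out as follows:

    # 99.978% availability over one year, expressed as hours of downtime
    downtime_fraction = 1 - 0.99978             # 0.00022
    hours_per_year = 365 * 24                   # 8760
    print(downtime_fraction * hours_per_year)   # ~1.93 hours: "less than two hours"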

Our commitment to the security and reliability of your email is absolute, and we’re constantly working on ways to improve. You can learn about additional ways to keep yourself safe online, like creating strong passwords and enabling 2-step verification, by visiting the Security Center: https://www.google.com/help/security.
By Nicolas Lidzborski
Source:
http://googleblog.blogspot.com.tr/2014/03/staying-at-forefront-of-email-security.html

Missed Alarms and 40 Million Stolen Credit Card Numbers: How Target Blew It

The biggest retail hack in U.S. history wasn’t particularly inventive, nor did it appear destined for success. In the days prior to Thanksgiving 2013, someone installed malware in Target’s (TGT) security and payments system designed to steal every credit card used at the company’s 1,797 U.S. stores. At the critical moment—when the Christmas gifts had been scanned and bagged and the cashier asked for a swipe—the malware would step in, capture the shopper’s credit card number, and store it on a Target server commandeered by the hackers.
On Saturday, Nov. 30, the hackers had set their traps and had just one thing to do before starting the attack: plan the data’s escape route. As they uploaded exfiltration malware to move stolen credit card numbers—first to staging points spread around the U.S. to cover their tracks, then into their computers in Russia—Target’s FireEye security software spotted them. Target’s monitoring team in Bangalore got an alert and flagged the security team in Minneapolis. And then …

Nothing happened.
For some reason, Minneapolis didn’t react to the sirens. Bloomberg Businessweek spoke to more than 10 former Target employees familiar with the company’s data security operation, as well as eight people with specific knowledge of the hack and its aftermath, including former employees, security researchers, and law enforcement officials. The story they tell is of an alert system, installed to protect the bond between retailer and customer, that worked beautifully. But then, Target stood by as 40 million credit card numbers—and 70 million addresses, phone numbers, and other pieces of personal information—gushed out of its mainframes.

By Michael Riley, Ben Elgin, Dune Lawrence, and Carol Matlack
Source and read more:
http://www.businessweek.com/articles/2014-03-13/target-missed-alarms-in-epic-hack-of-credit-card-data

NIST’s FY 2015 Budget Request Focuses on Innovation, Expands Technology Transfer and Economic Growth Priorities

The U.S. Department of Commerce released details today about the President’s fiscal year (FY) 2015 budget request to Congress for the National Institute of Standards and Technology (NIST). The FY 2015 budget request of $900 million aligns with the agency’s vision for expanding and strengthening NIST programs in a number of key national priority areas such as forensic science, lightweight vehicle alloys and bioengineering measurement tools. The request is a $50 million increase from FY 2014 enacted levels.
"NIST’s laboratories, research and standards development, and manufacturing services are crucial to boosting American innovation and competitiveness," said Under Secretary of Commerce for Standards and Technology and NIST Director Patrick D. Gallagher. "NIST’s FY 2015 budget will allow us to expand and strengthen critical programs, such as increasing capabilities to move technology from the lab to the market more efficiently and effectively. These requested investments will also help us support public-private partnerships that address some of today’s biggest challenges."
The budget reflects the Administration’s continued commitment to enhancing innovation and economic growth through NIST’s broad array of research, standards development and services. The FY 2015 budget request will help NIST increase regional and national capacity for innovative manufacturing, advance the cybersecurity of critical infrastructure and the digital economy, and remain at the forefront of measurement science and technology research and development.
The total FY 2015 budget request includes the following:
Scientific and Technical Research and Services (STRS), $680 million
The STRS request includes $29 million above FY 2014 enacted levels to allow NIST’s laboratory programs to conduct measurement research and services that are central to innovation, productivity, trade and public safety.
Industrial Technology Services (ITS), $161 million
The ITS request includes an increase of $18 million above FY 2014 enacted levels to support the following efforts:
  • Hollings Manufacturing Extension Partnership (MEP) (+$13 million). MEP is a federal-state-industry partnership that strengthens the competitiveness of small and mid-size U.S. manufacturers by providing them with access to technology, resources and industry experts.
  • Manufacturing Innovation Institutes Coordination (+$5 million). The budget requests funding to coordinate the growing number of manufacturing innovation institutes, including four existing institutes and five new institutes for which funding has been committed, as well as five planned with FY 2015 funding. The Advanced Manufacturing Program Office, hosted by NIST, will coordinate up to 45 institutes that would form the National Network for Manufacturing Innovation (NNMI).
Construction of Research Facilities (CRF), $59 million
The CRF request includes an increase of $3 million above FY 2014 enacted levels and the following projects:
  • Building 1 of the NIST Boulder, Colo., laboratories. With a total request of $11.1 million (a decrease of $0.7 million from FY 2014 enacted levels), NIST will continue to renovate the 60-year-old building, which houses the majority of NIST Boulder research and measurement facilities. Specifically, NIST will complete seismic reinforcement (interior and exterior), remediate hazardous materials, connect utilities to the site utility distribution system, and construct a new building envelope and service galley.
  • Radiation Physics Building, Gaithersburg, Md. An increase of $3.7 million above FY 2014 enacted levels will help NIST begin efforts to modernize the 1964 building to ensure measurements and research are not compromised due to the condition of the facility.
The President’s proposed Opportunity, Growth and Security Initiative, a government-wide appropriation request that will be fully paid for with a balanced package of spending and tax reforms, would provide an additional $115 million to strengthen NIST’s research and development capabilities and facilities. The initiative would support industry and government efforts to address today’s biggest challenges in advanced manufacturing, cybersecurity, advanced communications, quantum science and other areas of critical national importance. The initiative would also provide NIST with $2.4 billion to support the National Network for Manufacturing Innovation.
To view the U.S. Commerce Department’s full FY 2015 budget request, visit: http://www.osec.doc.gov/bmi/budget/FY15CBJ.html. For the Budget in Brief, visit http://www.osec.doc.gov/bmi/budget/FY15BiB/EntireBiB.pdf.
As a non-regulatory agency of the U.S. Department of Commerce, NIST promotes U.S. innovation and industrial competitiveness by advancing measurement science, standards and technology in ways that enhance economic security and improve our quality of life. To learn more about NIST, visit www.nist.gov.

Source:
http://www.nist.gov/public_affairs/releases/fy15_budgetrequest_3-13-2014.cfm

Volunteers in metadata study called gun stores, strip clubs, and more: Stanford research shows that metadata, even when offered up willingly, is very revealing

Since November 2013, researchers at Stanford University have been asking: What’s in your metadata?
Specifically, the study encouraged volunteers who also used Facebook to install an app called MetaPhone on their Android phones. The app was designed to act as a sort of slimmed-down version of the National Security Agency by gathering the same metadata collected by telecom firms and, in turn, intelligence agencies. Volunteers who chose to participate gave the researchers access to their calling and texting records, including the date, time, and duration of each call.
Since late last year, the team has been releasing interim results from the 546 people who chose to participate. On Wednesday, the team released its latest and most complete findings and was startled by what it found.
“At the outset of this study, we shared the same hypothesis as our computer science colleagues—we thought phone metadata could be very sensitive,” Jonathan Mayer, a graduate student leading the project, wrote on Wednesday.
“We did not anticipate finding much evidence one way or the other, however, since the MetaPhone participant population is small, and participants only provide a few months of phone activity on average. We were wrong. We found that phone metadata is unambiguously sensitive, even in a small population and over a short time window. We were able to infer medical conditions, firearm ownership, and more, using solely phone metadata.”
Mayer explained to Ars by phone that given the small sample size and the study duration of only a few months, the team had originally hypothesized that the information gathered would not be as revealing.
“I think it's very certainly strongly suggestive that a larger pool that spans more time would have remarkably more sensitive information in it,” he added.
The new results provide a strong, research-based analytical counterweight to the government assertion that metadata is somehow less revelatory than capturing actual call data.

A likely abortion?

So what was revealed, precisely? Mayer and his team showed that participants called public numbers of “Alcoholics Anonymous, gun stores, NARAL Pro-Choice, labor unions, divorce lawyers, sexually transmitted disease clinics, a Canadian import pharmacy, strip clubs, and much more.”
The researchers were even surprised that they had real-world results to support a classic nightmare scenario feared by many civil libertarians and privacy activists.
Participant A communicated with multiple local neurology groups, a specialty pharmacy, a rare condition management service, and a hotline for a pharmaceutical used solely to treat relapsing multiple sclerosis.
Participant B spoke at length with cardiologists at a major medical center, talked briefly with a medical laboratory, received calls from a pharmacy, and placed short calls to a home reporting hotline for a medical device used to monitor cardiac arrhythmia.
Participant C made a number of calls to a firearm store that specializes in the AR semiautomatic rifle platform. They also spoke at length with customer service for a firearm manufacturer that produces an AR line.
In a span of three weeks, Participant D contacted a home improvement store, locksmiths, a hydroponics dealer, and a head shop.
Participant E had a long, early morning call with her sister. Two days later, she placed a series of calls to the local Planned Parenthood location. She placed brief additional calls two weeks later, and made a final call a month after.
Perhaps most surprising, the privacy researchers decided not to follow up with some of these willing, voluntary participants.
“We were able to corroborate Participant B’s medical condition and Participant C’s firearm ownership using public information sources,” the team added. “Owing to the sensitivity of these matters, we elected to not contact Participants A, D, or E for confirmation.”

“Metadata surveillance endangers privacy”

Privacy activists and lawyers immediately lauded the Stanford findings.
Jennifer Granick, the director of civil liberties at the Stanford Center for Internet and Society where Mayer is affiliated, concluded that this study “adds important empirical evidence to support what is now a growing consensus. Metadata surveillance endangers privacy.”
Meanwhile, Brian Pascal, a non-resident fellow at the Stanford Center for Internet and Society, told Ars that one might expect those who knew they were being monitored to “skew calling habits towards the bland.”
“However, this does not appear to be the case,” he added. “For example, 2 percent of participants called ‘adult establishments,’ knowing that their calling metadata was being recorded. It’s not difficult to imagine that some users, knowing that MetaPhone gathers this information, might change their calling habits. Without a control group, though, it’s impossible to know just how much MetaPhone (or surveillance in general) changes behavior. Admittedly, MetaPhone focuses more on illustrating just how powerful metadata can be, rather than on the impact of surveillance on personal choice, but it’s an interesting implication nonetheless.”
Others drew a clear line between this work and the NSA’s rationale for collect-it-all.
“This just confirms what everyone's intuition suggested—phone metadata is incredibly revealing. It's great to have some empirical evidence to back up that intuition, and it only reinforces the intrusiveness of the NSA's mass collection of Americans' call records.”
“This is striking,” Fred Cate, a law professor at Indiana University, told Ars by e-mail.
“It highlights three key points. First, that the key part of the NSA’s argument—we weren’t collecting sensitive information so what is the bother?—is factually wrong. Second, that the NSA and the [Foreign Intelligence Surveillance Act] Court failed to think this through; after all, it only takes a little common sense to realize that sweeping up all numbers called will inevitably reveal sensitive information. Of course the record of every call made and received is going to implicate privacy. And third, it lays bare the fallacy of the Supreme Court’s mind-numbingly broad wording of the third-party doctrine in an age of big data: just because I reveal data for one purpose—to make a phone call—does not mean that I have no legitimate interest in that information, especially when combined with other data points about me.”

By Cyrus Farivar
Source:
http://arstechnica.com/tech-policy/2014/03/volunteers-in-metadata-study-called-gun-stores-strip-clubs-and-more/
 

Global Multi-stakeholder Process to Transition the Role of the USG Relating to IANA

March 14, 2014 marked a historic moment for the Internet: the National Telecommunications and Information Administration (NTIA) announced its intent to transition key Internet domain name functions to the global multistakeholder community and called upon ICANN to convene the multistakeholder process that will develop the transition plan.
ICANN, both as the IANA functions administrator and as the global coordinator for the DNS Root Zone, is uniquely positioned to convene this multistakeholder process to develop a plan to transition the USG role. The Internet's global multi-stakeholder community will determine the framework under which the community will hold stewardship over these technical functions. In this regard, ICANN proudly accepts this responsibility with renewed commitment and remains dedicated to keeping the Internet secure, stable and resilient.
To achieve this objective, ICANN will launch a process that allows the global community to design a framework in a bottom-up, multistakeholder manner. The process will be open and include a set of transparent mechanisms. It will begin with public consultations at ICANN 49 in Singapore (March 23-27, 2014), which will mark the official launch, and the public will be invited to provide input through online forums, webinars, social networks, and ICANN’s industry events. Subsequent meetings include ICANN 50 in London (June 22-26, 2014), ICANN 51 in Los Angeles (October 12-16, 2014), ICANN 52 (location TBA, March 2015), and ICANN 53 (location TBA, June 2015).
Depending on the progress of this process and flow of community consultation, ICANN and the community could be ready to complete the transition before the current contract with the USG expires in September 2015.
Within our region, ICANN’s and IANA’s relationship with the US Government has long been a subject of debate, and this debate has had its effect on the global Internet Governance dialogue. As a means of strengthening both the IG ecosystem and the DNS industry in our region, you are cordially invited and strongly encouraged to participate in this process. For more information, visit www.icann.org.

Panel on the Internet Law and Its Effects (İnternet Yasası ve Etkileri Paneli)


Mark Zuckerberg called Barack Obama to express his frustration over US government ‘damage’ to the Internet

Facebook co-founder and CEO Mark Zuckerberg today posted on the social network to share his thoughts about the Internet, why it’s important to keep it secure, and how angry he is about US government surveillance. Just a couple of days after news surfaced that the NSA posed as Facebook to infect computers with malware, Zuckerberg says he called Obama himself and wasn’t pleased to learn the broader problem won’t be solved quickly.
Here’s the crux of the post:
The US government should be the champion for the internet, not a threat. They need to be much more transparent about what they’re doing, or otherwise people will believe the worst.
I’ve called President Obama to express my frustration over the damage the government is creating for all of our future. Unfortunately, it seems like it will take a very long time for true full reform.
Zuckerberg says he’s been “confused and frustrated” by the regular reports about the US government’s behavior (he doesn’t explicitly mention the word “surveillance”) and that his engineers imagine protecting users “against criminals, not our own government.” He ends with a call to arms that it’s up to the people to continue building the Internet that is safe and secure, emphasizing that Facebook will do its part.

By Emil Protalinski
Source:
http://thenextweb.com/facebook/2014/03/13/mark-zuckerberg-called-barack-obama-express-frustration-damage-government-creating/?fromcat=all#!zBUnT

An online Magna Carta: Berners-Lee calls for bill of rights for web

The inventor of the world wide web believes an online "Magna Carta" is needed to protect and enshrine the independence of the medium he created and the rights of its users worldwide.
Sir Tim Berners-Lee told the Guardian the web had come under increasing attack from governments and corporate influence and that new rules were needed to protect the "open, neutral" system.
Speaking exactly 25 years after he wrote the first draft of the first proposal for what would become the world wide web, the computer scientist said: "We need a global constitution – a bill of rights."
Berners-Lee's Magna Carta plan is to be taken up as part of an initiative called "the web we want", which calls on people to generate a digital bill of rights in each country – a statement of principles he hopes will be supported by public institutions, government officials and corporations.
"Unless we have an open, neutral internet we can rely on without worrying about what's happening at the back door, we can't have open government, good democracy, good healthcare, connected communities and diversity of culture. It's not naive to think we can have that, but it is naive to think we can just sit back and get it."
Berners-Lee has been an outspoken critic of the American and British spy agencies' surveillance of citizens following the revelations by National Security Agency whistleblower Edward Snowden. In the light of what has emerged, he said, people were looking for an overhaul of how the security services were managed.
His views also echo across the technology industry, where there is particular anger about the efforts by the NSA and Britain's GCHQ to undermine encryption and security tools – something many cybersecurity experts say has been counterproductive and undermined everyone's security.
Principles of privacy, free speech and responsible anonymity would be explored in the Magna Carta scheme. "These issues have crept up on us," Berners-Lee said. "Our rights are being infringed more and more on every side, and the danger is that we get used to it. So I want to use the 25th anniversary for us all to do that, to take the web back into our own hands and define the web we want for the next 25 years."
The web constitution proposal should also examine the impact of copyright laws and the cultural-societal issues around the ethics of technology.
While regional regulation and cultural sensitivities would vary, Berners-Lee said he believed a shared document of principle could provide an international standard for the values of the open web.
He is optimistic that the "web we want" campaign can be mainstream, despite the apparent lack of awareness of public interest in the Snowden story.
"I wouldn't say people in the UK are apathetic – I would say that they have greater trust in their government than other countries. They have the attitude that we voted for them, so let them get on and do it.
"But we need our lawyers and our politicians to understand programming, to understand what can be done with a computer. We also need to revisit a lot of legal structure, copyright law – the laws that put people in jail which have been largely set up to protect the movie producers … None of this has been set up to preserve the day to day discourse between individuals and the day to day democracy that we need to run the country," he said.
Berners-Lee also spoke out strongly in favour of changing a key and controversial element of internet governance that would remove a small but symbolic piece of US control. The US has clung on to the Iana contract, which controls the dominant database of all domain names, but has faced increased pressure post-Snowden.
He said: "The removal of the explicit link to the US department of commerce is long overdue. The US can't have a global place in the running of something which is so non-national. There is huge momentum towards that uncoupling but it is right that we keep a multi-stakeholder approach, and one where governments and companies are both kept at arm's length."
Berners-Lee also reiterated his concern that the web could be balkanised by countries or organisations carving up the digital space to work under their own rules, whether for censorship, regulation or commerce.
We all have to play a role in that future, he said, citing resistance to proposed copyright theft regulation.
He said: "The key thing is getting people to fight for the web and to see the harm that a fractured web would bring. Like any human system, the web needs policing and of course we need national laws, but we must not turn the network into a series of national silos."
Berners-Lee also starred in the London 2012 Olympics opening ceremony, typing the words "this is for everyone" on a computer in the centre of the arena. He has stuck firmly to the principles of openness, inclusivity and democracy since he invented the web in 1989, choosing not to commercialise his model. Rejecting the idea that government and commercial control of such a powerful medium was inevitable, Berners-Lee said it would be impossible: "Not until they prise the keyboards from our cold, dead fingers."

By Jemima Kiss
Source and read more:
http://www.theguardian.com/technology/2014/mar/12/online-magna-carta-berners-lee-web

Snowden: The NSA Is Setting Fire To The Future Of The Internet

Today at the SXSW conference, NSA whistleblower Edward Snowden joined the event digitally to speak about mass surveillance. Since his revelations began to spill last summer, Snowden has been a lightning rod for discussion regarding the proper role of government and how we handle privacy as a society.
In his remarks regarding the need for more consumer-friendly encryption, Snowden condemned the NSA, his former employer, and its leaders.
Painting Director of the National Security Agency General Keith Alexander and Director of National Intelligence James Clapper with a single stroke, Snowden said that they have done more harm than anyone else to our national and Internet security. The NSA, in Snowden’s view, is “setting fire to the Internet,” and those in charge of the operation bear that guilt.
Snowden’s argument is simple: By “eroding our protections of communications to get an attack advantage,” the NSA is harming the integrity of the Internet itself, a field in which the United States has a global advantage in terms of innovation. Using the analogy of a vault, Snowden asked why the nation that has the most in its vault would build a backdoor into the vault itself instead of working to protect it. This correlates with Snowden’s view that the NSA is harming security by putting offense ahead of defense; making sure that there is a way into the vault instead of the opposite may not be the best way to keep its contents safe.
Does the disclosure that the NSA acts in that manner, and the methods by which it does so harm our national security? Snowden, unsurprisingly, doesn’t think so. In fact, he thinks that his work does the opposite, that it actually improves the nation’s safety. We rely on the ability to trust our communications, he said, and without that we have nothing. So, provided that we are moving towards more secure communication, we are moving towards safety.
Snowden’s argument is predicated on the idea that the integrity of your and my communications is tied to the nation’s larger interest. The connection between the two isn’t directly apparent, so let’s unpack the idea a bit. If the United States government can access the communications of its citizens on a pervasive basis, it implies that the communication itself is either insecure enough by default, or insecure enough by direct action to be accessed chronically.

By Alex Wilhelm
Source and read more:
http://techcrunch.com/2014/03/10/snowden-the-nsa-is-setting-fire-to-the-future-of-the-internet/

Resourceful Computing Advances Chemistry at Caltech

In the 21st century, it seems impossible to imagine a group of researchers sharing just one computer. However, several decades ago—when computers required big budgets and lots of space—this hypothetical scenario was just the day-to-day reality of research. In the early 1970s, Caltech researcher Aron Kuppermann—seeking an alternative to this often-crowded arrangement—found additional computer resources in an unlikely place: a local religious organization. In the same spirit of creativity, Caltech researchers today have also found ways to practice resourceful computing.

Source and more:
http://www.caltech.edu/content/resourceful-computing-advances-chemistry-caltech

There's No Real Difference Between Online Espionage and Online Attack

Back when we first started getting reports of the Chinese breaking into U.S. computer networks for espionage purposes, we described it in some very strong language. We called the Chinese actions cyber-attacks. We sometimes even invoked the word cyberwar, and declared that a cyber-attack was an act of war.
When Edward Snowden revealed that the NSA has been doing exactly the same thing as the Chinese to computer networks around the world, we used much more moderate language to describe U.S. actions: words like espionage, or intelligence gathering, or spying. We stressed that it's a peacetime activity, and that everyone does it.
The reality is somewhere in the middle, and the problem is that our intuitions are based on history.
Electronic espionage is different today than it was in the pre-Internet days of the Cold War. Eavesdropping isn't passive anymore. It's not the electronic equivalent of sitting close to someone and overhearing a conversation. It's not passively monitoring a communications circuit. It's more likely to involve actively breaking into an adversary's computer network—be it Chinese, Brazilian, or Belgian—and installing malicious software designed to take over that network.
In other words, it's hacking. Cyber-espionage is a form of cyber-attack. It's an offensive action. It violates the sovereignty of another country, and we're doing it with far too little consideration of its diplomatic and geopolitical costs.

By Bruce Schneier
Source and read more:
http://www.theatlantic.com/technology/archive/2014/03/theres-no-real-difference-between-online-espionage-and-online-attack/284233/

Artificial Intelligence could kill us all. Meet the man who takes that risk seriously

Thinking about the end of the world is something that most people try to avoid; for others, it’s a profession. The Future of Humanity Institute at the University of Oxford, UK specializes in looking at the ‘big-picture’ future of the human race, and notably, the risks that could wipe us out entirely.
As you’d probably imagine, the risks considered by the Institute include things like nuclear war and meteor strikes, but one perhaps unexpected area that it’s looking into is the potential threat posed by artificial intelligence. Could computers become so smart that they become our rivals, take all our jobs and eventually wipe us all out? This Terminator-style scenario used to seem like science fiction, but it’s starting to be taken seriously by those who watch the way technology is developing.
“I think there’s more academic papers published on either dung beetles or Star Trek than about actual existential risk,” says Stuart Armstrong, a philosopher and Research Fellow at the institute, whose work has lately been focused on AI. “There are very few institutes of any sorts in the world looking into these large-scale risks…. there is so little research… compared to other far more minor risks – traffic safety and things like that.”

By Martin Bryant
Source and read more:
http://thenextweb.com/insider/2014/03/08/ai-could-kill-all-meet-man-takes-risk-seriously/#!y1YIV

Prison Sentences Increased in the Turkish Penal Code (TCK) Provisions on the Protection of Personal Data

Thursday, 6 March 2014
Official Gazette (Resmî Gazete)
Issue: 28933 (bis)
LAW
LAW AMENDING THE ANTI-TERROR LAW, THE CODE OF CRIMINAL PROCEDURE
AND CERTAIN OTHER LAWS
Law No. 6526     Date of Adoption: 21/02/2014

ARTICLE 3 – In the first paragraph of Article 135 of the Turkish Penal Code No. 5237 of 26/9/2004, the phrase “six months” has been changed to “one year”.
ARTICLE 4 – In the first paragraph of Article 136 of Law No. 5237, the phrase “one year” has been changed to “two years”.
ARTICLE 5 – In the first paragraph of Article 138 of Law No. 5237, the phrase “imprisonment from six months to one year” has been changed to “imprisonment from one year to two years”, and the following paragraph has been added to the article:
“(2) Where the subject of the offence is data that must be erased or destroyed pursuant to the provisions of the Code of Criminal Procedure, the penalty to be imposed shall be doubled.”

NISTIR 7849, A Methodology for Developing Authentication Assurance Level Taxonomy for Smart Card-based Identity Verification

NIST announces the release of NIST Interagency Report (IR) 7849, A Methodology for Developing Authentication Assurance Level Taxonomy for Smart Card-based Identity Verification. Smart cards (smart identity tokens) are now extensively deployed for identity verification and are used in controlling access to both IT and physical resources. This publication presents a methodology for assigning authentication strengths based on the strength of pairwise bindings between the five entities involved in smart card-based authentications – the card (token), the token secret, the card holder, the card issuer, and the person identifier stored in the card. NISTIR 7849 also illustrates how to use the methodology for developing an authentication assurance level taxonomy for two real-world smart identity token deployments.
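
With five entities there are ten pairwise bindings to assess. The sketch below merely enumerates them and illustrates one plausible aggregation, treating the weakest binding as the ceiling on overall assurance; the strength scale and the weakest-link rule are assumptions for illustration, not NIST's actual scoring rules, which are defined in NISTIR 7849 itself.

    from itertools import combinations

    # The five entities named in NISTIR 7849.
    entities = ["card", "token_secret", "card_holder", "card_issuer", "person_identifier"]

    # Illustrative strengths only (1 = weak .. 3 = strong); not NIST's scale.
    binding_strength = {pair: 3 for pair in combinations(entities, 2)}
    binding_strength[("card", "card_holder")] = 2   # e.g. PIN only, no biometric

    print(len(binding_strength))            # 10 pairwise bindings
    print(min(binding_strength.values()))   # weakest-link view of overall assurance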

Source:
http://csrc.nist.gov/news_events/#mar6

Rethink Robotics



http://www.rethinkrobotics.com/

Body Odor ID: Your New Smelly Password

Facial recognition, fingerprints and iris scans could soon take a back seat to the newest biometric identification method on the block: body odor. Researchers at Spain’s Universidad Politecnica de Madrid, in collaboration with tech firm IIia Sistemas SL, are developing a system that can verify people by their scent signatures.
Recognizable body odor patterns remain constant enough over time to allow people to be identified with an accuracy rate of 85 percent. Researchers believe this result is enough to create less aggressive ways to ID people than intrusive measures currently being used today.
While iris and fingerprint scans may have a higher accuracy rate, the researchers contend these techniques are commonly associated with criminal records, perhaps making people reluctant to participate in the process. Facial recognition, on the other hand, has a high error rate. Therefore, the development of scent sensors that could identify a person as they walk through a scanning stall could provide a less invasive solution with a relatively high accuracy rate.

By Nic Halverson
Source and more:
http://news.discovery.com/tech/biotechnology/body-odor-id-your-new-smelly-password-140205.htm

Detection and Analysis of the Chameleon WiFi Access Point Virus

This paper analyses and proposes a novel detection strategy for the ‘Chameleon’ WiFi AP-AP virus. Previous research has considered virus construction, likely virus behaviour and propagation methods. The research here describes development of an objective measure of virus success, the impact of product susceptibility, the acceleration of infection and the growth of the physical area covered by the virus. An important conclusion of this investigation is that the connectivity between devices in the victim population is a more significant influence on virus propagation than any other factor. The work then proposes and experimentally verifies the application of a detection method for the virus. This method utilises layer 2 management frame information which can detect the attack while maintaining user privacy and user confidentiality, a key requirement in many security solutions.
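
The abstract does not spell out the paper's exact detection rule, but to give a flavor of the layer-2 approach it describes, here is a small sketch using the scapy library: it watches 802.11 beacon (management) frames only and flags an access point whose advertised capabilities suddenly change. The interface name is an assumption, and this is illustrative only, not the paper's actual detector.

    # Illustrative only: monitors management frames, inspects no user traffic.
    # Requires root and a wireless card already in monitor mode.
    from scapy.all import sniff, Dot11, Dot11Beacon

    fingerprints = {}  # BSSID -> capability field seen in that AP's beacons

    def check(pkt):
        if pkt.haslayer(Dot11Beacon):
            bssid = pkt[Dot11].addr3
            caps = pkt[Dot11Beacon].cap
            if bssid in fingerprints and fingerprints[bssid] != caps:
                print(f"ALERT: {bssid} beacon capabilities changed: "
                      f"{fingerprints[bssid]} -> {caps}")
            fingerprints[bssid] = caps

    # "wlan0mon" is an assumed monitor-mode interface name.
    sniff(iface="wlan0mon", prn=check, store=False)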

By Jonny Milliken, Valerio Selis and Alan Marshall
Source:
http://jis.eurasipjournals.com/content/2013/1/2

Does your private data really need to be that private?

When it comes to medical or genomics data, the public good outweighs the benefits of keeping information private, said two academics speaking at the Big Data Privacy Workshop at MIT on Monday.
“I think most people fear death or the death of a loved one more than a loss of privacy,” said John Guttag, professor at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL). In his view, patients or would-be patients would be well served to share their medical data — about hospital stays, treatments, procedures, etc. — in service of preventing things like Clostridium difficile (C. diff) infection.
Five percent of all U.S. patients suffer an infection unrelated to their admission and of those infections, C. diff is one of the most common, affecting 200,000 people per year, he said.

By Barb Darrow
Source and more:
http://gigaom.com/2014/03/03/does-your-private-data-really-need-to-be-that-private/

Which top-level domain names are right for you?

New domain names have been launching at a rate of seven per week over this past month alone. As more new domains become available, it may be difficult to figure out which is best for you to purchase.
Here’s a look at various industries and available domains that are right for you.

1. If you’re in finance

Finance-related domain names that have become available to pre-register include .bank, .loans or .ventures. Any strategy toward domain names that you develop should be primarily search-oriented, allowing your business to be pushed higher in Google.
According to NetNames, a popular site for gTLD info, users are likely to develop what’s called a “domain name bias” – assuming certain sites are more legitimate simply because of the domain assigned to them.
Something like .ventures or .bank can be ideal for startups or small companies in personal and private banking sectors. Think Venmo, Simple (formerly BankSimple), and LearnVest-type services.

2. If you’re in retail or fashion

Your best bets are going to be generic, near-basic gTLDs, like .clothing if you’re in e-commerce, or .fashion or .style for fashion-focused brands looking to amp up their visibility and search ranking.
Elisa Cooper, director of product marketing for brand protection firm Mark Monitor, advises that professionals should have clear policies in place that determine when new domains should be registered, such as for a new product launch or brand campaign.
Highly specific domains in the retail industry are great for analytics-based startups and small companies that want to gain recognition in the same space as well-loved fashion brands, like Rebecca Taylor, Steven Alan, and the Alexander Wangs of the world.
Registering a gTLD may not be a surefire way to make your brand as well-trafficked as these icons, but it can get you a leg up in the space.

3. If you’re a non-profit

Go for .ngo, .give, or .donate. The more basic, the better, as these organizations typically face the most uphill battles in getting found. The non-profit sector is incredibly saturated, and these brands often have limited budgets.
Risks in this sector include hackers who buy similar gTLDs and incriminate your brand. Such attacks occur across all of these categories, but especially against feel-good organizations whose Internet security policies typically aren’t sophisticated.
A simple domain like .give should be fairly popular, so smaller non-profits with conservative budgets should pre-register early, before demand drives up prices.

4. If you’re an IT company

Your search strategy is probably down pat, or at least halfway there, but if you need another layer of SEO, choose a gTLD like .data or .search, or .network if you run a social networking site.
It may be sort of a no-brainer, but defer this one to your tech team: high ROI and little effort. Offering a .support domain can also direct the right audience to the department they need.
Domains with strong IT keywords are best for everyone from data analytics software companies to small startups consisting of you, your cat, and your co-founder. Although you can keep optimizing, remember to think long-term to ensure your investment will be worth the outcome.

5. If you’re in healthcare

The rise of ZocDoc and of tech companies revolutionizing the medical industry means there are more eyes than ever looking for health resources on the Web. Domains like .doctor, .care, .phd, and .dds can be useful in identifying the particular sector you represent.
Domains in the healthcare industry extend beyond medicine proper. Wellness gTLDs like .yoga, .rehab, and .diet can also help lifestyle brands increase their SEO capacity and authoritative voice.

6. If you’re an individual/consultant/job-seeker

Don’t skimp on the small stuff (but don’t sweat it, either). It might be worth hiring a marketing consultant to analyze and come up with the best possible domain name for you, but personalize as much as possible.
Unfortunately, there’s no single right answer or shortcut here, since it depends on your field of work. Think about the specific adjectives that make your brand special or that you want to highlight.
Domains such as .band (for musicians), .ads (for ad consultants), and .kitchen (for chefs, specialty interior designers, or restaurants) are all examples of words that clearly signal an association with your industry.

7. If you’re thinking about launching a startup

Go for it, kid. gTLDs are most valuable to small businesses looking to make a name for themselves, so the more you can lift your no-name startup out of obscurity with SEO and optimized keywords, the better.
The only risk in this space is your competition, especially if you’ve got a common English-language name. Buy before you’re bought out, and happy hunting!

By Martha Pierce
Source:
http://thenextweb.com/dd/2014/03/03/which-domain-names-are-right-for-you/?fromcat=all#!ycEG1

Twitter’s Root Injustice

The most significant problem with Twitter is that it’s hard to get followers if you’re a new user.

This seems like a small problem at first, but it’s a core one that extends to every corner of the product. The problem is so big because it builds on itself.

Twitter operates as an unregulated market for social capital.

The reason joining Twitter was so exciting in 2007 is the same reason it was exciting to register a domain name and start an Internet business in the 1990s: the first people who show up to open markets can win big. Yes, you can get a good username, but the real issue is the network effect that comes from being first. It’s a classic platform problem.
Every time you’re followed, it gets easier for others to follow you, because a bigger audience is more likely to spread your message to more people.
It’s easy to make an argument that what happened to blogging, and what is happening to Twitter, is a network effect and destined to happen to any platform, but that’s the easy and cynical way out.

You can design a social media platform that is more fair to new users just as you can design a government that is more fair to its less privileged citizens.

If you want a platform to thrive you need to do what good governments do: help newcomers, break up monopolies, and keep it competitive.

How To Make Twitter More Fair And Competitive

The story of my experience on Twitter is the story that is celebrated — but rarely the case for most people. When I joined Twitter in 2009 it was like showing up to a half-settled frontier town. I could talk with people who would normally never talk to me, I could gain a following in fields where I was years younger than most, and I could do this all from a laptop in my underwear.
Twitter was magical in 2009 because the platform was getting a daily flood of users of all stripes eager to find accounts to follow. We’ll never be able to get back to that era of Twitter, but there are a ton of things Twitter can do to help users.

Reward Engagement Over Inheritance

Twitter has a built-in method to reward users: the Suggested User List. The list comes in many flavors. You get a list curated by Twitter’s editorial team when you first join; this list is full of celebrities and recognizable news sources (in 2009, Anil Dash wrote a good post about what it’s like to be on it). You then get a mix of personalized suggestions, both in your email when you follow someone new and in the left column after you follow someone. The algorithm for this is hidden, but I get the same suggested users over and over, and I’m willing to bet most of these suggestions are fueled by overlapping followers of the people I follow.
The Suggested User List is a mechanism that could make Twitter more competitive, but only if it rewards a behavior new users have a shot at. Imagine if the list were generated more like this (a rough code sketch follows the list):
  • Twitter builds a long list of candidate accounts based on the people you already follow.
  • Twitter orders that list by each account’s recent follower engagement.
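
As a thought experiment, here is a minimal Python sketch of that two-step list. All the data structures (who follows whom, follower counts, engagement counts) are invented for illustration; Twitter's actual algorithm is not public:

    # Hypothetical sketch of the two-step suggestion list described above.
    def suggest(user, following, followers_of, recent_engagements, k=5):
        """Suggest friends-of-friends, ranked by recent engagement."""
        # Step 1: build a long list of candidates from the accounts
        # followed by the people the user already follows.
        already = set(following.get(user, ()))
        candidates = set()
        for friend in already:
            for account in following.get(friend, ()):
                if account != user and account not in already:
                    candidates.add(account)

        # Step 2: order candidates by recent engagement per follower,
        # which rewards active accounts of any size instead of accounts
        # that simply inherited a large audience.
        def score(account):
            audience = max(followers_of.get(account, 1), 1)
            return recent_engagements.get(account, 0) / audience

        return sorted(candidates, key=score, reverse=True)[:k]

    # Toy usage: a small but engaged account outranks a large, inactive one.
    following = {"me": {"ann"}, "ann": {"bob", "cat"}}
    followers_of = {"bob": 120000, "cat": 40}
    recent_engagements = {"bob": 300, "cat": 25}
    print(suggest("me", following, followers_of, recent_engagements))  # ['cat', 'bob']

Scoring by engagement per follower rather than raw audience size is what would give an active newcomer a realistic path onto the list.
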
By Cody Brown
Source and more:
http://techcrunch.com/2014/03/02/twitters-root-injustice/

Bitcoin vs. Coin: Which will have the most success in 2014?

The recent attention given to Bitcoin and Coin – the two newest stars in the financial technology world – vividly illustrates the major demand to simplify the financial lives of consumers. Coin packages up to eight cards (debit, reward, membership, etc.) into one swipeable “Coin” card. Bitcoin, the “Internet of Money,” makes online purchases faster, simpler, and more anonymous.
While the popularity of both innovations has grown quickly (even though Coin is not yet available), each still faces an uphill battle because of its unconventional path to success: neither partnered with a trusted financial services provider first.
I’ve worked with hundreds of companies in the financial space – and I’ve seen what common denominators need to be addressed for a service or product to be successful. Let’s take a look at three key obstacles to success that will affect Bitcoin and Coin’s quest for consumer adoption – and which technology currently has the advantage in each category.

1. Security

If you’re not confident that a product or service that is handling your money is totally secure and legitimate, then you’re unlikely to use it. When we’re talking money, consumers want an endorsement from someone they trust before they use a new service.
Square Wallet’s partnership with Visa gave it a huge advantage over competitors in the mobile payments space due to public validation from such a well-known financial services brand. 
Looking at Coin:
Upon launching, Coin hit its campaign goal in 40 minutes, trending on Twitter and receiving 200,000 mentions on Facebook. The viral sensation has seemingly tapped into deep market demand, even though it is in a prototype stage and is not publicly available.
Coin’s adoption could easily spike as friends see friends using the product as long as it delivers on the experience that matches its vision.
However, a major concern remains: will credit card companies want to validate Coin by allowing the company to mimic their cards, without any of the branding that is so important to them? This is an important question mark in the future of Coin.
If Coin can secure a partnership with one of the big three credit card companies, as Square did, it would start a snowball effect of shoring up consumer confidence.
Looking at Bitcoin:
Many trusted technology leaders have already publicly backed Bitcoin and an increasing number of retailers are accepting the new currency.
Marc Andreessen, a venture capital titan, has poured $50 million into Bitcoin-focused startups alone, while The Social+Capital Partnership founder, Chamath Palihapitiya, who owns $5 million in Bitcoins, is also bullish on the service.
Congress seems interested in legitimizing the positive uses of cryptocurrencies. New companies like Coinbase are launching, highlighting the opportunity and interest for simple and useful services built on top of the Bitcoin protocol.
Although these companies and solutions are quite legitimate, the upstart nature of Bitcoin means that it is still in a big grey area, which in turn naturally makes consumers wary of Bitcoin’s security. U.S. banks have so far tacitly allowed converting dollars into Bitcoin – but they could easily cut off this access.
Meanwhile, there has been a string of hackings and thefts in which thousands of dollars’ worth of Bitcoin were stolen with little legal recourse, causing serious security concerns for consumers.
Additionally, the price of Bitcoin is highly volatile, and much of the enthusiasm comes from its sky-high price, which has ranged anywhere between $500 and $1,200 per Bitcoin in the last two months. If this wave of enthusiasm crashes, Bitcoin’s value may crash with it.
After the People’s Bank of China banned financial institutions from trading in Bitcoin, the cryptocurrency’s value dropped by 50 percent, and other countries such as India were not far behind. While the value bounced back, such volatility is problematic for average investors.
Who’s got the advantage? Coin
Coin is more likely to drive consumer confidence in the near term. Bitcoin is a completely new concept that could have a much larger impact on our financial system, but it is still poorly understood and not well regulated.
The U.S. government will likely wait and watch for some time before making any big regulatory moves around Bitcoin. In addition, Coin isn’t threatening credit card usage; in fact, it may be promoting usage by simplifying the use of multiple credit cards.

2. Overcoming preconditioned consumer behaviors

For over two hundred years, Americans have used printed money as their medium of exchange. Credit cards, introduced in the mid-twentieth century, achieved mass adoption in the US; many countries outside the US, however, remained more cash-oriented or developed other forms of cashless payment long after credit cards became available.
In truth, mass-market adoption of new payment technologies and systems can take decades.

By Joe Polverari
Source and more:
http://thenextweb.com/insider/2014/03/01/bitcoin-vs-coin-success-2014/?fromcat=all#!x2oEk

71% of U.S. households would switch from providers that attempt to interfere with the Internet

Consumer Reports survey finds broadband subscribers don't want ISPs to block or slow popular services such as Netflix, Pandora, and Skype

Seventy-one percent of respondents to a survey conducted by the Consumer Reports National Research Center said they would attempt to switch to a competing Internet service provider (ISP) if their provider were to try to block, slow down, or charge more for bandwidth-heavy services such as Amazon Instant Video, Netflix, Pandora, and Skype.
Last month, a federal court ruling in Verizon v. Federal Communications Commission essentially dismantled much of the FCC’s Open Internet rules, which forbade ISPs from blocking or discriminating between different types of traffic over the network connections they provide to customers. Specifically, those rules meant that ISPs such as Verizon, Comcast, and Cablevision couldn’t treat popular high-bandwidth video and audio streaming services differently from any other Internet traffic. Now, technically, ISPs can manage high-bandwidth traffic as they see fit.
The FCC is now considering whether to appeal the decision, revise the Open Internet rules, or maybe even reclassify ISPs as “common carriers,” which would subject them to a different level of regulatory scrutiny.
There is sure to be more debate and turbulence on this issue (and changes may come as soon as this week). But as both regulators and service providers plot their next moves, the public has officially weighed in.

Action Plan on the Prevention of Violations of the European Convention on Human Rights (Avrupa İnsan Hakları Sözleşmesi İhlallerinin Önlenmesine İlişkin Eylem Planı)

Saturday, 1 March 2014
Official Gazette (Resmî Gazete)
Issue: 28928
DECISION OF THE COUNCIL OF MINISTERS
Decision No: 2014/5984
The adoption of the annexed “Action Plan on the Prevention of Violations of the European Convention on Human Rights” was decided by the Council of Ministers on 24/2/2014, upon the letter of the Ministry of Justice dated 8/2/2014 and numbered 16338.