How to Use Indistinguishability Obfuscation: Deniable Encryption, and More

Abstract
We introduce a new technique, which we call punctured programs, to apply indistinguishability obfuscation towards cryptographic problems. We use this technique to carry out a systematic study of the applicability of indistinguishability obfuscation to a variety of cryptographic goals. Along the way, we resolve the 16-year-old open question of Deniable Encryption, posed by Canetti, Dwork, Naor, and Ostrovsky in 1997: In deniable encryption, a sender who is forced to reveal to an adversary both her message and the randomness she used for encrypting it should be able to convincingly provide "fake" randomness that can explain any alternative message that she would like to pretend that she sent. We resolve this question by giving the first construction of deniable encryption that does not require any pre-planning by the party that must later issue a denial. In addition, we show the generality of our punctured programs technique by also constructing a variety of core cryptographic objects from indistinguishability obfuscation and one-way functions (or close variants). In particular we obtain: public key encryption, short "hash-and-sign" selectively secure signatures, chosen-ciphertext secure public key encryption, non-interactive zero knowledge proofs (NIZKs), injective trapdoor functions, and oblivious transfer. These results suggest the possibility of indistinguishability obfuscation becoming a "central hub" for cryptography.
By Amit Sahai and Brent Waters
Source and read the full article:
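To make the "punctured programs" idea a bit more concrete, here is a toy sketch of the shape of the paper's public-key encryption construction: the public key is, conceptually, an obfuscated program mapping a message and fresh randomness to a ciphertext, and the secret key is a PRF key. Everything below is an insecure Python stand-in for illustration only: the "obfuscation" is just a closure, HMAC and SHA-256 stand in for the puncturable PRF and the length-doubling PRG, and all names are ours, not the paper's.

    # Toy sketch (no security!) of an iO-style public-key encryption scheme:
    # the "public key" is a program that maps (message, randomness) to a
    # ciphertext; the secret key is the PRF key baked into that program.
    import hmac, hashlib, os

    def prg(seed: bytes) -> bytes:
        # Length-doubling PRG stand-in: 16-byte seed -> 32-byte output.
        return hashlib.sha256(b"prg" + seed).digest()

    def prf(key: bytes, x: bytes) -> bytes:
        # Puncturable-PRF stand-in (puncturing itself is not modeled here).
        return hmac.new(key, x, hashlib.sha256).digest()

    def keygen():
        k = os.urandom(32)                       # secret PRF key
        def encrypt_program(msg32: bytes, r: bytes):
            # The program that would be handed to the obfuscator.
            t = prg(r)                           # t = PRG(r)
            pad = prf(k, t)                      # F(K, t)
            return t, bytes(a ^ b for a, b in zip(pad, msg32))
        return encrypt_program, k                # ("obfuscated" public program, secret key)

    def encrypt(pk_program, msg32: bytes):
        return pk_program(msg32, os.urandom(16))

    def decrypt(k: bytes, ct):
        t, c = ct
        return bytes(a ^ b for a, b in zip(prf(k, t), c))

    pk, sk = keygen()
    ct = encrypt(pk, b"attack at dawn".ljust(32))
    assert decrypt(sk, ct).rstrip() == b"attack at dawn"

In the actual construction, security is argued by "puncturing" the PRF key at the challenge point and showing that the resulting obfuscated programs are indistinguishable; none of that machinery is modeled in this sketch.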

Cryptographic Key Management Workshop 2014

Purpose:

NIST is conducting a two-day Key Management Workshop on March 4-5, 2014. The workshop is being held to discuss a draft of NIST Special Publication (SP) 800-152 (“A Profile for U.S. Federal CKMS”) that will be available for public comment prior to the workshop. This draft is based on the requirements in SP 800-130 (“A Framework for Designing Cryptographic Key Management Systems”), but extends beyond SP 800-130 to establish specific requirements for Federal organizations desiring to use or operate a CKMS, either directly or under contract; recommends augmentations to these requirements for those Federal CKMSs requiring additional security; and suggests additional features for consideration. This Profile addresses the topics included in SP 800-130, and also includes discussions on CKMS testing, procurement, installation, administration, operation, maintenance and use.

While the Profile is intended for use by the U.S. Federal government, it may also be used by other public or private sectors as a model for the development of their own profile.

Input from the workshop participants will be solicited regarding the utility and feasibility of these requirements, recommended augmentations and suggested features. This input, along with comments received during the public comment period will be incorporated into the next version of SP 800-152.

Webcast: The event will be webcast live on March 4-5, 2014. Registration is not required to view the webcast. Details will be posted when available.
Reference Documentation: Printed copies of NIST SP 800-152 will not be available at the workshop. If you would like to reference the document while at the workshop, please bring an electronic or printed copy of the document. Note that internet access will be available to the attendees.

Preliminary Agenda
Tuesday, March 4, 2014
9:00am - 9:15am Welcome and administrative information
Elaine Barker, NIST
9:15am - 10:30am SESSION 1: Introduction
(Sections 1-3) – Dennis Branstad
  • Cryptographic Key Management Project Overview
  • Profile Introduction, Scope, Goals, Audience
  • Framework Requirements (FRs), Profile Requirements (PRs), Profile Augmentations (PAs) and Profile Features (PFs)
  • Terminology
  • Framework and Profile Documents (Structure, Differences)
  • Questions/Comments
10:30am - 11:00am BREAK
11:00am - 12:30pm SESSION 2: Basic Concepts, Security Policies and Roles
(Sections 4 & 5) – Elaine Barker and Dennis Branstad
  • Designers, Implementers, FCKMS Service Providers, FCKMS Service Users
  • FCKMS vs. CKMS
  • FCKMS Modules
  • Security Policies
  • Security Domains
  • Roles
  • Questions/Comments
12:30pm - 1:30pm LUNCH
1:30pm - 3:00pm SESSION 3: Secure Architectures
(Sections 6 and 10) - Miles Smid
  • Key and Metadata Protection and Management Functions
  • Access Control
  • Compromise Recovery
  • Disaster Recovery
  • Possible Network Configurations
  • Questions/Comments
3:00pm - 3:30pm BREAK
3:30pm - 5:00pm SESSION 4: Spectrum of Applications – Elaine Barker and others
  • Intended Scope
  • Email
  • Mobile – Lily Chen
  • Cloud Security – Michaela Iorga
  • Key and Metadata Storage
  • Key Establishment
  • Questions/Comments

Wednesday, March 5, 2014
9:00am - 10:30am SESSION 5: Measures and Security Controls
(Sections 6 and 8) – Elaine Barker and Ron Ross
  • Security Strength
  • FIPS 140-2 Security Level (Cryptographic Modules)
  • Impact/Sensitivity Level of Data (per FIPS 199, FIPS 200, and SP 800-53) – Ron Ross
  • Low, Moderate, High Requirements
  • Security Controls
  • Questions/Comments
10:30am - 11:00am BREAK
11:00am - 12:30pm SESSION 6: Testing, Evaluation, and Validation
(Sections 9 and 11) – Dennis Branstad, Ron Ross, Miles Smid, Elaine Barker
  • Types of Testing
  • Maintenance
  • FIPS 199, FIPS 200, and SP 800-53
  • Evaluation
  • Validation
  • Questions/Comments
12:30pm - 1:30pm LUNCH
1:30pm - 3:00pm SESSION 7: Interoperability and Transitioning (Section 7) - Elaine Barker
  • Interoperability Defaults and Recommendations
  • Transitioning
  • Questions/Comments
3:00pm - 3:30pm BREAK
3:30pm - 5:00pm SESSION 8: Comments and Feedback – Elaine Barker
  • Presentation and Discussion of Comments Received to Date – Elaine Barker, Dennis Branstad, Miles Smid
  • Outstanding Unresolved Issues
  • Test Cases
  • Where do we go from here?
  • Wrap-up

'How Well Did You Sequence that Genome?' NIST, Consortium Partners Have Answer

In December 2013, the U.S. Food and Drug Administration approved the first high-throughput DNA sequencer (also known commonly as a "gene sequencer"), an instrument that allows laboratories to quickly and efficiently sequence a person's DNA for genetic testing, medical diagnoses and perhaps one day, customized drug therapies. Helping get the new device approved was another first: the initial use of a reference set of standard genotypes, or "coded blueprints" of a person's genetic traits. The standard genotypes were created by the National Institute of Standards and Technology (NIST) and collaborators within the NIST-hosted Genome in a Bottle consortium.
"Two years ago, NIST hosted Genome in a Bottle—a group that includes stakeholders from industry, academia and the federal government—to develop reference materials that could measure the performance of equipment, reagents and mathematical algorithms used for clinical human genome sequencing," says NIST biomedical engineer Justin Zook. "Our goal is to provide well-characterized, whole genome standards that will tell a laboratory how well its sequencing process is working, sort of a 'meter stick of the genome.'"
Modern DNA sequencers take a genetic sample in the form of long strings of DNA and randomly chop the DNA into small pieces that can be individually analyzed to determine their sequence of letters from the genetic alphabet. Then, bioinformaticians apply complex mathematical algorithms to identify from which part of the genome the pieces originated. These pieces can then be compared to a defined "reference sequence" to identify where mutations have occurred in specific genes.
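As a toy illustration of that chop-align-compare idea (not any real alignment or variant-calling pipeline), the sketch below matches short, made-up reads against a made-up reference by brute force and reports positions where an aligned read disagrees with the reference:

    # Toy read alignment and difference-calling; reference and reads are invented.
    reference = "ACGTACGTTAGCCGATAGGCT"
    reads = ["ACGTTAGC", "GCCGATAG", "TACGTTAGA"]   # hypothetical short reads

    def align(read, ref, max_mismatches=2):
        # Return (start, mismatch_positions) for the best placement, if any.
        best = None
        for start in range(len(ref) - len(read) + 1):
            mism = [i for i, (a, b) in enumerate(zip(read, ref[start:])) if a != b]
            if len(mism) <= max_mismatches and (best is None or len(mism) < len(best[1])):
                best = (start, mism)
        return best

    for read in reads:
        hit = align(read, reference)
        if hit:
            start, mism = hit
            for i in mism:
                print(f"candidate variant at reference position {start + i}: "
                      f"ref={reference[start + i]} read={read[i]}")

Real pipelines are statistical rather than exact, which is one source of the between-method differences discussed next.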
There are several different DNA sequencing technologies and computer algorithms to do this very complex analysis, and it's known that for any given sample, they will produce similar, but not identical results. Built-in biases as well as what are essentially "blind spots" for certain possible sequences contribute to uncertainties or errors in the sequence analysis. "These biases can lead to hundreds of thousands of differences between sequencing technologies and algorithms for the same human genome," Zook says.
In a recent paper in Nature Biotechnology,* Zook and his colleagues describe the methods used to make the Genome in a Bottle consortium's pilot set of genotype reference materials. The source DNA, known as NA12878, was taken from a single person. The reference set is essentially the first complete human genome to have been extensively sequenced and re-sequenced by multiple techniques, with the results weighted and analyzed to eliminate as much variation and error as possible.
"We minimized bias in our reference materials toward any specific DNA sequencing method by comparing and integrating data from 14 sequencing experiments generated by five different sequencing platforms," Zook says.
The findings in the Nature Biotechnology paper are publicly available from the Genome in a Bottle website, www.genomeinabottle.org. In addition, the Genome Comparison and Analytic Testing (GCAT) website enables real-time benchmarking of any DNA sequencing method using the paper's results. The research was conducted by a team of scientists at NIST; Harvard University; the Virginia Bioinformatics Institute at Virginia Tech University; and an Austin, Texas, genetic company, Arpeggi Inc. (now part of Gene by Gene Ltd.).
Once the NA12878 pilot genome has been characterized, samples of the DNA will be issued as a NIST Reference Material. The Genome in a Bottle consortium also plans to develop well-characterized whole genome reference materials from two genetically diverse groups: Asians and Ashkenazi Jews. Both reference sets will include sequenced genes from father-mother-child "trios" to utilize genetic links between family members.
For more information on the Genome in a Bottle consortium, go to www.genomeinabottle.org.
*J.M. Zook, B. Chapman, J. Wang, D. Mittelman, O. Hofmann, W. Hide and M. Salit. Integrating human sequence data sets provides a resource of benchmark SNP and indel genotype calls. Nature Biotechnology Published online Feb. 16, 2014. doi:10.1038/nbt.2835
Source:
http://www.nist.gov/mml/bbd/dna-022514.cfm

NIST Cryptographic Standards and Guidelines Development Process (Draft)

See:
http://csrc.nist.gov/publications/drafts/nistir-7977/nistir_7977_draft.pdf

Framework for Improving Critical Infrastructure Cybersecurity


Version 1.0
National Institute of Standards and Technology

Source and read the Framework:
http://www.nist.gov/cyberframework/upload/cybersecurity-framework-021214.pdf

What we talk about when we talk about security and privacy

Security and privacy are a constant in every Internet of Things conference. In public institutions, security and privacy could be ranked as the number one concern. People are simply not comfortable with the idea of having 50 billion connected devices posing 50 billion potential threats.
But I’ve found that talks that start with the words security and privacy are soon blended with the concepts of data integrity and liability. I’d like to drill down into these concepts to clarify what each of them means for the IoT and open the discussion on the most important issues.

Security

There was a time when having a Linux laptop meant being virus-free. Now you find news about malware like the Linux.Darlloz worm that can infect home routers, security cameras or other consumer devices connected to the Internet.
While encryption and authentication algorithms and procedures exist, they are usually expensive in terms of power and performance. Many IoT appliances and devices run Linux or other open-source code that may not be regularly updated and can be vulnerable. Thus, the biggest challenge is not finding new security methods, but making sure that the new ultra-low-power devices connected to the Internet can be run and patched efficiently.

Privacy

I heard about the Green Button project, an application that gives utility customers access to their energy usage information, for the first time at Connectivity Week in Santa Clara. I was terrified. The idea of opening your data to third-party companies triggered all my privacy alarms. What if someone used that data to track your habits to rob your house or kidnap your kids? I raised my hand at the conference session and the speaker listened to me patiently. After asking if I came from Europe, he just said “Are any of those threats enabled because of this kind of initiative, or would they exist anyway?”
Maybe this particular example cannot be generalized, but we have to admit the speaker has a point. The Green Button project did not create the danger, and it isn’t promoting the use of personal information outside of tracking energy use. But we have to consider that privacy is also a cultural issue. In Europe the public utilities don’t have permission to disclose information on their subscribers. There, to protect your privacy, your phone bill doesn’t list the full phone numbers you called during the month. This data is not universally considered to be public information.
In the U.S. and in Europe, people don’t like being tracked in a shopping mall just to receive customized ads, but if data sharing can give them a benefit they value, they are totally in. Waze users share their location in exchange for traffic congestion information. After all, in a world where everyone is already sharing their updates on Facebook and Twitter, our ideas about privacy will never be the same. The question is all about the trade-off, the quid pro quo.

Integrity

Many devices mean many data sources that include individuals—so we have to ask, to what extent can someone introduce spurious data? When citizens shared radiation levels in Japan after the Fukushima disaster, the crowd-sourced network served as both data source and control, because the more people contribute data, the sooner anomalies can be detected. This type of community sharing is not new; it is the same type of use case as a well-proven system that we trust every day: Wikipedia.
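A minimal sketch of that crowd-as-control idea, assuming hypothetical radiation readings and a simple median-absolute-deviation test (a real system would also weight by location, time and contributor history):

    # Flag readings that sit far from the bulk of nearby crowd-sourced reports.
    def flag_outliers(readings, threshold=3.5):
        values = sorted(readings)
        median = values[len(values) // 2]
        deviations = sorted(abs(v - median) for v in readings)
        mad = deviations[len(deviations) // 2] or 1e-9   # avoid division by zero
        return [v for v in readings if abs(v - median) / mad > threshold]

    # Hypothetical microsievert-per-hour reports from one neighborhood:
    reports = [0.11, 0.12, 0.10, 0.13, 0.11, 0.12, 4.80, 0.12]
    print(flag_outliers(reports))   # the injected 4.80 reading is flagged

The more independent contributors there are, the more stable the median and the faster a spurious value stands out.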
When it comes to Smart Cities, one of the first questions our city council customers raise is how they can prevent unauthorized personnel from injecting bad data into the network. This is not surprising: compared to other IoT projects, Smart Cities potentially deal with the largest amount of data.

Liability

In the aftermath of Hurricane Sandy, there was a lot of discussion at an Internet of Things conference in Washington, DC, about using the IoT in disasters, such as in a flood warning system. Someone in the audience asked: if the sensor data were wrong, who would be liable? The sensor manufacturer, the device company or the government?
This was the most accurate question about the topic I have ever heard. It’s too bad nobody in the room dared to answer; that showed the long road we still have ahead of us.
At a recent IoT workshop held by the U.S. Federal Trade Commission, not even Vint Cerf, one of the fathers of the Internet, would pronounce how best to legislate on liability. Who is going to decide what factors should be taken into account? Who is liable in the case of error? There was a similar liability debate in the early days of the Internet—was the ISP or network provider responsible for malicious content on a site?

A gradual resolution

Privacy and security are not trivial subjects, but I know people who swore 20 years ago they wouldn’t carry a cell phone because it invaded their privacy and who now would never go anywhere without one in their pocket. Utility won over privacy. Again.
Along the same lines, we will get used to including security procedures in our routine, the same as we have learned how to update our computers by following a downloaded wizard.
As the Internet of Things is built out and gains ground, we may even find new ways to implement privacy and security. The good thing is that this is now clearly a societal matter and governments will be forced to regulate these issues at last.

By Alicia Asin, Libelium
Source:
http://gigaom.com/2014/02/23/what-we-talk-about-when-we-talk-about-security-and-privacy/

A New Laser for a Faster Internet

A new laser developed by a research group at Caltech holds the potential to increase by orders of magnitude the rate of data transmission in the optical-fiber network—the backbone of the Internet. The study was published the week of February 10–14 in the online edition of the Proceedings of the National Academy of Sciences. The work is the result of a five-year effort by researchers in the laboratory of Amnon Yariv, Martin and Eileen Summerfield Professor of Applied Physics and professor of electrical engineering; the project was led by postdoctoral scholar Christos Santis (PhD '13) and graduate student Scott Steger.

Light is capable of carrying vast amounts of information—approximately 10,000 times more bandwidth than microwaves, the earlier carrier of long-distance communications. But to utilize this potential, the laser light needs to be as spectrally pure—as close to a single frequency—as possible. The purer the tone, the more information it can carry, and for decades researchers have been trying to develop a laser that comes as close as possible to emitting just one frequency.

Today's worldwide optical-fiber network is still powered by a laser known as the distributed-feedback semiconductor (S-DFB) laser, developed in the mid-1970s in Yariv's research group. The S-DFB laser's unusual longevity in optical communications stemmed from its, at the time, unparalleled spectral purity—the degree to which the light emitted matched a single frequency. The laser's increased spectral purity directly translated into a larger information bandwidth of the laser beam and longer possible transmission distances in the optical fiber—with the result that more information could be carried farther and faster than ever before.

At the time, this unprecedented spectral purity was a direct consequence of the incorporation of a nanoscale corrugation within the multilayered structure of the laser. The washboard-like surface acted as a sort of internal filter, discriminating against spurious "noisy" waves contaminating the ideal wave frequency. Although the old S-DFB laser had a successful 40-year run in optical communications—and was cited as the main reason for Yariv receiving the 2010 National Medal of Science—the spectral purity, or coherence, of the laser no longer satisfies the ever-increasing demand for bandwidth.

"What became the prime motivator for our project was that the present-day laser designs—even our S-DFB laser—have an internal architecture which is unfavorable for high spectral-purity operation. This is because they allow a large and theoretically unavoidable optical noise to comingle with the coherent laser and thus degrade its spectral purity," he says.

The old S-DFB laser consists of continuous crystalline layers of materials called III-V semiconductors—typically gallium arsenide and indium phosphide—that convert into light the applied electrical current flowing through the structure. Once generated, the light is stored within the same material. Since III-V semiconductors are also strong light absorbers—and this absorption leads to a degradation of spectral purity—the researchers sought a different solution for the new laser.

The high-coherence new laser still converts current to light using the III-V material, but in a fundamental departure from the S-DFB laser, it stores the light in a layer of silicon, which does not absorb light. Spatial patterning of this silicon layer—a variant of the corrugated surface of the S-DFB laser—causes the silicon to act as a light concentrator, pulling the newly generated light away from the light-absorbing III-V material and into the near absorption-free silicon.

This newly achieved high spectral purity—a 20 times narrower range of frequencies than possible with the S-DFB laser—could be especially important for the future of fiber-optic communications. Originally, laser beams in optic fibers carried information in pulses of light; data signals were impressed on the beam by rapidly turning the laser on and off, and the resulting light pulses were carried through the optic fibers. However, to meet the increasing demand for bandwidth, communications system engineers are now adopting a new method of impressing the data on laser beams that no longer requires this "on-off" technique. This method is called coherent phase communication.

In coherent phase communications, the data resides in small delays in the arrival time of the waves; the delays—a tiny fraction (10^-16) of a second in duration—can then accurately relay the information even over thousands of miles. The digital electronic bits carrying video, data, or other information are converted at the laser into these small delays in the otherwise rock-steady light wave. But the number of possible delays, and thus the data-carrying capacity of the channel, is fundamentally limited by the degree of spectral purity of the laser beam. This purity can never be absolute—a limitation of the laws of physics—but with the new laser, Yariv and his team have tried to come as close to absolute purity as is possible.

These findings were published in a paper titled "High-coherence semiconductor lasers based on integral high-Q resonators in hybrid Si/III-V platforms." In addition to Yariv, Santis, and Steger, other Caltech coauthors include graduate student Yaakov Vilenchik and former graduate student Arseny Vasilyev (PhD '13). The work was funded by the Army Research Office, the National Science Foundation, and the Defense Advanced Research Projects Agency. The lasers were fabricated at the Kavli Nanoscience Institute at Caltech.
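To illustrate what "data in small delays of the wave" means, here is a minimal, idealized sketch of four-level phase-shift keying, a simple form of coherent phase communication: pairs of bits are mapped to one of four phase offsets of the carrier. This is a textbook toy, not the modulation or hardware in the paper; the connection to the laser is that phase noise from an impure source blurs neighboring phase states, limiting how many states, and therefore how many bits per symbol, can be used.

    # Idealized QPSK: map bit pairs to carrier phases and back (no noise modeled).
    import cmath, math

    PHASES = {(0, 0): 0.0, (0, 1): math.pi / 2, (1, 1): math.pi, (1, 0): 3 * math.pi / 2}

    def modulate(bits):
        pairs = zip(bits[0::2], bits[1::2])
        return [cmath.exp(1j * PHASES[p]) for p in pairs]   # unit-amplitude symbols

    def demodulate(symbols):
        inverse = {v: k for k, v in PHASES.items()}
        out = []
        for s in symbols:
            phase = cmath.phase(s) % (2 * math.pi)
            nearest = min(PHASES.values(), key=lambda p: abs(p - phase))
            out.extend(inverse[nearest])
        return out

    bits = [1, 0, 0, 1, 1, 1, 0, 0]
    assert demodulate(modulate(bits)) == bits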

Written by Jessica Stoller-Conrad
Source:
http://www.caltech.edu/content/new-laser-faster-internet

Mapping Hacking Team’s “Untraceable” Spyware

Summary

  • Remote Control System (RCS) is sophisticated computer spyware marketed and sold exclusively to governments by Milan-based Hacking Team.1 Hacking Team was first thrust into the public spotlight in 2012 when RCS was used against award-winning Moroccan media outlet Mamfakinch,2 and United Arab Emirates (UAE) human rights activist Ahmed Mansoor.3 Most recently, Citizen Lab research found that RCS was used to target Ethiopian journalists in the Washington DC area.4
  • In this post, we map out covert networks of “proxy servers” used to launder data that RCS exfiltrates from infected computers, through third countries, to an “endpoint,” which we believe represents the spyware’s government operator. This process is designed to obscure the identity of the government conducting the spying. For example, data destined for an endpoint in Mexico appears to be routed through four different proxies, each in a different country. This so-called “collection infrastructure” appears to be provided by one or more commercial vendors—perhaps including Hacking Team itself.
  • Hacking Team advertises that their RCS spyware is “untraceable” to a specific government operator. However, we claim to identify a number of current or former government users of the spyware by pinpointing endpoints, and studying instances of RCS that we have observed. We suspect that agencies of these twenty-one governments are current or former users of RCS: Azerbaijan, Colombia, Egypt, Ethiopia, Hungary, Italy, Kazakhstan, Korea, Malaysia, Mexico, Morocco, Nigeria, Oman, Panama, Poland, Saudi Arabia, Sudan, Thailand, Turkey, UAE, and Uzbekistan. Nine of these countries receive the lowest ranking, “authoritarian,” in The Economist’s 2012 Democracy Index.5 Additionally, two current users (Egypt and Turkey) have brutally repressed recent protest movements.
  • We also study how governments infect a target with the RCS spyware. We find that this is often through the use of “exploits”—code that takes advantage of bugs in popular software. Exploits help to minimize user interaction and awareness when implanting RCS on a target device. We show evidence that a single commercial vendor may have supplied Hacking Team customers with exploits for at least the past two years, and consider this vendor’s relationship with French exploit provider VUPEN.
Authors: Bill Marczak, Claudio Guarnieri, Morgan Marquis-Boire, and John Scott-Railton.
Source and more:
https://citizenlab.org/2014/02/mapping-hacking-teams-untraceable-spyware/

Project Tango

Source and more:
http://www.google.com/atap/projecttango/

The Formation of Love

This week, Facebook Data Science is shipping love in the form of a series of blog posts. This is installment 5 of 6; see the entire series here! This research has been conducted on anonymized, aggregated data.


Love is in the air! Or rather, it's written on your Facebook timeline. Couples are formed, and the news is shared with the world on Facebook by changing statuses from "Single" to "In a relationship". We explored interactions between couples before and after the relationship begins.


Relationships start with a period of courtship: on Facebook, messages are exchanged, profiles are visited, posts are shared on each other's timelines. The following graph shows the average number of timeline posts exchanged between two people who are about to become a couple. We studied the group of people who changed their status from "Single" to "In a relationship" and also stated an anniversary date as the start of their relationship. During the 100 days before the relationship starts, we observe a slow but steady increase in the number of timeline posts shared between the future couple. When the relationship starts ("day 0"), posts begin to decrease. We observe a peak of 1.67 posts per day 12 days before the relationship begins, and a lowest point of 1.53 posts per day 85 days into the relationship. Presumably, couples decide to spend more time together, courtship is off, and online interactions give way to more interactions in the physical world.




However, don't be discouraged by the decrease in online interactions, as the content of the interactions gets sweeter and more positive. We used statistical methods to automatically analyze a set of aggregated, anonymized timeline interactions. For each timeline interaction, we counted the proportion of words expressing positive emotions (like "love", "nice", "happy", etc.) minus the proportion of words expressing negative ones (like "hate", "hurt", "bad", etc.). The following graph shows the proportion of positive over negative feelings being expressed in timeline posts before and after the beginning of a relationship. We observe a general increase after the relationship's "day 0", with a dramatic increase in days 0 and 1!
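A rough sketch of the word-proportion metric described above: score each post as the share of positive words minus the share of negative words. The tiny word lists and the tokenization are stand-ins of ours; the post does not disclose Facebook's actual lexicon or preprocessing.

    # Toy positive-minus-negative word-proportion score for a single post.
    POSITIVE = {"love", "nice", "happy", "sweet", "great"}
    NEGATIVE = {"hate", "hurt", "bad", "sad", "angry"}

    def emotion_score(post: str) -> float:
        words = [w.strip(".,!?").lower() for w in post.split()]
        if not words:
            return 0.0
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        return (pos - neg) / len(words)

    print(emotion_score("So happy we met, love you!"))   # positive score
    print(emotion_score("Bad day, I hate traffic."))     # negative score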



This study was performed on a sample of anonymized, aggregated Timeline posts exchanged by couples worldwide, although the positive versus negative emotions focuses on posts written in English. We only considered couples who declared an anniversary date (as opposed to just changing their relationship status) between 04/11/2010 and 10/21/2013, and remained "Single" 100 days before and "In a Relationship" 100 days after their anniversary date.

Tune in tomorrow for our final installment, where we will address the question of breakups.

By Carlos Diuk
https://www.facebook.com/notes/facebook-data-science/the-formation-of-love/10152064609253859

Kickstarter Coins

There’s No Reason

Why are cryptocurrencies valuable? In the case of Bitcoin, because it was the first and is by far the largest, receiving the majority of monetary and media attention, with the one harmoniously building the other. Look at the second most popular coin, Litecoin, and the value is not as clear. The technical changes made compared to Bitcoin are slight. Unless you really value a larger monetary base (4x more than bitcoin) or faster confirmation times that provide illusory benefit (you just need 4x more of them for the same amount of certainty), there isn’t much you can do with Litecoin that you can’t do at far more places or with far more people than Bitcoin.
This is because the only reason to buy Litecoin is the expectation that its value will increase over time as more people buy it speculating on the same thing. Lacking a good reason or a first-mover and earned-press advantage, you're relying on the greater-fool theory of investing, which is a recipe for long-term disappointment. Indeed, the great hope for Litecoin or any standard alt-coin (any cryptocurrency that is not Bitcoin) is to find its way to an exchange that benefits from easier trading than in forums or chat rooms, where trust can be hard to find.
The thing we’re missing is a reason. The greater fool suffers from there always being a limit to the number of fools before the gains start to slide and buyers turn to sellers trying to lock in gains. Once upon a time the value of dollars was secured by gold. You could trade in a minimum amount with the issuer of the currency (the federal government, or private banks depending on the time period) and this secured the purchasing power of the dollars. Fixing a price to gold isn’t because gold is the only thing you can peg it to, but rather because it’s thought of as something valuable and can be universal in nature. But not everything needs to be universal in nature, and some things that are universal in nature don’t need to be valuable themselves as money.
Goods and services are valuable in monetary terms but not in a monetary sense. By fixing issuance of a custom cryptocurrency to a good or service, with a company ready to accept it at a fixed or floating rate, a single individual or group — even only capable of local distribution — can support a globally tradable cryptocurrency should the story be compelling enough or the private issuance small enough.

Why We Mine

So the question is, why mine alt-coins instead of just creating them and selling them into the market? Because of the excitement of the gold rush. The people who mine the first of your currency will be your distribution channels and have a huge incentive to sell them for as much as possible. It's easy to mine in the beginning; that's the elegant way it sucks in those who become the biggest advocates, as they have the most to gain from mass adoption. In the case of greater-fool investing, this leads to dishonesty. But for genuinely useful things it's fierce and unapologetic advocacy and education.

In a deflationary currency with a predictable rate, everyone gains and so there is incentive to tell everyone you know and get them involved. The more of us in, the more valuable each one is.

Unlike a currency, you want to issue new alt-coins fast: two months, with difficulty scaling based on the number participating. After all have been issued, you want a period of time where they are tradable but not redeemable so the future value can be speculated on. After a certain point they should become redeemable at a fixed high rate that drops over time. This means that even in a currency with the entire money supply in circulation, you can have a long-term controlled deflationary trend simply by lowering the exchange rate at predictable intervals, allowing people to buy or sell in advance of the change based on their situation.
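A toy schedule for the redemption mechanism sketched above, with made-up numbers: after issuance, the issuer redeems the coin at a rate that starts high and then drops by a fixed fraction at predictable intervals, so holders can plan purchases or sales around the schedule.

    # Hypothetical redemption-rate schedule (all numbers are illustrative).
    def redemption_schedule(start_rate=10.0, decay=0.05, periods=6):
        # Rate, in units of the good or service per coin, at each interval.
        return [start_rate * (1 - decay) ** t for t in range(periods)]

    for t, rate in enumerate(redemption_schedule()):
        print(f"interval {t}: redeemable at {rate:.2f} units per coin")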
So mining really is a marketing decision; you pay the price of not being the first seller of the currency, instead relegating yourself to 50 percent while allowing those doing the work to run away with wild profits. They’re the ones who get to tell everyone how great it was doing it.

By Adam B. Levine
Source and read more:
http://techcrunch.com/2014/02/15/kickstarter-coins-2/

After US squashes no-spy hopes, European leaders discuss ways to protect citizens’ data

German Chancellor Angela Merkel has thrown her weight behind the idea of keeping European online communications within Europe where possible.
The show of support for the plan, which was initially floated by European telecoms giant Deutsche Telekom, follows the collapse of hopes in Germany that the U.S. might agree a no-spy agreement with the country. In a separate move, Germany is reportedly planning to step up its own surveillance of embassies belonging to the U.S. and the U.K.
“We’ll talk with France about how we can maintain a high level of data protection,” Merkel said on Saturday. “Above all, we’ll talk about European providers that offer security for our citizens, so that one shouldn’t have to send emails and other information across the Atlantic. Rather, one could build up a communication network inside Europe.”

“Within Europe”

Deutsche Telekom’s proposal, for keeping German data within German borders, is partly a marketing exercise based on an existing agreement between it and the other big German webmail provider, United Internet. The two providers already had a so-called “De-Mail” alliance (playing on the country code “DE” for “Deutschland”) that involved a shared infrastructure for handling emails linked to the user’s offline identity.
In August, they added encryption across the system and started promoting “E-mail made in Germany” as a shield against prying American and British eyes. Deutsche Telekom said it also wanted to avoid communications sent between German parties being routed via the U.S., where possible.
On Saturday, Merkel said in a podcast that she and French President Francois Hollande would this week discuss which European providers might help create such a framework on a regional basis.
Merkel also hinted at her dissatisfaction with the way in which U.S. web giants such as Google and Facebook base their operations in the European country “where data protection is weakest” (that would be Ireland, then). “That is a situation which we also cannot countenance forever,” she said.
The European Commission reacted warmly to the idea of promoting European communications and data storage services. The office of digital agenda commissioner Neelie Kroes highlighted various pieces of legislation that are being worked on, such as the new Data Protection Regulation, as well as Commission-funded encryption research.
“We support Chancellor Merkel’s calls for better networks, and better data protection and security on those networks, as part of a broader digital industrial policy,” Kroes’s office said in a statement. “We hope that the Franco-German discussion on Wednesday, and the discussion with leading industrialists, will lead to an acceleration of work on important European legislation in this domain.”
Of course, keeping communications within European borders is no guarantee of data protection. The NSA’s British counterpart, GCHQ, has proven very adept at tapping most of the world’s communications infrastructure, so data could certainly be monitored even if it doesn’t pass through the U.S. That said, there is more jurisdictional protection for data that doesn’t head out that way, compared with data that is stored on or passes through U.S. systems.

“Treat them all the same way”

The surveillance of Germans by the NSA and its partners has generally raised more alarm among citizens and German businesses, which fear economic espionage, than it has in government. Indeed, apart from outrage over the surveillance of German politicians, Merkel’s main response has been to lobby for Germany to be included in the spy pact that binds the U.S., U.K., Canada, Australia and New Zealand.
A key feature of the “Five Eyes” pact, more properly known as the UKUSA agreement, has supposedly been that its members don’t spy on one another (though there’s plenty to suggest that they spy on one another’s citizens when that can help bypass national privacy laws).
However, that meme was unceremoniously blown out of the water last week when President Barack Obama said the U.S. had no no-spy agreement with any country.
Obama had just come out of talks with Hollande, who also wanted in on the pact. He somewhat patronizingly told reporters: “I have two daughters and they are both gorgeous and wonderful, and I would never choose between them. And that’s how I feel about my outstanding European partners.”
Germany will now treat the U.S. with the same suspicion it applies to China, Russia and North Korea. “We need to cease the differentiation and treat them all the same way,” Der Spiegel quoted the chairman of the German parliamentary intelligence oversight committee as saying.

By David Meyer
Source:
http://gigaom.com/2014/02/17/after-us-squashes-no-spy-hopes-european-leaders-discuss-ways-to-protect-citizens-data/

RButR


rbutr tells you when the webpage you are viewing has been disputed,
rebutted or contradicted elsewhere on the internet. 

Source and more:
http://rbutr.com/

WikiLeaks now offers a search engine to help you find documents linked to any keyword

Searching WikiLeaks for documents about a particular topic, event or individual just got a little bit easier. The whistle-blowing site now offers a search engine where you can query its entire database of published documents for a specific phrase or keyword of your choosing.
Just like Google, you can also refine the nature of your search for more accurate and focused results. Filters allow you to ask WikiLeaks to ignore documents containing certain words, or to return results only if your search terms appear within the body of the page. A series of check-boxes, meanwhile, gives you the ability to find files from a specific WikiLeaks release, such as the Kissinger Cables.
WikiLeaks was, until now, a daunting site for some people. A straightforward search tool such as this one should go a long way toward helping newcomers leverage and learn from the mass of information that WikiLeaks now offers on the Web.

By Nick Summers
Source:
http://thenextweb.com/insider/2014/02/17/wikileaks-now-offers-google-style-search-engine-help-find-documents-topic/#!weVVp

TSA Carry-On Baggage Scanners Easy To Hack

A widely deployed carry-on baggage X-ray scanner used in most airports could easily be manipulated by a malicious TSA insider or an outside attacker to sneak weapons or other banned items past airline security checkpoints.
Billy Rios, director of threat intelligence at Qualys, here today said he and colleague Terry McCorkle purchased a secondhand Rapiscan 522 B X-ray system via eBay and found several blatant security weaknesses that leave the equipment vulnerable to abuse: It runs on the outdated Windows 98 operating system, stores user credentials in plain text, and includes a feature called Threat Image Projection used to train screeners by injecting .bmp images of contraband, such as a gun or knife, into a passenger carry-on in order to test the screener's reaction during training sessions. The weak logins could allow a bad guy to project phony images on the X-ray display.
"The worst-case scenario is someone manipulates this in a way that the operator doesn't know a threat is in the bag ... by design, the software allows you to manipulate the image for training [purposes]," he says.
"The TSA requires this super-dangerous feature on all of these baggage scanners," Rios says.The researchers have reported the flaws to ICS-CERT. Rapiscan Systems had not responded to a press inquiry for this article at the time of this posting."This reminded me a lot of voting machines. When you design these government systems under procurement rules, you end up using old stuff. No one is paying attention to updating it, so security is crap because no one is analyzing it," says Bruce Schneier, CTO of Co3 Systems. "Stuff done in secret gets really shoddy security ... We know what gives us security is the constant interplay between the research community and vendors."The Rapiscan vulnerabilities only scratch the surface of security weaknesses in the TSA screening systems in U.S. airports, Rios says. He and McCorkle also plan to experiment with other equipment used at TSA security checkpoints, and to explore whether the so-called TSANet network that links major hubs like Atlanta, Chicago, and LAX airports could be accessed via a WiFi or cable in the airport, for example. "If we can get to that network from WiFi [or cable], that would be pretty interesting," Rios says.Rapiscan has a rocky history with TSA: Last year, it lost its contract with the feds for its backscatter body scanners after failing to address privacy issues raised about the detailed body images the system produced and stored. Most recently, the baggage scanner system contract was canceled after TSA learned that the X-ray machines contain a light bulb that was manufactured by a Chinese company. (TSA systems cannot include foreign-made parts). Rapiscan's baggage scanners remain in most airports, meanwhile, even though its contract with TSA is now defunct.
Rios and McCorkle were able to bypass the login screen merely by typing in a user name with a special character, which forced an error and then logged them in. In addition, they were able to see stored user credentials in clear text in the simple database store. A screener, which is a lower-level user of the system, could easily escalate his privileges by grabbing one of those logins from an unprotected file in the system, or via the login bypass flaw. "There's no two-factor" authentication in the console, Rios says.
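The bypass described above is a classic "fail open" error-handling flaw. The snippet below is a generic illustration of that class of bug, not the scanner's actual software: if credential checking throws on unexpected input and the error path is treated as success, a malformed user name becomes a free login.

    # Hypothetical fail-open login check versus a fail-closed one.
    USERS = {"screener": "hunter2"}   # plain-text credential store, as described

    def login_fails_open(username: str, password: str) -> bool:
        try:
            return USERS[username] == password   # KeyError on unexpected input
        except Exception:
            return True                          # BUG: error treated as success

    def login_fails_closed(username: str, password: str) -> bool:
        # Unknown users and errors are rejected.
        return USERS.get(username) is not None and USERS[username] == password

    print(login_fails_open("späcial'user", "x"))    # True  -- bypassed
    print(login_fails_closed("späcial'user", "x"))  # False -- rejected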
"These bugs are actually embarrassing. It was embarrassing to report them to DHS -- the ability to bypass the login screen. These are really lame bugs," Rios says. But it's not really the vendor's fault when it comes to these types of weaknesses, he says. "The TSA had no device cybersecurity policy. It's the TSA's fault," he says. "The TSA operators have no expertise if the device is compromised, and they could be put in very precarious positions."The good news is that the researchers have seen no evidence of the TSA carry-on baggage screening systems being connected to the Internet. Even so, Rios says, it would only take one malicious insider from one airport to wreak havoc on TSA checkpoint security.

By Kelly Jackson Higgins
Source:
http://www.darkreading.com/attacks-breaches/tsa-carry-on-baggage-scanners-easy-to-ha/240166058

The Burglary: The Discovery of J. Edgar Hoover's Secret FBI





The never-before-told full story of the history-changing break-in at the FBI office in Media, Pennsylvania, by a group of unlikely activists—quiet, ordinary, hardworking Americans—that made clear the shocking truth and confirmed what some had long suspected, that J. Edgar Hoover had created and was operating, in violation of the U.S. Constitution, his own shadow Bureau of Investigation.
It begins in 1971 in an America being split apart by the Vietnam War . . . A small group of activists—eight men and women—the Citizens Commission to Investigate the FBI, inspired by Daniel Berrigan’s rebellious Catholic peace movement, set out to use a more active, but nonviolent, method of civil disobedience to provide hard evidence once and for all that the government was operating outside the laws of the land.

The would-be burglars—nonprofessionals—were ordinary people leading lives of purpose: a professor of religion and former freedom rider; a day-care director; a physicist; a cab driver; an antiwar activist and lock picker; a graduate student haunted by members of her family lost to the Holocaust and the passivity of German civilians under Nazi rule.

Betty Medsger's extraordinary book re-creates in resonant detail how this group of unknowing thieves, in their meticulous planning of the burglary, scouted out the low-security FBI building in a small town just west of Philadelphia, taking into consideration every possible factor, and how they planned the break-in for the night of the long-anticipated boxing match between Joe Frazier (war supporter and friend to President Nixon) and Muhammad Ali (convicted for refusing to serve in the military), knowing that all would be fixated on their televisions and radios.

Medsger writes that the burglars removed all of the FBI files and, with the utmost deliberation, released them to various journalists and members of Congress, soon upending the public’s perception of the inviolate head of the Bureau and paving the way for the first overhaul of the FBI since Hoover became its director in 1924. And we see how the release of the FBI files to the press set the stage for the sensational release three months later, by Daniel Ellsberg, of the top-secret, seven-thousand-page Pentagon study on U.S. decision-making regarding the Vietnam War, which became known as the Pentagon Papers.

At the heart of the heist—and the book—are the contents of the FBI files revealing J. Edgar Hoover’s “secret counterintelligence program” COINTELPRO, set up in 1956 to investigate and disrupt dissident political groups in the United States in order “to enhance the paranoia endemic in these circles,” to make clear to all Americans that an FBI agent was “behind every mailbox,” a plan that would discredit, destabilize, and demoralize groups, many of them legal civil rights organizations and antiwar groups that Hoover found offensive—as well as black power groups, student activists, antidraft protestors, and conscientious objectors.

The author, the first reporter to receive the FBI files, began to cover this story during the three years she worked for The Washington Post and continued her investigation long after she'd left the paper, figuring out who the burglars were, and convincing them, after decades of silence, to come forward and tell their extraordinary story.

The Burglary is an important and riveting book, a portrait of the potential power of nonviolent resistance and the destructive power of excessive government secrecy and spying.

Author: Betty Medsger
Publication date: January 7, 2014
Source and more:
http://www.amazon.com/The-Burglary-Discovery-Hoovers-Secret/dp/0307962954/

New ‘Mask’ APT Campaign Called Most Sophisticated Yet

A group of high-level, nation-state attackers has been targeting government agencies, embassies, diplomatic offices and energy companies with a cyber-espionage campaign for more than five years that researchers say is the most sophisticated APT operation they’ve seen to date. The attack, dubbed the Mask, or “Careto” (Spanish for “Ugly Face” or “Mask”) includes a number of unique components and functionality and the group behind it has been stealing sensitive data such as encryption and SSH keys and wiping and deleting other data on targeted machines.
The Mask APT campaign has been going on since at least 2007 and it is unusual in a number of ways, not the least of which is that it doesn’t appear to have any connection to China. Researchers say that the attackers behind the Mask are Spanish-speaking and have gone after targets in more than 30 countries around the world. Many, but not all, of the victims are in Spanish-speaking countries, and researchers at Kaspersky Lab, who uncovered the campaign, said that the attackers had at least one zero-day in their arsenal, along with versions of the Mask malware for Mac OS X, Linux, and perhaps even iOS and Android.
“These guys are better than the Flame APT group because of the way that they managed their infrastructure,” said Costin Raiu, head of the Global Research and Analysis Team at Kaspersky. “The speed and professionalism is beyond that of Flame or anything else that we’ve seen so far.”
Raiu revealed the details of the Mask attack campaign during the Kaspersky Security Analyst Summit here Monday.
Interestingly, the Kaspersky researchers first became aware of the Mask APT group because they saw the attackers exploiting a vulnerability in one of the company’s products. The attackers found a bug in an older version of a Kaspersky product, which has been patched for several years, and were using the vulnerability as part of their method for hiding on compromised machines. Raiu said that the attackers had a number of different tools at their disposal, including implants that enabled them to maintain persistence on victims’ machines, intercept all TCP and UDP communications in real time and remain invisible on the compromised machine. Raiu said all of the communications between victims and the C&C servers were encrypted.
The attackers targeted victims with spear-phishing emails that would lead them to a malicious Web site where the exploits were hosted. There were a number of exploits on the site and they were only accessible through the direct links the attackers sent the victims. One of the exploits the attackers used was for CVE-2012-0773, an Adobe Flash vulnerability that was discovered by researchers at VUPEN, the French firm that sells exploits and vulnerability information to private customers. The Flash bug was an especially valuable one, as it could be used to bypass the sandbox in the Chrome browser. Raiu said the exploit for this Flash bug never leaked publicly.
On Monday, Chaouki Bekrar, CEO of VUPEN, said that the exploit used by the Mask crew was not the one developed by VUPEN.
“The exploit is not our, probably it was found by diffing the patch released by Adobe after Pwn2Own,” Bekrar wrote on Twitter.
While most APT campaigns tend to target Windows machines, the Mask attackers also were interested in compromising OS X and Linux machines, as well as some mobile platforms. Kaspersky researchers found Windows and OS X samples and some indications of a Linux version, but do not have a Linux sample. There also is some evidence that there may be versions for both iOS and Android. Raiu said there was one victim in Morocco who was communicating with the C&C infrastructure over 3G.
Kaspersky researchers have sinkholed about 90 of the C&C domains the attackers were using, and the operation was shut down last week within a few hours of a short blog post the researchers published with a few details of the Mask campaign. Raiu said that after the post was published, the Mask operators rolled up their campaign within about four hours.
However, Raiu said that the attackers could resurrect the operation without much trouble.
“They could come back very quickly if they wanted,” he said.

By Dennis Fisher
Source and more:
http://threatpost.com/new-mask-apt-campaign-called-most-sophisticated-yet/104148

Security Protocols and Evidence: Where Many Payment Systems Fail


Abstract

As security protocols are used to authenticate more transactions, they end up being relied on in legal proceedings. Designers often fail to anticipate this. Here we show how the EMV protocol, the dominant card payment system worldwide, does not produce adequate evidence for resolving disputes. We propose five principles for designing systems to produce robust evidence. We apply these to other systems such as Bitcoin, electronic banking and phone payment apps. We finally propose specific modifications to EMV that could allow disputes to be resolved more efficiently and fairly.

By Steven J. Murdoch and Ross Anderson
Source and read the full paper:
http://www.cl.cam.ac.uk/~sjm217/papers/fc14evidence.pdf

The Currency Cloud: A World Where Moving Money is Easy

Source and more:
http://www.thecurrencycloud.com/

Bitcoin Conversion Now Live

For years, you have been able to convert currencies in Bing. Whether you’re looking to compare dollars to pounds, euros to pesos or yuan to rupee, we’ve got over 50 currencies in our index that we’ll display following a few quick keystrokes.
As Bitcoin (the peer-to-peer payment system and digital currency) makes headlines and captures the world's attention, we thought it was only natural to give you an easy way to track real-time fluctuations. Starting today, you will find instant Bitcoin conversions at the top of your Bing results.* How many Bitcoins do you need for that $71,000 Tesla?
To check it out for yourself, head over to www.bing.com and try a search. To learn more about the data, check out Coinbase's post here.

Source:
http://www.bing.com/blogs/site_blogs/b/search/archive/2014/02/10/coinbit.aspx

Google posts large privacy violation notice on French homepage

A French court on Friday refused Google’s last-minute plea to suspend an order imposed by a privacy watchdog, meaning the search giant has to post a notice on its homepage for a period of 48 hours informing users that the company was fined €150,000 ($204,000) for violating data collection laws. Google complied with the order on Google.fr as of Saturday morning.
The Friday court decision came in response to Google’s emergency appeal this week to the Conseil d’Etat, France’s highest appeals court for administrative law. The company argued that the penalty — which specified that the notice had to be printed in 13-point Arial font and appear in the center of the screen below the search box — was too severe and that Google’s reputation would be irreparably damaged.
In a decision and related press release issued on Friday, the French court explained that Google had failed to show the order would cause permanent damage to its financial interests or its reputation. It added that Google had failed to show that the privacy agency’s order was illegal, or that the public interest would be harmed by going forward with the order.
As a result, Google had to post the full paragraph set out in the agency’s original order, which informs consumers about the fine and requires a link to the decision on the privacy agency’s website.
Google can continue to appeal the underlying fine, which was imposed for the company’s failure to respect data protection and consent rules, but could not avoid the order to post the notice on Google.fr within seven days.
The €150,000 fine, which is the maximum the agency could impose, is meaningless to a company of Google’s size, but the search giant appears anxious to prevent governments telling it what to post on its homepage. Google did not immediately return an after-hours request for comment, but the Wall Street Journal reported earlier this week that the company told the court that it always “maintained that [homepage] page in a virgin state.”
This is not the first time that a European court has required an American company to post a notice on its homepage; last year, a judge ordered Apple to post a notice on its website that Samsung did not violate a design patent for its iPad. And, as MarketingLand notes, the Belgians imposed an even more draconian order on Google in 2006.
Here are the relevant parts of today’s order. I’m posting the original French with an English translation below.
Here’s a portion of today’s ruling summarizing the contents of the original order:
[The agency] a décidé de prononcer à l’encontre de cette société une sanction pécuniaire de 150 000 euros, de rendre cette décision publique sur le site de la CNIL et d’ordonner à la société de publier à sa charge, sur son site internet http://www.google.fr, pendant une durée de 48 heures consécutives, le septième jour suivant la notification de sa décision, selon des modalités définies par celle-ci, le texte du communiqué suivant : « la formation restreinte de la Commission nationale de l’informatique et des libertés a condamné la société Google à 150 000 euros d’amende pour manquements aux règles de protection des données personnelles consacrées par la loi « informatique et libertés ».
[The agency] decided to impose on this company a monetary penalty of 150,000 euros, to make this decision public on the CNIL’s website, and to order the company to publish at its own expense, on its website http://www.google.fr, for a period of 48 consecutive hours, on the seventh day following notification of the decision, in the manner specified by the agency, the text of the following statement: “The restricted committee of the Commission nationale de l’informatique et des libertés has fined Google 150,000 euros for breaches of the rules on the protection of personal data enshrined in the ‘Informatique et Libertés’ law.”
Here’s the part where the court says the notice won’t permanently harm Google’s reputation or the public interest:
par ailleurs, que la société ne saurait soutenir et n’allègue d’ailleurs pas qu’une atteinte grave et immédiate pourrait être portée, par la sanction dont la suspension est demandée, à la poursuite même de son activité ou à ses intérêts financiers et patrimoniaux ou encore à un intérêt public
Moreover, the company cannot maintain, and indeed does not allege, that the sanction whose suspension is requested would cause serious and immediate harm to the very continuation of its business, to its financial and property interests, or to the public interest.

By Jeff John Roberts
Source:
http://gigaom.com/2014/02/07/google-must-post-news-of-privacy-fine-after-french-court-refuses-to-suspend-order/

Pinpointing the Brain’s Arbitrator

We tend to be creatures of habit. In fact, the human brain has a learning system that is devoted to guiding us through routine, or habitual, behaviors. At the same time, the brain has a separate goal-directed system for the actions we undertake only after careful consideration of the consequences. We switch between the two systems as needed. But how does the brain know which system to give control to at any given moment? Enter The Arbitrator.
Researchers at the California Institute of Technology (Caltech) have, for the first time, pinpointed areas of the brain—the inferior lateral prefrontal cortex and frontopolar cortex—that seem to serve as this "arbitrator" between the two decision-making systems, weighing the reliability of the predictions each makes and then allocating control accordingly. The results appear in the current issue of the journal Neuron.
According to John O'Doherty, the study's principal investigator and director of the Caltech Brain Imaging Center, understanding where the arbitrator is located and how it works could eventually lead to better treatments for brain disorders, such as drug addiction, and psychiatric disorders, such as obsessive-compulsive disorder. These disorders, which involve repetitive behaviors, may be driven in part by malfunctions in the degree to which behavior is controlled by the habitual system versus the goal-directed system.
"Now that we have worked out where the arbitrator is located, if we can find a way of altering activity in this area, we might be able to push an individual back toward goal-directed control and away from habitual control," says O'Doherty, who is also a professor of psychology at Caltech. "We're a long way from developing an actual treatment based on this for disorders that involve over-egging of the habit system, but this finding has opened up a highly promising avenue for further research."
In the study, participants played a decision-making game on a computer while connected to a functional magnetic resonance imaging (fMRI) scanner that monitored their brain activity. Participants were instructed to try to make optimal choices in order to gather coins of a certain color, which were redeemable for money.
During a pre-training period, the subjects familiarized themselves with the game—moving through a series of on-screen rooms, each of which held different numbers of red, yellow, or blue coins. During the actual game, the participants were told which coins would be redeemable each round and given a choice to navigate right or left at two stages, knowing that they would collect only the coins in their final room. Sometimes all of the coins were redeemable, making the task more habitual than goal-directed. By altering the probability of getting from one room to another, the researchers were able to further test the extent of participants' habitual and goal-directed behavior while monitoring corresponding changes in their brain activity.
With the results from those tests in hand, the researchers were able to compare the fMRI data and choices made by the subjects against several computational models they constructed to account for behavior. The model that most accurately matched the experimental data involved the two brain systems making separate predictions about which action to take in a given situation. Receiving signals from those systems, the arbitrator kept track of the reliability of the predictions by measuring the difference between the predicted and actual outcomes for each system. It then used those reliability estimates to determine how much control each system should exert over the individual's behavior. In this model, the arbitrator ensures that the system making the most reliable predictions at any moment exerts the greatest degree of control over behavior.
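To make that arbitration scheme concrete, here is a minimal sketch in Python of reliability-weighted arbitration between two learners. It is purely illustrative: the class names, learning rates, and update rules are our own assumptions, not the computational model published in the Neuron paper.

# Illustrative sketch only: reliability-weighted arbitration between a
# "habitual" (model-free) and a "goal-directed" (model-based) learner.
# All names, parameters, and update rules are assumptions for illustration.

class Learner:
    def __init__(self, lr=0.1):
        self.lr = lr
        self.value = 0.0        # predicted outcome for the current situation
        self.reliability = 0.5  # running estimate of prediction accuracy

    def update(self, outcome):
        error = outcome - self.value      # prediction error
        self.value += self.lr * error     # learn from the error
        # Reliability rises when errors are small, falls when they are large.
        self.reliability += self.lr * ((1.0 - abs(error)) - self.reliability)


def arbitrate(habitual, goal_directed):
    """Return the weight given to the goal-directed system (0..1)."""
    total = habitual.reliability + goal_directed.reliability
    return goal_directed.reliability / total if total > 0 else 0.5


# Toy usage: outcomes lie between 0 and 1; the arbitrator shifts control
# toward whichever system has been predicting them more accurately.
habitual, goal_directed = Learner(lr=0.1), Learner(lr=0.3)
for outcome in [1.0, 1.0, 0.0, 1.0, 1.0]:
    w = arbitrate(habitual, goal_directed)
    blended = w * goal_directed.value + (1 - w) * habitual.value
    habitual.update(outcome)
    goal_directed.update(outcome)
    print(f"goal-directed weight={w:.2f}, blended prediction={blended:.2f}")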
"What we're showing is the existence of higher-level control in the human brain," says Sang Wan Lee, lead author of the new study and a postdoctoral scholar in neuroscience at Caltech. "The arbitrator is basically making decisions about decisions."
In line with previous findings from the O'Doherty lab and elsewhere, the researchers saw in the brain scans that an area known as the posterior putamen was active at times when the model predicted that the habitual system should be calculating prediction values. Going a step further, they examined the connectivity between the posterior putamen and the arbitrator. What they found might explain how the arbitrator sets the weight for the two learning systems: the connection between the arbitrator area and the posterior putamen changed according to whether the goal-directed or habitual system was deemed to be more reliable. However, no such connection effects were found between the arbitrator and brain regions involved in goal-directed learning. This suggests that the arbitrator may work mainly by modulating the activity of the habitual system.
"One intriguing possibility arising from these findings, which we will need to test in future work, is that being in a habitual mode of behavior may be the default state," says O'Doherty. "So when the arbitrator determines you need to be more goal-directed in your behavior, it accomplishes this by inhibiting the activity of the habitual system, almost like pressing the breaks on your car when you are in drive."
The paper in Neuron is titled "Neural computations underlying arbitration between model-based and model-free learning." In addition to O'Doherty and Lee, Shinsuke Shimojo, the Gertrude Baltimore Professor of Experimental Psychology at Caltech, is also a coauthor. The work was completed with funding from the National Institutes of Health, the Gordon and Betty Moore Foundation, the Japan Science and Technology Agency, and the Caltech-Tamagawa Global COE Program.

By Kimm Fesenmeier
Source:
http://www.caltech.edu/content/pinpointing-brain-s-arbitrator

Code Blue 2014: International Security Conference

Japan
17-18 February

Source and more details:
http://codeblue.jp/en-index.html

Data Science Symposium 2014

Purpose:

Given the explosion of data production, storage capabilities, communications technologies, computational power, and supporting infrastructure, data science is now recognized as a highly-critical growth area with impact across many sectors including science, government, finance, health care, manufacturing, advertising, retail, and others. Since data science technologies are being leveraged to drive crucial decision making, it is of paramount importance to be able to measure the performance of these technologies and to correctly interpret their output. The NIST Information Technology Laboratory is forming a cross-cutting data science program focused on driving advancements in data science through system benchmarking and rigorous measurement science.
 
Symposium Topics:
Understanding the Data Science Technical Landscape:
  • Primary challenges in and technical approaches to complex workflow components of Big Data systems, including ETL, lifecycle management, analytics, visualization & human-system interaction.
  • Major forms of analytics employed in data science.
Improving Analytic System Performance via Measurement Science:
  • Generation of ground truth for large datasets and performance measurement with limited or no ground truth.
  • Methods to measure the performance of data analytic workflows where there are multiple subcomponents, decision points, and human interactions.
  • Methods to measure the flow of uncertainty across complex data analytic systems.
  • Approaches to formally characterizing end-to-end analytic workflows.
Datasets to Enable Rigorous Data Science Research:
  • Useful properties for data science reference datasets.
  • Leveraging simulated data in data science research.
  • Efficient approaches to sharing research data.
Source and for more details:
http://www.nist.gov/itl/iad/data-science-symposium-2014.cfm

JILA Strontium Atomic Clock Sets New Records in Both Precision and Stability

Heralding a new age of terrific timekeeping, a research group led by a National Institute of Standards and Technology (NIST) physicist has unveiled an experimental strontium atomic clock that has set new world records for both precision and stability—key metrics for the performance of a clock.
The clock is in a laboratory at JILA, a joint institute of NIST and the University of Colorado Boulder.
Described in a new paper in Nature,* the JILA strontium lattice clock is about 50 percent more precise than the record holder of the past few years, NIST’s quantum logic clock.** Precision refers to how closely the clock approaches the true resonant frequency at which its reference atoms oscillate between two electronic energy levels. The new strontium clock is so precise it would neither gain nor lose one second in about 5 billion years, if it could operate that long. (This time period is longer than the age of the Earth, an estimated 4.5 billion years old.)
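As a back-of-the-envelope check of that claim (our own arithmetic, not a figure taken from the paper), gaining or losing at most one second over roughly five billion years corresponds to a fractional frequency error of about

$5 \times 10^{9}\ \text{years} \times 3.16 \times 10^{7}\ \text{s/year} \approx 1.6 \times 10^{17}\ \text{s}$
$1\ \text{s} \,/\, 1.6 \times 10^{17}\ \text{s} \approx 6 \times 10^{-18}$

i.e., a total uncertainty at the level of a few parts in 10^18.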
The strontium clock’s stability—the extent to which each tick matches the duration of every other tick—is about the same as NIST’s ytterbium atomic clock, another world leader in stability unveiled in August, 2013.*** Stability determines in part how long an atomic clock must run to achieve its best performance through continual averaging. The strontium and ytterbium lattice clocks are so stable that in just a few seconds of averaging they outperform other types of atomic clocks that have been averaged for hours or days.

Source and more:
http://www.nist.gov/pml/div689/20140122_strontium.cfm

Bitcoin’s Emerging Price Stability

Earlier in January, as Bitcoin receded from its seemingly endless stream of press coverage, its trading range began to tighten after months of wild swings. I pointed this out as perhaps the start of a new stability for Bitcoin, one that could help its platform mature, or indicate that it already has.
There was some squawking that the time frame I had selected as ‘enough’ to indicate a trend was too short. It was a reasonable complaint. Happily, Bitcoin has behaved and exonerated me by adding another stretch of generally stable prices.
The gist is simple: for nearly the entire month of January, Bitcoin has traded in the 900s, with minor excursions into the 1,000s. For a currency that until very recently could lose 50% of its value in a day without that day standing out all that much, this has been something of a calming of the seas.
Here’s the D1 chart you need (Mt.Gox data via Clark Moody):
[Chart: Bitcoin daily (D1) price, Mt.Gox data via Clark Moody]
The white Y axis here points to the 6th of January, when things calmed down.
Why does it matter if Bitcoin is seeing increasing price stability? Essentially the more wild the swings in its value, the less useful Bitcoin is as a tool of commerce. This goes both ways: The more real uses there are for Bitcoin, the smaller the percentage of speculative trades in the currency; and, the smaller the changes in its price, the more people may start to accept Bitcoin as a payment option. It’s a self-reinforcing cycle.
Marc Andreessen recently summed this well: “Bitcoin is a classic network effect, a positive feedback loop. The more people who use Bitcoin, the more valuable Bitcoin is for everyone who uses it, and the higher the incentive for the next user to start using the technology.” Marc’s piece — it’s mandatory reading, by the way — lays out a bullish case for Bitcoin arguing that its core technological tenets provide large incentive for its use, which will drive adoption and long-term use.
Bitcoin volume on the Mt.Gox exchange is down sharply this year, a change that I think roughly tracks the currency’s decline in media attention. If you were hoping for Bitcoin to shape up and fly right, it was a banner month.

By Alex Wilhelm
Source:
http://techcrunch.com/2014/02/01/bitcoins-emerging-price-stability/

Information and Technology Law Master’s Program, Spring Semester

Applications for the Information and Technology Law Master’s program, organized by the Istanbul Bilgi University Institute of Information and Technology Law, are open from February 3 to March 10, 2014.

Detailed information about the program is available at http://cyberlaw.bilgi.edu.tr.

Contact: meldao@bilgi.edu.tr

GoDaddy changing security policy after infamous social engineering attack on @N

Naoki Hiroshima’s scary tale of losing his single-character Twitter handle has captivated the internet over the last few days. First, we heard the story of how Naoki was held ransom for the rare handle, then GoDaddy admitted it was partially responsible for giving out details that led to the compromise.
The change may appear small on the surface, but it should help prevent a repeat of the same story. It would be extremely hard for an attacker to obtain eight digits of a credit card (unless the whole card was stolen), and by locking the account after three failed attempts the company protects itself from attackers who would simply hang up the phone and try again with a new representative.
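To illustrate the kind of lockout rule described above, here is a minimal sketch in Python. It is hypothetical and is not GoDaddy’s actual system; the eight-digit check and the three-attempt threshold are taken only from the article’s description, and every name in the code is invented for illustration.

# Hypothetical sketch of a phone-verification lockout policy: require eight
# card digits and lock the account after three failed attempts, so an
# attacker cannot simply hang up and retry with a new representative.

MAX_ATTEMPTS = 3

class VerificationPolicy:
    def __init__(self):
        self.failed_attempts = {}   # account id -> consecutive failures
        self.locked = set()

    def verify(self, account_id, supplied_digits, digits_on_file):
        if account_id in self.locked:
            return "locked: verification refused until identity is re-proven"
        if len(supplied_digits) == 8 and supplied_digits == digits_on_file[-8:]:
            self.failed_attempts[account_id] = 0
            return "verified"
        self.failed_attempts[account_id] = self.failed_attempts.get(account_id, 0) + 1
        if self.failed_attempts[account_id] >= MAX_ATTEMPTS:
            self.locked.add(account_id)
            return "locked after repeated failures"
        return "verification failed"

# Example: an attacker guessing digits is locked out on the third failure,
# regardless of which support representative picks up the call.
policy = VerificationPolicy()
for guess in ["12345678", "87654321", "00000000"]:
    print(policy.verify("victim-account", guess, "4111111111111111"))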
Unfortunately, Naoki still hasn’t received his Twitter account back with the handle now in the grips of yet another squatter. The story isn’t quite over yet.

By Owen Williams
Source and more:
http://thenextweb.com/socialmedia/2014/02/01/godaddy-changes-policies-n-hack/?fromcat=all#!uatwb

Worry on the Brain

According to the National Institute of Mental Health, over 18 percent of American adults suffer from anxiety disorders, characterized as excessive worry or tension that often leads to other physical symptoms. Previous studies of anxiety in the brain have focused on the amygdala, an area known to play a role in fear. But a team of researchers led by biologists at the California Institute of Technology (Caltech) had a hunch that understanding a different brain area, the lateral septum (LS), could provide more clues into how the brain processes anxiety. Their instincts paid off—using mouse models, the team has found a neural circuit that connects the LS with other brain structures in a manner that directly influences anxiety.

Source and read more:
http://www.caltech.edu/content/worry-brain

The Value of Online Privacy

Abstract:

We estimate the value of online privacy with a differentiated products model of the demand for Smartphone apps. We study the apps market because it is typically necessary for the consumer to relinquish some personal information through “privacy permissions” to obtain the app and its benefits. Results show that the representative consumer is willing to make a one-time payment for each app of $2.28 to conceal their browser history, $4.05 to conceal their list of contacts, $1.19 to conceal their location, $1.75 to conceal their phone’s identification number, and $3.58 to conceal the contents of their text messages. The consumer is also willing to pay $2.12 to eliminate advertising. Valuations for concealing contact lists and text messages for “more experienced” consumers are also larger than those for “less experienced” consumers. Given that the typical app in the marketplace has advertising and requires the consumer to reveal their location and their phone’s identification number, the benefit from consuming this app must be at least $5.06.
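The $5.06 floor in the last sentence is simply the sum of the valuations attached to what the abstract describes as the typical app’s demands; restating the authors’ arithmetic:

$\$2.12\ (\text{advertising}) + \$1.19\ (\text{location}) + \$1.75\ (\text{phone ID}) = \$5.06$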


By Scott Savage and Donald M. Waldman
Source and read the full paper:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2341311

How I Lost My $50,000 Twitter Username

A story of how PayPal and GoDaddy allowed the attack and caused me to lose my $50,000 Twitter username.

By Naoki Hiroshima
Source and read:
https://medium.com/p/24eb09e026dd