Greasing the Wheels of the Internet Economy. The Connected World


Digitally driven economic growth continues to be one of the few bright spots in a sluggish global economy. Reducing or eliminating the many factors that inhibit online interactions and exchange could make this growth even faster and its impact even bigger. To better understand these sources of “e-friction” and how they constrain economic activity, the Internet Corporation for Assigned Names and Numbers (ICANN) commissioned The Boston Consulting Group to prepare this independent report. The results have been discussed with ICANN executives, but BCG is responsible for the analysis and conclusions.

Source and read the full report:
https://www.bcgperspectives.com/content/articles/digital_economy_telecommunications_greasing_wheels_internet_economy/

Introducing our smart contact lens project



You’ve probably heard that diabetes is a huge and growing problem—affecting one in every 19 people on the planet. But you may not be familiar with the daily struggle that many people with diabetes face as they try to keep their blood sugar levels under control. Uncontrolled blood sugar puts people at risk for a range of dangerous complications, some short-term and others longer term, including damage to the eyes, kidneys and heart. A friend of ours told us she worries about her mom, who once passed out from low blood sugar and drove her car off the road.

Many people I’ve talked to say managing their diabetes is like having a part-time job. Glucose levels change frequently with normal activity like exercising or eating or even sweating. Sudden spikes or precipitous drops are dangerous and not uncommon, requiring round-the-clock monitoring. Although some people wear glucose monitors with a glucose sensor embedded under their skin, all people with diabetes must still prick their finger and test drops of blood throughout the day. It’s disruptive, and it’s painful. And, as a result, many people with diabetes check their blood glucose less often than they should.

Over the years, many scientists have investigated various body fluids—such as tears—in the hopes of finding an easier way for people to track their glucose levels. But as you can imagine, tears are hard to collect and study. At Google[x], we wondered if miniaturized electronics—think: chips and sensors so small they look like bits of glitter, and an antenna thinner than a human hair—might be a way to crack the mystery of tear glucose and measure it with greater accuracy.
We’re now testing a smart contact lens that’s built to measure glucose levels in tears using a tiny wireless chip and miniaturized glucose sensor that are embedded between two layers of soft contact lens material. We’re testing prototypes that can generate a reading once per second. We’re also investigating the potential for this to serve as an early warning for the wearer, so we’re exploring integrating tiny LED lights that could light up to indicate that glucose levels have crossed above or below certain thresholds. It’s still early days for this technology, but we’ve completed multiple clinical research studies which are helping to refine our prototype. We hope this could someday lead to a new way for people with diabetes to manage their disease.
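The alerting behavior described here is simple to picture in code. Below is a minimal sketch of that threshold logic, assuming hypothetical bounds and a simulated sensor; none of the names or values come from Google[x]'s actual design.

```python
# A minimal sketch of the threshold-alert idea described above: one reading
# per second, with an LED warning when glucose crosses a bound. The names,
# bounds, and simulated sensor are illustrative assumptions.
import random
import time

LOW_MG_DL = 70    # hypothetical lower alert threshold (mg/dL)
HIGH_MG_DL = 180  # hypothetical upper alert threshold (mg/dL)

def read_tear_glucose() -> float:
    """Stand-in for the miniaturized tear-glucose sensor."""
    return random.gauss(110, 40)  # simulated reading

def led_state(reading: float) -> str:
    """Map a reading to the LED indication described in the post."""
    if reading < LOW_MG_DL:
        return "LED on: LOW"
    if reading > HIGH_MG_DL:
        return "LED on: HIGH"
    return "LED off"

for _ in range(5):  # the prototype generates a reading once per second
    r = read_tear_glucose()
    print(f"{r:6.1f} mg/dL -> {led_state(r)}")
    time.sleep(1)
```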

We’re in discussions with the FDA, but there’s still a lot more work to do to turn this technology into a system that people can use. We’re not going to do this alone: we plan to look for partners who are experts in bringing products like this to market. These partners will use our technology for a smart contact lens and develop apps that would make the measurements available to the wearer and their doctor. We’ve always said that we’d seek out projects that seem a bit speculative or strange, and at a time when the International Diabetes Federation is declaring that the world is “losing the battle” against diabetes, we thought this project was worth a shot.
Source:
http://googleblog.blogspot.com/2014/01/introducing-our-smart-contact-lens.html

Quantum Physics Could Make Secure, Single-Use Computer Memories Possible

Computer security systems may one day get a boost from quantum physics, as a result of recent research from the National Institute of Standards and Technology (NIST). Computer scientist Yi-Kai Liu has devised a way to make a security device that has proved notoriously difficult to build—a "one-shot" memory unit, whose contents can be read only a single time.
The research, which Liu is presenting at this week's Innovations in Theoretical Computer Science conference,* shows in theory how the laws of quantum physics could allow for the construction of such memory devices. One-shot memories would have a wide range of possible applications such as protecting the transfer of large sums of money electronically. A one-shot memory might contain two authorization codes: one that credits the recipient's bank account and one that credits the sender's bank account, in case the transfer is canceled. Crucially, the memory could only be read once, so only one of the codes can be retrieved, and hence, only one of the two actions can be performed—not both.
"When an adversary has physical control of a device—such as a stolen cell phone—software defenses alone aren't enough; we need to use tamper-resistant hardware to provide security," Liu says. "Moreover, to protect critical systems, we don't want to rely too much on complex defenses that might still get hacked. It's better if we can rely on fundamental laws of nature, which are unassailable."
Unfortunately, there is no fundamental solution to the problem of building tamper-resistant chips, at least not using classical physics alone. So scientists have tried involving quantum mechanics as well, because information that is encoded into a quantum system behaves differently from a classical system.
Liu is exploring one approach, which stores data using quantum bits, or "qubits," which use quantum properties such as magnetic spin to represent digital information. Using a technique called "conjugate coding," two secret messages—such as separate authorization codes—can be encoded into the same string of qubits, so that a user can retrieve either one of the two messages. But as the qubits can only be read once, the user cannot retrieve both.
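As a rough illustration of why a reader can retrieve one message but not both, here is a toy classical simulation of conjugate coding. It is only a sketch of the intuition, not Liu's construction: bits of one message sit in the computational (Z) basis, bits of the other in the Hadamard (X) basis, and measuring every qubit in a single basis reads one message while reducing the other to coin flips.

```python
# Toy simulation of the conjugate-coding intuition: one message's bits are
# encoded in the computational (Z) basis and the other's in the Hadamard (X)
# basis. Measuring every qubit in a single basis recovers that basis's
# message; the other message's positions collapse to random bits. This is an
# illustrative model only, not Liu's actual construction.
import random

def encode(msg_z: str, msg_x: str) -> list:
    """Interleave qubits: even slots carry Z-basis bits, odd slots X-basis bits."""
    qubits = []
    for bz, bx in zip(msg_z, msg_x):
        qubits.append(("Z", int(bz)))
        qubits.append(("X", int(bx)))
    return qubits

def measure_all(qubits: list, basis: str) -> str:
    """Measure every qubit in one basis; mismatched encodings come out random."""
    return "".join(str(bit if enc == basis else random.randint(0, 1))
                   for enc, bit in qubits)

memory = encode(msg_z="1011", msg_x="0110")  # two hypothetical authorization codes
print("Z readout:", measure_all(memory, "Z"))  # even positions spell out 1,0,1,1
# A real one-shot memory is destroyed by this first readout, so the X-basis
# message is gone; the toy model shows only why one readout yields one message.
```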
The risk in this approach stems from a more subtle quantum phenomenon: "entanglement," where two particles can affect each other even when separated by great distances. If an adversary is able to use entanglement, he can retrieve both messages at once, breaking the security of the scheme.
However, Liu has observed that in certain kinds of physical systems, it is very difficult to create and use entanglement, and shows in his paper that this obstacle turns out to be an advantage: Liu presents a mathematical proof that if an adversary is unable to use entanglement in his attack, that adversary will never be able to retrieve both messages from the qubits. Hence, if the right physical systems are used, the conjugate coding method is secure after all.
"It's fascinating how entanglement—and the lack thereof—is the key to making this work," Liu says. "From a practical point of view, these quantum devices would be more expensive to fabricate, but they would provide a higher level of security. Right now, this is still basic research. But there's been a lot of progress in this area, so I'm optimistic that this will lead to useful technologies in the real world."
*Y-K Liu. "Building one-time memories from isolated qubits." Paper presented at ITCS 2014, the Innovations in Theoretical Computer Science meeting, Princeton University, Jan. 11-14, 2014. More info at http://itcs2014.wordpress.com/program/.
Source:
http://www.nist.gov/itl/math/onetime-011414.cfm

Learning Organizations: Extending the Field (Knowledge and Space)



This book is designed to extend the field of organizational learning in several ways. The contributors from three continents bring different perspectives on processes and outcomes of knowledge creation and sharing in and between organizations in diverse contexts. They use approaches and concepts from numerous disciplines including the arts, economics, geography, organizational studies, psychology, and sociology. The contributions enrich the spatial turn in organization studies by offering fresh insights for researchers who seek to attend to the contextual dimensions of the phenomena they are studying. They provide examples of organizational places and spaces that have not yet received sufficient attention, as diverse as temporary international organizations and computer screens.

Ariane Berthoin Antal, Peter Meusburger, Laura Suarsana
Publication date: December 3, 2013
Source:
http://www.amazon.com/Learning-Organizations-Extending-Field-Knowledge/dp/940077219X/ref=sr_1_1?s=books&ie=UTF8&qid=1390167066&sr=1-1&keywords=multistakeholder+internet+governance

Updated Techniques for Web Content Accessibility Guidelines (WCAG)

The Web Content Accessibility Guidelines Working Group (WCAG WG) requests review of draft updates to Notes that accompany WCAG 2.0: Techniques for WCAG 2.0 (Editors’ Draft) and Understanding WCAG 2.0 (Editors’ Draft). Comments are welcome through 14 February 2014. (This is not an update to WCAG 2.0, which is a stable document.) To learn more about the updates, see the Call for Review: WCAG 2.0 Techniques Draft Updates e-mail. Read about the Web Accessibility Initiative (WAI).

Source:
http://www.w3.org/

Tiny Constables and the Cost of Surveillance: Making Cents Out of United States v. Jones

In United States v. Jones, five Supreme Court Justices wrote that government surveillance of one’s public movements for twenty-eight days using a GPS device violated a reasonable expectation of privacy and constituted a Fourth Amendment search. Unfortunately, they didn’t provide a clear and administrable rule that could be applied in other government surveillance cases. In this Essay, Kevin Bankston and Ashkan Soltani draw together threads from the Jones concurrences and existing legal scholarship and combine them with data about the costs of different location tracking techniques to articulate a cost-based conception of the expectation of privacy that both supports and is supported by the concurring opinions in Jones.
Introduction
As Judge Richard Posner once said, “Technological progress poses a threat to privacy by enabling an extent of surveillance that in earlier times would have been prohibitively expensive,” thereby “giving the police access to surveillance techniques that are ever less expensive and ever more effective.”2 Among these “‘fantastic advances’”3 in surveillance technology is the Global Positioning System (GPS), which provides law enforcement with an inexpensive means to track the precise geographic locations of criminal suspects. The Supreme Court recently addressed this technology in United States v. Jones, which considered whether the police’s attachment of a GPS device to a suspect’s car, and the use of that device to monitor the car’s movements along public roads for twenty-eight days, constituted a search under the Fourth Amendment.4
All nine Justices answered that question in the affirmative, but they produced three different opinions. Five Justices, in an opinion authored by Justice Scalia, did not rule on the question of whether the monitoring of Jones’s movements via the GPS device constituted a search. Rather, the majority found that the attachment of the device to Jones’s car constituted a search under a trespass-oriented theory of Fourth Amendment protection.5 Four other Justices signed a concurring opinion by Justice Alito, rejecting the majority’s trespass theory and arguing that the prolonged monitoring of the GPS device constituted a search by violating Jones’s expectation of privacy.6 And finally, Justice Sotomayor both joined the majority opinion and wrote her own concurring opinion, agreeing with the majority that the installation constituted a search but also agreeing with Justice Alito that “‘longer term GPS monitoring in investigations of most offenses impinges on expectations of privacy.’”7
The Jones concurrences, taken together, are potentially a watershed moment in the Court’s Fourth Amendment jurisprudence. Prior to Jones, the Court’s precedent on location tracking—regarding radio “beeper”-based vehicle tracking in the 1980s—indicated that one could have no reasonable expectation of privacy in one’s public movements.8 In Jones, five Justices rejected that proposition, at least with respect to prolonged government surveillance of one’s public movements. Unfortunately, those Justices stopped short of clarifying when one does have such an expectation or when surveillance violates it—other than Justice Alito’s conclusion that “the line was surely crossed before the 4-week mark.”9
Trying to make sense of the Jones concurrences and reduce them to a clear and administrable rule—or, alternatively, arguing that they make no sense and cannot be so reduced—has become something of a cottage industry amongst privacy law scholars.10 Building on the work of those who have come before us, this Essay is our attempt to make sense—and “cents”—out of United States v. Jones, by demonstrating how new technologies are continually reducing the cost of surveillance and by attempting to formulate a new approach to defining the Fourth Amendment’s protections based on those falling costs.
Specifically, we propose that a new surveillance technique is likely to violate an expectation of privacy when it eliminates or circumvents a preexisting structural right of privacy and disrupts the equilibrium of power between police and suspects by making it much less expensive for the government to collect information. We explain how courts might put that general proposition into practice by using estimates of the actual costs of particular modes of location tracking to apply a rough rule of thumb: if the new tracking technique is an order of magnitude less expensive than the previous technique, the technique violates expectations of privacy and runs afoul of the Fourth Amendment.
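Put as code, the rule of thumb is a one-line comparison. The dollar figures in this sketch are placeholders for illustration, not the cost estimates developed in the Essay:

```python
# Sketch of the Essay's order-of-magnitude rule of thumb. The cost figures
# below are placeholders, not Bankston and Soltani's estimates.
def likely_violates_expectations(old_cost_per_day: float, new_cost_per_day: float) -> bool:
    """Flag a technique at least 10x cheaper than the one it replaces."""
    return old_cost_per_day / new_cost_per_day >= 10

# Hypothetical per-day costs: a multi-officer covert tail vs. a GPS device.
print(likely_violates_expectations(old_cost_per_day=1000.0, new_cost_per_day=10.0))  # True
```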
Although we derive this approach from the specific example of location tracking and limit our Essay to that topic, we are hopeful that it may also prove a useful tool in evaluating other surveillance techniques.

By Kevin S. Bankston and Ashkan Soltani
Source and read the full paper:
http://www.yalelawjournal.org/the-yale-law-journal-pocket-part/constitutional-law/tiny-constables-and-the-cost-of-surveillance:-making-cents-out-of-united-states-v.-jones

Coinbase adds two-factor security for when users send more than $100 of Bitcoin

Security remains a major pain point for Bitcoin — as a number of hacks on companies involved with the virtual currency prove — and that’s led Coinbase, the most heavily funded Bitcoin startup out there, to beef up its measures.
The company has extended its two-factor authentication process to cover sends of more than $100 in Bitcoin (both individual transactions and cumulative tallies over 24 hours). It is also adding the additional security layer when users make recurring sends, enable or disable their API key, and change their password, phone, Google Authenticator, or SMS PIN.
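A sketch of how such a trigger might work: track sends inside a rolling 24-hour window and require the second factor when a single send, or the running total, exceeds $100. The names and data structures here are assumptions for illustration, not Coinbase's implementation.

```python
# Sketch of the trigger described above: a single send, or the running
# 24-hour total, above $100 requires a second factor. Illustrative only.
import time
from collections import deque

THRESHOLD_USD = 100.0
WINDOW_SECONDS = 24 * 60 * 60

recent_sends = deque()  # (timestamp, usd_amount) pairs inside the window

def requires_two_factor(amount_usd, now=None):
    now = time.time() if now is None else now
    while recent_sends and now - recent_sends[0][0] > WINDOW_SECONDS:
        recent_sends.popleft()  # drop sends older than 24 hours
    cumulative = sum(a for _, a in recent_sends) + amount_usd
    return amount_usd > THRESHOLD_USD or cumulative > THRESHOLD_USD

def record_send(amount_usd, now=None):
    recent_sends.append((time.time() if now is None else now, amount_usd))

print(requires_two_factor(60.0))  # False: under $100 alone and in total
record_send(60.0)
print(requires_two_factor(60.0))  # True: $120 cumulative within 24 hours
```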
Coinbase — which uses ‘cold storage’ security to keep 97 percent of its users’ Bitcoins offline to help prevent hacking — says it has “two or three other projects in the pipeline” to help increase the safety for its 650,000 users and their Bitcoin troves.

By Jon Russell
Source:
http://thenextweb.com/insider/2014/01/15/coinbase-adds-two-factor-security-for-when-users-send-more-than-100-of-bitcoin/?fromcat=all#!seXiS

N.S.A. Devises Radio Pathway Into Computers

The National Security Agency has implanted software in nearly 100,000 computers around the world that allows the United States to conduct surveillance on those machines and can also create a digital highway for launching cyberattacks.
While most of the software is inserted by gaining access to computer networks, the N.S.A. has increasingly made use of a secret technology that enables it to enter and alter data in computers even if they are not connected to the Internet, according to N.S.A. documents, computer experts and American officials.
The technology, which the agency has used since at least 2008, relies on a covert channel of radio waves that can be transmitted from tiny circuit boards and USB cards inserted surreptitiously into the computers. In some cases, they are sent to a briefcase-size relay station that intelligence agencies can set up miles away from the target.                           
The radio frequency technology has helped solve one of the biggest problems facing American intelligence agencies for years: getting into computers that adversaries, and some American partners, have tried to make impervious to spying or cyberattack. In most cases, the radio frequency hardware must be physically inserted by a spy, a manufacturer or an unwitting user.
The N.S.A. calls its efforts more an act of “active defense” against foreign cyberattacks than a tool to go on the offensive. But when Chinese attackers place similar software on the computer systems of American companies or government agencies, American officials have protested, often at the presidential level.
Among the most frequent targets of the N.S.A. and its Pentagon partner, United States Cyber Command, have been units of the Chinese Army, which the United States has accused of launching regular digital probes and attacks on American industrial and military targets, usually to steal secrets or intellectual property. But the program, code-named Quantum, has also been successful in inserting software into Russian military networks and systems used by the Mexican police and drug cartels, trade institutions inside the European Union, and sometime partners against terrorism like Saudi Arabia, India and Pakistan, according to officials and an N.S.A. map that indicates sites of what the agency calls “computer network exploitation.”
“What’s new here is the scale and the sophistication of the intelligence agency’s ability to get into computers and networks to which no one has ever had access before,” said James Andrew Lewis, the cybersecurity expert at the Center for Strategic and International Studies in Washington. “Some of these capabilities have been around for a while, but the combination of learning how to penetrate systems to insert software and learning how to do that using radio frequencies has given the U.S. a window it’s never had before.”
 
By David E. Sanger and Thom Shanker
Source and read more:
http://www.nytimes.com/2014/01/15/us/nsa-effort-pries-open-computers-not-connected-to-internet.html?_r=0

Google and Apple Surge Ahead in Race for Patents

Google, Apple, and other tech outfits continued their patent spree in 2013, surging past industrial giants like GM and General Electric in total number of patents awarded.
Each year, research outfit IFI compiles a list of the companies that receive the most patents from the U.S. Patent and Trademark Office, and in the year just ended, Google nearly broke the top 10, jumping from 21st place to 11th place in the rankings. Apple jumped from 22nd to 13th.
Google has long called for reforms to the patent system, but like any large company, it has come to realize that it needs a vast patent portfolio, if only to defend itself against attack. It’s already under attack from Apple, Oracle, and others on the software front, and as it moves even further into hardware — a move exemplified by this week’s acquisition of home automation company Nest — the company will have to defend its gear as well.
In fact, Nest has already faced patent infringement suits from Honeywell and First Alert, and as the company expands its product line, it will surely face more.
As Google and Apple climbed the charts, most of the top 10 stood still. IBM once again took first place in the IFI patent ranking, and Korean electronics giants Samsung and LG held onto 2nd and 10th place respectively. But American chip maker Qualcomm jumped into the upper tier. Hard at work on computer chips that mimic the structure of the human brain, the company jumped from 19th place in the rankings to 9th place, staying ahead of Google and passing General Electric, GM, and HP to take its place in the top ten.
Here’s the full top 10:
  1. International Business Machines Corp
  2. Samsung Electronics Co Ltd
  3. Canon K.K.
  4. Sony Corp
  5. Microsoft Corp
  6. Panasonic Corp
  7. Toshiba Corp
  8. Hon Hai Precision Industry Co Ltd
  9. QUALCOMM Inc
  10. LG Electronics Inc
While patents aren’t necessarily a good indicator of company innovation, they are vitally important in the ongoing tussles between tech companies large and small — not to mention the defense against patent trolls, outfits that exist solely to make money from intellectual property lawsuits. From Apple vs. Samsung to Yahoo vs. Facebook to just about everybody vs. Google, controlling patents has never been more important.
But that could be changing. Just this week, Newegg scored a major victory over frivolous patent suits when the U.S. Supreme Court decided not to hear an appeal from Soverain Software in its case against the e-commerce outfit. A circuit court had already ruled that Soverain’s electronic shopping cart patents should never have been awarded in the first place.
As it stands, however, the patent arms race is still on.

By Klint Finley
Source:
http://www.wired.com/wiredenterprise/2014/01/google-apple-patents/

Scholar Wins Court Battle to Purge Name From U.S. No-Fly List

A former Stanford University student who sued the government over her placement on a U.S. government no-fly list is not a threat to national security and was the victim of a bureaucratic “mistake,” a federal judge ruled today.
The decision makes Rahinah Ibrahim, 48, the first person to successfully challenge placement on a government watch list.
Ibrahim’s saga began in 2005 when she was a visiting doctoral student in architecture and design from Malaysia. On her way to Kona, Hawaii to present a paper on affordable housing, Ibrahim was told she was on a watch list, detained, handcuffed and questioned for two hours at San Francisco International Airport.
The month before, the FBI had visited the woman at her Stanford apartment, inquiring whether she had any connections to the Malaysian terror group Jemaah Islamiyah, according to the woman’s videotaped deposition played in open court.
U.S. District Judge William Alsup ordered the government to either purge her name from the list, or certify that it has already been removed. Federal watch lists contain some 875,000 names. (The judge is set to unseal a larger judicial order that discloses whether the woman is indeed currently on a watch list. However, he gave the government until April 15 to ask a federal appeals court to bar its publication.)
Ibrahim was not seeking monetary damages. She wanted to clear her name, her attorney, Elizabeth Marie Pipkin said in court last month.
Pipkin and a team of lawyers handled the case pro bono, spending $300,000 in court costs and racking up $3.8 million in legal fees covering some 11,000 hours of work, she said. “Why in the United States of America does it cost that much to clear a woman’s name?” she asked in a telephone interview.
The woman, who is now a professor in Malaysia, eventually was allowed to leave the United States but has been denied a return visit, even to her own civil trial.
The trial last month was shrouded in extraordinary secrecy, with closed court hearings and non-public classified exhibits. Judge Alsup today issued his full judgment under seal, but made public an abbreviated version that we’re allowed to know about.

By David Kravets
Source:
http://www.wired.com/threatlevel/2014/01/no-fly-ruling/

Transaction costs, privacy and trust: The laudable goals and ultimate failure of notice and choice to respect privacy online



Abstract
The goal of this paper is to outline the laudable goals and ultimate failure of notice and choice to respect privacy online and suggest an alternative framework to manage and research privacy. This paper suggests that the online environment is not conducive to relying on explicit agreements to respect privacy. Current privacy concerns online are framed as a temporary market failure resolvable through two options: (a) ameliorating frictions within the current notice and choice governance structure or (b) focusing on brand name and reputation outside the current notice and choice mechanism. The shift from focusing on notice and choice governing simple market exchanges to credible contracting, where identity, repeated transactions, and trust govern the information exchange, rewards firms that build a reputation around respecting privacy expectations. Importantly for firms, the arguments herein shift the firm’s responsibility from adequate notice to identifying and managing the privacy norms and expectations within a specific context.

By Kirsten Martin
Source and read the full paper:
http://firstmonday.org/ojs/index.php/fm/article/view/4838/3802#author

Reddit’s 2013 Performance Was Incredible, But Questionable

Something’s fishy with Reddit’s latest user metrics. In closing out 2013, the impressively vulgar social platform shared surprising full year stats. The highlights include:
  • 56 billion pageviews
  • 731 million uniques
Holy mother… really?! *High fives all around* What’s even more noteworthy is these numbers are up quite a bit from 2012. The year before, Reddit served:
  • 37 billion pageviews
  • 400 million uniques
Effectively, in 2013, Reddit saw a 51% increase in pageviews and an 83% increase in uniques year-over-year. That’s HUGE, right?
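Those percentages do follow from the raw figures:

```python
# Quick check of the year-over-year growth figures quoted above.
pageviews_2012, pageviews_2013 = 37e9, 56e9
uniques_2012, uniques_2013 = 400e6, 731e6

pv = (pageviews_2013 - pageviews_2012) / pageviews_2012
uq = (uniques_2013 - uniques_2012) / uniques_2012
print(f"pageviews +{pv:.0%}, uniques +{uq:.0%}")  # pageviews +51%, uniques +83%
```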
But wait…
We decided to look at our own numbers within the Shareaholic network, which supports 200,000+ websites reaching 250+ million people each month (last month we tracked 322 million people across the web), in order to see how this “growth” translated into referrals to publishers:
[Chart: Reddit Referrals Report (data), January 2014]
Here’s what we have (a quick check of the math follows the list):
  • Reddit’s share of overall visits to websites dropped 35.96% year-over-year (comparing Dec’12 – Dec’13)
  • During December 2012, sites saw 0.33% of their overall traffic come from Reddit
  • Last month (Dec’13), sites saw only 0.21% of their overall traffic come from Reddit
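Recomputing from the rounded shares above lands in the same ballpark as the 35.96% figure, which presumably comes from unrounded data:

```python
# Year-over-year drop in Reddit's share of site visits, from rounded shares.
dec_2012_share, dec_2013_share = 0.33, 0.21
drop = (dec_2012_share - dec_2013_share) / dec_2012_share
print(f"{drop:.2%}")  # 36.36%, close to the 35.96% computed from unrounded data
```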
Our data, collected over 13 months (Dec 2012 – Dec 2013), shows how referral traffic numbers trended. Now, the only question worth asking is: Reddit, WTF?!
[Chart: Reddit Referrals Report, January 2014]
Naturally, we were shocked when we dug up this data.
While we cannot verify or refute Reddit’s user metrics, what our data does show is a drop in referrals to publishers from Reddit. Reddit seems to be hoarding its users and keeping all of its traffic within the social news site; our data, essentially, suggests Reddit is referring less traffic out to sites, which is strange behavior for a social bookmarking site.
And just ’cause…
[Chart: Reddit Referrals Report (snarky version), January 2014]
Although activity on Reddit.com may have increased dramatically from 2012 to 2013, sites around the web, sadly, aren’t seeing much of that traffic.
If Reddit is sending fewer visits to sites, web stores and publishers, where else should brands, blogs and businesses turn for social traffic? Our money’s on Facebook, Pinterest and Twitter. At least, that’s what the data would suggest.
Ok, now it’s your turn to share your thoughts.

By Danny Wong
Source:
https://blog.shareaholic.com/reddit-wtf/

Berkman Center for Internet & Society Summer Internship Program 2014

The Berkman Center for Internet & Society at Harvard University is preparing to welcome another stellar crew of students to join us as summer interns!

We are looking to engage a diverse group of students who are interested in studying -- and changing the world through -- the Internet and new communications technologies; who are driven, funny, and kind; and who would like to join our amazing community in Cambridge this summer for 10 weeks of shared research and exchange.

Information about the summer program, eligibility, and links to the application procedures can be found below and at http://cyber.law.harvard.edu/getinvolved/internships_summer.

The application deadline for all students for Summer 2014 is Sunday, February 16, 2014 at 11:59 p.m. ET.


Please share word of the opportunity with great candidates, and help us continue developing our shared network of movers and shakers working to advance scholarship with impact.

When Misogynist Trolls Make Journalism Miserable for Women

How many talented women dropped out of the blogosphere rather than deal with hateful Internet feedback?

Online threats against women are the subject of a lengthy Pacific Standard article by Amanda Hess, who argues that gendered harassment has severe implications for women’s status on the Internet and their place in the digital era.
"Threats of rape, death, and stalking can overpower our emotional bandwidth, take up our time, and cost us money through legal fees, online protection services, and missed wages," she writes. "I’ve spent countless hours ... logging the online activity of one particularly committed cyberstalker ... And as the Internet becomes increasingly central to the human experience, the ability of women to live and work freely online will be shaped, and too often limited, by the technology companies that host these threats, the constellation of local and federal law enforcement officers who investigate them, and the popular commentators who dismiss them—all arenas that remain dominated by men, many of whom have little personal understanding of what women face online every day."
This subject deserves more attention, and I'd like to focus here on a small part of it. For years, I've been convinced that gendered nastiness and harassment was one factor responsible for the emergence of a blogosphere so disproportionately inhabited by men. And it's the biggest factor that changed my mind about how heavy-handed bloggers and editors ought to be about moderating comments sections.
By Conor Friedersdorf
Source:
http://www.theatlantic.com/politics/archive/2014/01/when-misogynist-trolls-make-journalism-miserable-for-women/282862/

Navigating the Brain's Mysteries

Source and read:
http://viewer.zmags.com/publication/e9a5c23c?page=26#/e9a5c23c/26

Assessing Others: Evaluating the Expertise of Humans and Computer Algorithms

How do we come to recognize expertise in another person and integrate new information with our prior assessments of that person's ability? The brain mechanisms underlying these sorts of evaluations—which are relevant to how we make decisions ranging from whom to hire, whom to marry, and whom to elect to Congress—are the subject of a new study by a team of neuroscientists at the California Institute of Technology (Caltech).
In the study, published in the journal Neuron, Antonio Rangel, Bing Professor of Neuroscience, Behavioral Biology, and Economics, and his associates used functional magnetic resonance imaging (fMRI) to monitor the brain activity of volunteers as they moved through a particular task. Specifically, the subjects were asked to observe the shifting value of a hypothetical financial asset and make predictions about whether it would go up or down. Simultaneously, the subjects interacted with an "expert" who was also making predictions.
Half the time, subjects were shown a photo of a person on their computer screen and told that they were observing that person's predictions. The other half of the time, the subjects were told they were observing predictions from a computer algorithm, and instead of a face, an abstract logo appeared on their screen. However, in every case, the subjects were interacting with a computer algorithm—one programmed to make correct predictions 30, 40, 60, or 70 percent of the time.
Subjects' trust in the expertise of agents, whether "human" or not, was measured by how often the subjects bet on the agents' predictions, as well as by the changes in those bets over time as the subjects observed more of the agents' predictions and their consequent accuracy.
This trust, the researchers found, turned out to be strongly linked to the accuracy of the subjects' own predictions of the ups and downs of the asset's value.
"We often speculate on what we would do in a similar situation when we are observing others—what would I do if I were in their shoes?" explains Erie D. Boorman, formerly a postdoctoral fellow at Caltech and now a Sir Henry Wellcome Research Fellow at the Centre for FMRI of the Brain at the University of Oxford, and lead author on the study. "A growing literature suggests that we do this automatically, perhaps even unconsciously."
Indeed, the researchers found that subjects increasingly sided with both "human" agents and computer algorithms when the agents' predictions matched their own. Yet this effect was stronger for "human" agents than for algorithms.
This asymmetry—between the value placed by the subjects on (presumably) human agents and on computer algorithms—was present both when the agents were right and when they were wrong, but it depended on whether or not the agents' predictions matched the subjects'. When the agents were correct, subjects were more inclined to trust the human than the algorithm in the future when their predictions matched the subjects' predictions. When they were wrong, human experts were easily and often "forgiven" for their blunders when the subject made the same error. But this "benefit of the doubt" vote, as Boorman calls it, did not extend to computer algorithms. In fact, when computer algorithms made inaccurate predictions, the subjects appeared to dismiss the value of the algorithm's future predictions, regardless of whether or not the subject agreed with its predictions.
Since the sequence of predictions offered by "human" and algorithm agents was perfectly matched across different test subjects, this finding shows that the mere suggestion that we are observing a human or a computer leads to key differences in how and what we learn about them.
A major motivation for this study was to tease out the difference between two types of learning: what Rangel calls "reward learning" and "attribute learning." "Computationally," says Boorman, "these kinds of learning can be described in a very similar way: We have a prediction, and when we observe an outcome, we can update that prediction."
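The predict-observe-update loop Boorman describes is often modeled with a simple delta rule. A minimal sketch follows; the learning rate and outcome sequence are illustrative, not the study's fitted parameters:

```python
# Minimal delta-rule sketch of the predict-observe-update loop described
# above. The learning rate and outcomes are illustrative, not fitted values.
def update(belief, outcome, learning_rate=0.2):
    """Nudge the belief toward the outcome by a fraction of the prediction error."""
    return belief + learning_rate * (outcome - belief)

belief = 0.5  # prior estimate of the agent's accuracy
for outcome in [1, 1, 0, 1, 1, 1]:  # 1 = agent predicted correctly, 0 = wrong
    belief = update(belief, outcome)
    print(f"observed {outcome} -> belief {belief:.3f}")
```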
Reward learning, in which test subjects are given money or other valued goods in response to their own successful predictions, has been studied extensively. Social learning—specifically about the attributes of others (or so-called attribute learning)—is a newer topic of interest for neuroscientists. In reward learning, the subject learns how much reward they can obtain, whereas in attribute learning, the subject learns about some characteristic of other people.
This self/other distinction shows up in the subjects' brain activity, as measured by fMRI during the task. Reward learning, says Boorman, "has been closely correlated with the firing rate of neurons that release dopamine"—a neurotransmitter involved in reward-motivated behavior—and brain regions to which they project, such as the striatum and ventromedial prefrontal cortex. Boorman and colleagues replicated previous studies in showing that this reward system made and updated predictions about subjects' own financial reward. Yet during attribute learning, another network in the brain—consisting of the medial prefrontal cortex, anterior cingulate gyrus, and temporal parietal junction, which are thought to be a critical part of the mentalizing network that allows us to understand the state of mind of others—also made and updated predictions, but about the expertise of people and algorithms rather than their own profit.
The differences in fMRIs between assessments of human and nonhuman agents were subtler. "The same brain regions were involved in assessing both human and nonhuman agents," says Boorman, "but they were used differently."
"Specifically, two brain regions in the prefrontal cortex—the lateral orbitofrontal cortex and medial prefrontal cortex—were used to update subjects' beliefs about the expertise of both humans and algorithms," Boorman explains. "These regions show what we call a 'belief update signal.'" This update signal was stronger when subjects agreed with the "human" agents than with the algorithm agents and they were correct. It was also stronger when they disagreed with the computer algorithms than when they disagreed with the "human" agents and they were incorrect. This finding shows that these brain regions are active when assigning credit or blame to others.
"The kind of learning strategies people use to judge others based on their performance has important implications when it comes to electing leaders, assessing students, choosing role models, judging defendents, and so on," Boorman notes. Knowing how this process happens in the brain, says Rangel, "may help us understand to what extent individual differences in our ability to assess the competency of others can be traced back to the functioning of specific brain regions."
The study, "The Behavioral and Neural Mechanisms Underlying the Tracking of Expertise," was also coauthored by John P. O'Doherty, professor of psychology and director of the Caltech Brain Imaging Center, and Ralph Adolphs, Bren Professor of Psychology and Neuroscience and professor of biology. The research was supported by the National Science Foundation, the National Institutes of Health, the Betty and Gordon Moore Foundation, the Lipper Foundation, and the Wellcome Trust.

By Cynthia Eller
Source:
https://www.caltech.edu/content/assessing-others-evaluating-expertise-humans-and-computer-algorithms

How Netflix Reverse Engineered Hollywood

To understand how people look for movies, the video service created 76,897 micro-genres. We took the genre descriptions, broke them down to their key words, … and built our own new-genre generator.
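In that spirit, a toy generator is easy to sketch: draw one phrase per slot and join them. The slot structure and word lists below are invented for illustration; they are not Netflix's actual vocabulary:

```python
# Toy micro-genre generator in the spirit of the article. The slots and word
# lists are invented for illustration, not Netflix's actual vocabulary.
import random

REGIONS = ["British", "Scandinavian", "Korean"]
ADJECTIVES = ["Critically Acclaimed", "Gritty", "Feel-Good", "Cerebral"]
GENRES = ["Crime Dramas", "Sci-Fi Thrillers", "Romantic Comedies"]
SUFFIXES = ["From the 1980s", "With a Strong Female Lead", "Based on Real Life"]

def micro_genre():
    """Assemble one hypothetical micro-genre from the four slots."""
    return " ".join(random.choice(slot)
                    for slot in (REGIONS, ADJECTIVES, GENRES, SUFFIXES))

for _ in range(3):
    print(micro_genre())
```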

By Alexis C. Madrigal
Source and more:
http://www.theatlantic.com/technology/archive/2014/01/how-netflix-reverse-engineered-hollywood/282679/

Visit to the World's Fair of 2014

The New York World's Fair of 1964 is dedicated to "Peace Through Understanding." Its glimpses of the world of tomorrow rule out thermonuclear warfare. And why not? If a thermonuclear war takes place, the future will not be worth discussing. So let the missiles slumber eternally on their pads and let us observe what may come in the nonatomized world of the future.
What is to come, through the fair's eyes at least, is wonderful. The direction in which man is traveling is viewed with buoyant hope, nowhere more so than at the General Electric pavilion. There the audience whirls through four scenes, each populated by cheerful, lifelike dummies that move and talk with a facility that, inside of a minute and a half, convinces you they are alive.

The scenes, set in or about 1900, 1920, 1940 and 1960, show the advances of electrical appliances and the changes they are bringing to living. I enjoyed it hugely and only regretted that they had not carried the scenes into the future. What will life be like, say, in 2014 A.D., 50 years from now? What will the World's Fair of 2014 be like?

I don't know, but I can guess.

By Isaac Asimov
Source and read:
http://www.nytimes.com/books/97/03/23/lifetimes/asi-v-fair.html

Tweeting "Happy New Year" around the world

See:
http://twitter.github.io/interactive/newyear2014/

Skype’s Twitter, Facebook, and blog hacked by Syrian Electronic Army demanding an end to spying


We’re not even through the first day of 2014 and cyber attacks are back again. Earlier today, some person(s) breached Skype’s security and hacked its Twitter account, Facebook page, and blog. The group claiming responsibility is the Syrian Electronic Army (SEA). Its message: end spying on the public.
Update: Minutes after we published this post, it appears that Skype has regained control and has deleted the hacker messages from Twitter and Facebook.
The SEA’s official Twitter account has also repeated the messages that were posted on Skype’s social media profiles and blog.

Today’s events are most likely linked to the US National Security Agency surveillance programs uncovered by former contractor Edward Snowden. Many tech companies, including Skype’s parent company, Microsoft, have taken steps to refute claims that they have been cooperating with the government.
Documents from the NSA’s Prism program apparently indicate that the secretive agency could spy on Skype audio and video calls thanks to backdoor access — which was contrary to Skype’s insistence that its service could not be wiretapped. Interestingly, in October, it was reported that the messaging and calling service was under investigation in Luxembourg over its link to the NSA. A month later, the country’s data protection authority cleared both Microsoft and Skype of any violations.

By Ken Yeung
Source and read more:
http://thenextweb.com/microsoft/2014/01/01/skypes-twitter-account-blog-get-hacked-sea-demanding-end-spying/?fromcat=all#!q5MZX