ITL 537: Innovative Approaches in the ICT Sector, Second-Week Guest: TTNet CEO Abdullah Orkun Kaya


The second-week guest of the course "ITL 537: Innovative Approaches in the ICT Sector" was Mr. Abdullah Orkun Kaya, General Manager of TTNet A.Ş. Guided by questions from the course participants, we spent a highly useful, productive, and enjoyable evening discussing TTNet's innovative solutions and products; major projects in which the company achieved world firsts; what companies and regulators each need to do to foster innovation; the problems occasionally encountered while developing innovative products, along with proposed solutions; concrete examples of regulation hindering innovation or the development of value-added services; the convergence of regulatory bodies; net neutrality; competition with OTT players; and coverage and Wi-Fi.


The 3 biggest challenges new FCC chairman Tom Wheeler will face

FCC seal feature

After winning confirmation from the Senate, Tom Wheeler will likely be sworn in as Federal Communications Commission Chairman in the next few days, something many observers say can’t happen fast enough. Since Julius Genachowski stepped down in March, the commission has been in a kind of limbo that it could ill afford to maintain. The FCC faces not only some big policy decisions but big threats to its regulatory authority as well. How those battles play out will have a big impact not only on carriers and consumers, but also on any company that makes its business off the internet.
The FCC under acting chairwoman Mignon Clyburn did take action on many pressing issues facing the country, such as brokering an interoperability deal between AT&T and regional carriers and its largely perfunctory approval of the three-way tie-up between Sprint, SoftBank and Clearwire. But some major M&A deals and policy debates were still left on the table awaiting the next chairman’s attention. On Tuesday, that wait finally ended when U.S. Sen. Ted Cruz (R-Texas) lifted his hold on the nomination (after getting assurances from Wheeler that he would take it easy on campaign ad disclosure requirements, according to Variety).

While Wheeler will have a lot on his plate over his five-year term, we have identified what we think are the three biggest issues facing Wheeler during his tenure.

The incentive auction

The FCC hasn’t auctioned any major hunks of airwaves since the 700 MHz auction of 2008, which became the basis for the first nationwide rollout of LTE. In 2014, it hopes — perhaps wishfully — to open up the spectrum spigot again, targeting broadcast airwaves in the 600 MHz band for future mobile broadband services.
This time around, the FCC doesn’t want to evict TV broadcasters from their spectrum. So it’s proposing an extremely complex mechanism called an incentive auction designed to match up the airwaves broadcasters are willing to sell with the prices carriers are willing to pay. Broadcasters would submit the equivalent of reverse bids to the FCC, which would then repackage their airwaves into chunks usable by the mobile industry. Operators would then bid on those frequency bundles.
If that already sounds ridiculously complex, you don’t know the half of it. There’s no guarantee the broadcasters will part with their spectrum, especially in urban markets where it’s most valuable, and there’s no guarantee carriers will take a shine to the repackaged bands or pay the prices the broadcasters are asking. What’s more, it’s not just the broadcasters, carriers and government who have a stake in its outcome. Everyone from consumer advocacy groups to labor unions to Google and Microsoft are involved.
There are still many open questions, such as whether regional carriers and smaller nationwide operators like T-Mobile will get a bidding advantage in the auction, and whether a portion of those airwaves will be set aside for unlicensed white-space broadband. Depending on how Wheeler and the FCC handle it, this auction could set a new precedent for the redistribution of public airwaves. It could also be a disaster.
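The basic matching idea behind the incentive auction can be illustrated with a toy two-sided clearing rule. The station names, carrier names, prices, and the simple split-the-difference pricing below are all invented for illustration; the FCC's actual mechanism is vastly more elaborate:

```python
# Toy sketch of a two-sided "incentive auction": broadcasters submit
# reverse bids (the least they would accept to give up spectrum),
# carriers submit forward bids (the most they would pay), and the
# regulator clears a trade wherever a forward bid covers a reverse bid.
# All names and dollar figures are hypothetical.

def clear_auction(reverse_bids, forward_bids):
    """Match the cheapest sellers with the highest-paying buyers."""
    sellers = sorted(reverse_bids.items(), key=lambda kv: kv[1])
    buyers = sorted(forward_bids.items(), key=lambda kv: kv[1], reverse=True)
    trades = []
    for (station, ask), (carrier, bid) in zip(sellers, buyers):
        if bid >= ask:  # the carrier's bid covers the broadcaster's ask
            trades.append((station, carrier, (ask + bid) / 2))
    return trades

reverse_bids = {"WAAA": 40, "WBBB": 90, "WCCC": 55}               # $M asks
forward_bids = {"CarrierX": 100, "CarrierY": 60, "CarrierZ": 30}  # $M bids

for station, carrier, price in clear_auction(reverse_bids, forward_bids):
    print(f"{station} -> {carrier} at ${price}M")
```

Even this toy version shows the fragility the article describes: the station holding out for $90M never trades, just as urban broadcasters may simply decline to sell.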

Net neutrality

Carriers can’t discriminate against different types of traffic on their networks — that’s the basic premise of net neutrality. It prevents Comcast from slowing your Netflix video stream to a crawl and AT&T from blocking Skype to your mobile phone. The FCC established its network neutrality guidelines last decade, but since then ISPs and carriers have been chipping away at them in court.

In 2010 Comcast won a case against the FCC, in which the court determined that the commission could regulate ISP pipes but not the bits flowing over them. Verizon is now in court arguing it should be allowed to charge content providers to prioritize their traffic on its networks. The FCC is a bit hamstrung because under Genachowski it chose not to go before Congress to ask for more authority to regulate network neutrality. That decision could cost the commission, internet content providers and consumers dearly. Here’s how Stacey explained those ramifications in September:
If the courts decide the FCC doesn’t have the legal authority to enforce the network neutrality rules, it not only could gut the rules, but it also gives ISPs a free pass to start making decisions about the information aspects of their service — and in today’s non-competitive broadband environment — that could mean throttling Netflix or charging Google more money to deliver a clean YouTube stream. It also neuters the agency moving forward when all content will flow as information over broadband pipes — from TV to your doctor visits.
Wheeler may not have been aboard when the guidelines were written, but he’ll have to deal with the fallout if net neutrality is overturned. There’s also a question of how hard Wheeler will fight to preserve the FCC’s net neutrality powers. In his career Wheeler has been chief lobbyist for the cable industry and the mobile industry, both of which want to see the rules overturned.

The IP transition

In the IT world, moving away from a legacy technology to an all-IP environment seems like a good thing, but in the communications world such a transition could have nasty repercussions for consumers.
As Public Knowledge SVP Harold Feld pointed out, our entire system of communications regulation hinges on old time division multiplexing (TDM) technologies and the copper wiring forming the backbone of telephone networks. Universal service, ensuring rural access to communications services and the interconnection of networks — so a Comcast customer can call a Verizon telephone — are all defined by those regulations. Telephone companies face those regulations. IP communications companies do not.
AT&T has already announced plans to do away with its copper networks and transform itself into an all-IP provider. But the implications of the IP transition have manifested themselves first in Verizon’s territory. When superstorm Sandy destroyed the copper infrastructure on Fire Island, New York, Verizon didn’t replace it, instead offering customers more expensive wireless connections to handle their voice and broadband services.
Wheeler has a sticky policy debate ahead of him. It’s not just a question of how regulations evolve as carriers enter the IP age. It’s a question of whether all IP communications companies — i.e. half of Silicon Valley — should be regulated like carriers.

By Kevin Fitchard

Why no FuelBand Android app? Quality first and scale second, says Nike.


It’s been twenty months since we went hands-on with Nike’s first FuelBand, and despite assumptions at the time that the accompanying FuelBand app would land on Android in the not-too-distant future, the activity-tracking wristband remains compatible only with iOS, even as we approach the launch of the brand-new FuelBand next week.
For many, this glaring omission in Nike’s technological armory is astounding given that, well, Android represents somewhere in the region of 80% of the smartphone market. Even acknowledging that a significant chunk of the Android market constitutes lower-end devices, surely it’s commercial suicide to ignore such a dominant platform for so long?
During a chat with Stefan Olander, VP of Digital Sport at Nike, earlier today, we were keen for the company to elaborate on its previous statements around being committed to iOS and the Web. For example, it has said before:
“To deliver the best experience for all Nike+ FuelBand users, we are focusing on the FuelBand experience across iOS and the Web, where you can sync your activity, set new goals, and connect with friends. At this time, we are not working on an Android version of the mobile app.”
Such statements kind of sidestep the specific issue with Android, so we sought some clarification on why it would choose to ignore Android for so long. And the answer probably isn’t all that surprising.

So…why no Android?

“Bluetooth LE (low energy) is relatively new, and on the Android operating system there are so many devices running different versions of the OS. It is not a stable technology,” says Olander. “You can’t use Bluetooth LE across the entire stack.”
The last thing Nike wants is for someone to buy the new FuelBand only to discover they have an old HTC Wildfire with Android 2.3 installed, something that clearly won’t cater for the automatic syncing enabled by the new Bluetooth. However, surely that doesn’t explain why it hasn’t been included thus far, long before the FuelBand went all Bluetooth LE crazy?
“With the original FuelBand, you still had the challenge of catering for 200 different handsets and however-many versions of the operating system,” continues Olander.
“As we’re looking forward, for us it’s really about making sure we have a great experience. We have nothing against Android. Our running app [Nike+ Running] is on both iOS and Android, and we have learned a lot from that – at the end of the day, you really do get reach. But for us it’s quality first, scale second. If we can’t guarantee quality to a number of our users, we’ll wait until the platform is ready. And right now, we don’t believe the effort is worth the return, for Bluetooth LE. And we want to do it really, really well for iOS.”
It’s the age-old problem for Android – fragmentation. It’s why many apps land on iOS before Android, though that often applies only to smaller companies without the resources to dedicate to the problem. There are quite a few Android devices that are more than capable of coping with anything the FuelBand throws at them, and with Android 4.3 continuing to roll out (albeit very slowly), their number will only increase.
There is a danger, of course, that someone will buy the new Nike+ FuelBand SE completely oblivious to the device and OS requirements for it to work with the wristband, and that could certainly cause problems when someone is spending $149 on it. But it still doesn’t really explain why it hasn’t appeared for the old version of the FuelBand, when many other similar companies with similar devices – Withings, Jawbone and Fitbit – have managed to cater for Android.
Oh, and what about that much rumored ‘special agreement’ with Apple to keep it iOS-only? Well, Olander poured cold water on that too.
“No no, there’s none of that,” he says. “When it makes sense for consumers, and so they get a quality experience, we’ll do that [launch on Android]. We have nothing against Android, we’re not prohibited from doing it, we just want to make sure that when we do it, it works well.”

Going global

On a similar note, I also asked Olander about Nike’s international endeavors; after all, the FuelBand is rather limited in its availability. On November 6, when the new wristband goes to market, it will hit the US, Canada, and the UK as before, but will also be made available for the first time in France, Germany and Japan. That’s still only six markets, though.
“For us, it’s really about supporting the product when it launches, so we need great presentation at retail, and we need to adapt it into the local language,” explains Olander. “Then there’s customer service, people have questions you know, ‘how do I set this up?’, and having the infrastructure to be able to service that is a pretty big deal. We want to make sure that’s really all working before we expand out into other countries.”
According to Nike, quality is everything. If it doesn’t feel it can put something to market and offer the best experience, then it won’t.
So, the FuelBand will land in more markets at some point, and Android likely will happen one day too, but there are no immediate launch plans.
And what about Windows Phone? “Maybe, one day…when they have scaled things up a bit,” adds Olander.

By Paul Sawers

Android’s Next Targets: Wearables, TVs, Low-End Phones

The launch of Google Inc.’s Android KitKat, the next version of the most widely used operating software for smartphones and tablets, is drawing near. Google executives haven’t announced a release date but people who have been briefed on KitKat say that it is coming soon.
There have been several reports about KitKat’s likely features based on leaked screenshots and leaked photos of the Nexus 5 smartphone that will be the first device to show off those features. But we’ve reviewed a confidential file that Google shared with companies that make Android devices to explain the most important new features. (A Google spokeswoman didn’t respond to requests for comment.) Here’s what we know.
One Android to rule them all?
With KitKat, Google has worked even harder to address one of Android’s biggest disadvantages versus Apple: less than half of Android devices are running the latest version of the software, called “Jelly Bean,” which was released in summer 2012. Nearly two-thirds of Apple devices already are running the latest version of its iOS software, released last month, the company has said.
This Android fragmentation makes it tougher for Android app developers to run the latest versions of their services across all Android devices. Some earlier releases of Android were better suited to higher-end devices that have more memory capacity for all the newest features. As a result, cheaper phone makers sometimes ended up using older versions of Android.
The document about KitKat that we reviewed, marked “confidential,” makes clear that Google wants its new software to work well on low-end phones in addition to the more expensive Samsung Galaxy and HTC devices.
KitKat “optimizes memory use in every major component” and provides “tools to help developers create memory-efficient applications” for “entry-level devices,” such as those that have 512 megabytes of memory, according to the document.
Google has long sought ways to help make the newest versions of Android compatible with low-cost devices, the kind that are proliferating in developing countries with the help of manufacturers like Huawei, ZTE, and others. This time Google has been more proactive with makers of lower-memory devices, said people who have been briefed on that matter.
Questions remain about whether the effort will bear fruit. In many markets, wireless carriers don’t do a good job of pushing software upgrades to existing Android devices that already have been sold.
The improvements for low-memory devices also could help the software to better power wearable-computing devices.
Wearing it proudly
The KitKat release shows that Google is preparing for the rise of wearable-computing devices. According to the confidential document, KitKat is expected to support three new types of sensors: geomagnetic rotation vector, step detector and step counter.
These features are likely geared for a forthcoming Android-powered smartwatch made by Google and possibly the company’s head-mounted Google Glass, as well as non-Google devices. Android smartphone apps that track people’s fitness also could get a boost from the new features as more manufacturers pack motion sensors into devices.
There is another potential benefit to Android from supporting these kinds of sensors: Google will be able to tell how far someone walked based on the steps they took. That could come in handy as Google tries to map more indoor locations such as malls and airports, where GPS and WiFi sensors don’t always do a good enough job of pinpointing exactly where a smartphone user is located. It also could improve the walking directions that people use on Google Maps.
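The step-to-distance idea above amounts to simple dead reckoning: steps times an assumed stride length gives distance, and a heading turns that into a displacement on an indoor map. The stride length and all numbers below are illustrative assumptions, not Google's actual method:

```python
import math

# Back-of-the-envelope dead reckoning from a step counter: a step count
# at an assumed stride length yields distance walked; combined with a
# heading (e.g. derived from the new geomagnetic rotation vector
# sensor), it yields an (east, north) displacement. The 0.75 m stride
# is an assumed average, not a measured value.

def displacement_m(steps, heading_deg, stride_m=0.75):
    """Return (east, north) displacement in meters."""
    distance = steps * stride_m
    heading = math.radians(heading_deg)
    return (distance * math.sin(heading), distance * math.cos(heading))

east, north = displacement_m(100, 90)   # 100 steps heading due east
print(round(east, 1), round(north, 1))
```

In practice step detection is noisy and stride length varies per person, which is why fused signals from GPS, Wi-Fi, and motion sensors together beat any single source indoors.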
Another crack at NFC
KitKat will allow developers to create services that let phones “emulate” physical cards so people can make payments, earn loyalty rewards, and enter secure buildings and public-transit systems, according to the confidential document. But it’s unclear whether the change will spur growth in the area.
Google has been a huge proponent of Near-Field Communication technology, which allows phones to exchange data with other devices over distances of a few inches. The technology enables people to pay for things at stores with their phones, among other uses. But the technology hasn’t gotten much adoption from app developers, nor has Apple embedded it in the iPhone.
On Android, adoption was slowed in part because developers couldn’t create apps that emulated what physical cards do in the real world without first getting permission from wireless carriers, says Einar Rosenberg, chief executive of Creating Revolutions, which makes NFC-based apps. That’s because carriers control a part of the phone called the “secure element” where a card owner’s personal information is stored.
According to the KitKat marketing materials, developers will be able to emulate cards without keeping people’s information stored in the secure element.
The biggest question mark about the feature is where the personal information will be stored without running the risk of getting manipulated or stolen by hackers, Mr. Rosenberg says.
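The shift described above means an ordinary app, rather than a carrier-controlled secure element, answers the card reader's commands. A minimal sketch of that request/response shape follows; the SELECT header (00 A4 04 00) and status words 90 00 (success) and 6A 82 (file not found) follow ISO 7816 conventions, while everything else is simplified for illustration:

```python
# Toy host-card-emulation responder: the phone-side software itself
# answers the reader's APDU commands instead of forwarding them to a
# hardware secure element. Real card emulation involves registered
# application IDs, cryptography, and secure storage; none of that is
# modeled here.

SELECT_BY_NAME = bytes.fromhex("00A40400")  # ISO 7816 SELECT header

def respond(apdu: bytes) -> bytes:
    """Emulated card: acknowledge a SELECT, reject anything else."""
    if apdu.startswith(SELECT_BY_NAME):
        return bytes.fromhex("9000")   # status word: success
    return bytes.fromhex("6A82")       # status word: file not found

print(respond(bytes.fromhex("00A40400")).hex())
```

Mr. Rosenberg's storage question maps directly onto this sketch: whatever data `respond` consults now lives in ordinary app memory, which is exactly the attack surface he worries about.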
Control the TV
Google wants your Android device to be a remote control. The next version of Android lets developers build apps that control TVs, tuners, switches and other devices by sending infrared signals.
Samsung and HTC devices already have built-in infrared “blasters” and both companies used a company called Peel to design an app that can control TVs. But KitKat will help developers avoid having to write different apps for different hardware makers because there will now be a standard way for all apps to tell the Android device to activate the blasters.
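What a standard blaster API buys developers is one code path instead of one per vendor, i.e. a classic adapter arrangement. The class names and `transmit` signature below are invented for illustration and are not real Android APIs:

```python
# Sketch of the fragmentation problem a standard infrared API removes:
# apps target one common interface, and per-vendor adapters hide the
# hardware differences. Everything here is hypothetical.

class IrBlaster:
    """Common interface every vendor's blaster adapts to."""
    def transmit(self, carrier_hz, pattern_us):
        raise NotImplementedError

class VendorABlaster(IrBlaster):
    def transmit(self, carrier_hz, pattern_us):
        return f"vendorA@{carrier_hz}:{len(pattern_us)} pulses"

class VendorBBlaster(IrBlaster):
    def transmit(self, carrier_hz, pattern_us):
        return f"vendorB@{carrier_hz}:{len(pattern_us)} pulses"

def send_power_toggle(blaster):
    # The app only knows the interface; the adapter handles the hardware.
    return blaster.transmit(38000, [9000, 4500, 560])

for b in (VendorABlaster(), VendorBBlaster()):
    print(send_power_toggle(b))
```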
Bluetooth boost
Google wants Android apps to be able to interact with a wide variety of devices using Bluetooth technology. Those devices include joysticks, keyboards, and in-car entertainment systems. In KitKat, new support for something called Bluetooth HID over GATT and Bluetooth Message Access Profile will allow Android to talk to more devices than before.
We have oodles more details about Android KitKat but much of it is too technical to describe here. Find me on Twitter or Google+ if you have questions about features that will be included in the release.
By Amir Efrati

How to Crack a Wi-Fi Password

Cracking Wi-Fi passwords isn't a trivial process, but it doesn't take too long to learn—whether you're talking simple WEP passwords or the more complex WPA. Learn how it works so you can learn how to protect yourself.


Email Statistics Report, 2013-2017

From Executive Summary

The total number of worldwide email accounts is expected to increase from nearly 3.9 billion accounts in 2013 to over 4.9 billion accounts by the end of 2017. This represents an average annual growth rate of about 6% over the next four years.
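The quoted growth rate can be sanity-checked as a compound annual growth rate from the report's two endpoints (this back-of-the-envelope calculation is ours, not the report's):

```python
# Compound annual growth rate implied by the report's endpoints:
# 3.9 billion accounts in 2013 growing to 4.9 billion by end of 2017,
# i.e. over four years.

def cagr(start, end, years):
    """Compound annual growth rate between two endpoint values."""
    return (end / start) ** (1 / years) - 1

print(f"worldwide email accounts: {cagr(3.9, 4.9, 4):.1%}")  # roughly 6%/yr
```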

Email remains the go-to form of communication in the Business world. In 2013, Business email accounts total 929 million mailboxes. This figure is expected to grow at an average annual rate of about 5% over the next four years, reaching over 1.1 billion by the end of 2017. The majority of Business email accounts are currently deployed on-premises. However, adoption of Cloud Business email services, particularly Google Apps and Microsoft Office 365, is expected to increase rapidly over the next four years.

Consumer email accounts currently make up the vast majority of worldwide email accounts, accounting for 76% of worldwide email accounts in 2013. Consumer email accounts are typically available as free offerings. Consumer email accounts’ market share is expected to steadily increase over the next four years, as more people come online on a worldwide basis and email continues to be a key component of the online experience. Email accounts are required for users to sign up for social networking sites, such as Facebook and Twitter, instant messaging, and more.






[Table omitted: Worldwide Email Accounts (M), Business Email Accounts (M), % Business Email Accounts, Consumer Email Accounts (M), % Consumer Email Accounts]
Instant Messaging (IM) is also showing slower growth due to increased usage of social networking, text messaging, Mobile IM, and other forms of communication by both Business and Consumer users. In 2013, the number of worldwide IM accounts totals over 2.9 billion.
Mobile IM, however, is expected to show strong growth over the next four years, primarily due to increased mobile adoption by Consumers on a worldwide basis. In 2013, worldwide Mobile IM is expected to total 460 million accounts.

Social Networking will grow from about 3.2 billion accounts in 2013 to over 4.8 billion accounts by the end of 2017. The majority of social networking accounts still come from the Consumer space; however, business-oriented Enterprise Social Networks are also showing strong adoption.


Programming DNA for Molecular Robots: An Interview with Lulu Qian

Embracing the idea that molecules can be programmed much like a computer, researchers can now perform remarkable feats on a very small scale. New Caltech faculty member Lulu Qian, assistant professor of bioengineering, performs research in the field of molecular programming because it allows her to design synthetic molecular systems with neural-network-like behaviors and tiny robots, both from the programmed interactions of DNA molecules. Originally from Nanjing, China, Qian received her bachelor's degree from Southeast University in 2002 and her PhD from Shanghai Jiao Tong University in 2007. After working as a postdoctoral scholar at Caltech in the laboratory of Shuki Bruck, Qian became a visiting fellow at Harvard University; she returned to Caltech and joined the faculty in July. Recently, Qian answered a few questions about her research, and how it feels to be back at Caltech.
What do you work on?
I work on rationally designing and creating molecular systems with programmable behaviors. I am interested in programming biological molecules—like DNA and RNA—to recognize molecular events from the biochemical environment, process information, make decisions, take actions, and to learn and evolve. Molecular programming is not just about using computer programs to aid the design and analysis of molecular systems; it is more about adapting the principles of computer science to create biochemical systems that can carry out instructions to perform tasks at the molecular level. For example, I develop simple and standard molecular components that can be used to perform a variety of tasks and systematic ways to configure the behavior of interacting molecules to carry out one computational or mechanical task or another. These custom-designed molecules can be ordered from a commercial supplier and mixed in a test tube to generate a "molecular program."
Using this approach, I have designed DNA circuits that can solve basic logic problems, and I have constructed a DNA neural network that can perform simple associative memory functions—much like a network of neurons in the brain, though in a rudimentary way. In my future research, I would like to improve the speed, robustness, and complexity of these implementations and to explore the possibility of creating molecular systems with learning capabilities, while also beginning new work in the field of molecular robots—tiny, nanoscale machines made of DNA that can perform a designed task such as sorting cargo or solving a maze.
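The "digital abstraction" behind DNA logic circuits can be loosely mimicked in software: inputs are relative concentrations, and a gate's output switches on only past a threshold. This toy model captures only the abstract logic, not strand-displacement chemistry, and it is our illustration rather than Qian's actual design:

```python
# Abstract model of a threshold-based molecular AND gate: each input
# "strand" contributes a signal level in [0, 1], and the output turns
# on only when the combined signal exceeds a threshold. The threshold
# of 1.5 is chosen so that both inputs must be present.

def and_gate(x, y, threshold=1.5):
    """Return 1 if the combined input signal clears the threshold."""
    return 1 if x + y > threshold else 0

for x in (0, 1):
    for y in (0, 1):
        print(x, y, "->", and_gate(x, y))
```

Composing many such gates, with real molecules implementing the thresholding, is what makes circuits like the associative-memory network possible.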
What do you find most exciting about your research?
I am driven by curiosity—outside of the lab, I like Legos and puzzles—and I view life as a program, one that is much more sophisticated than any other program that we know of so far. The sequence of nucleotides that make up DNA—As, Ts, Cs, and Gs—encodes the program within a genome, orchestrating molecules to sense, to compute, to respond, and to grow. Because of their different lengths and sequences, one genome produces a bacterium while another produces a plant, or an insect, or a mammal. The genetic program describes how to make molecules, and molecules are machines that can achieve complex tasks to regulate the behaviors of individual cells. To better appreciate the molecular programs that nature creates, I want to understand what possible behaviors a network of interacting molecules can exhibit and how we can rationally design such behaviors.
But, I am also driven by my engineering nature. I want to design and build molecular systems with increasing complexity and sophistication. For example, you could imagine using such molecular machines to make a nanoscale factory that manufactures novel chemicals in a test tube. These chemicals could become new materials or new drugs. You could also imagine embedding such molecular machines into individual cells so that you could collect information from the molecular environment and regulate the cell's behavior. Such regulation could lead to responsive biosynthesis—the production of proteins or other molecules in response to a stimulus—or localized diagnostics followed by therapeutics.
How did you get into your field?
I started programming computers when I was 13 years old, and I have loved it ever since then. My dad was a philosopher, and because of his influence, I got curious about fundamental questions such as who I am and why I think the way that I do. At first, I tried to look for these answers in molecular biology, but as a programmer, biology was difficult for me to understand. Unlike in programming, you cannot just define a few logical principles to understand the behavior of an entire biological system or organism. At the time, biology was not as fun for me—or as logical—as computer programming.
But just before I went to graduate school, I discovered the first publication in DNA computing by Len Adleman at the University of Southern California. He used DNA molecules as a computing substrate to solve a hard math problem. The moment that I finished reading this paper, I felt completely excited. It was the first time that I saw a strong relationship between molecules that are traditionally only used in biology—like DNA and RNA—and computer programming. That was when I started working in my field.

By Jessica Stoller-Conrad

Intel finds Asian pollution makes computers sick, too

The symptoms of industrial pollution are everywhere in Asia, where pedestrians wear surgical masks to filter the air and urban smog is sometimes so thick that Beijing’s Forbidden City is rendered nearly invisible behind a cloak of soot. Just this month, Chinese authorities canceled flights at Beijing’s main airport amid especially heavy pollution, and shuttered highways in and out of the city.
The implications for human health are obvious; studies show that pollution is shortening lifespans in northern China by five years or more.
Intel engineers in Oregon are now discovering that rotten air is also taking a toll on electronics in China and India, with sulfur corroding the copper circuitry that provides neural networks for PCs and servers and wrecking the motherboards that run whole systems.
“We got the board and it was pretty obvious. You open the chassis up and you see blackish material on every type of surface,” said Anil Kurella, the Hillsboro material scientist who’s leading Intel’s research effort.
While pollution represents a true health crisis in Asia, it hasn’t reached those levels in computing terms. Very few computers fail, even in polluted countries such as China and India. Intel won’t say just how many more fail amid atmospheric contamination than would typically be expected, but it does say pollution makes failure “multiple” times more likely.
As the features on electronics continue to shrink in the years ahead, Intel says computers will only become more vulnerable to contaminants. And since developing economies are, by definition, developing, Intel is increasingly reliant on markets in China and India for sales growth.
So the company is intentionally brewing noxious air in a small chamber inside a windowless Hillsboro lab, to study the pollutants’ effects and, hopefully, devise a solution that protects the computers.
“That’s why Intel gets involved,” Kurella said, “so that we understand the physics and technology before it becomes a big problem.”
Intel’s engineers first spotted the issue a few years ago, when they noticed an unusual number of customers from China and India returning computers with failed motherboards (where the microprocessor brain resides and communicates with various support electronics). Once Intel noticed a trend, the cause was immediately evident.
The basics of the problem are straightforward. Copper is the essential element on an electronic circuit board, an excellent conductor of electricity that serves as a computer’s nervous system, carrying information and instructions.
But copper is also very susceptible to corrosion. And when the copper connections fail, the computer does, too.
“The copper is there to conduct the electricity,” said Tom Marieb, a vice president in Intel’s manufacturing group. “The more we eat away at it the less connectivity is left.”
The sources of Asian pollution are well understood, according to Staci Simonich, a chemistry professor at Oregon State University who has studied the environment in China and traveled to Beijing to document atmospheric conditions during the 2008 Olympics. Coal, burned to generate electricity, produces sulfur. It’s the same phenomenon that caused the acid rain that plagued the U.S. in the 1980s and ‘90s.
As Asian economies expand they demand more electricity, which in turn creates more pollution. The proliferation of cars and exhaust exacerbates the problem, according to Simonich.
“It’s the first time I’ve heard of it affecting electronics,” she said, “but I think it makes sense.”
It was an eye-opener for Intel, too. The company designs electronics to operate under various conditions – laptops, for example, are built to be more robust than desktop PCs.
Intel hadn’t anticipated the environmental challenges PCs and servers find in the developing world. In the U.S., a server might operate in a climate-controlled data center where it’s coddled by large companies that want to protect their investments.
In India, the only way to cool a server might be to open a window at night – exposing the machine to filthy air from a train station or power plant.
“Part of us understanding the reliability of anything is how it’s actually going to be used,” said Marieb, the manufacturing vice president. “What shocked us about this was our assumptions were wrong.”
Though Intel doesn’t make motherboards itself, as the world’s largest producer of computer chips it has the most at stake if computers aren’t as reliable in the developing world as they are in the United States and Europe. It’s hoping that its research will help circuit board manufacturers – who operate in a high-volume, low-margin industry – find fixes it couldn’t afford to develop on its own.
Intel isn’t the only technology manufacturer to encounter problems with pollution. Dell and IBM have both documented similar environmental problems – in Dell’s case, it reported that electronics in corrosive environments typically failed within two to four months.
Failures vary by location, depending on the amount of pollution nearby. In an extreme example near Mumbai, India, The Times of India reported that up to 80 percent of electronics had to be replaced at a mall, office complex and housing development built atop an old garbage dump.
There is evidence that environmental conditions are improving in Asia, Simonich says – or, at least, that pollution has plateaued. But scientists and activists don’t expect the problem to be broadly addressed for many years, perhaps decades.
In the meantime, people living there will continue using computers – and manufacturers will continue to sell them. Intel’s research focuses on keeping those machines working reliably for as long as possible in adverse environments.
The easiest solution would be to replace copper circuitry with something more resistant to corrosion. Gold works very well, and is widely used in some electronics, but it’s too expensive for low-cost circuit boards. Silver is also vulnerable to corrosion, except at thicknesses that would be prohibitively pricey.
So Intel is experimenting with various other coatings that could protect copper circuitry from the atmosphere. Some of the solutions show promise, according to the company, though none are bulletproof.
To refine its solutions, Intel invested $300,000 in a gas chamber for its Hillsboro lab and gave Kurella primary responsibility for studying the problem. The company describes the device as a large oven where it bakes circuit boards in conditions matching those it finds overseas.
Lab techs load test tubes carrying pressurized hydrogen sulfide, sulfur dioxide and chlorine, calibrating their release to evaluate their effects on the electronics. They suspend tiny bits of copper and silver inside, measuring the chemicals’ effect.
The chamber occupies a corner of a large building that looks and feels like a warehouse, idling under fluorescent lights as the pollutants take their toll. Intel is still evaluating the combination of heat, humidity and pollution to see what role each plays in the corrosion.
In its work, Intel unearthed research done in the U.S. during the 1970s and ‘80s, when scientists found similar problems triggered by acid rain.
A year into their investigation, the Oregon engineers have found no easy answers. Short of eliminating the pollution itself, early solutions are either too expensive or too inconsistent. But Intel is closer to understanding the issues at play, and perhaps to finding a remedy.
“That really is the first order of business, is understanding the physics,” Marieb said. “That’s what generates new ideas.”

By Mike Rogoway

The Value of Openness for a Sustainable Internet

The Internet has changed the world. It has revolutionized the way individuals communicate and collaborate, the way entrepreneurs and corporations do business and the way governments develop policy and interact with their citizens. The Internet is a catalyst for innovation, communication, economic growth and social development. These statements are commonplace, but it is valuable for us to think about how the Internet has come to be so transformative.
At the most fundamental level, this question can best be approached by examining the open and collaborative processes, along with the legal and governance principles, that have enabled the Internet’s evolution. Both the open platform and the open processes through which the Internet is developed have created an empowering medium whose impacts extend far beyond the realm of technology to affect all aspects of our societies, our economies and how we do business, as well as our mechanisms of governance.
This paper is intended to briefly describe openness in the context of the Internet and its effects, and why openness is vitally important to the sustainability of the Internet as we move forward.
As Internet access expands across all regions, increasing numbers of Internet users will engage in Internet Governance issues and make their voices heard, both at the global and local levels. It is timely to have a discussion about openness and sustainability, as we enter a period of intense assessment and many questions are raised regarding the way the Internet is and should be structured and governed. In 2014, the Internet Governance calendar is accelerating with major conferences, including the ITU Plenipotentiary Conference and World Telecommunication Development Conference, WSIS+10 High Level Review event and the Internet Governance Forum. These conferences will shape the future economic, technical and political governance of the Internet.
As the international community discusses the current and future arrangements for Internet governance, we believe it is timely to reflect on “The Value of Openness for a Sustainable Internet” to help shape a common vision for the post-2015 era. Whether the challenges are related to technology, development or policy; whether the issues are regional or global, the Internet Society’s assumption is that a healthy and sustainable Internet is based on the principle of openness:
  • Open global standards for innovation
  • Open communications for everyone
  • Open for economic progress through innovation
  • Open and multistakeholder governance for inclusion.

Read the full document


The Battle for Power on the Internet

Distributed citizen groups and nimble hackers once had the edge. Now governments and corporations are catching up. Who will dominate in the decades ahead?
We’re in the middle of an epic battle for power in cyberspace. On one side are the traditional, organized, institutional powers such as governments and large multinational corporations. On the other are the distributed and nimble: grassroots movements, dissident groups, hackers, and criminals. Initially, the Internet empowered the second side. It gave them a place to coordinate and communicate efficiently, and made them seem unbeatable. But now, the more traditional institutional powers are winning, and winning big. How these two sides fare in the long term, and the fate of the rest of us who don’t fall into either group, is an open question—and one vitally important to the future of the Internet.
In the Internet’s early days, there was a lot of talk about its “natural laws”—how it would upend traditional power blocs, empower the masses, and spread freedom throughout the world. The international nature of the Internet circumvented national laws. Anonymity was easy. Censorship was impossible. Police were clueless about cybercrime. And bigger changes seemed inevitable. Digital cash would undermine national sovereignty. Citizen journalism would topple traditional media, corporate PR, and political parties. Easy digital copying would destroy the traditional movie and music industries. Web marketing would allow even the smallest companies to compete against corporate giants. It really would be a new world order.
This was a utopian vision, but some of it did come to pass. Internet marketing has transformed commerce. The entertainment industries have been transformed by things like MySpace and YouTube, and are now more open to outsiders. Mass media has changed dramatically, and some of the most influential people in the media have come from the blogging world. There are new ways to organize politically and run elections. Crowdfunding has made tens of thousands of projects possible to finance, and crowdsourcing made more types of projects possible. Facebook and Twitter really did help topple governments.
But that is just one side of the Internet’s disruptive character. The Internet has emboldened traditional power as well.
On the corporate side, power is consolidating, a result of two current trends in computing. First, the rise of cloud computing means that we no longer have control of our data. Our e-mail, photos, calendars, address books, messages, and documents are on servers belonging to Google, Apple, Microsoft, Facebook, and so on. And second, we are increasingly accessing our data using devices that we have much less control over: iPhones, iPads, Android phones, Kindles, ChromeBooks, and so on. Unlike traditional operating systems, those devices are controlled much more tightly by the vendors, who limit what software can run, what they can do, how they’re updated, and so on. Even Windows 8 and Apple’s Mountain Lion operating system are heading in the direction of more vendor control.
I have previously characterized this model of computing as “feudal.” Users pledge their allegiance to more powerful companies who, in turn, promise to protect them from both sysadmin duties and security threats. It’s a metaphor that’s rich in history and in fiction, and a model that’s increasingly permeating computing today.
Medieval feudalism was a hierarchical political system, with obligations in both directions. Lords offered protection, and vassals offered service. The lord-peasant relationship was similar, with a much greater power differential. It was a response to a dangerous world.
Feudal security consolidates power in the hands of the few. Internet companies, like lords before them, act in their own self-interest. They use their relationship with us to increase their profits, sometimes at our expense. They act arbitrarily. They make mistakes. They’re deliberately—and incidentally—changing social norms. Medieval feudalism gave the lords vast powers over the landless peasants; we’re seeing the same thing on the Internet.
It’s not all bad, of course. We, especially those of us who are not technical, like the convenience, redundancy, portability, automation, and shareability of vendor-managed devices. We like cloud backup. We like automatic updates. We like not having to deal with security ourselves. We like that Facebook just works—from any device, anywhere.
Government power is also increasing on the Internet. There is more government surveillance than ever before. There is more government censorship than ever before. There is more government propaganda, and an increasing number of governments are controlling what their users can and cannot do on the Internet. Totalitarian governments are embracing a growing “cyber sovereignty” movement to further consolidate their power. And the cyberwar arms race is on, pumping an enormous amount of money into cyber-weapons and consolidated cyber-defenses, further increasing government power.

By Bruce Schneier
Source and read more:

Copyright and Intellectual Property: Change is Coming

Today, there is almost no anonymity online. Many people strive for the opposite, in fact: total publicity for their professional goals, copyrighted materials, and intellectual property. In our contemporary world, with its new value systems, it just doesn’t make sense to hide your intellectual property. Stopping a new idea from being implemented makes no sense; perhaps it will even be considered a crime in the future. That said, we aren’t arguing for the abolition of copyright, nor condoning its infringement.

Against the backdrop of the new developments and opportunities in today’s information-centric culture, copyright registration can be an obsolete means to an ineffective end. In many cases, it’s even a limiting factor for industry development, and oddly enough, infringes on the rights of authors. Our current intellectual property system benefits corporations by complicating the process of protecting the rights of content creators. In an era where opportunities and innovations abound, our system is almost a tragic comedy.

In most cases, intellectual property is more like a competition of strength, and has nothing to do with people’s actual needs. On one hand, every person has an inherent right to optimal distribution of their intellectual activities. On the other hand, society has constructed a powerful system of checks and balances, and power lies in the hands of an elite few. It’s no secret that information technology has changed concepts of relationships in all spheres of human activity — including between content creators and their buyers.

There is an extremely radical view that any intellectual property belongs to society as a whole. This point of view is followed by corporations whose success is based on the effective use of text, music, video, pictures, and other intellectual property. Surely, we can understand that. There’s no doubt their success as organizations depends on their ability to turn a profit while expending the least money and effort possible. This system is only perpetuated by the rules that surround modern intellectual property issues. Our current system has effectively obstructed the development of modern digital distribution methods that appropriately reward content creators.

If the idea of intellectual property belonging to an entire society isn’t so far off the mark, it’s only fair that society pays back the individuals behind the content, who created the value for the world. We’re obligated to work out a method that lets content creators freely claim what they deserve. The system needs to be simple, fast, and cheap. Content creators need the opportunity to be active participants in the distribution of their intellectual property. These innovative new systems and mechanisms should be beneficial to everyone involved, from the initial creator to the consumer.

There are no technology-related obstacles that stand in the way of reaching these goals. Moreover, some entrepreneurs and enthusiasts find creative solutions within the current legal framework to enable simplification of copyright registration and intellectual property distribution. However, with the support of governments worldwide, we could not only improve the quality of life for millions of people, but achieve new levels of interpersonal communication. We could launch a new economy for an information-based society.

Can you afford to buy a music track for $0.99? Of course you can! What if you lived in a less economically-developed country? Probably not! Where do these prices come from? Are they set by the music creator? Should the creator earn enough? Yes, they definitely should. But should this track be available to the public? Yes, it should. What are the copyright terms for the track, and can we change them? I believe new systems are possible, and that there are no technical obstacles barring us from putting these systems into place. All of these questions lead to a single answer: the simplification of registration. Easier copyright recognition will open doors to a new era of digital distribution and intellectual property rights transfer.

Remember how, in the recent past, it was incredibly complicated for authors to navigate book publishing, printing houses, registration fees and other obstacles. Now, all it takes to publish is typing text into a publishing program. The new wave of indie authors has raised plenty of complaints, of course. Some people believe that now every single ‘hack writer’ can become an author. However, it’s evident that these thoughts stem from a fear of losing personal status, an inability to accept innovations, and other fears rooted in human defense mechanisms.

I am sure the intellectual property sphere will undergo a similar revolution. There’s just no question that technology is worlds ahead of legislation. It’s up to society to acknowledge this reality and move forward. We have to give content creators new opportunities to distribute their work; otherwise, we’re preserving artificial barriers to creating content, producing intellectual property, and distributing it independently. Creators have a right to set their terms of sale and use, and the value of their product.

I discussed distribution issues in my previous post, but it’s directly tied to the copyright issue we’re covering here. We have to provide the entire chain with every advantage of new technologies and legal solutions. If a content creator can control the entire process, the consumer can access the products they want without violating the creators’ rights. The sales conditions will benefit everyone involved. Currently, if you ask a musician how they sell their songs or how often they’re in rotation on the radio, they’re likely to burst into tears.

This isn’t about any party’s plight, but the absurd situation we’re currently in. Both authors and consumers need each other. However, no one’s talking about the causes of our current situation, or the army of invisible intermediaries that control buyer/seller relationships. The advent of the internet has revolutionized our processes and values, regardless of how stagnant they seemed before the age of technology. I’m convinced that we’ll soon witness considerable changes in the copyright and intellectual property spheres.
By Vitalii Soldatenko

New Bill Against Bulk Phone Metadata Collection

A Bill
To reform the authorities of the Federal Government to require the production of certain business records, conduct electronic surveillance, use pen registers and trap and trace devices, and use other forms of information gathering for foreign intelligence, counterterrorism, and criminal purposes, and for other purposes.
Source and read:

How Glass Magnifies Desire

We no longer just look through glass; we also want it to peer into us.

By John Garrison
Source and read:

Samsung wants your TV to talk to your fridge

Samsung took a first step towards connecting the living room to the internet of things Monday with the introduction of its new Smart TV SDK, which offers some basic integration of smart devices into the TV experience.
Owners of 2013 and 2014 Samsung Smart TVs who also own connected appliances made by the company will be able to get status updates from their fridge, washer or air conditioner directly on their TV screen, including information on whether their laundry is done or if someone has opened the fridge. The SDK also makes some basic interaction between the TV and connected devices possible. For example, TV viewers will be able to tweak the air conditioning without ever leaving their couch when they’re watching an especially chilly movie.
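The appliance-to-TV status updates described above follow a simple publish/subscribe pattern: appliances publish state changes, and the TV app subscribes and renders notifications. The sketch below is illustrative only; all of the type, class, and event names are hypothetical, not the actual Samsung Smart TV SDK API, which the article does not detail.

```typescript
// Hypothetical sketch of an appliance-to-TV status feed.
// None of these names come from the real Samsung SDK.

type ApplianceStatus = {
  device: "fridge" | "washer" | "airConditioner";
  event: string; // e.g. "doorOpened", "cycleFinished"
  timestamp: number;
};

type StatusHandler = (status: ApplianceStatus) => void;

class ApplianceHub {
  private handlers: StatusHandler[] = [];

  // The TV app subscribes to appliance events.
  subscribe(handler: StatusHandler): void {
    this.handlers.push(handler);
  }

  // An appliance (or the vendor cloud) publishes a status change.
  publish(status: ApplianceStatus): void {
    for (const h of this.handlers) h(status);
  }
}

// Format a status message for an on-screen overlay.
function formatNotification(s: ApplianceStatus): string {
  switch (s.event) {
    case "cycleFinished":
      return `Your ${s.device} is done.`;
    case "doorOpened":
      return `Someone opened the ${s.device}.`;
    default:
      return `${s.device}: ${s.event}`;
  }
}

// Example: the washer finishes while a movie is playing.
const hub = new ApplianceHub();
const shown: string[] = [];
hub.subscribe((s) => shown.push(formatNotification(s)));
hub.publish({ device: "washer", event: "cycleFinished", timestamp: Date.now() });
// shown[0] === "Your washer is done."
```

The same subscription could feed a "tweak the air conditioning" control path in reverse, with the TV publishing commands back to the hub.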
Fans of the internet of things have been talking about these kinds of integrations for a long time — the washing machine talking to your TV is probably the most-overused example of the connected home — but so far, we have seen very little in the way of actual implementations, in part because smart TVs actually haven’t been all that smart. Samsung is clearly just taking baby steps with the integration it announced today. The connected home goes beyond just those three appliances, with lighting an obvious candidate for future integration. And eventually, one would hope that the platform would also interact with third-party appliances.
The company announced the integration at its developer conference in San Francisco Monday, where it also showed off its new multiscreen SDK, which will bring Chromecast-like features to its smart TV platform. And to get developers excited about its TVs, it also dropped some pretty big numbers: Samsung sold 53 million TVs last year, and Samsung Electronics America VP Eric Anderson said that his company now sees a 72 percent activation rate for its smart TVs and Blu-ray players in the U.S., something that he claimed was far above industry averages.
Viewers who do access Samsung’s smart TV platform end up using it a lot: 19 percent of the owners of a connected device from Samsung watch online video with it every day. The company’s TV apps get accessed 40 times a week per device on average, said Anderson, and some of the apps on the platform also see remarkable use:
Hulu Plus gets used about one hour a day on average, Univision’s Uvideo service 45 minutes a day. However, Pandora dwarfs them all, clocking a whopping 2.7 hours a day. Samsung didn’t give out daily per-person usage numbers for Netflix, but said that the video service tracks a total of two million hours of streaming across Samsung devices per day on weekends.

By Janko Roettgers

Google Amends Proposal to Settle EU Antitrust Investigation

BRUSSELS— Google Inc. offered new proposals to address European Union concerns that the company unfairly uses its search engine to promote its own services, particularly for shopping and local searches.

Google proposed displaying three sets of results from rival search engines in a box under the U.S.-based company's own shopping results. The rivals still would have to pay through an auction mechanism to be featured, as proposed earlier, but the minimum price for search terms was cut to three European cents from 10 European cents (to four U.S. cents from 14 U.S. cents).
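The article describes an auction with a minimum bid per search term but does not specify the mechanism beyond that. As a minimal sketch, assuming a simple reserve-price auction where the highest qualifying bids win the three rival slots, the slot selection could look like this (the 0.03 EUR reserve matches the reported minimum; the bidder names and amounts are invented for illustration):

```typescript
// Illustrative reserve-price auction for the three rival-link slots.
// The actual EU-mandated mechanism is not detailed in the article.

const RESERVE_EUR = 0.03; // reported minimum price per search term (was 0.10)

type Bid = { bidder: string; amountEur: number };

// Pick up to `slots` winners: the highest bids at or above the reserve.
function pickWinners(bids: Bid[], slots = 3): Bid[] {
  return bids
    .filter((b) => b.amountEur >= RESERVE_EUR) // drop bids below the reserve
    .sort((a, b) => b.amountEur - a.amountEur) // highest first
    .slice(0, slots);
}

const winners = pickWinners([
  { bidder: "shopA", amountEur: 0.12 },
  { bidder: "shopB", amountEur: 0.02 }, // below reserve, excluded
  { bidder: "shopC", amountEur: 0.05 },
  { bidder: "shopD", amountEur: 0.08 },
]);
// winners, in order: shopA, shopD, shopC
```

Lowering the reserve from 0.10 to 0.03 EUR widens the pool of bids that can qualify at all, which is presumably the point of the concession.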

Google also changed its earlier proposal so that search terms can be more precisely defined. An online shopping service will be able, for example, to say it wants to come up in searches for a specific brand of tennis racket, rather than just "sports equipment."

"The aim of the commitment is to show rivals have visibility on screen," a senior EU official said Monday. "It's not for us to mandate the final outcome" of what users click on.

The bloc gave Google's rivals and others four weeks to review the proposals.

If the EU accepts Google's proposals, they will be binding on the company for five years. A trustee would be appointed to monitor compliance. The binding offer marks a new step for a company that earlier this year was able to exit a similar investigation by the U.S. Federal Trade Commission after making only voluntary commitments.

If the European Commission, the EU's executive arm, doesn't accept Google's proposals, it can ask for further changes or begin formal legal antitrust proceedings. An antitrust case can be lengthy and expensive, as the EU's case against Microsoft Corp. last decade showed.

The commission sent a request for information to 125 companies, including all the complainants and rivals who responded to earlier Google proposals. "We're seeking targeted feedback on specific points," the EU official said.

In another tweak to the previous proposals, which were slammed as inadequate by Google's rivals, the changes would apply to search terms entered anywhere. Instead of just applying to the search box on the Google home page, the proposals would affect searches in a browser's toolbar, address bar and even by voice.

"We've made significant changes to address the [European Commission]'s concerns, greatly increasing the visibility of rival services and addressing other specific issues," Google said. "Unfortunately, our competitors seem less interested in resolving things than in entangling us in a never-ending dispute." Google's shares fell 20 cents to close at $1,015 Monday on the Nasdaq Stock market.

It remained unclear whether giving away valuable real estate close to where it sells advertising would hurt Google's search business.

The changes would apply to Google's various European sites, but not to the company's U.S. site, although fewer than 5% of queries in Europe are made there, EU officials said.

The offer also addresses Google copying content from other websites to use on its own, a process known as "scraping." One example is user reviews of restaurants or hotels that pop up on Google Maps. Before, companies could avoid that only by opting out of Google search results completely. Now, officials said, a nonretaliation clause has been strengthened for companies who do opt out.

EU Competition Commissioner Joaquín Almunia this month said he hoped to reach a decision next spring on a settlement to its investigation, which started nearly three years ago.

Some complainants said the proposals are insufficient.

"Google still doesn't appear to have offered anything that will prevent it from systematically preferencing its own services and manipulating results, a clear failure of the initial offer," said David Wood, who represents iComp, a group of rivals that includes Microsoft. "What is needed is a principle-based, forward-looking approach, including a clear commitment not to discriminate against its rivals."

Thomas Vinje, a lawyer for FairSearch Europe, which represents several complainants, said "no genuinely significant changes have been made to the initial proposal."

 By Frances Robinson

The School of MakeOurMark


NIST Seeks Public Input on Updated Smart Grid Cybersecurity Guidelines

The National Institute of Standards and Technology (NIST) is requesting public comments on the first revision to its guidelines for secure implementation of "smart grid" technology.
The draft document, NIST Interagency Report (IR) 7628 Revision 1: Guidelines for Smart Grid Cybersecurity, is the first update to NISTIR 7628 since its initial publication in September 2010. During the past three years, use of smart grid technology has expanded dramatically, particularly the number of smart energy meters on homes, and technology and laws have progressed as well. These changes prompted NIST to update its document.
"Millions of smart meters are in use around the country now, and as the smart grid is implemented we have gained more knowledge that required minor tweaks to the existing document," says NIST computer scientist Tanya Brewer. "There also have been legislative changes in states such as California and Colorado concerning customer energy usage data, and we have made revisions to the volume on privacy based on the changing regulatory framework."
NISTIR 7628 remains a three-volume document geared mainly toward cybersecurity specialists. Volume 1 contains mostly technical material for maintaining the security of the grid, including a reference architecture and high-level security requirements. Volume 2 addresses privacy, discussing how potential privacy concerns in the smart grid compare with those in other networked systems. Volume 3 contains analyses and references that support the document's contents.
Brewer says most of the changes are minor additions to existing sections of NISTIR 7628, though there is a newly added section in Vol. 2 regarding privacy. While cybersecurity practitioners will most likely be its primary audience, Brewer says public utility commissioners, vendors and researchers also will find the changes of interest.
The draft version of NISTIR 7628 Revision 1 is available online. Comments will be accepted until Dec. 24, 2013, and can be submitted using the Excel template available at the site. A Federal Register notice announcing the request for comments is also available.