National Cyber Security Strategies in the World

A free and open Internet is at the heart of the new Cyber Security Strategy presented by European Union High Representative Catherine Ashton and the European Commission. The new Communication is the first comprehensive policy document that the European Union has produced in this area. It covers the internal market, justice and home affairs, and foreign policy aspects of cyberspace issues. ENISA has listed all the National Cyber Security Strategy documents in the EU as well as the rest of the world (latest update April 2013). Some of these documents are still under consultation, so no official English translations have been produced yet.

European Union

Austria

Austrian Cyber Security Strategy (2013)

Belgium

To be published soon (2013)

Czech Republic

Cyber Security Strategy of Czech Republic for the 2011-2015 Period (2011)

Estonia

Cyber Security Strategy (2008)

Finland

Finland's Cyber Security Strategy (2013)

France

Information systems defence and security, France's strategy (2011)

Germany

Cyber Security Strategy for Germany (2011)

Hungary

National Cyber Security Strategy (2013)

Lithuania

Programme for the development of electronic information security (cyber security) for 2011-2019 (2011)

Luxembourg

National strategy on cyber security (2011) - in French

The Netherlands

The national cyber security strategy (2013)

Poland

Governmental Program for Protection of Cyberspace for the years 2011-2016 (2013) - in Polish

Romania

Cyber Security Strategy in Romania (2011)

Slovak Republic

National Strategy for Information security in the Slovak Republic (2008)

Spain

The National Security Strategy (2013)

United Kingdom

Cyber Security Strategy of the United Kingdom (2009)

Worldwide

Australia

Cyber Security Strategy (2011)

Canada

Canada's cyber security strategy (2010)

India

National Cyber Security Strategy (2013)

Japan

Information Security Strategy for protecting the nation (2010)

Kenya

Announced 2013 - to be published

Montenegro

Announced 2013 - to be published in October 2013

New Zealand

New Zealand's Cyber Security Strategy (2011)

Norway

National Strategy for Information Security (2012) - in Norwegian

Russia

The Information Security Doctrine of the Russian Federation (2000)

Singapore

Third national cyber security masterplan (2013-2018) - to be published

South Africa

Cyber Security policy of South Africa (2010)

South Korea

National Cyber Security Strategy (2011) - not available in EN

Switzerland

National strategy for Switzerland's protection against cyber risks (2012)

Turkey

National Cybersecurity Strategy (2013)

Uganda

Announced 2013 - to be published

United States of America

International Strategy for cyberspace (2011)

Source:
https://www.enisa.europa.eu/activities/Resilience-and-CIIP/national-cyber-security-strategies-ncsss/national-cyber-security-strategies-in-the-world

Beyaz Şapka (White Hat) Information Security Conference



Source:
www.nebulabilisim.com.tr/BS13

Adil Konukoğlu Becomes a Board Member of the Jerusalem Arbitration Center

Adil Konukoğlu, Chairman of the Board of the Gaziantep Chamber of Industry (GSO), has taken on an important role in the international arena.
Konukoğlu was elected to the Board of Directors and to the Budget Committee of the Jerusalem Arbitration Center, which was established to resolve commercial disputes between Israeli and Palestinian businesspeople.
The official opening ceremony and first Board meeting of the Jerusalem Arbitration Center, chaired by M. Rifat Hisarcıklıoğlu, President of the Union of Chambers and Commodity Exchanges of Turkey (TOBB), took place in East Jerusalem on Monday, 18 November.
At the meeting, on Hisarcıklıoğlu's proposal, GSO Chairman Adil Konukoğlu was elected to the Center's Board of Directors and to its Budget Committee.
Tony Blair, Special Envoy of the Middle East Quartet and former Prime Minister of the United Kingdom, also attended the opening ceremony; noting that Turkey had taken on a major role by leading this institution, he congratulated the parties to the Jerusalem Arbitration Center on behalf of the international community.
On 19 November, the second day of the visit, Adil Sani Konukoğlu travelled to Nablus and Jenin with the delegation led by TOBB President Rifat Hisarcıklıoğlu. As part of the visit, he attended the signing ceremony for the memorandum of understanding on the organized industrial zone that TOBB will build in Palestine.
-"WE WILL PERFORM THIS DUTY PROPERLY"-
Pointing out that Israel and Palestine are two important countries of the Middle East, Konukoğlu said that the two countries' businesspeople do billions of dollars of business with each other, that commercial disputes inevitably arise from time to time, and that the newly established Jerusalem Arbitration Center will play an important role in resolving such disputes quickly.

Stating that the Jerusalem Arbitration Center project is a private-sector initiative, Konukoğlu said that placing TOBB President Rifat Hisarcıklıoğlu, who has successfully carried out many international duties, at the head of the institution was a very important decision, and added:
"I congratulate him once again. Being given a place in this organization is a great honour for me. We will perform this duty properly."
Emphasizing that an improved business environment in the region will contribute to regional peace, Konukoğlu said:
"We believe that lasting peace in the region must rest not only on a political but also on an economic foundation. Israeli and Palestinian businesspeople can play a very important role in bringing their countries' economies closer together. The Jerusalem Arbitration Center is a very useful initiative. Through its work, it will serve as a bridge for the development of commercial and economic cooperation and for the peace process. This will serve peace across the entire region."
Konukoğlu also noted that the Jenin Organized Industrial Zone that TOBB will build in Palestine will make a major contribution to business life in the region, adding that products manufactured there will be sold to world markets with zero customs duty and zero quota, a feature that makes the project attractive for Turkey as well.


Source:
http://www.gso.org.tr/?gsoHaberID=2892

English Has a New Preposition, Because Internet

Linguists are recognizing the delightful evolution of the word "because." 

Let's start with the dull stuff, because pragmatism.
The word "because," in standard English usage, is a subordinating conjunction, which means that it connects two parts of a sentence in which one (the subordinate) explains the other. In that capacity, "because" has two distinct forms. It can be followed either by a finite clause (I'm reading this because [I saw it on the web]) or by a prepositional phrase (I'm reading this because [of the web]). These two forms are, traditionally, the only ones to which "because" lends itself.
I mention all that ... because language. Because evolution. Because there is another way to use "because." Linguists are calling it the "prepositional-because." Or the "because-noun."
You probably know it better, however, as explanation by way of Internet—explanation that maximizes efficiency and irony in equal measure. I'm late because YouTube. You're reading this because procrastination. As the linguist Stan Carey delightfully sums it up: "'Because' has become a preposition, because grammar." 
 
Indeed. So we get uses like this, from Wonkette:
Well here is a nice young man, Fred E. Ray Smith, running for Oklahoma state Senate, from jail, where he was taken for warrants and drunk driving and driving without a license or registration, and also he owes so much child support and his ex has a protective order out against him. We assume he is going to win, because “R-Oklahoma.”
And like this, from the Daily Kos:
 If due north was good enough for that chicken's parents and grandparents and great-great-great-great-grandparents, it's good enough for that chicken too, damn it. But Iowa still wants to sell eggs to California, because money.
And like this, from Lindy West and Jezebel:
Did you hear the big news? Men are going extinct. Really really slowly, and probably only in theory, but extinct nonetheless! [...]
Lame! RIP, dudes! Now, I'm sure kneejerk anti-feminist dickwads think that the eradication of men is exactly what we women mean by "plz can we have equal rights now thx." Because logic.
It's a usage, in other words, that is exceptionally bloggy and aggressively casual and implicitly ironic. And also highly adaptable. Carey has unearthed instances of the "because-noun" construction with the noun in question being, among other terms, "science, math, people, art, reasons, comedy, bacon, ineptitude, fun, patriarchy, politics, school, intersectionality, and winner." (Intersectionality! Because THEORY. Bacon! Because BACON.)
But the formulation isn't simply limited to nouns. Carey again:
The construction is more versatile than “because+noun” suggests. Prepositional because can be yoked to verbs (Can’t talk now because cooking), adjectives (making up examples because lazy), interjections (Because yay!), and maybe adverbs too, though in strings like Because honestly., the adverb is functioning more as an exclamation. The resulting phrases are all similarly succinct and expressive.
Which is to say, the "because-noun" form is limited only to the confines of your own imagination. It can be anything you want it to be. So we get comments like these, with people using "because" not just to explain, but also to criticize, and sensationalize, and ironize:
 
By Megan Garber
Source:

Watch the world's currencies flow into BTC in realtime

Each trade results in a bitcoin being sent from the currency counter in red to the country on the map. The value in BTC is listed in green and plotted across the map. The last exchange rate for each currency is listed in purple and updated with each trade.
 
Source and watch the map:

User Manual: Building iPhone Apps for the ‘Internet of Things’

What exactly is the “Internet of Things”? It’s a popular phrase used to describe a category of physical devices like home-monitoring devices, lamps, watches and cars that now connect to PCs, tablets, and smartphones. You may have seen some examples in the Apple Store, or mentioned on technology websites, but few have explained the process of making the magic happen.
Home automation apps, such as Belkin’s WeMo Switch, or products sold on SmartThings, communicate with door locks or light switches to control your apartment or home from anywhere on a WiFi network. Another well known example is Square, the payment dongle that allows users to use an iPhone or iPad with a specially made scanner for credit card transactions.
Entering the connected device space can seem like an attractive pursuit for a budding entrepreneur, but unless you have a background in electrical engineering, it may also be a daunting endeavor.
Ryan Matzner is the Director of Strategy at Fueled, a development agency in New York City and London that builds mobile apps and web products. According to Matzner, “Building an app that connects with a piece of hardware is complex because you have to think about the limitations of the hardware and how it’s going to connect. Is it over WiFi? Bluetooth? Is it part of Apple’s Made For iPhone (MFi) program? For this reason and more, you won’t find as many startups entering this space in comparison to dating networks or messaging.”
When building an app that relies on hardware, there are many considerations, such as the anatomy and capabilities of the iPhone, iPad or iPod Touch.  Square, for instance, designed its card reader to plug into the headphone jack on iOS devices, rather than the 30-pin (now 8-pin) charging port.  One reason for this decision was to take advantage of the ⅛-inch port’s physical strength, since it has less chance of breakage.
Since all new iPhones now require adapters for old charging devices, Square saved potentially millions of dollars by not having to rework its design. However, the company still had to build a product that converts data from a credit card's magnetic strip into an audio signal, allowing it to recognize transactions, which is neither easy nor cheap to do. Yet this foresight has allowed Square to stay ahead of the curve, compared to iHome, whose line of clock radios likely frustrated new users after the Lightning connector was introduced.
At the beginning stages of development, you'll need to decide which native iPhone features you'll use to establish connectivity. If you're building an app that involves sharing or communication, like Celeste, you should know the difference between WiFi and Bluetooth. This could also be beneficial for remote control apps, a very popular one being Snatch, which facilitates interaction between an iOS device and a MacBook. You may even want to contact cellular carriers to make sure there aren't any differences in their services. For example, Verizon Wireless and Sprint do not allow users to make telephone calls and surf online simultaneously, while AT&T does.
Even if you have a product that doesn’t require an app, it might work with the iPhone’s onboard features, such as the iSight camera. Building an accessory like the Olloclip camera lens may require you to know the zoom and resolution specifications that you’re starting with, along with the dimensions of the phone itself.
Apple’s MFi program and iOS Developer Library are open to developers for any assistance in designing MFi accessories. These programs provide free technical support for the engineering of iOS-related electronic components. The enrollment costs nothing, though you will have to give Apple $4 of every unit sold when people start buying the finished product.

It would also be wise to understand all of the stipulations that Apple has for its licensees. There are two segments of the program, one for developers and one for manufacturers, each with its own requirements. For the manufacturing license, you’ll need to own your manufacturing facilities, and both licenses involve a credit review of your company. The good news is, qualified businesses will get special access to inside information and even certain pieces of Apple hardware, such as a dock connector.
When developing an app for connected devices, a series of programming protocols must be used to make your accessory work with the iPhone. A common chunk of code, called the External Accessory Framework, is added in the beginning stages to let the iOS product know that another object is trying to communicate with it.
After this step, you must declare the appropriate protocols to link the accessory with a particular app, causing it to launch when connected. According to the iOS Developer Library, if this is not done, iOS will launch the App Store in an attempt to find an app that will be compatible. Lastly, for any interaction involving the two products, an “EASession” must be used to transfer and monitor data. As with any other iOS app, developers must use Objective-C as their programming language.
Creating a piece of app-enabled hardware is more work than most developers usually endure, but it also expands your opportunities for creating the next popular mobile product. The Internet of Things is an emerging market with many opportunities left untapped, making it a tempting path for entrepreneurs. Just make sure you’re ready to take the plunge first.
 
By Jeremy Rappaport
Source:

PDR: A Prevention, Detection and Response Mechanism for Anomalies in Energy Control Systems

Abstract:
Prevention, detection and response are nowadays considered to be three priority topics for protecting critical infrastructures, such as energy control systems. Despite attempts to address these current issues, there is still a particular lack of investigation in these areas, and in particular in dynamic and automatic proactive solutions. In this paper we propose a mechanism, which is called PDR, with the capability of anticipating anomalies, detecting anomalous behaviours and responding to them in a timely manner. PDR is based on a conglomeration of technologies and on a set of essential components with the purpose of offering situational awareness irrespective of where the system is located. In addition, the mechanism can also compute its functional capacities by evaluating its efficacy and precision in the prediction and detection of disturbances. With this, the entire system is able to know the real reliability of its services and its activity in remote substations at all times.

By Maria C. Alcaraz; Meltem Sonmez Turan
Source and read the full paper:
http://www.nist.gov/customcf/get_pdf.cfm?pub_id=911543

Street View floats into Venice

Venice was once described as “undoubtedly the most beautiful city built by man,” and from these pictures it’s hard to disagree. You can now explore panoramic imagery of one of the most romantic spots in the world, captured with our Street View Trekker technology.

It was impossible for us to collect images of Venice with a Street View car or trike—blame the picturesque canals and narrow cobbled walkways—but our team of backpackers took to the streets to give Google Maps a truly Shakespearean backdrop. And not just the streets—we also loaded the Trekker onto a boat and floated by the famous gondolas to give you the best experience of Venice short of being there.
Our Trekker operator taking a well-earned rest while the gondolier does the hard work
The beautiful Piazza San Marco, where you can discover Doge's Palace, St. Mark's Cathedral, the bell tower, the Marciana National Library and the clocktower

We covered a lot of ground—about 265 miles on foot and 114 miles by boat—capturing not only iconic landmarks but several hidden gems, such as the Synagogue of the first Jewish Ghetto, the Devil's Bridge on Torcello island, a mask to scare that same Devil off the church of Santa Maria Formosa, and the place where the typographer Aldus Manutius created italic type. Unfortunately, Street View can't serve you a cicchetto (local appetizer) in a classic bacaro (a typical Venetian bar), though we can show you how to get there.
The Devil’s Bridge in Torcello Island

Once you've explored the city streets of today, you can immerse yourself in the beauty of Venice's past by diving deep into the artworks of the Museo Correr, which has joined the Google Cultural Institute along with Museo del Vetro and Ca' Pesaro - International Gallery of Modern Art.
Compare the modern streets with paintings of the same spots by artists such as Carpaccio and Cesare Vecellio
Or delve into historical maps of Venice, like this one showing the Frari Church, built in 1396

Finally, take a look behind the scenes showing how we captured our Street View imagery in Venice.

The Floating City is steeped in culture; it’s easy to see why it’s retained a unique fascination and romance for artists, filmmakers, musicians, playwrights and pilgrims through the centuries—and now, we hope, for Street View tourists too.



Dogfight: How Apple and Google Went to War and Started a Revolution




Behind the bitter rivalry between Apple and Google—and how it’s reshaping the way we think about technology

The rise of smartphones and tablets has altered the business of making computers. At the center of this change are Apple and Google, two companies whose philosophies, leaders, and commercial acumen have steamrolled the competition. In the age of Android and the iPad, these corporations are locked in a feud that will play out not just in the marketplace but in the courts and on screens around the world.
Fred Vogelstein has reported on this rivalry for more than a decade and has rare access to its major players. In Dogfight, he takes us into the offices and boardrooms where company dogma translates into ruthless business; behind outsize personalities like Steve Jobs, Apple's now-lionized CEO, and Eric Schmidt, Google's executive chairman; and inside the deals, lawsuits, and allegations that mold the way we communicate. Apple and Google are poaching each other's employees. They bid up the price of each other's acquisitions out of spite, and they forge alliances with major players like Facebook and Microsoft in pursuit of market dominance.
Dogfight reads like a novel: vivid nonfiction with never-before-heard details. This is more than a story about what devices will replace our phones and laptops. It’s about who will control the content on those devices and where that content will come from—about the future of media in Silicon Valley, New York, and Hollywood.
By Fred Vogelstein
Publication date: November 12, 2013
Source:
http://www.amazon.com/Dogfight-Apple-Google-Started-Revolution-ebook/dp/B00BIV1R98

The Experiments Most Likely to Shake Up the Future of Physics

The current era of particle physics is over. When scientists at CERN announced last July that they had found the Higgs boson – which is responsible for giving all other particles their mass – they uncovered the final missing piece in the framework that accounts for the interactions of all known particles and forces, a theory known as the Standard Model.
And that's a good thing, right? Maybe not.
The prized Higgs particle, physicists assumed, would help steer them toward better theories, ones that fix the problems known to plague the Standard Model. Instead, it has thrown the field into a confusing situation.
“We’re sitting on a puzzle that is difficult to explain,” said particle physicist Maria Spiropulu of Caltech, who works on one of the LHC's main Higgs-finding experiments, CMS.
It may sound strange, but physicists were hoping, maybe even expecting, that the Higgs would not turn out to be like they predicted it would be. At the very least, scientists hoped the properties of the Higgs would be different enough from those predicted under the Standard Model that they could show researchers how to build new models. But the Higgs' mass proved stubbornly normal, almost exactly in the place the Standard Model said it would be.
To make matters worse, scientists had hoped to find evidence for other strange particles. These could have pointed in the direction of theories beyond the Standard Model, such as the current favorite supersymmetry, which posits the existence of a heavy doppelganger to all the known subatomic bits like electrons, quarks, and photons.
Instead, they were disappointed by being right. So how do we get out of this mess? More data!
Over the next few years, experimentalists will be churning out new results, which may be able to answer questions about dark matter, the properties of neutrinos, the nature of the Higgs, and perhaps what the next era of physics will look like. Here we take a look at the experiments that you should be paying attention to. These are the ones scientists are the most excited about because they might just form the next cracks in modern physics.

By Adam Mann
Source:
http://www.wired.com/wiredscience/2013/11/future-physics-experiments/

Premise

Premise is building machinery to improve global economic transparency.

Premise's software and mobile infrastructure collect millions of discrete data points every day from thousands of local sources, enabling our clients, who are among the world's largest institutions, to understand and navigate unprecedented volatility in global inflation, industry competitive dynamics, and food security. We are based in San Francisco with a presence in 30 countries, and we are backed by some of the most forward-looking investors, including Google Ventures, Harrison Metal, and Andreessen Horowitz.

Source and more:
http://www.premise.com/about.html

COMMUNIQUÉ ON THE PROCEDURES AND PRINCIPLES REGARDING THE ESTABLISHMENT, DUTIES AND OPERATIONS OF CYBER INCIDENT RESPONSE TEAMS

11 November 2013, Monday
Official Gazette
Issue: 28818
COMMUNIQUÉ
From the Ministry of Transport, Maritime Affairs and Communications:
COMMUNIQUÉ ON THE PROCEDURES AND PRINCIPLES REGARDING THE ESTABLISHMENT, DUTIES AND OPERATIONS OF CYBER INCIDENT RESPONSE TEAMS
PART ONE
Purpose and Scope, Basis, Definitions
Purpose and scope
ARTICLE 1 (1) The purpose and scope of this Communiqué is to ensure that services are carried out effectively and efficiently by laying down the procedures and principles governing the establishment, duties, and operations of Cyber Incident Response Teams.

Power Plants and Other Vital Systems Are Totally Exposed on the Internet



What do the controls for two hydroelectric plants in New York, a generator at a Los Angeles foundry, and an automated feed system at a Pennsylvania pig farm all have in common? What about a Los Angeles pharmacy’s prescription system and the surveillance cameras at a casino in the Czech Republic?
They’re all exposed on the internet, without so much as a password to block intruders from accessing them.
Despite all of the warnings in recent years about poorly configured systems exposing sensitive data and controls to the internet, researchers continue to find machines with gaping doors left open and a welcome mat laid out for hackers.
The latest crop comes courtesy of San Francisco-based independent security researcher Paul McMillan, who scanned the entire IPv4 address space (minus government agencies and universities) and found unsecured remote management software running on 30,000 computers.
McMillan searched for port 5900 — a port generally used by Virtual Network Computing systems, or VNC, that are used to control computers remotely. His automated scan took just 16 minutes and used a tool McMillan crafted from combining two existing tools – Masscan to do the port scanning and VNCsnapshot to take screenshots of each system the scan found. He looked only at VNC installations that had no authentication.
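
To make that concrete, here is a minimal sketch, in Python, of how a scanner recognizes a VNC service. It is not McMillan's actual Masscan/VNCsnapshot pipeline: every VNC server simply opens the conversation by sending a 12-byte RFB protocol-version banner as soon as a client connects to port 5900. The target address below is a hypothetical documentation-range IP; only probe machines you are authorized to test.

```python
# Minimal VNC banner grab: connect to port 5900 and read the 12-byte
# RFB ProtocolVersion string that every VNC server sends first.
import socket

def probe_vnc(host, port=5900, timeout=2.0):
    """Return the RFB version banner if host runs VNC, else None."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            banner = s.recv(12)              # e.g. b"RFB 003.008\n"
            if banner.startswith(b"RFB "):
                return banner.decode("ascii").strip()
    except OSError:
        pass                                 # closed port, timeout, refused
    return None

print(probe_vnc("192.0.2.1"))                # hypothetical target for illustration
```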
Some of the systems are easily identified, since the name of the company appears somewhere on the screen. Many of the systems, however, are unidentifiable since only their IP address is known (often it’s just the IP address of the user’s internet service provider). The nature of the system exposed is also not always clear from the screenshots McMillan’s tool collected. Many of them simply show cartoon schematics of a ventilation system or a factory’s conveyor belts, making it difficult to identify the nature of the operation.
Others were readily identifiable. Mary Longenecker of Creek Place Farms was alarmed to learn that her pig-feeding system was accessible to anyone. The machine mixes and doles out the feed to the Berkshire pigs on her Pennsylvania farm.
“That’s the brains of our operation because it’s so automated,” Longenecker told WIRED. “If someone pressed the stop button, it halts making feed in the entire system, or they could change the feed rations in all of the recipes and really mess things up.”
There’s also the milk inventory controls for a Holstein farm in British Columbia, and the records and appointment system for a string of veterinary clinics in the United Kingdom identifying pets and their owners and the records of their care. One system appears to monitor and control the ventilation for underground miners in Romania, while another displays a view of the refrigeration system for a food service company in Pennsylvania that provides lunches to schools and other facilities. Another appears to be the controls for an internet radio station in Bulgaria.
“A lot of the infrastructure that shows up is there because the software maker had it poke holes in the firewalls for this protocol, but other protocols aren’t showing through that firewall,” McMillan says. “So I think a lot of people think this stuff is behind their firewall” and therefore safe.
Although the systems can be configured to require authentication for access, McMillan found 30,000 systems that had no authentication.
Among ones he found exposed were cash register and point-of-sale systems showing customer purchases and credit card numbers, billboard control systems in South Korea, a system for tracking which exits are open and closed at several elderly residential housing units in New York, several car wash systems, as well as a number of pharmacies, including one in Los Angeles that was exposing full details of customers — their date of birth, home address, contact phone number and the kind of prescription they obtained. One record captured by the screenshot tool identified a 27-year-old female patient who obtained birth control from the pharmacy.
McMillan isn’t sure why the pharmacy data showed up — a violation of federal HIPAA regulations that tightly control who can access patient data — but he suspects the pharmacy may have been using remote management software to monitor employee activity on the computer and wasn’t aware that it was also making the computer desktop accessible to anyone on the internet. A number of the control systems he found also appear to be using TeamViewer to allow manufacturers to monitor and troubleshoot the systems for their customers. A spokesman for TeamViewer, however, says that the software requires a password by default for access.
Also caught in the scan were a number of desktops of random users who had VNC on their systems. One desktop capture showed the computer owner playing World of Warcraft, another was downloading TV shows, a third was in the midst of making a Western Union money transfer while another was attempting to log into a bitcoin mining account. Another user in California — perhaps a staff member in a physician’s office — was in the midst of writing an email about a patient when McMillan’s screenshot tool captured the text. McMillan’s scan also captured an image of three children in pajamas apparently opening presents on Christmas morning. WIRED contacted the ISP, who contacted the owner of the computer in South Dakota, who believes the screen capture was taken while he was looking at a picture of his grandchildren.
McMillan initially posted all of the screenshots online that his scan had captured. But he pulled them down quickly after other security researchers criticized him for exposing the vulnerable systems. He has provided the information to US CERT and to ICS-CERT so that they can contact the owners or their ISPs and let them know that their systems are vulnerable. He’s also prepared a password-protected portal with all of the images sorted by IP address and country so that other researchers can help him contact the owners.
A selection of screenshots from some of the systems appear in the gallery above, with sensitive details blurred by WIRED.

By Kim Zetter
Source and more:
http://www.wired.com/threatlevel/2013/11/internet-exposed

A (relatively easy to understand) primer on elliptic curve cryptography

Everything you wanted to know about the next generation of public key crypto.

Elliptic curve cryptography (ECC) is one of the most powerful but least understood types of cryptography in wide use today. An increasing number of websites make extensive use of ECC to secure everything from customers' HTTPS connections to how they pass data between data centers. Fundamentally, it's important for end users to understand the technology behind any security system in order to trust it. To that end, we looked around to find a good, relatively easy-to-understand primer on ECC in order to share with our users. Finding none, we decided to write one ourselves. That is what follows.
Be warned: this is a complicated subject, and it's not possible to boil it down to a pithy blog post. In other words, settle in for a bit of an epic because there's a lot to cover. If you just want the gist, here's the TL;DR version: ECC is the next generation of public key cryptography, and based on currently understood mathematics, it provides a significantly more secure foundation than first-generation public key cryptography systems like RSA. If you're worried about ensuring the highest level of security while maintaining performance, ECC makes sense to adopt. If you're interested in the details, read on.

The dawn of public key cryptography

The history of cryptography can be split into two eras: the classical era and the modern era. The turning point between the two occurred in 1977, when both the RSA algorithm and the Diffie-Hellman key exchange algorithm were introduced. These new algorithms were revolutionary because they represented the first viable cryptographic schemes where security was based on the theory of numbers; they were the first to enable secure communication between two parties without a shared secret. Cryptography went from being about securely transporting secret codebooks around the world to being able to have provably secure communication between any two parties without worrying about someone listening in on the key exchange.
Whitfield Diffie and Martin Hellman.
Modern cryptography is founded on the idea that the key that you use to encrypt your data can be made public while the key that is used to decrypt your data can be kept private. As such, these systems are known as public key cryptographic systems. The first, and still most widely used of these systems, is known as RSA—named after the initials of the three men who first publicly described the algorithm: Ron Rivest, Adi Shamir, and Leonard Adleman.
What you need for a public key cryptographic system to work is a set of algorithms that is easy to process in one direction but difficult to undo. In the case of RSA, the easy algorithm multiplies two prime numbers. If multiplication is the easy algorithm, its difficult pair algorithm is factoring the product of the multiplication into its two component primes. Algorithms that have this characteristic—easy in one direction, hard the other—are known as trapdoor functions. Finding a good trapdoor function is critical to making a secure public key cryptographic system. Simplistically, the bigger the spread between the difficulty of going one direction in a trapdoor function and going the other, the more secure a cryptographic system based on it will be.
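
To see that asymmetry in action, here is a toy sketch in Python. The primes are illustrative (real RSA primes are hundreds of digits long), and the factoring side uses naive trial division, whose running time grows with the smaller prime.

```python
# The trapdoor asymmetry: multiplying two primes is a single machine
# operation, but recovering them by trial division takes ~p steps.
import time

p, q = 1_000_003, 1_000_033     # toy primes for illustration only
n = p * q                       # the "easy" direction: instant

def factor(n):
    """Naive trial division - the 'hard' direction, even at toy sizes."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1                 # n itself was prime

start = time.perf_counter()
print(factor(n))                                   # (1000003, 1000033)
print(f"{time.perf_counter() - start:.3f}s to undo one multiplication")
```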

A toy RSA algorithm

The RSA algorithm is the most popular and best understood public key cryptography system. Its security relies on the fact that factoring is slow and multiplication is fast. What follows is a quick walk-through of what a small RSA system looks like and how it works.
In general, a public key encryption system has two components, a public key and a private key. Encryption works by taking a message and applying a mathematical operation to it to get a random-looking number. Decryption takes the random looking number and applies a different operation to get back to the original number. Encryption with the public key can only be undone by decrypting with the private key.
Computers don't do well with arbitrarily large numbers. We can make sure that the numbers we are dealing with do not get too large by choosing a maximum number and only dealing with numbers less than the maximum. We can treat the numbers like the numbers on an analog clock. Any calculation that results in a number larger than the maximum gets wrapped around to a number in the valid range.
In RSA, this maximum value (call it max) is obtained by multiplying two random prime numbers. The public and private keys are two specially chosen numbers that are greater than zero and less than the maximum value (call them pub and priv). To encrypt a number, you multiply it by itself pub times, making sure to wrap around when you hit the maximum. To decrypt a message, you multiply it by itself priv times, and you get back to the original number. It sounds surprising, but it actually works. This property was a big breakthrough when it was discovered.
To create an RSA key pair, first randomly pick the two prime numbers to obtain the maximum (max). Then pick a number to be the public key pub. As long as you know the two prime numbers, you can compute a corresponding private key priv from this public key. This is how factoring relates to breaking RSA—factoring the maximum number into its component primes allows you to compute someone's private key from the public key and decrypt their private messages.
Let's make this more concrete with an example. Take the prime numbers 13 and 7. Their product gives us our maximum value of 91. Let's take our public encryption key to be the number 5. Then using the fact that we know 7 and 13 are the factors of 91 and applying an algorithm called the Extended Euclidean Algorithm, we get that the private key is the number 29.
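
Here is a short sketch of that key-generation step with the article's toy parameters: once the factors 7 and 13 are known, the Extended Euclidean Algorithm yields the private key as the modular inverse of the public key modulo (p-1)×(q-1).

```python
# Deriving priv = 29 from pub = 5 once the factors of 91 are known.
def egcd(a, b):
    """Return (g, x, y) such that g = gcd(a, b) and a*x + b*y = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

p, q, pub = 13, 7, 5
phi = (p - 1) * (q - 1)         # 72
g, x, _ = egcd(pub, phi)
assert g == 1                   # pub must share no factor with phi
priv = x % phi
print(priv)                     # 29, matching the article
```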
These parameters (max: 91, pub: 5, priv: 29) define a fully functional RSA system. You can take a number and multiply it by itself 5 times to encrypt it, then take that number and multiply it by itself 29 times and you get the original number back.
Let's use these values to encrypt the message "CLOUD".
In order to represent a message mathematically, we have to turn the letters into numbers. A common representation of the Latin alphabet is UTF-8. Each character corresponds to a number.
Under this encoding, CLOUD is 67, 76, 79, 85, 68. Each of these numbers is smaller than our maximum of 91, so we can encrypt them individually. Let's start with the first letter.
We have to multiply it by itself five times to get the encrypted value.
67×67 = 4489 = 30 *
*Since 4489 is larger than max, we have to wrap it around. We do that by dividing by 91 and taking the remainder.
4489 = 91×49 + 30
30×67 = 2010 = 8
8×67 = 536 = 81
81×67 = 5427 = 58
This means the encrypted version of 67 (or C) is 58.
Repeating the process for each of the letters, we get that the encrypted message CLOUD becomes:
58, 20, 53, 50, 87
To decrypt this scrambled message, we take each number and multiply it by itself 29 times:
58×58 = 3364 = 88 (Remember, we wrap around when the number is greater than max.)
88×58 = 5104 = 8
… (twenty-five more multiplications, always wrapping around at 91, bring us to the 28th power, which is 9) …
9×58 = 522 = 67
Voila, we're back to 67. This works with the rest of the numbers, resulting in the original message.
The takeaway is that you can take a number, multiply it by itself a number of times to get a random-looking number, and then multiply that number by itself a secret number of times to get back to the original number.
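
Putting the walk-through together, here is a minimal sketch of the full toy system. It is insecure by design, and it uses Python's built-in modular exponentiation in place of the repeated multiply-and-wrap steps shown above.

```python
# Toy RSA with the article's parameters: max 91, pub 5, priv 29.
max_val, pub, priv = 91, 5, 29

def encrypt(m):
    return pow(m, pub, max_val)   # m multiplied by itself pub times, wrapping at 91

def decrypt(c):
    return pow(c, priv, max_val)  # c multiplied by itself priv times, wrapping at 91

message = [ord(ch) for ch in "CLOUD"]               # [67, 76, 79, 85, 68]
ciphertext = [encrypt(m) for m in message]
print(ciphertext)                                   # [58, 20, 53, 50, 87]
print("".join(chr(decrypt(c)) for c in ciphertext)) # CLOUD
```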

By Nick Sullivan
Source and read more:
http://arstechnica.com/security/2013/10/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/

Risk-based Authentication: A Primer

Advanced risk-based authentication techniques can reduce an organization’s exposure to potentially costly, reputation-damaging information security breaches.

Unauthorized access to sensitive data presents a pervasive threat to an organization’s brand equity, competitive posture, and reputation. Given today’s evolving threat landscape, traditional identity and access management technologies no longer suffice. Corporate leaders are justifiably concerned about the impact of a security incident, and pressure is mounting to not only detect but, more importantly, prevent threats. Fortunately, next-generation identity and access management solutions employing advanced risk-based authentication techniques can help.

These solutions work by developing a risk score for each log-in attempt, and then weighing this score against allowable risk thresholds for various systems. Adapting authentication levels based on risk reduces the fallout organizations experience when the single form of authentication they rely on (such as a password or biometric scanner) gets compromised.

The risk score estimates the risk associated with a log-in attempt based on a user’s typical log-in and usage profile, taking into account their device and geographic location, the system they’re trying to access, the time of day they typically log in, their device’s IP address, and even their typing speed. An employee logging into a CRM system using the same laptop, at roughly the same time of day, from the same location and IP address will have a low risk score. By contrast, an attempt to access a finance system from a tablet at night in Bali could potentially yield an elevated risk score.

Risk thresholds for individual systems are established based on the sensitivity of the information they store and the impact if the system were breached. Systems housing confidential financial data, for example, will have a low risk threshold.

If the risk score for a user's access attempt exceeds the system's risk threshold, authentication controls are automatically elevated, and the user may be required to provide a higher level of authentication, such as a PIN or token. If the risk score is too high, the attempt may be rejected outright.
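
The sketch below illustrates those mechanics. It is not any vendor's actual engine; the profile fields, factor weights, and thresholds are illustrative assumptions.

```python
# Toy risk-based authentication: score a log-in attempt against the
# user's typical profile, then pick an authentication level by threshold.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    device_id: str
    country: str
    ip_prefix: str
    hour: int                    # 0-23, local time of the attempt

def risk_score(a, profile):
    """Add a penalty for each attribute that deviates from the profile."""
    score = 0
    if a.device_id not in profile["known_devices"]:
        score += 40
    if a.country != profile["home_country"]:
        score += 30
    if a.ip_prefix not in profile["usual_ip_prefixes"]:
        score += 20
    start, end = profile["active_hours"]
    if not (start <= a.hour <= end):
        score += 10
    return score

def required_auth(score, threshold):
    if score <= threshold:
        return "password"        # within the system's risk tolerance
    if score <= threshold + 40:
        return "password+token"  # elevated risk: demand a second factor
    return "reject"              # far above threshold: deny outright

profile = {"known_devices": {"laptop-7f3"}, "home_country": "US",
           "usual_ip_prefixes": {"198.51.100."}, "active_hours": (8, 19)}

# The article's "tablet at night in Bali" case, against a low-threshold
# finance system:
attempt = LoginAttempt("tablet-new", "ID", "203.0.113.", hour=23)
score = risk_score(attempt, profile)
print(score, required_auth(score, threshold=30))    # 100 reject
```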

The use cases in the infographic accompanying the original article (linked below) illustrate how risk-based authentication systems work.


By Irfan Saif
Source and more:
http://deloitte.wsj.com/cio/2013/10/30/risk-based-authentication-a-primer/

A Different Erkan Akdemir, and a Lesson on the History of Telecommunications in Turkey

This week our guest in ITL 537, Innovative Approaches in the ICT Sector, was Avea CEO Erkan Akdemir. Drawing on his own experience, he shed light on the past, present, and future of the telecommunications sector in Turkey, and we learned for the first time how closely his life and career are intertwined with the history of that sector. This evening we truly got to know a different Erkan Akdemir; we laughed a lot and learned a great deal together.



KU Leuven Faculty of Law is looking for a candidate for the position of full-time professor in ICT Law


The Faculty of Law invites applications for a fulltime position as a member of the Senior Academic Staff in the field of ICT law within the Interdisciplinary Centre for Law & ICT (ICRI: http://www.law.kuleuven.be/icri), a research Centre of the Faculty of Law of KU Leuven.
ICRI performs research on legal issues related to the digital society, in particular in the fields of competition law in the ICT market, regulation of electronic communications networks and services, regulation of audiovisual media and of information society services, electronic commerce, information security, digital evidence, intellectual rights protection in the digital environment, privacy and personal data protection, and cybercrime. With a group of 20-25 junior and senior legal researchers, it participates in various interdisciplinary projects at the EU (currently FP7), national, and regional levels. ICRI belongs to the Security Department of iMinds (http://www.iminds.be), formerly IBBT, and to the Leuven Center for Information and Communication Technology (LICT: www.kuleuven.be/lict). It coordinates the Belgian Cybercrime Centre of Excellence for Training, Research & Education (B-CCentre: http://www.b-ccentre.be) and participates in the Policy Research Centre on Media (Steunpunt Media) and the Policy Research Centre on Media Literacy (Steunpunt Mediawijsheid), both funded by the Flemish Government.
ICRI is committed to contributing to an adequate regulatory and policy framework for the emerging digital society. Its research focuses on the design of innovative legal solutions and is characterized by an intra- and interdisciplinary approach, constantly striving for cross-fertilization between legal, technical, economic, and socio-cultural perspectives. By conducting both applied and fundamental legal research in a spirit of academic freedom and freedom of inquiry, ICRI aspires to a place among the centres of excellence in the area of law & ICT in Europe and beyond.
Based on its extensive research expertise, ICRI provides specialized education programmes and courses in Dutch and English, such as the specialized Master (LL.M.) in IP / ICT Law, organized on the Brussels KU Leuven campus (http://www.law.kuleuven.be/icri/psiml/).
The assignment consists of teaching in an academic context, scientific research, and additional duties of an academic nature, along with service to the wider society.
 
Source and more:

Meet “badBIOS,” the mysterious Mac and PC malware that jumps airgaps

Three years ago, security consultant Dragos Ruiu was in his lab when he noticed something highly unusual: his MacBook Air, on which he had just installed a fresh copy of OS X, spontaneously updated the firmware that helps it boot. Stranger still, when Ruiu then tried to boot the machine off a CD-ROM, it refused. He also found that the machine could delete data and undo configuration changes with no prompting. He didn't know it then, but that odd firmware update would become a high-stakes malware mystery that would consume most of his waking hours.
In the following months, Ruiu observed more odd phenomena that seemed straight out of a science-fiction thriller. A computer running the Open BSD operating system also began to modify its settings and delete its data without explanation or prompting. His network transmitted data specific to the Internet's next-generation IPv6 networking protocol, even from computers that were supposed to have IPv6 completely disabled. Strangest of all was the ability of infected machines to transmit small amounts of network data with other infected machines even when their power cords and Ethernet cables were unplugged and their Wi-Fi and Bluetooth cards were removed. Further investigation soon showed that the list of affected operating systems also included multiple variants of Windows and Linux.
"We were like, 'Okay, we're totally owned,'" Ruiu told Ars. "'We have to erase all our systems and start from scratch,' which we did. It was a very painful exercise. I've been suspicious of stuff around here ever since."
In the intervening three years, Ruiu said, the infections have persisted, almost like a strain of bacteria that's able to survive extreme antibiotic therapies. Within hours or weeks of wiping an infected computer clean, the odd behavior would return. The most visible sign of contamination is a machine's inability to boot off a CD, but other, more subtle behaviors can be observed when using tools such as Process Monitor, which is designed for troubleshooting and forensic investigations.
Another intriguing characteristic: in addition to jumping "airgaps" designed to isolate infected or sensitive machines from all other networked computers, the malware seems to have self-healing capabilities.
"We had an air-gapped computer that just had its [firmware] BIOS reflashed, a fresh disk drive installed, and zero data on it, installed from a Windows system CD," Ruiu said. "At one point, we were editing some of the components and our registry editor got disabled. It was like: wait a minute, how can that happen? How can the machine react and attack the software that we're using to attack it? This is an air-gapped machine and all of a sudden the search function in the registry editor stopped working when we were using it to search for their keys."
Over the past two weeks, Ruiu has taken to Twitter, Facebook, and Google Plus to document his investigative odyssey and share a theory that has captured the attention of some of the world's foremost security experts. The malware, Ruiu believes, is transmitted through USB drives to infect the lowest levels of computer hardware. With the ability to target a computer's Basic Input/Output System (BIOS), Unified Extensible Firmware Interface (UEFI), and possibly other firmware standards, the malware can attack a wide variety of platforms, escape common forms of detection, and survive most attempts to eradicate it.
But the story gets stranger still. In posts here, here, and here, Ruiu posited another theory that sounds like something from the screenplay of a post-apocalyptic movie: "badBIOS," as Ruiu dubbed the malware, has the ability to use high-frequency transmissions passed between computer speakers and microphones to bridge airgaps.

Bigfoot in the age of the advanced persistent threat

At times as I've reported this story, its outline has struck me as the stuff of urban legend, the advanced persistent threat equivalent of a Bigfoot sighting. Indeed, Ruiu has conceded that while several fellow security experts have assisted his investigation, none has peer reviewed his process or the tentative findings that he's beginning to draw. (A compilation of Ruiu's observations is here.)
Also unexplained is why Ruiu would be on the receiving end of such an advanced and exotic attack. As a security professional, the organizer of the internationally renowned CanSecWest and PacSec conferences, and the founder of the Pwn2Own hacking competition, he is no doubt an attractive target to state-sponsored spies and financially motivated hackers. But he's no more attractive a target than hundreds or thousands of his peers, who have so far not reported the kind of odd phenomena that has afflicted Ruiu's computers and networks.
In contrast to the skepticism that's common in the security and hacking cultures, Ruiu's peers have mostly responded with deep-seated concern and even fascination to his dispatches about badBIOS.
"Everybody in security needs to follow @dragosr and watch his analysis of #badBIOS," Alex Stamos, one of the more trusted and sober security researchers, wrote in a tweet last week. Jeff Moss—the founder of the Defcon and Blackhat security conferences who in 2009 began advising Department of Homeland Security Secretary Janet Napolitano on matters of computer security—retweeted the statement and added: "No joke it's really serious." Plenty of others agree.
"Dragos is definitely one of the good reliable guys, and I have never ever even remotely thought him dishonest," security researcher Arrigo Triulzi told Ars. "Nothing of what he describes is science fiction taken individually, but we have not seen it in the wild ever."

Been there, done that

Triulzi said he's seen plenty of firmware-targeting malware in the laboratory. A client of his once infected the UEFI-based BIOS of his Mac laptop as part of an experiment. Five years ago, Triulzi himself developed proof-of-concept malware that stealthily infected the network interface controllers that sit on a computer motherboard and provide the Ethernet jack that connects the machine to a network. His research built off of work by John Heasman that demonstrated how to plant hard-to-detect malware known as a rootkit in a computer's peripheral component interconnect, the Intel-developed connection that attaches hardware devices to a CPU.
It's also possible to use high-frequency sounds broadcast over speakers to send network packets. Early networking standards used the technique, said security expert Rob Graham. Ultrasonic-based networking is also the subject of a great deal of research, including this project by scientists at MIT.
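
As a rough illustration of why researchers treat acoustic networking as plausible, here is a toy sketch of binary frequency-shift keying over two near-ultrasonic tones, decoded with an FFT. The frequencies, symbol rate, and lack of framing or error correction are illustrative assumptions, not anything badBIOS is known to use.

```python
# Toy acoustic FSK: bit 0 -> 18 kHz tone, bit 1 -> 19 kHz tone.
import numpy as np

RATE = 44100            # common sound-card sample rate
F0, F1 = 18_000, 19_000 # tone frequencies for bits 0 and 1
SYMBOL = 0.05           # seconds per bit

def modulate(bits):
    t = np.arange(int(RATE * SYMBOL)) / RATE
    return np.concatenate(
        [np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits])

def demodulate(signal):
    n = int(RATE * SYMBOL)
    freqs = np.fft.rfftfreq(n, 1 / RATE)
    i0, i1 = np.argmin(abs(freqs - F0)), np.argmin(abs(freqs - F1))
    bits = []
    for i in range(0, len(signal) - n + 1, n):
        spectrum = np.abs(np.fft.rfft(signal[i:i + n]))
        # Compare the energy near each tone to decide the bit.
        bits.append(1 if spectrum[i1] > spectrum[i0] else 0)
    return bits

bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert demodulate(modulate(bits)) == bits   # round trip through "audio"
```

In practice the waveform would also have to survive a speaker, the air, and a microphone, so a real implementation would additionally need synchronization and error correction.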
Of course, it's one thing for researchers in the lab to demonstrate viable firmware-infecting rootkits and ultra high-frequency networking techniques. But as Triulzi suggested, it's another thing entirely to seamlessly fuse the two together and use the weapon in the real world against a seasoned security consultant. What's more, use of a USB stick to infect an array of computer platforms at the BIOS level rivals the payload delivery system found in the state-sponsored Stuxnet worm unleashed to disrupt Iran's nuclear program. And the reported ability of badBIOS to bridge airgaps also has parallels to Flame, another state-sponsored piece of malware that used Bluetooth radio signals to communicate with devices not connected to the Internet.
"Really, everything Dragos reports is something that's easily within the capabilities of a lot of people," said Graham, who is CEO of penetration testing firm Errata Security. "I could, if I spent a year, write a BIOS that does everything Dragos said badBIOS is doing. To communicate over ultrahigh frequency sound waves between computers is really, really easy."
Coincidentally, Italian newspapers this week reported that Russian spies attempted to monitor attendees of last month's G20 economic summit by giving them memory sticks and recharging cables programmed to intercept their communications.

By Dan Goodin
Source and read more:
http://arstechnica.com/security/2013/10/meet-badbios-the-mysterious-mac-and-pc-malware-that-jumps-airgaps/

We’re About to Lose Net Neutrality — And the Internet as We Know It

Net neutrality is a dead man walking. The execution date isn’t set, but it could be days, or months (at best). And since net neutrality is the principle forbidding huge telecommunications companies from treating users, websites, or apps differently — say, by letting some work better than others over their pipes — the dead man walking isn’t some abstract or far-removed principle just for wonks: It affects the internet as we all know it.
Once upon a time, companies like AT&T, Comcast, Verizon, and others declared a war on the internet’s foundational principle: that its networks should be “neutral” and users don’t need anyone’s permission to invent, create, communicate, broadcast, or share online. The neutral and level playing field provided by permissionless innovation has empowered all of us with the freedom to express ourselves and innovate online without having to seek the permission of a remote telecom executive.
But today, that freedom won’t survive much longer, because a federal court — the second most powerful court in the nation behind the Supreme Court, the DC Circuit — appears set to strike down the nation’s net neutrality law, a rule adopted by the Federal Communications Commission in 2010. Some will claim the new solution “splits the baby” in a way that somehow doesn’t kill net neutrality and so we should be grateful. But make no mistake: despite eight years of public and political activism by multitudes fighting for freedom on the internet, a court decision may soon take it away.

Game of Loopholes and Rules

How did we get here?
The CEO of AT&T told an interviewer back in 2005 that he wanted to introduce a new business model to the internet: charging companies like Google and Yahoo! to reliably reach internet users on the AT&T network. Keep in mind that users already pay to access the internet and that Google and Yahoo! already pay other telecom companies — often called backbone providers — to connect to these internet users. [Disclosure: I have done legal work for several companies supporting network neutrality, including Google.]
But AT&T wanted to add an additional toll, beyond what it already made from the internet. Shortly after that, a Verizon executive voiced agreement, hoping to end what he called tech companies’ “free lunch”. It turns out that around the same time, Comcast had begun secretly trialing services to block some of the web’s most popular applications that could pose a competitive threat to Comcast, such as BitTorrent.
Yet the phone and cable companies tried to dress up their plans as a false compromise. Counterintuitively, they supported telecommunications legislation in 2006 that would authorize the FCC to stop phone and cable companies from blocking websites.
There was a catch, however. The bills included an exception that swallowed the rule: the FCC would be unable to stop cable and phone companies from taxing innovators or providing worse service to some sites and better service to others. Since we know internet users tend to quit using a website or application if it loads even just a few seconds slower than a competitor’s version, this no-blocking rule would essentially have enabled the phone and cable companies to discriminate by picking website/app/platform winners and losers. (Congress would merely enact the loophole. Think of it as a safe harbor for discriminating online.)
Luckily, consumer groups, technology companies, political leaders, and American citizens saw through the nonsense and rallied around a principle to preserve the internet’s openness. They advocated for one simple, necessary rule — a nondiscrimination principle that became known as “network neutrality”. This principle would forbid phone and cable companies not only from blocking — but also from discriminating between or entering in special business deals to the benefit of — some sites over others.
Both sides battled out the issues before Congress, federal agencies, and in several senate and presidential campaigns over the next five years. These fights culminated in the 2010 FCC decision that included the nondiscrimination rule.
Unfortunately, the rule still had major loopholes — especially when it came to mobile networks. It also was built, to some extent, on a shaky political foundation because the then-FCC chairman repeatedly folded when facing pressure. Still, the adopted rule was better than nothing, and it was a major advance over AT&T’s opening bid in 2005 of a no-blocking rule.
As a result, Verizon took the FCC to court to void the 2010 FCC rule. Verizon went to court to attack the part of the rule forbidding them from discriminating among websites and applications; from setting up — on what we once called the information superhighway — the equivalents of tollbooths, fast lanes, and dirt roads.

There and Back Again

So that’s where we are today — waiting for the second most powerful court in the nation, the DC Circuit, to rule in Verizon’s case. During the case’s oral argument, back in early September, corporate lobbyists, lawyers, financial analysts, and consumer advocates packed into the courtroom: some sitting, some standing, some relegated to an overflow room.
Since then, everyone interested in internet freedom has been waiting for an opinion — including everyday folks who search the web or share their thoughts in 140 characters; and including me, who argued the first (losing) network neutrality case before the DC Circuit in 2010.

By Marvin Ammori
Source and read more:
http://www.wired.com/opinion/2013/11/so-the-internets-about-to-lose-its-net-neutrality/