Classification of Services in the Digital Economy

The classification of services in the digital economy is critical for doing business, yet it remains a particularly complex regulatory matter resting on a manifold set of issues. When the services classification scheme of the General Agreement on Trade in Services (GATS) was drafted in the early 1990s, convergence processes had not yet unfolded and the internet was still in its infancy, not yet a part of daily life. Policy makers are therefore now struggling with the problem of regulating trade in electronic services and are in search of a future-oriented solution for classifying them in multilateral and preferential trade agreements. In late fall 2011, the authors of this study were mandated by the European Union, Delegation to Vietnam, in the context of the Multilateral Trade Assistance Project 3 (MUTRAP 3), to prepare a report clarifying the classification of services in the information/digital economy, to assess the impact of any classification decision on the domestic and external relations policy of Vietnam, and to discuss the relevant issues with local experts during three on-site visits.

Authors: Rolf H. Weber; Mira Burri
Source: http://www.amazon.com/Classification-Services-Digital-Economy-Weber/dp/3642316344/ref=sr_1_1?s=books&ie=UTF8&qid=1349000261&sr=1-1&keywords=digital+economy#_

The Digital Rights Movement: The Role of Technology in Subverting Digital Copyright

The movement against restrictive digital copyright protection arose largely in response to the excesses of the Digital Millennium Copyright Act (DMCA) of 1998. In The Digital Rights Movement, Hector Postigo shows that what began as an assertion of consumer rights to digital content has become something broader: a movement concerned not just with consumers and gadgets but with cultural ownership. Increasingly stringent laws and technological measures are more than inconveniences; they lock up access to our "cultural commons." Postigo describes the legislative history of the DMCA and how policy "blind spots" produced a law at odds with existing and emerging consumer practices. Yet the DMCA established a political and legal rationale brought to bear on digital media, the Internet, and other new technologies. Drawing on social movement theory and science and technology studies, Postigo presents case studies of resistance to increased control over digital media, describing a host of tactics that range from hacking to lobbying. Postigo discusses the movement's new, user-centered conception of "fair use" that seeks to legitimize noncommercial personal and creative uses such as copying legitimately purchased content and remixing music and video tracks. He introduces the concept of technological resistance--when hackers and users design and deploy technologies that allow access to digital content despite technological protection mechanisms--as the flip side to the technological enforcement represented by digital copy protection and a crucial tactic for the movement.

Author: Hector Postigo
Source: http://www.amazon.com/Digital-Rights-Movement-Technology-Information/dp/0262017954/ref=sr_1_4?s=books&ie=UTF8&qid=1349000029&sr=1-4&keywords=information+economy 

e-Health: Redesigning health in Europe for 2020

"...The report outlines the e-Health Task Force’s conclusions regarding the key issues faced by a fundamental re-organisation of healthcare to make use of already existing information technologies. These solutions are often not medical at all, but rather deal with how in the future we will need to treat data, privacy, research as well as the physician/patient relationship.

Since not only EU citizens but also their data move across borders, we require an EU approach, where we harmonise our legislation so everyone can operate using the same rules. If we fail to do this, we can rest assured that other solutions will be found, either mutually incompatible national rules or private sector initiatives, where our fundamental rights may not be guaranteed.

The task we face is to ensure that in the future all EU citizens have access to a high level of healthcare, anywhere in the Union, and at a reasonable cost to our healthcare systems. To do so, we must make use of solutions offered by information technology already today. This, ultimately, is the fundamental conclusion of the Task Force’s report, Redesigning health in Europe for 2020".
Source of the press release, the e-Health Task Force Report, its Annexes, and an audio/video introduction:
 

Decentralized Information Flow Control for Databases

Privacy and integrity concerns have been mounting in recent years as sensitive data such as medical records, social network records, and corporate and government secrets are increasingly being stored in online systems. The rate of high-profile breaches has illustrated that current techniques are inadequate for protecting sensitive information. Many of these breaches involve databases that handle information for a multitude of individuals, but databases don’t provide practical tools to protect those individuals from each other, so that task is relegated to the application. This dissertation describes a system that improves security in a principled way by extending the database system and the application platform to support information flow control.

Information flow control has been gaining traction as a practical way to protect information in the contexts of programming languages and operating systems. Recent research advocates the decentralized model for information flow control (DIFC), since it provides the necessary expressiveness to protect data for many individuals with varied security concerns. However, despite the fact that most applications implicated in breaches rely on relational databases, there have been no prior comprehensive attempts to extend DIFC to a database system. This dissertation introduces IFDB, which is a database management system that supports DIFC with minimal overhead.

IFDB pioneers the Query by Label model, which provides applications with a simple way to delineate constraints on the confidentiality and integrity of the data they obtain from the database. This dissertation also defines new abstractions for managing information flows in a database and proposes new ways to address covert channels. Finally, the IFDB implementation and case studies with real applications demonstrate that database support for DIFC improves security, is easy for developers to use, and has good performance.
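
The core idea is easy to see in miniature. The following Python sketch (hypothetical names, not IFDB's actual interface) shows label-based row filtering in the spirit of Query by Label: each row carries a secrecy label, and a query issued under a session label silently sees only the rows that label covers:

Label = frozenset  # a secrecy label: the set of tags protecting a row

def query_by_label(table, session_label):
    """Return only the rows whose secrecy label flows to the session label."""
    return [row for label, row in table if label <= session_label]

patients = [
    (Label({"alice_medical"}), {"name": "Alice", "diagnosis": "flu"}),
    (Label({"bob_medical"}),   {"name": "Bob",   "diagnosis": "asthma"}),
    (Label(),                  {"name": "Clinic", "hours": "9-5"}),  # public row
]

# A session acting for Alice sees her row and the public row, but never Bob's;
# the filtering is part of the query model, not something each application
# has to remember to re-implement.
print(query_by_label(patients, Label({"alice_medical"})))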

Author: David Andrew Schultz

Download: pdf.

BibTeX entry:
@phdthesis{schultz12phd,
   author = {David Schultz},
   title = {Decentralized Information Flow Control for Databases},
   school = {MIT},
   type = {{Ph.D.}},
   month = jul,
   year = {2012}
}

Source: http://www.pmg.lcs.mit.edu/pubs/schultz12phd-abstract.html

U.S. Is Tightening Web Privacy Rule to Shield Young

Federal regulators are about to take the biggest steps in more than a decade to protect children online.
The moves come at a time when major corporations, app developers and data miners appear to be collecting information about the online activities of millions of young Internet users without their parents’ awareness, children’s advocates say. Some sites and apps have also collected details like children’s photographs or locations of mobile devices; the concern is that the information could be used to identify or locate individual children.
These data-gathering practices are legal. But the development has so alarmed officials at the Federal Trade Commission that the agency is moving to overhaul rules that many experts say have not kept pace with the explosive growth of the Web and innovations like mobile apps. New rules are expected within weeks.
“Today, almost every child has a computer in his pocket and it’s that much harder for parents to monitor what their kids are doing online, who they are interacting with, and what information they are sharing,” says Mary K. Engle, associate director of the advertising practices division at the F.T.C. “The concern is that a lot of this may be going on without anybody’s knowledge.”
The proposed changes could greatly increase the need for children’s sites to obtain parental permission for some practices that are now popular — like using cookies to track users’ activities around the Web over time. Marketers argue that the rule should not be changed so extensively, lest it cause companies to reduce their offerings for children.
“Do we need a broad, wholesale change of the law?” says Mike Zaneis, the general counsel for the Interactive Advertising Bureau, an industry association. “The answer is no. It is working very well.”
The current federal law, the Children’s Online Privacy Protection Act of 1998, requires operators of children’s Web sites to obtain parental consent before they collect personal information like phone numbers or physical addresses from children under 13. But rapid advances in technology have overtaken the rules, privacy advocates say.
Today, many brand-name companies and analytics firms collect, collate and analyze information about a wide range of consumer activities and traits. Some of those techniques could put children at risk, advocates say.
Under the F.T.C.’s proposals, some current online practices, like getting children under 13 to submit photos of themselves, would require parental consent.
Children who visit McDonald’s HappyMeal.com, for instance, can “get in the picture with Ronald McDonald” by uploading photos of themselves and combining them with images of the clown. Children may also “star in a music video” on the site by uploading photos or webcam images and having the site graft their faces onto dancing cartoon bodies.
But according to children’s advocates, McDonald’s stored these images in directories that were publicly available. Anyone with an Internet connection could check out hundreds of photos of young children, a few of whom were pictured in pajamas in their bedrooms, advocates said.
In a related complaint to the F.T.C. last month, a coalition of advocacy groups accused McDonald’s and four other corporations of violating the 1998 law by collecting e-mail addresses without parental consent. HappyMeal.com, the complaint noted, invites children to share their creations on the site by providing the first names and e-mail addresses of their friends.
“When we tell parents about this they are appalled, because basically what it’s doing is going around the parents’ back and taking advantage of kids’ naïveté,” says Jennifer Harris, the director of marketing initiatives at the Yale Rudd Center for Food Policy and Obesity, a member of the coalition that filed the complaint. “It’s a very unfair and deceptive practice that we don’t think companies should be allowed to do.”
Danya Proud, a spokeswoman for McDonald’s, said in an e-mail that the company placed a “high importance” on protecting privacy, including children’s online privacy. She said that McDonald’s had blocked public access to several directories on the site.
Last year, the F.T.C. filed a complaint against W3 Innovations, a developer of popular iPhone and iPod Touch apps like Emily’s Dress Up, which invited children to design outfits and e-mail their comments to a blog. The agency said that the apps violated the children’s privacy rule by collecting the e-mail addresses of tens of thousands of children without their parents’ permission and encouraging those children to post personal information publicly. The company later settled the case, agreeing to pay a penalty of $50,000 and delete personal data it had collected about children.
It is often difficult to know what kind of data is being collected and shared. Industry trade groups say marketers do not knowingly track young children for advertising purposes. But a study last year of 54 Web sites popular with children, including Disney.go.com and Nick.com, found that many used tracking technologies extensively.
“I was surprised to find that pretty much all of the same technologies used to track adults are being used on kids’ Web sites,” said Richard M. Smith, an Internet security expert in Boston who conducted the study at the request of the Center for Digital Democracy, an advocacy group.
Using a software program called Ghostery, which detects and identifies tracking entities on Web sites, a New York Times reporter recently identified seven trackers on Nick.com — including Quantcast, an analytics company that, according to its own marketing material, helps Web sites “segment out specific audiences you want to sell” to advertisers.
Ghostery found 13 trackers on a Disney game page for kids, including AudienceScience, an analytics company that, according to that company’s site, “pioneered the concept of targeting and audience-based marketing.”
David Bittler, a spokesman for Nickelodeon, which runs Nick.com, says Viacom, the parent company, does not show targeted ads on Nick.com or other company sites for children under 13. But the sites and their analytics partners may collect data anonymously about users for purposes like improving content. Zenia Mucha, a spokeswoman for Disney, said the company does not show targeted ads to children and requires its ad partners to do the same.
Another popular children’s site, Webkinz, says openly that its advertising partners may target visitors with ads based on the collection of “anonymous data.” In its privacy policy, Webkinz describes the practice as “online advanced targeting.”
If the F.T.C. carries out its proposed changes, children’s Web sites would be required to obtain parents’ permission before tracking children around the Web for advertising purposes, even with anonymous customer codes.
Some parents say they are trying to teach their children basic online self-defense. “We don’t give out birth dates to get the free stuff,” said Patricia Tay-Weiss, a mother of two young children in Venice, Calif., who runs foreign language classes for elementary school students. “We are teaching our kids to ask, ‘What is the company getting from you and what are they going to do with that information?’ ”

Hackers Breached Adobe Server in Order to Sign Their Malware

The ongoing security saga involving digital certificates got a new and disturbing wrinkle on Thursday when software giant Adobe announced that attackers breached its code-signing system and used it to sign their malware with a valid digital certificate from Adobe.

Adobe said the attackers signed at least two malicious utility programs with the valid Adobe certificate. The company traced the problem to a compromised build server that had the ability to get code approved through the company’s code-signing system.

Adobe said it was revoking the certificate and planned to issue new certificates for legitimate Adobe products that were also signed with the same certificate, wrote Brad Arkin, senior director of product security and privacy for Adobe, in a blog post.

“This only affects the Adobe software signed with the impacted certificate that runs on the Windows platform and three Adobe AIR applications that run on both Windows and Macintosh,” Arkin wrote. “The revocation does not impact any other Adobe software for Macintosh or other platforms.”

The three affected applications are Adobe Muse, Adobe Story AIR applications, and Acrobat.com desktop services.

The company said it had good reason to believe the signed malware wasn’t a threat to the general population, and that the two malicious programs signed with the certificate are generally used for targeted, rather than broad-based, attacks.

Arkin identified the two pieces of malware signed with the Adobe certificate as “pwdump7 v7.1” and “myGeeksmail.dll.” The company passed them on to anti-virus companies and other security firms so that they could write signatures to detect the malware and protect their customers, according to the post.

Adobe didn’t say when the breach occurred, but noted that it was re-issuing certificates for code that was signed with the compromised signing key after July 10, 2012. Also, a security advisory the company released with its announcement showed that the two malicious programs were signed on July 26 of this year. Adobe spokeswoman Wiebke Lips told Wired that the company received the two malicious samples on the evening of Sept. 12 and immediately began the process of deactivating and revoking the certificate.

The company said the certificate will be re-issued on Oct. 4, but didn’t explain why it would take that long.

Digital certificates are a core part of the trust that exists between software makers and their users. Software vendors sign their code with digital certificates so that computers recognize a program as legitimate code from a trusted source. An attacker who can sign their malware with a valid certificate can slip past protective barriers that prevent unsigned software from installing automatically on a machine.

Revoking the certificate should prevent the signed rogue code from installing without a warning.
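
For readers who want to see the primitive underneath this trust model, here is a minimal sketch in Python using the third-party cryptography package. It is not Adobe's signing infrastructure (real code signing adds certificates, timestamps and revocation checks on top); it only shows why malware fed through a compromised signing system verifies just as cleanly as legitimate code:

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# The vendor's signing key; in practice it lives in a hardware security module.
signing_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = signing_key.public_key()  # shipped to users inside a certificate

code = b"installer bytes ..."
signature = signing_key.sign(code, padding.PKCS1v15(), hashes.SHA256())

# A user's machine checks the signature before installing. The check proves
# only that the vendor's key signed these bytes -- anything signed through the
# compromised system verifies identically, which is why revocation is the remedy.
try:
    public_key.verify(signature, code, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid: treated as trusted vendor code")
except InvalidSignature:
    print("signature invalid: warn or block installation")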

Stuxnet, a sophisticated piece of malware that was designed to sabotage Iran’s nuclear program, was the first malicious code discovered to be using a valid digital certificate. In that case the attackers – believed to have been working for the U.S. and Israel – stole digital certificates from two companies in Taiwan to sign part of their code.

Adobe said that it stored its private keys for signing certificates in a hardware security module and had strict procedures in place for signing code. The intruders breached a build server that had access to the signing system and were able to sign their malicious files that way.

In addition to concerns about the compromised certificate, the breach of the build server raises concerns about the security of Adobe’s source code, which might have been accessible to the attackers. But Arkin wrote that the compromised build server had access to source code for only one Adobe product. The company did not identify the product but said that it was not the Flash Player, Adobe Reader, Shockwave Player or Adobe AIR. Arkin wrote that investigators found no evidence that the intruders had changed source code and that “there is no evidence to date that any source code was stolen.”

Questions about the security of Adobe’s source code came up earlier this month after Symantec released a report about a group of hackers who broke into servers belonging to Google and 33 other companies in 2010. The attackers were after source code for the companies. Adobe was hacked around the same time, but has never indicated if the same attackers that hit Google were responsible for hacking them.

Symantec found evidence that the attackers who struck Google had developed and used an unusually large number of zero-day exploits in subsequent attacks against other companies. The attackers used eight zero-day exploits, five of which were for Adobe’s Flash Player. Symantec said in its report that such a large number of zero-days suggested that the attackers might have gained access to Adobe’s source code. But Arkin insisted at the time that no Adobe software had been stolen.

“We are not aware of any evidence (direct or circumstantial) indicating bad guys have [source code],” he told Wired at the time.
 

Unleashing the Potential of Cloud Computing in Europe - What is it and what does it mean for me?

What is Cloud Computing?
 
‘Cloud computing’ is the storing, processing and use of data on remotely located computers accessed over the internet. Many people use the cloud today without even realising that they do so. Services such as web-based e-mail or social networks may be based on cloud technology. For professional IT users cloud computing means a high degree of flexibility as to the amount of computing power needed. For example, if a service sees increased use, it is very simple to add more capacity to it – something that would take much more time if a company had to install a new physical machine in its own data centre.
 
How does cloud computing work?
 
The user connects his/her computer to the cloud platform through dedicated software. In the cloud, the processing power is provided by big data centres with hundreds of servers and data storage systems that are able to handle practically any computer software (from data processing to video games) that clients might need to use. Sometimes the services are offered free (for example webmail offerings), but most clients can pay flexibly on a pay-per-use basis or by a single monthly fee.
 
Where is my data in the cloud stored?
In a data centre somewhere on the planet. If the physical location is important, users can make sure this is specified in their cloud computing contracts. With regard to others' personal data, the Data Protection Directive requires data to either be stored in the European Economic Area (EEA) or in a territory that has equivalent privacy laws.
 
What are the main advantages of cloud computing for users?
Users do not have to buy software or buy and maintain expensive servers and data storage. This saves on money, office space and in-house IT support staff. Users also have near total flexibility about the storage space and tools they use.
 
Why do we need an EU strategy to unleash the potential of cloud computing?
The economic benefits of pan-European action are much bigger: €160 billion per year, or around €300 per person per year. Today the patchwork of different rules at Member State level increases companies' uncertainty about their legal obligations, thus delaying the adoption of cloud computing. While cloud initiatives in the Member States, such as Andromède in France, G-Cloud in the UK and Trusted Cloud in Germany, are welcome, they are not enough and not the most efficient way to grow the market for everybody's benefit.
 
What are the economic and job gains from a European cloud strategy?
New estimates indicate that cloud computing revenues in the EU could rise to nearly €80 billion by 2020 if policy intervention is successful (more than doubling the growth of the sector). So this strategy is about building a new industry, and better competing against the United States in particular.
More broadly, we expect a net annual gain of €160 billion to EU GDP by 2020 (or a total gain of nearly €600 billion between 2015 and 2020) if the full EU cloud strategy is in place. Without that, economic gains would be two-thirds less.
These benefits largely come from businesses being able to either save money or get access to technology that makes them more productive.
In terms of overall job numbers, we expect to see 3.8 million jobs generated following full implementation of the strategy, against 1.3 million if the regulatory and other policy barriers are not tackled.[1]
 
What is the time line of the actions? How long will it take for concrete change?
The Commission will deliver on the key actions identified in the Communication in 2013, notably in respect of actions on standardisation and certification for cloud computing, the development of safe and fair contract terms and conditions and the European Cloud Partnership. A progress report by the end of 2013 will show whether further policy and legislative initiatives are needed.
 
Who can benefit from cloud computing?
All internet users can benefit and cloud computing could revolutionise many fields.
Surveys show that 81% of businesses already using the cloud reported 10%-20% lower IT costs, while 20% of them reported savings rising to 30% or above.
Many consumers already use basic cloud computing (e.g. Internet-based e-mail accounts). A large storage capacity at no or minimal cost, convenient and ubiquitous access, reduction of expenditures – these are some of the advantages offered by the Cloud.
Cloud computing could bring large gains to the public sector, by making it easier to provide services that are integrated, effective and at lower cost.
Cloud computing could also boost research, as research institutions could complement their in-house dedicated computing infrastructures with those of cloud providers, letting them maintain huge amounts of data and process it much faster. It could likewise boost innovation, as it becomes much easier and cheaper to try out new ideas for IT products or services.
 
How can cloud computing help protect the environment?
Like aviation, computing is growing so rapidly that it is one of the fastest-growing sources of carbon emissions. At the same time, cloud computing is the best way to increase the carbon efficiency of computing. This is because large cloud-related investments can be planned with low-energy servers and green sources of energy much more easily than hundreds of millions of individual computer users can be persuaded to make green choices. In addition, hardware use can be optimised, reducing the number of physical machines needed to perform a given set of tasks.
The European Commission is funding a research project – the Eurocloud server project – whose first results show that it could be possible to cut cloud data centre energy use by 90%, on top of the efficiencies already achieved by switching from desktop and server solutions to cloud solutions.
 
How could cloud computing affect the ICT sector?
If barriers to cloud computing were removed, a study of 1000 European companies shows that:
  • More than 98% of EU companies would start or strengthen cloud usage.
  • The cloud would attract new users: 96% of those EU businesses that are not using the cloud but are currently thinking about it would actually start investing.
  • There would be an increase in demand for IT skills, not only in fundamental areas such as data centre management but also, for example, in digital marketing, app design, social networking and financial health.
Details regarding the European Cloud Partnership (ECP)
 
What is the European Cloud Partnership and what will it do?
The European Cloud Partnership (ECP) will consist of high-level procurement officers from European public bodies and key players from the IT and telecom industry. The ECP will, under the guidance of a Steering Board, bring together public procurement authorities and industry consortia to implement pre-commercial procurement actions. This will allow them to identify public sector cloud computing requirements, to develop specifications for IT procurement, and to procure reference implementations. It will thereby help advance towards common and even joint procurement of cloud computing services by public bodies on the basis of common user requirements. The ECP does not aim at creating a physical cloud computing infrastructure. Rather, via procurement requirements that will be promoted by participating Member States and public authorities for use throughout the EU, its aim is to ensure that the commercial offer of cloud computing in Europe, from both the public and the private sector, is adapted to European needs.
 
How will the European Cloud Partnership (ECP) operate?
A Steering Board will advise on strategic orientations, in particular with regard to public sector adoption of cloud computing services in a way that shapes the market to the benefit of all potential cloud users.
The other key component of the ECP is the implementation level: an initial budget of €10 million has been earmarked for a pre-commercial procurement project in the ICT theme of the FP7 Research Programme.[2] This project will require close coordination and a joining of forces between different public sector actors across several Member States in order to consolidate public sector requirements for the procurement and use of cloud computing services.
 
What is the main mission of the Steering Board of the European Cloud Partnership (ECP)?
The main mission of the Steering Board includes:
  • advising on strategic priorities for positioning cloud computing in Europe as an engine for economic growth, innovation and cost-efficient public services via the European Cloud Partnership;
  • giving recommendations on policy development for secure and interoperable cloud computing that will contribute to the European Digital Single Market.
What are the operational arrangements of the Steering Board of the ECP?
The members of the Steering Board and its Chairperson will be appointed by the Commissioner responsible for the Digital Agenda, and will act in their personal capacities. The Board will meet two or three times per year. The Steering Board may consult with industrial, academic and governmental bodies and experts.
The inaugural meeting of the Steering Board is planned to take place in the last quarter of 2012.
Data protection, security, privacy and user rights
 
How will the strategy help me enforce my rights as a user of cloud services?
One of the key actions of the Strategy is to develop model contract terms and conditions to address issues not covered by the Common European Sales Law such as: data preservation after termination of the contract, data disclosure and integrity, data location and transfer, ownership of the data or direct and indirect liability. Identifying and developing consistent solutions in the area of contract terms and conditions is a way of encouraging wide take up of cloud computing services by increasing consumer trust.
 
How does this strategy relate to the Commission's proposals on data protection?
The concerns of cloud computing providers and users were carefully considered during the preparatory work for the Data Protection Regulation proposed by the Commission in January 2012. The proposed Regulation constitutes a good general basis for the future development of cloud computing.
Given that data protection concerns were identified as one of the most serious barriers to cloud computing take-up, it is all the more important that the Council of Ministers and the European Parliament work towards adopting the proposed Regulation as soon as possible in 2013.
Once the proposed Regulation is adopted, the Commission will make use of the new mechanisms to provide any necessary additional guidance on the application of European data protection law in respect of cloud computing services.
 
What is being done concretely at global level to ensure consistent regulation?
Cloud computing is a global business that demands reinforced international dialogue on safe and seamless cross-border use.
The European Commission is working, through international dialogues on trade, law enforcement, security and cybercrime, to fully reflect the new challenges raised by cloud computing.
These dialogues are pursued in multilateral fora such as the WTO and the OECD to advance common objectives for cloud computing services, and bilaterally with the USA, Japan and other countries.
 
How do I know if my data is stored in Europe or elsewhere?
The contract terms and conditions should address the issue of data location. However, the "take-it-or-leave-it" standard contracts used by many cloud providers today may not include such information. The strategy underlines the need to develop model contract terms and conditions to address issues not covered by the Common European Sales Law, such as, inter alia, data location.
 
What happens to my data if the cloud company I use shuts down?
This will normally be covered by the contract terms and conditions; the need for clearer protection is why the Commission will develop model contract terms and conditions to address issues not covered by the Common European Sales Law.
 
Standards, certification and contracts
Why can't you write the necessary standards yourself? Why do you rely on the industry to make this happen?
Standardisation works best as an industry-led process. The industry is already putting a lot of effort into creating standards that increase interoperability of the clouds.
Standards are emerging but at the moment there is no common agreement as to which standards would guarantee the required interoperability, data portability and reversibility. The Commission wants to identify coherent sets of useful standards to make it easier for the demand and supply sides to organise themselves.
 
When do you hope to launch the certification scheme?
The Commission will work with the support of ENISA and other relevant bodies to assist the development of EU-wide voluntary certification schemes in the area of cloud computing (including data protection) and establish a list of such schemes by 2014.
If it is voluntary, what will you do if companies simply decide not to join?
We will keep working with companies to increase the attractiveness of the scheme. Citizens tell us they want such information, and it should be remembered that certification is not a punishment for companies. It simply gives them a tool to signal their quality and compliance to prospective customers.
 
Does the Cloud Computing Strategy foresee the building of a "European Super-Cloud"?
No, the strategy is not about creating physical infrastructures. But we want to see publicly available cloud offerings that meet European standards not only in regulatory terms, but in terms of being competitive, open and secure.
 
What about the security in the cloud?
Cloud-specific security risks relate to the multi-tenant, shared-resource character of cloud computing (the same physical infrastructure will often serve many different customers of a cloud provider). In the cloud, the client cedes control of security to some extent to the service provider, so it is important to be able to assess whether the cloud service provider complies with the security requirements. This is why certification schemes will play an important role: they help providers signal compliance to prospective users in a reliable way. On the other hand, for customers who are not IT security experts, leaving security in the hands of IT professionals working for the cloud service provider could in fact increase security.
 
Are clouds interoperable? Is it possible to easily change your cloud service provider?
At the moment different cloud offerings are not as interoperable as they could be. Cloud providers might use different operating systems or application interfaces which are not interoperable, meaning that software developed to work with one cloud provider cannot easily be made to work with another. This could lead to dependence on a single service provider, since it is not necessarily easy to move data from one cloud to another ("lock-in").
 
Does the cloud computing strategy address wider security issues?
The strategy does not address the security issues related to the internet and the online environment as such. The Commission will in the coming months address general cyber security challenges in its Strategy for Cyber Security. This forthcoming strategy will address all information society providers, including cloud computing service providers. It will, inter alia, indicate appropriate technical and organisational measures that should be taken to manage security risks. It will also establish obligations to report significant incidents to the competent authorities.
 
Does the cloud computing strategy intend to impede the activities of international cloud providers in Europe?
No. The strategy aims at facilitating Europe's participation in the global growth of cloud computing by: reviewing standard contractual clauses applicable to the transfer of personal data to third countries and adapting them, as needed, to cloud services; and by calling upon national data protection authorities to approve Binding Corporate Rules for cloud providers.[3] Furthermore, the Commission will also build on its ongoing international dialogues with the USA, Japan and other countries as regards key cloud themes.
[1] See: IDC, "Quantitative Estimates of the demand for cloud Computing in Europe and the likely barriers to take-up", February 2012.
[2] See Objective 11.3 of http://cordis.europa.eu/fp7/ict/docs/ict-wp2013-10-7-2013.pdf
[3] The relevant opinions of the Article 29 Working Party (see WP 195 and WP 153) will serve as a basis for a Commission draft. Binding Corporate Rules are one means to allow for legal international data transfers: they govern in an enforceable manner how the different parts of a corporation, regardless of their international location, deal with personal data.
 

EGKS Is Ready to Hold e-General Meetings

The Communiqué on the Electronic General Meeting System to Be Applied at the General Meetings of Joint-Stock Companies, published in Official Gazette No. 28396 of 29 August 2012, sets out the procedures and principles for the establishment and operation of the Electronic General Meeting System (EGKS), together with its technical aspects and security criteria. The system allows shareholders of joint-stock companies to attend general meetings electronically, to make proposals, to express opinions and to vote. The Communiqué enters into force on 1 October 2012.

The Electronic General Meeting System to be launched under this Communiqué has been developed by the Central Registry Agency (Merkezi Kayıt Kuruluşu, MKK) and will begin operating on 1 October 2012. In addition to the technical requirements for the establishment and operation of EGKS, Article 8 of the Communiqué requires that the security of EGKS be certified in a technical report. Article 8 names the Scientific and Technological Research Council of Turkey (TÜBİTAK) as one of the institutions authorised to perform this security audit.

Following MKK's application to TÜBİTAK, the necessary technical audits were carried out and EGKS was found to comply with the criteria set out in the Communiqué.

Under a further provision of the Communiqué, the technical report must be registered and announced in the trade registry. Following the registration and announcement to take place tomorrow, EGKS will, as of 1 October 2012, assist companies listed on the stock exchange in holding their general meetings electronically.

 

The NFC Report

The Near Field Communications (NFC) Report is an in-depth research study that provides information on every aspect of this fast emerging new technology — the business, the technology, the applications, the players, and the future of the market.
The NFC Report examines in detail the issues affecting the development of the NFC market and provides a comprehensive education in how the technology works, the business models being adopted by key players in the industry, the current status of the market, and the issues that will affect future growth.
This research study is published in three volumes:
  • NFC Technologies and Systems: How NFC phones work, the types of back office system required, how NFC security works both in theory and in practice, the technical decisions that impact most on the costs, returns and viability of an NFC project — and what the options are for maximising return on investment. More details.
  • NFC Business Models: The tasks involved in creating a commercially successful NFC infrastructure and the strategic options available to mobile network operators, handset manufacturers, financial institutions, industry suppliers and others seeking to understand how to take advantage of NFC. More details.
  • The NFC Market 2012: What NFC needs to succeed, the current state of play in the NFC market around the world, the barriers to success and the opportunities for future growth. More details.
Source: http://www.sjb.co.uk/the-nfc-report/

IDC Cloud Research

Recent IDC cloud research shows that worldwide revenue from public IT cloud services exceeded $21.5 billion in 2010 and will reach $72.9 billion in 2015, representing a compound annual growth rate (CAGR) of 27.6%. This rapid growth rate is over four times the projected growth for the worldwide IT market as a whole (6.7%). By 2015, one of every seven dollars spent on packaged software, server, and storage offerings will be through the public cloud model. The cloud movement is about much more than the cloud: cloud cannot be sufficiently understood as a standalone phenomenon in the IT market, but rather as a core ingredient of a larger transformation of the IT industry - and of many other industries using IT to transform themselves. Other ingredients enabled by cloud - and, in turn, accelerating cloud adoption - include the expanding "species" of mobile devices, the explosion of mobile apps, the growing availability of wireless broadband, and the rise of big data tools.
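
As a quick sanity check, the quoted growth rate follows directly from the two revenue figures and the five-year compounding span between them:

# $21.5B (2010) growing to $72.9B (2015) implies a compound annual growth
# rate of (72.9 / 21.5) ** (1/5) - 1, i.e. about 27.7% -- matching IDC's
# quoted 27.6% within rounding.
cagr = (72.9 / 21.5) ** (1 / 5) - 1
print(f"{cagr:.1%}")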

For more information on IDC's cloud research, see: http://www.idc.com/prodserv/idc_cloud.jsp

EnCase Computer Forensics -- The Official EnCE: EnCase Certified Examiner Study Guide

The official, Guidance Software-approved book on the newest EnCE exam!
The EnCE exam tests that computer forensic analysts and examiners have thoroughly mastered computer investigation methodologies, as well as the use of Guidance Software's EnCase Forensic 7. The only official Guidance-endorsed study guide on the topic, this book prepares you for the exam with extensive coverage of all exam topics, real-world scenarios, hands-on exercises, up-to-date legal information, and sample evidence files, flashcards, and more.
  • Guides readers through preparation for the newest EnCase Certified Examiner (EnCE) exam
  • Prepares candidates for both Phase 1 and Phase 2 of the exam, as well as for practical use of the certification
  • Covers identifying and searching hardware and file systems, handling evidence on the scene, and acquiring digital evidence using EnCase Forensic 7
  • Includes hands-on exercises, practice questions, and up-to-date legal information
  • Sample evidence files, Sybex Test Engine, electronic flashcards, and more
Source: http://www.amazon.com/EnCase-Computer-Forensics-Official-EnCE/dp/0470901063/ref=sr_1_2?s=books&ie=UTF8&qid=1348686925&sr=1-2&keywords=computer+forensics+incident+response+essentials#_

Google Answers Data Center Critics With Wind Power

If you read this week’s investigative report on data centers from The New York Times, you might think they’re all inefficient power-guzzling behemoths, focused on keeping servers up and running at any expense. But Google’s Rick Needham has a different take.

Google was “a bit surprised” by a few things in the Times piece, says Needham, the search engine company’s director of energy and sustainability. “There was no distinction between smaller companies running their own server closet, which really isn’t very efficient, versus cloud service providers like Google,” he says. Secondly: “There was really no mention about renewable energy and actually sourcing your power itself from a renewable source.”

Data center watchers, including Wired, have dinged the Times article for painting all data centers with a single brush, but the article did shine a light on the big energy-efficiency problems that plague the computer industry as a whole.

We know that companies like Google are pretty secretive about some of the tricks they use to make their data centers more efficient — that’s a competitive advantage. They are considerably more open, however, when it comes to talking about the things they’re doing to make the actual sources of their electricity more environmentally friendly.

To underscore his point about renewable energy, Needham says that Google has now inked a 10-year deal with an Oklahoma utility company, the Grand River Dam Authority, to supply 48 megawatts of wind-generated power to its Mayes County, Oklahoma, data center.

The power is coming from a 300 megawatt facility now being built by Apex Wind Energy just outside of Oklahoma City, Oklahoma. It will go online later this year, Google says.

Google has done wind-power deals in the past, but it’s had to put its clean power on the grid in a bit of a roundabout manner. It has offset the energy it uses by acting as a power broker, paying wind farms in Oklahoma and Iowa for their electricity, and then selling it wholesale onto the grid to replace the power it uses in its data centers.

With the Grand River deal, there’s now a connection between the clean wind power and the juice that’s feeding Google’s data farm, says Gary Cook, a data center analyst with Greenpeace. “Here they can more directly claim that they are displacing demand for dirty energy,” he says. “We hope that more companies will make the type of investments and purchase agreements that Google has done in Oklahoma.”

But these wind deals — which total more than 268 megawatts — are only part of the picture. Google decided to become a carbon-neutral company back in 2007. In the past two and a half years, it has invested $915 million in alternative energy projects, such as the Shepherds Flat wind farm, which just went online this week.

“We’ve been building very efficient data centers for a very, very long time,” says Needham. “But efficiency is just one part of what we do.”
 

OUR FIRST REGISTERED E-MAIL SERVICE PROVIDER: PTT

Under Article 1525(2) of the new Turkish Commercial Code (TTK), registered e-mail (Kayıtlı e-Posta, KEP; Certified or Registered e-Mail) is regulated by the Information and Communication Technologies Authority (BTK), and the secondary regulations on it were published in the Official Gazettes of 25 August 2011 and 16 May 2012. PTT (General Directorate of Turkish Post) has become the first Registered e-Mail Service Provider (KEPHS; REM Service Provider) authorised to offer this service.

Authorised by the BTK as of 10 September 2012, PTT now faces highly significant duties under both the TTK and the Notification Law.

I wish PTT every success in its new role as a KEPHS.

For up-to-date information on KEP and KEPHS providers, see: http://www.btk.gov.tr/bilgi_teknolojileri/kayitli_elektronik_posta/kephs.php

Appeals Court Caves to TSA Over Nude Body Scanners

A federal appeals court on Tuesday said it was giving the Transportation Security Administration until the end of March to comply with an already 14-month-old order to “promptly” hold public hearings and take public comment concerning the so-called nude body scanners installed in U.S. airport security checkpoints.
The public comments and the agency’s answers to them are reviewable by a court, which opens up a new avenue for a legal challenge to the agency’s decision to deploy the scanners. Critics maintain the scanners, which use radiation to peer through clothes, are threats to Americans’ privacy and health, which the TSA denies.
On July 15, 2011, the U.S. Circuit Court of Appeals for the District of Columbia Circuit set aside a constitutional challenge brought by the Electronic Privacy Information Center trying to stop the government from using intrusive body scanners across U.S. airports. But the decision also ordered the TSA “to act promptly” and hold public hearings and publicly adopt rules and regulations about the scanners’ use, which it has not done, in violation of federal law.
Then on Aug. 1 of this year, the court ordered (.pdf) the TSA to explain why it had not complied with its order. In response, the agency said it was expected to publish, by the end of February, a notice in the Federal Register opening up the Advanced Imaging Technology scanners to public comments and public hearings. That would be 19 months after the court order.
On Tuesday, the court gave the TSA until the end of March, meaning the agency has 20 months to “promptly” comply with the court’s order. EPIC was urging the appeals court to reverse the court’s blessing of the so-called nude body scanners because of the TSA’s lack of compliance with the court’s original order.
The Transportation Security Administration has denied allegations from the Electronic Privacy Information Center that it was stonewalling the court’s order. (.pdf) The TSA said the agency was having staffing issues and was awaiting approval from the Department of Homeland Security and the Office of Management and Budget before it releases public documents associated with its 2009 decision to make the body scanners the “primary” security apparatus at the nation’s airports.
The three-judge appellate court, which is one stop from the Supreme Court, ruled last year that the TSA breached federal law when it formally adopted the Advanced Imaging Technology scanners as the primary method of screening. The judges — while allowing the scanners to be used — said the TSA violated the Administrative Procedure Act by failing to hold a 90-day public comment period, and ordered the agency to undertake one.
Under the Administrative Procedure Act, agency decisions like the TSA’s move toward body scanners must go through what is often termed a “notice and comment” period if their new rules would substantially affect the rights of the public — in this case, air passengers. But the court’s decision last year did not penalize the TSA for its shortcomings. The TSA argued to the court that a public comment period would thwart the government’s ability to respond to “ever-evolving threats.”
Concerns about the machines include the graphicness of the human images, the potential health risks and the scanners’ effectiveness.
 

Autonomous Vehicles Now Legal in California

At Google’s headquarters in Mountain View, California, Governor Jerry Brown was joined on stage by Sergey Brin to sign a new law that would allow autonomous vehicles to operate on California roads.

The bill, SB 1298, sponsored by Senator Alex Padilla (D-L.A.), establishes safety and performance standards that will be enforced by both the California Department of Motor Vehicles and the Highway Patrol. The law requires the DMV to draft regulations for autonomous vehicles by January 1, 2015, and while the vehicles can operate autonomously, a licensed driver must be behind the wheel in case something goes awry.

California is now the third state to enact autonomous vehicle legislation, following another Google-championed bill that passed in Nevada last February, as well as a Florida law that was approved earlier this year.

The goal of Padilla’s law is to keep California at the forefront of autonomous-vehicle development, and the Governor reiterated that sentiment at today’s event. Google has logged more than 300,000 miles in its fleet of autonomous Toyota Prius hybrids and Lexus RX crossovers, while Stanford has worked with Volkswagen and Audi on autonomous technology in Silicon Valley.

Other automakers have announced plans to bring different forms of driver assistance that take control of the vehicle at low speeds, including BMW, Mercedes-Benz, Ford, Volvo and Cadillac, the last of which is expected to introduce its “Super Cruise” system within the next few years.

While consumer acceptance of autonomous vehicles is still very much in question, this latest legislative move is another step towards the eventual mainstreaming of the technology.

“Anyone who gets behind the wheel of an [autonomous] car is going to be a little skittish,” the Governor said at the event. “But they’ll get over it.”
 

Let's Do It: Codev2 by Lawrence Lessig



From the Preface: "This is a translation of an old book—indeed, in Internet time, it is a translation of an ancient text." That text is Lessig's "Code and Other Laws of Cyberspace." The second version of that book is "Code v2." The aim of Code v2 is to update the earlier work, making its argument more relevant to the current internet.
Code v2 was written in part through a collaborative Wiki. That version is still accessible here. Lessig took the Wiki text as of 12/31/05, and then added his own edits. Code v2 is the result.
The Wiki text was licensed under a Creative Commons Attribution-ShareAlike 2.5 License. So too is the derivative. Reflecting the contributions of the community to this new work, all royalties have been dedicated to Creative Commons.

Sources: http://codev2.cc/; http://www.lessig.org/

IT and Technology Law Institute: The New Academic Year

The opening session of the "e-Government" course, one of the undergraduate courses taught by the IT and Technology Law Institute, took place today. After briefing the students on the topics we will examine over the fall term, we agreed to involve them in Turkey's legislative process. Their first assignment will be to work on the Draft Law on the Protection of Personal Data, currently before the Prime Ministry and expected to be sent to Parliament in October, and to prepare an opinion for submission to the Parliamentary Justice Commission. The Draft Law is available at online.bilgi.edu.tr.

The first week of the Information and Communication Technologies Law course, the introductory course of the Institute's LL.M. Program in IT Law taught by my dear friend Av. Yasin BECENİ, was delivered by Yücel HAMZAOĞLU and Nilay ERDEM, two of the Institute's most accomplished IT lawyers.

Bruce Schneier's Comment on SHA-3 Competition

SHA-3 to Be Announced


NIST is about to announce the new hash algorithm that will become SHA-3. This is the result of a six-year competition, and my own Skein is one of the five remaining finalists (out of an initial 64).

It's probably too late for me to affect the final decision, but I am hoping for "no award."

It's not that the new hash functions aren't any good, it's that we don't really need one. When we started this process back in 2006, it looked as if we would be needing a new hash function soon. The SHA family (which is really part of the MD4 and MD5 family) was under increasing pressure from new types of cryptanalysis. We didn't know how long the various SHA-2 variants would remain secure. But it's 2012, and SHA-512 is still looking good.

Even worse, none of the SHA-3 candidates is significantly better. Some are faster, but not orders of magnitude faster. Some are smaller in hardware, but not orders of magnitude smaller. When SHA-3 is announced, I'm going to recommend that, unless the improvements are critical to their application, people stick with the tried and true SHA-512. At least for a while.
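
Sticking with the tried and true is also the path of least resistance in practice, since SHA-512 ships in essentially every crypto library. For example, using Python's standard library:

import hashlib

# SHA-512 produces a 512-bit (64-byte) digest; any change to the message
# produces a completely different digest.
digest = hashlib.sha512(b"hello, world").hexdigest()
print(len(digest) * 4, "bits:", digest[:32] + "...")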

I don't think NIST is going to announce "no award"; I think it's going to pick one. And of the five remaining, I don't really have a favorite. Of course I want Skein to win, but that's out of personal pride, not for some objective reason. And while I like some more than others, I think any would be okay.

Well, maybe there's one reason NIST should choose Skein. Skein isn't just a hash function, it's the large-block cipher Threefish and a mechanism to turn it into a hash function. I think the world actually needs a large-block cipher, and if NIST chooses Skein, we'll get one.

Source: http://www.schneier.com/

For more info about SKEIN: http://www.schneier.com/skein.html

Cryptographic Hash Algorithm Competition: SHA-3

NIST announced a public competition (Federal Register Notice) on Nov. 2, 2007 to develop a new cryptographic hash algorithm, which converts a variable length message into a short "message digest" that can be used in generating digital signatures, message authentication codes, and many other security applications in the information infrastructure. The competition was NIST's response to advances in the cryptanalysis of hash algorithms. The winning algorithm will be named "SHA-3", and will augment the hash algorithms currently specified in the Federal Information Processing Standard (FIPS) 180-3, Secure Hash Standard.
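
One of the applications mentioned above, a message authentication code, can be built from any of these hash functions. Here is a small illustration using Python's standard library with SHA-256, one of the FIPS 180-3 algorithms that SHA-3 will augment (the key and message are made-up examples):

import hashlib
import hmac

key = b"shared secret key"
msg = b"wire transfer: 100 EUR to account 42"

# The sender computes a tag over the message with the shared key; the
# receiver recomputes it and compares in constant time. Any change to the
# message (or the key) changes the tag.
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).hexdigest()))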
NIST received sixty-four entries by October 31, 2008; it selected fifty-one candidate algorithms to advance to the first round on December 10, 2008, and fourteen to advance to the second round on July 24, 2009. A year was allocated for the public review of the fourteen second-round candidates.
NIST received significant feedback from the cryptographic community. Based on the public feedback and internal reviews of the second-round candidates, NIST selected five SHA-3 finalists - BLAKE, Grøstl, JH, Keccak, and Skein to advance to the third (and final) round of the competition on December 9, 2010, which ended the second round of the competition.
Submitters of the finalist algorithms are allowed to make minor modifications to their algorithms and submit the final packages to NIST by January 16, 2011. A one-year public comment period is planned for the finalists. NIST also plans to host a final SHA-3 Candidate Conference in the spring of 2012 to discuss the public feedback on these candidates, and select the SHA-3 winner later in 2012.

Source: http://csrc.nist.gov/groups/ST/hash/sha-3/index.html

RISK ANALYSIS VIII

Risk Analysis VIII (Wit Transactions on Information and Communication Technologies) by C.A. Brebbia, September 2012.


Comprised of the papers presented at the eighth International Conference on Simulation in Risk Analysis and Hazard Mitigation, this book covers a topic of increasing importance. Scientific knowledge is essential to our better understanding of risk. Natural hazards such as floods, earthquakes, landslides, fires and others, have always affected human societies. Man-made hazards, however, played a comparatively small role until the industrial revolution when the risk of catastrophic events started to increase due to the rapid growth of new technologies and the urbanization of populations. The interaction of natural and anthropogenic risks adds to the complexity of the problem. Due to advances in computational methods and the ability to model systems more precisely we can now quantify hazards, simulate their effects and calculate risk with greater accuracy, enabling us to manage risk much more effectively. These developments are particularly relevant to environmental issues, where substantial risks are involved. Governments, and their publics, now place a high priority on effective risk management and the mitigation of possible hazards. The book addresses topics such as: Estimation of Risk; Risk Management; Vulnerability; Geomorphologic Risk; Network Systems; Climate Change Risks; Hazard Prevention, Management and Control; Security and Public Safety; Transportation Safety; Safe Ship Operations; Early Warning Systems; Food Safety; Risk Perception; Natural Hazards; Technological Risk. The book will be of interest to planners, emergency managers, environmentalists, engineers, policy makers and other government officials, researchers and academics involved in the field of risk and disaster management.


Source: http://www.amazon.com/Analysis-Transactions-Information-Communication-Technologies/dp/1845646207
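
As a rough illustration of the simulation-based risk quantification the conference addresses, the sketch below runs a minimal Monte Carlo estimate of expected annual flood loss. All numbers are hypothetical and chosen only for the example; they are not drawn from the book.

import random

# Hypothetical, illustrative parameters -- not from the book.
P_FLOOD_PER_YEAR = 0.02   # assumed annual probability of a damaging flood
MEAN_DAMAGE = 5_000_000   # assumed mean damage per event (EUR)

def simulate_annual_loss():
    """One Monte Carlo trial: does a flood occur this year, and how costly is it?"""
    if random.random() < P_FLOOD_PER_YEAR:
        # Damage modelled as exponentially distributed around the assumed mean.
        return random.expovariate(1 / MEAN_DAMAGE)
    return 0.0

TRIALS = 100_000
losses = [simulate_annual_loss() for _ in range(TRIALS)]
expected_annual_loss = sum(losses) / TRIALS
print(f"Expected annual loss: EUR {expected_annual_loss:,.0f}")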

ITU REPORT: THE STATE OF BROADBAND 2012

The State of Broadband Report: Achieving Digital Inclusion for All
A Report by the Broadband Commission of ITU
September 2012
http://www.broadbandcommission.org/Documents/bb-annualreport2012.pdf

Glass Works: How Corning Created the Ultrathin, Ultrastrong Material of the Future

For Corning's success story and the details of its relationship with Apple, see:

http://www.wired.com/wiredscience/2012/09/ff-corning-gorilla-glass/

For Corning's impressive "Made possible by Corning" commercial, see also:

http://www.youtube.com/watch?v=6Cf7IL_eZ38

The Milestone of My Life: Prof. Dr. UĞUR ALACAKAPTAN

Today, as the İstanbul Bilgi Üniversitesi Faculty of Law and the Institute of Information and Technology Law (Bilişim ve Teknoloji Hukuku Enstitüsü), we held the opening of the new academic year. What made this day most meaningful and special for me was that my very dear teacher Prof. Dr. UĞUR ALACAKAPTAN, one of the milestones of my life, was there with us.

My teacher Uğur ALACAKAPTAN taught me, first and foremost, to be a "human being": to respect different opinions, to look at life positively in spite of every difficulty, to value people regardless of class, status or age, to stay away from academic caprice and complexes and, most important of all, to be creative and understanding.

As one of the youngest academics in Turkey ever to earn the title of professor, he never shied away from sharing all his knowledge and experience with young academics, drawing examples from a life full of struggles with jokes such as: "I became a professor at a young age, and what came of it? I ended up in prison. Don't follow my example; take it slowly."

I am fortunate to have worked with and come to know Uğur ALACAKAPTAN, who never said "no" to anything I wanted to do and whose efforts and contributions to the development of my academic identity were boundless.

If we can speak of information technology law in Turkey today, and if we can train IT lawyers, I want everyone to know that the unsung hero behind this is my teacher Uğur ALACAKAPTAN. When I said at İstanbul Bilgi Üniversitesi in 2000 that I wanted to work on information technology law, he gladly agreed and supported me in it ever after. By letting me open IT-related courses within the Law Faculty curriculum, he helped young lawyers develop an awareness of IT law; by approving the idea of the first research centre devoted to IT law, he enabled me to contribute to Turkey's e-government and e-transformation efforts through cooperation among the public sector, the private sector, civil society and the university. A lover of freedom himself, he believed absolutely, from the very first day, that I too could achieve something if I were free, and he always stood behind me.

My teacher Uğur ALACAKAPTAN, the source of the reply "I feel like a wild boar" that I give to the question "how are you?", is a milestone of my life and a beautiful person engraved in my memory; with the endless love in my heart, he will always be at my side.

I wish my dear teacher Uğur ALACAKAPTAN many more healthy, peaceful and happy years together with us and with all those he loves, and I repeat once more that I love him very, very much.

MINISTER YILDIRIM: “GENERAL ASSEMBLY MEETINGS WILL BE HELD DIGITALLY”

Turkey’s Transportation, Maritime Affairs and Communications Minister, Binali Yıldırım, stated that since July 1, 2012 it has been possible for Turkish companies to carry out all of their transactions electronically by means of the secure electronic signature, and that the dream of the “Digital Company” has become a reality.
According to the new Turkish Commercial Code, the general assembly meeting, one of the most important obligations of companies listed on the stock exchange, must be carried out electronically. Minister Yıldırım announced that the “Information System for Electronic General Meetings”, which was prepared by the Central Registry Agency and is built on the secure electronic signature, will be launched on October 1, 2012 to conduct the electronic general assembly meetings of listed companies. Yıldırım was quoted as saying, “The companies will be able to carry out general assembly meetings electronically. This is an obligation for companies listed on the stock exchange. This system will increase Turkey’s attractiveness in terms of foreign capital and the international investment climate.”
The “Information System for Electronic General Meetings”, which Minister Binali Yıldırım first mentioned at the Izmir International Electronic Apostille Forum on July 15, 2012, is now being launched in Turkey. Minister Yıldırım said that the system was introduced by the new Turkish Commercial Code, which became effective on July 1st. Noting that the “Digital Company” is one of the most important innovations introduced by the new code, Yıldırım said, “The most critical components of the Digital Company, which will make it possible to run all relations with employees, shareholders, suppliers, other companies and the state electronically, are the secure electronic signature, the time stamp and registered e-mail.”
General Meetings will be Carried out Digitally
Yıldırım emphasized that electronic general assembly meetings, which are mandatory for companies listed on the stock exchange and optional for others, are among the most important fields of application for the secure electronic signature and the time stamp. Underlining that the “Information System for Electronic General Meetings” will set an example for other countries seeking to popularize electronic general assembly meetings, Yıldırım said, “This system, which will directly contribute to increasing Turkey’s attractiveness in terms of foreign capital and the international investment climate, will enable shareholders of all companies with foreign capital to attend general assembly meetings from anywhere in the world by means of a secure electronic signature obtained from Turkey. The problem of power gaps and difficulties of representation in companies with foreign capital has finally ended. Shareholders of companies with foreign capital can now obtain a secure electronic signature from electronic certificate service providers, using their passport numbers or temporary Turkish ID numbers, attend general assembly meetings and personally cast their votes, thereby taking an active part in the management of the company. Turkey has introduced the first and only example of this successful application, using information technologies to set an example for the rest of the world.”
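
The internal protocol of the system is not described in this announcement; purely as an illustration of the underlying mechanism, the sketch below shows how a digitally signed electronic ballot could be created and verified, using the third-party Python package cryptography and a freshly generated RSA key. In the real system, keys would be bound to qualified certificates issued by the electronic certificate service providers mentioned above.

# Illustrative sketch only -- not the actual MKK system's protocol.
# Requires the third-party package: pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# In practice the key pair would come with a qualified certificate
# from an accredited electronic certificate service provider.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

ballot = b"Shareholder (passport no. X0000000) votes YES on agenda item 3"

# The shareholder signs the ballot with their private key ...
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(ballot, pss, hashes.SHA256())

# ... and the meeting system verifies it with the public key.
# verify() raises InvalidSignature if the ballot or signature was altered.
public_key.verify(signature, ballot, pss, hashes.SHA256())
print("Signature valid: ballot accepted.")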
Central Registry Agency (MKK) is Stepping In
Minister Yıldırım announced that as of October 1, 2012, the “Electronic General Assembly System” prepared by the Central Registry Agency (www.mkk.com.tr) will be used by companies listed on the Istanbul Stock Exchange. He added that the use of this system will not only make the corporate governance of these companies more effective but also increase the competitive capacity of Turkish companies. Yıldırım was quoted as saying, “The Electronic General Assembly System has been designed to ease compliance with the principles of corporate governance and the cross-border exercise of shareholder rights. It will also help make our country’s investment environment attractive to global investors, as a result of the increased exchange of information between the Istanbul Stock Exchange and listed companies, greater transparency and compliance with international regulations. I congratulate the Central Registry Agency, one of the most important actors in our capital market, for the system it has developed and the services it offers to our companies.”