The top 10 new reasons to be afraid of hackers

The scariest new tricks at this year's twin computer crime conferences, Black Hat and Def Con

Thousands of computer hackers are heading to Las Vegas this week for Black Hat and Def Con, back-to-back information security conventions where attendees are warned not to send passwords over Wi-Fi or use the ATMs due to a surge in digital mischief-making.
It’s traditional for these skilled programmers to unveil their greatest exploits at the conventions, prompting a wave of press attention as seemingly secure parts of our daily lives are turned against their users. Here’s how to hack an iPhone within one minute of plugging it into a tampered charger. Here’s how to "trivially" gain access to surveillance cameras in homes, banks, prisons, and military facilities.
 
By Adrianne Jeffries
Source and read:

The Machine Zone: This Is Where You Go When You Just Can't Stop Looking at Pictures on Facebook

What an anthropologist's examination of Vegas slot machines reveals about the hours we spend on social networks

By Alexis C. Madrigal
Source and read:
http://www.theatlantic.com/technology/archive/2013/07/the-machine-zone-this-is-where-you-go-when-you-just-cant-stop-looking-at-pictures-on-facebook/278185/

Breakpoint: Why the Web will Implode, Search will be Obsolete, and Everything Else you Need to Know about Technology is in Your Brain



We are living in a world in which cows send texts to farmers when they're in heat, where the most valuable real estate in New York City houses computers, not people, and some of humanity's greatest works are created by crowds, not individuals.

We are in the midst of a networking revolution--set to transform the way we access the world's information and the way we connect with one another. Studying biological systems is perhaps the best way to understand such networks, and nature has a lesson for us if we care to listen: bigger is rarely better in the long run. The deadliest creature is the mosquito, not the lion. It is the quality of a network that is important for survival, not the size, and all networks--the human brain, Facebook, Google, even the internet itself--eventually reach a breakpoint and collapse. That's the bad news. The good news is that reaching a breakpoint can be a step forward, allowing a network to substitute quality for quantity.

In Breakpoint, brain scientist and entrepreneur Jeff Stibel takes readers to the intersection of the brain, biology, and technology. He shows how exceptional companies are using their understanding of the internet's brain-like powers to create a competitive advantage by building more effective websites, utilizing cloud computing, engaging social media, monetizing effectively, and leveraging a collective consciousness. Indeed, the result of these technologies is a more tightly connected world with capabilities far beyond the sum of our individual minds. Breakpoint offers a fresh and exciting perspective about the future of technology and its effects on all of us.
 
Author: Jeff Stibel
Release date: July 23, 2013
Source:
 

Warrantless Cellphone Tracking Is Upheld

In a significant victory for law enforcement, a federal appeals court on Tuesday said that government authorities could extract historical location data directly from telecommunications carriers without a search warrant.
The closely watched case, in the United States Court of Appeals for the Fifth Circuit, is the first ruling that squarely addresses the constitutionality of warrantless searches of historical location data stored by cellphone service providers. Ruling 2 to 1, the court said a warrantless search was “not per se unconstitutional” because location data was “clearly a business record” and therefore not protected by the Fourth Amendment.
The ruling is likely to intensify legislative efforts, already bubbling in Congress and in the states, to consider measures to require warrants based on probable cause to obtain cellphone location data.
The appeals court ruling sharply contrasts with a New Jersey State Supreme Court opinion in mid-July that said the police required a warrant to track a suspect’s whereabouts in real time. That decision relied on the New Jersey Constitution, whereas the ruling Tuesday in the Fifth Circuit was made on the basis of the federal Constitution.
The Supreme Court has yet to weigh in on whether cellphone location data is protected by the Constitution. The case, which was initially brought in Texas, is not expected to go to the Supreme Court because it is “ex parte,” or filed by only one party — in this case, the government.
But the case could renew calls for the highest court to look at the issue, if another federal court rules differently on the same question. And two other federal cases involving this issue are pending.
“The opinion is clear that the government can access cell site records without Fourth Amendment oversight,” said Orin Kerr, a constitutional law scholar at George Washington University Law School who filed an amicus brief in the case.
For now, the ruling sets an important precedent: It allows law enforcement officials in the Fifth Circuit to chronicle the whereabouts of an American with a court order that falls short of a search warrant based on probable cause.
“This decision is a big deal,” said Catherine Crump, a lawyer with the American Civil Liberties Union. “It’s a big deal and a big blow to Americans’ privacy rights.”
The group reviewed records from more than 200 local police departments last year, concluding that the demand for cellphone location data had led some cellphone companies to develop “surveillance fees” to enable police to track suspects.
In reaching its decision on Tuesday, the federal appeals court went on to agree with the government’s contention that consumers knowingly give up their location information to the telecommunications carrier every time they make a call or send a text message on their cellphones.
“That means it is not protected by Fourth Amendment when the government goes to a third-party service provider and issues something that is not a warrant to demand production of those records,” said Mark Eckenwiler, a former Justice Department lawyer who worked on the case and is now with the Washington law firm Perkins Coie. “On this kind of historical cell site information, this is the first one to address the core constitutional question.”
Historical location data is crucial to law enforcement officials. Mr. Eckenwiler offered the example of drug investigations: A cellphone carrier can establish where a suspect met his supplier and how often he returned to a particular location. Likewise, location data can be vital in establishing people’s habits and preferences, including whether they worship at a church or mosque or whether they are present at a political protest, which is why, civil liberties advocates say, it should be accorded the highest privileges of privacy protection.
The decision could also have implications for other government efforts to collect vast amounts of so-called metadata, under the argument that it constitutes “business records,” as in the National Security Agency’s collection of Verizon phone records for millions of Americans.
“It provides support for the government’s view that that procedure is constitutional, obtaining Verizon call records, because it holds that records are business records,” said Mr. Kerr, of George Washington University. “It doesn’t make it a slam dunk but it makes a good case for the government to argue that position.”
An important element in Tuesday’s ruling is the court’s presumption of what consumers should know about the way cellphone technology works. “A cell service subscriber, like a telephone user, understands that his cellphone must send a signal to a nearby cell tower in order to wirelessly connect his call,” the court ruled, going on to note that “contractual terms of service and providers’ privacy policies expressly state that a provider uses a subscriber’s location information to route his cellphone calls.”
In any event, the court added, the use of cellphones “is entirely voluntary.”
The ruling also gave a nod to the way in which fast-moving technological advances have challenged age-old laws on privacy. Consumers today may want privacy over location records, the court acknowledged: “But the recourse for these desires is in the market or the political process: in demanding that service providers do away with such records (or anonymize them) or in lobbying elected representatives to enact statutory protections.”
Cellphone privacy measures have been proposed in the Senate and House that would require law enforcement agents to obtain search warrants before prying open location records. Montana recently became the first state to require a warrant for location data. Maine soon followed. California passed a similar measure last year but Gov. Jerry Brown, a Democrat, vetoed it, saying it did not strike what he called the right balance between the demands of civil libertarians and the police.
 
By Somini Sengupta
Source:

INSTITUTE OF INFORMATION AND TECHNOLOGY LAW MASTER'S PROGRAM APPLICATION CALENDAR

İstanbul Bilgi University's Institute of Information and Technology Law will accept applications to its Information Technology Law master's program until 17 September 2013.

Interviews will be held on Saturday, 21 September 2013.

Fall 2013 courses will begin on 7 October 2013.

For the 2013-2014 academic calendar: http://www.bilgi.edu.tr/site_media/uploads/files/2013/06/24/akademik-takvim-lisansustu-2013-14-tr_1.pdf

For detailed information about the program, you can e-mail meldao@bilgi.edu.tr or oktayonay@gmail.com.

Application documents are available at: http://cyberlaw.bilgi.edu.tr/basvuru

The Hole in Our Collective Memory: How Copyright Made Mid-Century Books Vanish

A book published during the presidency of Chester A. Arthur has a greater chance of being in print today than one published during the time of Reagan.

[Graph: new editions of books available on Amazon, by decade. Source: Paul J. Heald]
Last year I wrote about some very interesting research being done by Paul J. Heald at the University of Illinois, based on software that crawled Amazon for a random selection of books. At the time, his results were only preliminary, but they were nevertheless startling: There were as many books available from the 1910s as there were from the 2000s. The number of books from the 1850s was double the number available from the 1950s. Why? Copyright protections (which cover titles published in 1923 and after) had squashed the market for books from the middle of the 20th century, keeping those titles off shelves and out of the hands of the reading public.
Heald has now finalized his research and the picture, though more detailed, is largely the same: "Copyright correlates significantly with the disappearance of works rather than with their availability," Heald writes. "Shortly after works are created and proprietized, they tend to disappear from public view only to reappear in significantly increased numbers when they fall into the public domain and lose their owners."
The graph above shows the simplest interpretation of the data. It reveals, shockingly, that there are substantially more new editions available of books from the 1910s than from the 2000s. Editions of books that fall under copyright are available in about the same quantities as those from the first half of the 19th century. Publishers are simply not publishing copyrighted titles unless they are very recent.
But this isn't a totally honest portrait of how many different books are available, because for books that are in the public domain, often many different editions exist, and the random sample is likely to overrepresent them. "After all," Heald explains, "if one feeds a random ISBN number [into] Amazon, one is more likely to retrieve Milton's Paradise Lost (with 401 editions and 401 ISBN numbers) than Lorimer's A Wife out of Egypt (1 edition and 1 ISBN)." He found that the public-domain titles had a median of four editions per title. (The mean was 16, but highly distorted by the presence of a small number of books with hundreds of editions. For this reason, statisticians that Heald consulted recommended using the median.) Heald divided the number of public-domain editions by four, providing a graph that compares the number of titles available.
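To make that adjustment concrete, here is a minimal Python sketch of the division-by-the-median step, using invented sample data rather than Heald's actual Amazon crawl:

# Illustrative sketch of the median-based adjustment described above.
# The sample data is invented for demonstration; Heald's real input was
# a random crawl of Amazon ISBNs.
from statistics import median

# (title, number_of_editions, is_public_domain)
sample = [
    ("Paradise Lost", 401, True),
    ("A Wife out of Egypt", 1, True),
    ("Public-domain novel C", 7, True),
    ("Public-domain novel D", 3, True),
    ("In-copyright novel E", 1, False),
    ("In-copyright novel F", 2, False),
]

pd_editions = [n for _, n, is_pd in sample if is_pd]
median_editions = median(pd_editions)  # Heald's crawl gave a median of 4

# Divide the public-domain edition counts by the median so the adjusted
# numbers approximate distinct titles rather than distinct editions.
adjusted_titles = sum(n / median_editions for _, n, is_pd in sample if is_pd)
in_copyright_titles = sum(1 for _, _, is_pd in sample if not is_pd)

print(f"median editions per public-domain title: {median_editions}")
print(f"adjusted public-domain title estimate: {adjusted_titles:.1f}")
print(f"in-copyright titles in sample: {in_copyright_titles}")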
[Graph: number of titles available by decade, adjusted by the median number of editions. Source: Paul J. Heald]
Heald says the picture is still "quite dramatic." The most recent decade looks better by comparison, but the depression of the 20th century is still notable, followed by a little boom for the older decades whose works have fallen into the public domain. Presumably, as Heald writes, in a market with no copyright distortion, these graphs would show "a fairly smoothly downward sloping curve from the decade 2000-2010 to the decade of 1800-1810 based on the assumption that works generally become less popular as they age (and therefore are less desirable to market)." But that's not at all what we see. "Instead," he continues, "the curve declines sharply and quickly, and then rebounds significantly for books currently in the public domain initially published before 1923." Heald's conclusion? Copyright "makes books disappear"; its expiration brings them back to life.
The books that are the worst affected by this are those from pretty recent decades, such as the 80s and 90s, for which there is presumably the largest gap between what would satisfy some abstract notion of people's interest and what is actually available. As Heald writes:
This is not a gently sloping downward curve! Publishers seem unwilling to sell their books on Amazon for more than a few years after their initial publication. The data suggest that publishing business models make books disappear fairly shortly after their publication and long before they are scheduled to fall into the public domain. Copyright law then deters their reappearance as long as they are owned. On the left side of the graph before 1920, the decline presents a more gentle time-sensitive downward sloping curve.
But even this chart may understate the effects of copyright, since the comparison assumes that the same quantity of books has been published every decade. This is of course not the case: Increasing literacy coupled with technological efficiencies mean that far more titles are published per year in the 21st century than in the 19th. The exact number per year for the last 200 years is unknown, but Heald and his assistants were able to arrive at a pretty good approximation by relying on the number of titles available for each year in WorldCat, a library catalog that contains the complete listings of 72,000 libraries around the world. He then normalized his graph to the decade of the 1990s, which saw the greatest number of titles published.
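The normalization itself is simple arithmetic. A hedged sketch, with placeholder per-decade counts standing in for the real WorldCat and Amazon figures, might look like this:

# Sketch of normalizing per-decade availability by publication volume,
# scaled to the 1990s (the decade with the most titles published).
# All counts below are invented placeholders, not Heald's WorldCat data.
titles_published = {"1880s": 100_000, "1950s": 300_000, "1990s": 800_000}
titles_on_amazon = {"1880s": 1_800, "1950s": 1_200, "1990s": 9_000}

baseline = titles_published["1990s"]
normalized = {
    decade: titles_on_amazon[decade] * (baseline / titles_published[decade])
    for decade in titles_published
}
for decade, value in sorted(normalized.items()):
    print(f"{decade}: {value:,.0f} titles available, adjusted to 1990s publication volume")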
[Graph: titles available by decade, adjusted for publication volume and normalized to the 1990s. Source: Paul J. Heald]
By this calculation, the effect of copyright appears extreme. Heald says that the WorldCat research showed, for example, that there were eight times as many books published in the 1980s as in the 1880s, but there are roughly as many titles available on Amazon for the two decades. A book published during the presidency of Chester A. Arthur has a greater chance of being in print today than one published during the time of Reagan.
Copyright advocates have long (and successfully) argued that keeping books copyrighted assures that owners can make a profit off their intellectual property, and that that profit incentive will "assure [the books'] availability and adequate distribution." The evidence, it appears, says otherwise.

By Rebecca J. Rosen
Source:
http://www.theatlantic.com/technology/archive/2013/07/the-hole-in-our-collective-memory-how-copyright-made-mid-century-books-vanish/278209/

Privacy and Safety on Facebook: A Guide For Survivors of Abuse

Read the full guide:
https://fbcdn-dragon-a.akamaihd.net/hphotos-ak-prn1/851584_613437522011141_1298974833_n.pdf

Scene Reconstruction from High Spatio-Angular Resolution Light Fields

Changil Kim (Disney Research Zurich / ETH Zurich)
Henning Zimmer (Disney Research Zurich / ETH Zurich)
Yael Pritch (Disney Research Zurich)
Alexander Sorkine-Hornung (Disney Research Zurich)
Markus Gross (Disney Research Zurich / ETH Zürich)

The images on the left show a 2D slice of a 3D input light field, a so-called epipolar-plane image (EPI), and two out of one hundred 21-megapixel images that were used to construct the light field. Our method computes 3D depth information for all visible scene points, illustrated by the depth EPI on the right. From this representation, individual depth maps or segmentation masks for any of the input views can be extracted as well as other representations like 3D point clouds. The horizontal red lines connect corresponding scanlines in the images with their respective positions in the EPI.

Abstract

This paper describes a method for scene reconstruction of complex, detailed environments from 3D light fields. Densely sampled light fields in the order of 10^9 light rays allow us to capture the real world in unparalleled detail, but efficiently processing this amount of data to generate an equally detailed reconstruction represents a significant challenge to existing algorithms. We propose an algorithm that leverages coherence in massive light fields by breaking with a number of established practices in image-based reconstruction. Our algorithm first computes reliable depth estimates specifically around object boundaries instead of interior regions, by operating on individual light rays instead of image patches. More homogeneous interior regions are then processed in a fine-to-coarse procedure rather than the standard coarse-to-fine approaches. At no point in our method is any form of global optimization performed. This allows our algorithm to retain precise object contours while still ensuring smooth reconstructions in less detailed areas. While the core reconstruction method handles general unstructured input, we also introduce a sparse representation and a propagation scheme for reliable depth estimates which make our algorithm particularly effective for 3D input, enabling fast and memory efficient processing of “Gigaray light fields” on a standard GPU. We show dense 3D reconstructions of highly detailed scenes, enabling applications such as automatic segmentation and image-based rendering, and provide an extensive evaluation and comparison to existing image-based reconstruction techniques.
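To give a feel for the EPI representation the abstract relies on: a scene point traces a line through the EPI, and the line's slope encodes its disparity (and hence depth). The toy sketch below is not the paper's algorithm, just an illustration of scoring candidate disparities for a single ray by color consistency along EPI lines; the data and all names are invented.

# Minimal illustration of per-ray depth scoring on an epipolar-plane image
# (EPI): a scene point appears as a line whose slope equals its disparity.
# This is NOT the paper's algorithm, only a toy version of the idea.
import numpy as np

def score_disparity(epi, s_ref, u_ref, d):
    """Color variance along the EPI line through (s_ref, u_ref) with slope d.
    epi has shape (num_views, width, 3); lower variance = more consistent."""
    num_views, width, _ = epi.shape
    samples = []
    for s in range(num_views):
        u = int(round(u_ref + d * (s - s_ref)))
        if 0 <= u < width:
            samples.append(epi[s, u])
    return np.var(np.stack(samples), axis=0).sum()

def estimate_disparity(epi, s_ref, u_ref, candidates):
    """Pick the candidate disparity with the most consistent color."""
    scores = [score_disparity(epi, s_ref, u_ref, d) for d in candidates]
    return candidates[int(np.argmin(scores))]

# Synthetic EPI: a single "scene point" swept across views with disparity 2.
num_views, width = 11, 64
epi = np.random.rand(num_views, width, 3) * 0.05        # background noise
for s in range(num_views):
    epi[s, 20 + 2 * s] = np.array([1.0, 0.2, 0.2])      # the point's color

d_hat = estimate_disparity(epi, s_ref=0, u_ref=20, candidates=[1, 2, 3, 4])
# Depth is proportional to focal_length * baseline / disparity (up to scale).
print(f"estimated disparity: {d_hat}, relative depth: {1.0 / d_hat:.2f}")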
[Press Release]

Paper

Changil Kim, Henning Zimmer, Yael Pritch, Alexander Sorkine-Hornung, Markus Gross, Scene Reconstruction from High Spatio-Angular Resolution Light Fields, ACM Transactions on Graphics 32(4) (Proceedings of SIGGRAPH 2013)
Paper (PDF, 53.8 MB) | Paper (Low resolution PDF, 2.7 MB) | BibTeX entry

The Ethics of Saving Lives With Autonomous Cars Are Far Murkier Than You Think


If you don’t listen to Google’s robot car, it will yell at you. I’m not kidding: I learned that on my test-drive at a Stanford conference on vehicle automation a couple weeks ago. The car wanted its human driver to retake the wheel, since this particular model wasn’t designed to merge lanes. If we ignored its command a third time, I wondered, would it pull over and start beating us like an angry dad from the front seat? Better to not find out.
No car is truly autonomous yet, so I didn’t expect Google’s car to drive entirely by itself. But several car companies — including Audi, BMW, Ford, GM, Honda, Mercedes-Benz, Nissan, and Volkswagen — already have models and prototypes today with a surprising degree of driver-assistance automation. We can see "robot" or automated cars (what others have called "autonomous cars," "driverless cars," etc.) coming in our rear-view mirror, and they are closer than they appear.
Why would we want cars driving themselves and bossing us around? For one thing, it could save a lot of lives. Traffic accidents kill about 32,000 people every year in America alone. That’s about 88 deaths per day in the U.S., or one victim every 15 minutes — nearly triple the rate of firearm homicides.
If all goes well, computer-driven cars could help prevent these accidents by having much faster reflexes, making consistently sound judgments, not getting road-rage or being drunk, and so on. They simply wouldn’t be as flawed as humans are.
But no technology is perfect, especially something as complex as a computer, so no one thinks that automated cars will end all traffic deaths. Even if every vehicle on the road were instantly replaced by its automated counterpart, there would still be accidents due to things like software bugs, misaligned sensors, and unexpected obstacles. Not to mention human-centric errors like improper servicing, misuse, and no-win situations — essentially real-life versions of the fictional Kobayashi Maru test in Star Trek.
Still, there’s little doubt that robot cars could make a huge dent in the car-accident fatality rate, which is obviously a good thing — isn’t it?
Actually, the answer isn’t so simple. It’s surprisingly nuanced and involves some modern tech twists on famous, classical ethical dilemmas in philosophy.

The Puzzling Calculus of Saving Lives

Let’s say that autonomous cars slash overall traffic-fatality rates by half. So instead of 32,000 drivers, passengers, and pedestrians killed every year, robotic vehicles save 16,000 lives per year and prevent many more injuries.
But here’s the thing. Those 16,000 lives are unlikely to all be the same ones lost in an alternate world without robot cars. When we say autonomous cars can slash fatality rates by half, we really mean that they can save a net total of 16,000 lives a year: for example, saving 20,000 people but still being implicated in 4,000 new deaths.
There’s something troubling about that, as is usually the case when there’s a sacrifice or “trading” of lives.
The identities of many (future) fatality victims would change with the introduction of autonomous cars. Some victims could still die either way, depending on the scenario and how well robotic cars actually outperform human drivers. But changing the circumstances and timing of traffic conditions will likely affect which accidents occur and therefore who is hurt or killed, just as circumstances and timing can affect who is born.
That’s how this puzzle relates to the non-identity problem posed by Oxford philosopher Derek Parfit in 1984. Suppose we face a policy choice of either depleting some natural resource or conserving it. By depleting it, we might raise the quality of life for people who currently exist, but we would decrease the quality of life for future generations; they would no longer have access to the same resource.
Most of us would say that a policy of depletion is unethical because it selfishly harms future people. The weird sticking point is that most of those future individuals would not have been born at all under a policy of conservation, since any different policy would likely change the circumstances and timing around their conception. In other words, they arguably owe their very existence to our reckless depletion policy.
Contrary to popular intuitions, then, no particular person needs to be made worse off for something to be unethical. This is a subtle point, but in our robot-car scenario, the ethics are especially striking: some current non-victims — people who already exist — would become future victims, and this is clearly bad.
But, wait. We should also factor in the many more lives that would be spared. A good consequentialist would look at this bigger picture and argue that as long as there’s a net savings of lives (in our case, 16,000 per year) we have a positive, ethical result. And that judgment is consistent with reactions reported by Stanford Law’s Bryant Walker Smith who posed a similar dilemma and found that his audiences remain largely unconcerned when the number of people saved is greater than the number of different lives killed.
Still, how much greater does the first number need to be, in order for the tradeoff to be acceptable to society?
If we focused only on end-results — as long as there’s a net savings in life, even just a few lives — it really doesn’t matter how many lives are actually traded. Yet in the real world, the details matter.
Say that the best we could do is make robot cars reduce traffic fatalities by 1,000 lives. That’s still pretty good. But if they did so by saving all 32,000 would-be victims while causing 31,000 entirely new victims, we wouldn’t be so quick to accept this trade — even if there’s a net savings of lives.
The consequentialist might then stipulate that the lives saved must be at least twice (or triple, or quadruple) the number of lives lost. But this is an arbitrary line without a guiding principle, making it difficult to defend with reason.
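The arithmetic behind these two competing criteria is easy to make explicit. Here is a small illustrative sketch; the scenarios and the two-to-one ratio are just the examples discussed above, not a proposed policy:

# Toy comparison of two acceptability criteria for the life-saving trade-off
# discussed above: pure net savings vs. a (necessarily arbitrary) ratio rule.
def evaluate_trade(lives_saved, new_deaths, required_ratio=2.0):
    net = lives_saved - new_deaths
    ratio = lives_saved / new_deaths if new_deaths else float("inf")
    return {
        "net_savings": net,
        "passes_consequentialist_test": net > 0,
        "passes_ratio_test": ratio >= required_ratio,
    }

# Scenario from the article: 20,000 saved, 4,000 new victims (net 16,000).
print(evaluate_trade(20_000, 4_000))
# The harder case: 32,000 saved, 31,000 new victims (net 1,000).
print(evaluate_trade(32_000, 31_000))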
Anyway, no matter where the line is, the mathematical benefit for society is little consolation for the families of our new victim class. Statistics don’t matter when it’s your child, or parent, or friend, who becomes a new accident victim — someone who otherwise would have had a full life.
However, we can still defend robot cars against the kind of non-identity problem I suggest above. If most of the 32,000 people who will die this year are arbitrarily and unpredictably doomed to be victims, there’s no apparent reason why they in particular should be the victims in the first place. This means there’s no special objection to replacing some or most of them with a new set of unlucky victims.
With this new set of victims, however, are we violating their right not to be killed? Not necessarily. If we view the right not to be killed as the right not to be an accident victim, well, no one has that right to begin with. We’re surrounded by both good luck and bad luck: accidents happen. (Even deontological – duty-based — or Kantian ethics could see this shift in the victim class as morally permissible given a non-violation of rights or duties, in addition to the consequentialist reasons based on numbers.)

Not All Car Ethics Are About Accidents
Ethical dilemmas with robot cars aren’t just theoretical, and many new applied problems could arise: emergencies, abuse, theft, equipment failure, manual overrides, and many more that represent the spectrum of scenarios drivers currently face every day.
One of the most popular examples is the school-bus variant of the classic trolley problem in philosophy: On a narrow road, your robotic car detects an imminent head-on crash with a non-robotic vehicle — a school bus full of kids, or perhaps a carload of teenagers bent on playing “chicken” with you, knowing that your car is programmed to avoid crashes. Your car, naturally, swerves to avoid the crash, sending it into a ditch or a tree and killing you in the process.
At least with the bus, this is probably the right thing to do: to sacrifice yourself to save 30 or so schoolchildren. The automated car was stuck in a no-win situation and chose the lesser evil; it couldn’t plot a better solution than a human could.
But consider this: Do we now need a peek under the algorithmic hood before we purchase or ride in a robot car? Should the car’s crash-avoidance feature, and possible exploitations of it, be something explicitly disclosed to owners and their passengers — or even signaled to nearby pedestrians? Shouldn’t informed consent be required to operate or ride in something that may purposely cause our own deaths?
It’s one thing when you, the driver, make a choice to sacrifice yourself. But it’s quite another for a machine to make that decision for you involuntarily.
Ethical issues could also manifest as legal and policy choices. For instance, in certifying or licensing an autonomous car as safe for public roads, does it only need to pass the same driving test we’d give to a teenager — or should there be a higher standard, and why? If it doesn’t make sense for robot cars to strictly follow traffic laws and vehicle codes — such as sometimes needing to speed during emergencies — how far should manufacturers or policymakers allow these products to break those laws and under what circumstances?
And finally, beyond the car’s operation: How should we think of security and privacy related to data (about you) coming from your robot car, as well as from any in-vehicle apps? If we need very accurate maps to help automated cars navigate, would it be feasible to crowdsource maps — or does that hold too much room for error and abuse?

By Patrick Lin
Source:
http://www.wired.com/opinion/2013/07/the-surprising-ethics-of-robot-cars/

Apple's Unkept Promises: Cheap iPhones Come at High Costs to Chinese Workers

By China Labor Watch
July 29, 2013

Read full Report:
http://www.chinalaborwatch.org/pdf/apple_s_unkept_promises.pdf

Lego's World Map

See:
http://lego.gizmodo.com/track-your-travels-on-a-solid-lego-map-of-the-world-952612494

"Happy Birthday Nasa" - NASA's 55th Anniversary: 5 Big Milestones Reached and 5 to Come

On July 29, 1958, Congress passed legislation and President Dwight D. Eisenhower signed into law the National Aeronautics and Space Act, establishing NASA.
In the 55 years since, “NASA has become the world’s premier agent for exploration, carrying on in ‘the new ocean’ of outer space a long tradition of expanding the physical and mental boundaries of humanity,” wrote NASA historian Steven J. Dick.
On the space administration’s anniversary, take a look back on five big milestones NASA has achieved so far—and five more to look forward to in the next few decades.
1958: Explorer 1, the first American satellite
After Russia launched the world’s first satellite, Sputnik, the Jet Propulsion Laboratory, which is now part of NASA, immediately began work on its own design. After three months of development, Explorer 1 launched and began circling Earth twelve and a half times per day until 1970, sending back crucial data, and setting the stage for future space exploration.
1969: The moon landing
The first American in space, Astronaut Alan Shepard, spent 15 minutes in flight in 1961. Soon afterward, President John F. Kennedy announced that NASA would next send humans to the moon, and the Apollo program was established. In 1969, Neil Armstrong was the first person to set foot on the surface of the moon, and the world watched as he proclaimed, “That’s one small step for man, one giant leap for mankind.”
1990: The Hubble Space Telescope
Before the launch of the Hubble Space Telescope, our views into space were limited to what telescopes on the ground could show us. The first telescope in space, named after astronomer Edwin Hubble, revealed clear images of the universe beyond our galaxy for the first time.

1998: The International Space Station
The ISS wasn’t the first space station—Russia had launched two and NASA had sent up the failed Skylab in 1973—but it’s the most advanced. The first portions of the station were launched in 1998, with the first crew arriving in 2000. It’s been in use ever since, as the U.S. and several other countries send astronauts and equipment to study the cosmos on long-term programs.

1996: The Mars Pathfinder
Launching in 1996 and landing on the Red Planet in July 1997, the Mars Pathfinder traveled 309 million miles and returned billions of pieces of information and thousands of photos back to Earth for one year. This unmanned mission cleared the way for many more Mars research projects, including today’s rover, Curiosity.

So, what’s next for NASA? You can look forward to these exciting new missions:
2015: Expedition to Pluto
The New Horizons spacecraft launched in 2006 and is now halfway between Earth and Pluto, which is roughly 3 billion miles away. The first mission to go such a distance, it’s planned to fly by Pluto and its moons in July 2015.
2016: Exploring Jupiter
Juno, a spacecraft launched in 2011, has already traveled 785 million miles toward Jupiter, traveling 1.6 miles per second relative to Earth. Juno is scheduled to reach the giant planet to study its origins and atmosphere in 2016.
2018: A visit to the sun
The Solar Probe Plus will launch in 2018 and will visit the sun’s outer atmosphere, which is, according to NASA, “arguably the last region of the solar system to be visited by a spacecraft.” The probe is planned to reach that historic destination by 2024.
2025: First manned mission to an asteroid
In 2010, President Barack Obama announced plans to send humans to an asteroid by 2025. Currently, NASA’s Orion program is working toward making that promise a reality.
2030: First manned mission to Mars
Earth’s next biggest space achievement is undoubtedly sending humans to Mars. In 2010, Obama seemed sure of the milestone. “By the mid-2030s, I believe we can send humans to orbit Mars and return them safely to Earth. And a landing on Mars will follow. And I expect to be around to see it,” he said.

By Vi-An Nguyen
Source and more:
http://www.parade.com/58347/viannguyen/nasas-55th-anniversary-5-big-milestones-reached-and-5-to-come/

Minor evidence of fingerprint recognition in future iPhones discovered in new beta of Apple’s iOS 7

iOS developer and tinkerer Hamza Sood has been poking around in the new 4th beta version of iOS 7 and has discovered references to fingerprint recognition. This could indicate that Apple plans to support such biometric ID measures in the next version of its iPhone, though evidence of new features sometimes appears in beta software long before it makes an appearance in an Apple product.
We have previously heard from sources that a fingerprint ID unit could make an appearance in the next iPhone. It would be enabled by tech purchased by Apple in its acquisition of Authentec.
Sood discovered a bundle inside the Accessibility section of iOS that contains references to tutorial images explaining exactly how to identify a user by their fingerprint. The imagery described shows a thumb placed on the home button of a device, accompanied by a string that indicates ‘Recognition is complete’.
While a lot of the discussion surrounding biometric identification has centered on a pure ‘ID’ use case, we’re pretty excited about the possibilities that extend beyond security. Specifically, adding fingerprint ID could enable quick switching between users all accessing the same iOS device.
Authentec’s hardware can be used to read fingerprints for secure payments, and authentication that works in concert with other information for security purposes. But biometrics is a field fraught with pitfalls when it comes to high-security uses. It’s simply too finicky and easy to circumvent if it’s used alone. So there’s likely going to be a bit of work before it’s in common use as part of a payment or secure login flow on a portable device.
However, there is another major arena where a fingerprint sensor could fix a major issue: identifying different users of a single iOS device, and automatically loading their profile.
Because of the somewhat unreliable nature of biometric IDs, it is likely that we’d see it paired with other methods like security pins as an added layer of protection, rather than the sole means of identification for a user of an iOS device. There are just too many ways to fool these kinds of systems to rely on them alone to protect a device. But as an added measure, it would act as a sort of ‘additional factor’ of ID, which couldn’t hurt.
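As a rough sketch of what treating the fingerprint as 'an added factor' could look like in practice (purely illustrative, not Apple's or Authentec's design), a device would require both a confident biometric match and a correct passcode:

# Illustrative two-factor check: a fingerprint match score alone is not
# trusted; it must be combined with a second factor (here, a passcode).
# This is a conceptual sketch, not Apple's or Authentec's actual design.
import hmac
import hashlib

def verify_passcode(entered: str, stored_hash: bytes, salt: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", entered.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)

def authenticate(fingerprint_score: float, entered_pin: str,
                 stored_hash: bytes, salt: bytes,
                 biometric_threshold: float = 0.95) -> bool:
    # Require BOTH a confident biometric match and a correct passcode.
    return (fingerprint_score >= biometric_threshold
            and verify_passcode(entered_pin, stored_hash, salt))

salt = b"demo-salt"
stored = hashlib.pbkdf2_hmac("sha256", b"1234", salt, 100_000)
print(authenticate(0.97, "1234", stored, salt))  # True
print(authenticate(0.97, "9999", stored, salt))  # False: wrong passcode
print(authenticate(0.60, "1234", stored, salt))  # False: weak biometric match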
We’re still pretty interested in how it might improve the situation surrounding multiple household members accessing the same iPad though.

By Matthew Panzarino
Source:
http://thenextweb.com/apple/2013/07/29/minor-evidence-of-fingerprint-recognition-in-future-iphones-discovered-in-new-beta-of-apples-ios-7/?fromcat=all

Asiana 214: Airplane as Hero, and Other Analyses

The role of engineers, pilots, coaches, and God in the path toward disaster -- or safety.

Three weeks after the crash, I hear from several travelers that debris from Asiana 214 is still visible at SFO, apparently as investigators keep working through the clues. I am entering my last day-plus in my current Internet-impaired environment, so a few text-only updates.
1) The landing gear succeeded; they failed. From a reader in the Seattle area:
One factor not mentioned in your posting was the "failure" of the landing gear. That is, in an impact beyond the strength of the LG, they are supposed to detach from the wing without breaking the wing off or ripping open the fuel tanks.
They worked! (before the frisbee pirouette)
Likewise the engines. Unfortunately, one of them came to rest snuggled up to the fuselage, and was the ignition source for the post-evacuation fire....

I work for a certain aeronautical enterprise, and actually sent a congratulatory e-mail to the 777 designers...
2) Credit to the airplane as a whole. From another reader in the industry:
I agree that fatigue & a little bit of culture are the broken links in this chain of events.

I have worked as an aircraft mechanic for United Airlines for [more than 25 years] at [a major US hub], and most of us at work believe the 777 is one of Boeing's finest achievements. The talk around the hangar has always been that the 777 "is so smart it's a very difficult aircraft to have an accident in," and unfortunately, without that technology engaged on the aircraft, that is exactly what happened to flight 214.
By James Fallows
Source and read more:
http://www.theatlantic.com/technology/archive/2013/07/asiana-214-airplane-as-hero-and-other-analyses/278164/ 

Texas students fake GPS signals and take control of an $80 million yacht

In the good old days spoofing meant a Mad magazine article on a television show.
No longer. In the world of secure (and insecure) networks, the act of spoofing entails faking data to take advantage of network insecurity. And as some University of Texas students led by professor Todd Humphreys have shown, it is now possible to spoof a GPS system.
That is, the students created a device that sent false GPS signals to a ship, overrode the existing GPS signals, and essentially gained control of the navigation of an $80 million yacht in the Mediterranean Sea. Here’s a video explaining how they did it:

The scientists who conducted the experiment — done with permission of the yacht’s owners — say their ability to broadcast counterfeit GPS signals that triggered no alarms within the ship’s navigation system highlights a serious flaw in transportation networks on land and sea. Some 90 percent of the world’s freight moves by sea.
Moreover other semi-autonomous vehicles, such as aircraft, are likely similarly vulnerable.
The problem is not intractable. As this technical paper published in 2011 shows, there are some possible fixes. Nevertheless, given that most people now carry a GPS-enabled device in their pocket, it doesn’t take much imagination to see how spoofing could pose a problem if not accounted for.
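One simple class of mitigation, sketched below as our own illustration rather than anything from the cited paper, is to cross-check each reported GPS fix against a position dead-reckoned from the vessel's last known speed and heading, and to flag fixes that drift implausibly far:

# Illustrative spoofing sanity check: compare each new GPS fix against a
# dead-reckoned position from the vessel's last fix, speed, and heading.
# A sketch of one commonly proposed cross-check, not the paper's method.
import math

def dead_reckon(lat, lon, speed_mps, heading_deg, dt_s):
    """Very rough flat-earth dead reckoning; fine for short intervals."""
    meters_per_deg_lat = 111_320.0
    meters_per_deg_lon = 111_320.0 * math.cos(math.radians(lat))
    dist = speed_mps * dt_s
    d_north = dist * math.cos(math.radians(heading_deg))
    d_east = dist * math.sin(math.radians(heading_deg))
    return lat + d_north / meters_per_deg_lat, lon + d_east / meters_per_deg_lon

def looks_spoofed(predicted, reported, tolerance_m=200.0):
    """Flag the fix if it is farther from the prediction than expected."""
    (plat, plon), (rlat, rlon) = predicted, reported
    dlat_m = (rlat - plat) * 111_320.0
    dlon_m = (rlon - plon) * 111_320.0 * math.cos(math.radians(plat))
    return math.hypot(dlat_m, dlon_m) > tolerance_m

last_fix = (36.45, 28.22)                    # somewhere in the Mediterranean
predicted = dead_reckon(*last_fix, speed_mps=8.0, heading_deg=90.0, dt_s=60.0)
honest_fix = (36.45, 28.2253)                # roughly 480 m east, as expected
spoofed_fix = (36.46, 28.23)                 # yanked over a kilometre off course
print(looks_spoofed(predicted, honest_fix))  # False
print(looks_spoofed(predicted, spoofed_fix)) # True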

By Eric Berger
Source:
http://blog.chron.com/sciguy/2013/07/texas-students-fake-gps-signals-and-take-control-of-an-80-million-yacht/?cmpid=hpfc

Is There a Theory of Everything?



And, if so, what is it?


Source and read:
http://viewer.zmags.com/publication/7ec2f2e8?page=22

"Biodynamic" and "Luigi Colani"



Luigi Colani (born Lutz Colani, 2 August 1928) is a Berlin-born German industrial designer.
His long career began in the 1950s when he designed cars for companies such as Fiat, Alfa Romeo, Lancia, Volkswagen, and BMW. In 1957, he dropped his given first name Lutz and henceforth went by the name of Luigi. In the 1960s, he began designing furniture, and as of the 1970s, he expanded in numerous areas, ranging from household items such as ballpoint pens and television sets to uniforms and trucks and entire kitchens. A striking grand piano created by Colani is manufactured and sold by the Schimmel piano company.

Style

The prime characteristic of his designs is their rounded, organic forms, which he terms "biodynamic" and claims are ergonomically superior to traditional designs. His "kitchen satellite" from 1969 is the most prominent example of this school of thought. Many of his designs for small appliances have been mass-produced and marketed, but his larger designs have not been built, leaving "a whole host of futuristic concepts that will have us living in pods and driving cars so flat that leg amputation is the only option."
The earth is circular, all the heavenly bodies are round; they all move on round or elliptical orbits. This same image of circular globe-shaped mini worlds orbiting around each other follows us right down to the micro-cosmos. We are even aroused by round forms in species propagation related eroticism. Why should I join the straying mass who want to make everything angular? I am going to pursue Galileo Galilei's philosophy: my world is also round. — Luigi Colani
Source:
http://en.wikipedia.org/wiki/Luigi_Colani

Typography in ten minutes

This is a bold claim, but I stand behind it: if you learn and follow these five typography rules, you will be a better typographer than 95% of professional writers and 70% of professional designers. (The rest of this book will raise you to the 99th percentile in both categories.)
All it takes is ten minutes—five minutes to read these rules once, then five minutes to read them again.
Ready? Go.
  1. The typographic quality of your document is determined largely by how the body text looks. Why? Because there’s more body text than anything else. So start every project by making the body text look good, then worry about the rest.
    In turn, the appearance of the body text is determined primarily by these four typographic choices:
  2. Point size is the size of the letters. In print, the most comfortable range for body text is 10–12 point. On the web, the range is 15–25 pixels. Not every font appears equally large at a given point size, so be prepared to adjust as necessary.
  3. Line spacing is the vertical distance between lines. It should be 120–145% of the point size. In word processors, use the “Exact” line-spacing option to achieve this. The default single-line option is too tight; the 1½-line option is too loose. In CSS, use line-height.
  4. Line length is the horizontal width of the text block. Line length should be an average of 45–90 characters per line (use your word-count function) or 2–3 lowercase alphabets, like so:
    abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcd
    In a printed document, this usually means page margins larger than the traditional one inch. On a web page, it usually means not allowing the text to flow to the edges of the browser window.
  5. And finally, font choice. The fastest, easiest, and most visible improvement you can make to your typography is to ignore the fonts that came free with your computer (known as system fonts) and buy a professional font (like those found in font recommendations). A professional font gives you the benefit of a professional designer’s skills without having to hire one.
    If that’s impossible, you can still make good typography with system fonts. But choose wisely. And never choose Times New Roman or Arial, as those fonts are favored only by the apathetic and sloppy. Not by typographers. Not by you.
That’s it. As you put these five rules to work, you’ll notice your documents starting to look more like professionally published material.
Then, if you’re ready for a little more, try the summary of key rules.
If you’re ready for a lot more, start at the foreword and keep reading.
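The numeric rules above boil down to simple arithmetic. Here is a small self-made checker (not from the book) that tests a block of body text against the line-spacing and line-length ranges:

# Quick arithmetic check of the numeric rules above: line spacing should be
# 120-145% of the point size, and lines should average 45-90 characters.
# A rough self-made checker, not part of the book.
def line_spacing_range(point_size: float) -> tuple[float, float]:
    return (1.20 * point_size, 1.45 * point_size)

def check_body_text(lines: list[str], point_size: float, line_spacing: float):
    lo, hi = line_spacing_range(point_size)
    avg_chars = sum(len(line) for line in lines) / len(lines)
    return {
        "line_spacing_ok": lo <= line_spacing <= hi,
        "suggested_line_spacing": f"{lo:.1f}-{hi:.1f} pt",
        "average_line_length_ok": 45 <= avg_chars <= 90,
        "average_chars_per_line": round(avg_chars),
    }

sample = ["This paragraph is set as body text at eleven points with fifteen",
          "points of line spacing, which falls inside the recommended range."]
print(check_body_text(sample, point_size=11, line_spacing=15))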
 Source:
http://practicaltypography.com/typography-in-ten-minutes.html

Quantum Cryptography


This book provides a detailed account of the theory and practice of quantum cryptography. Suitable as the basis for a course in the subject at the graduate level, it crosses the disciplines of physics, mathematics, computer science and engineering. The theoretical and experimental aspects of the subject are derived from first principles, and attention is devoted to the practical development of realistic quantum communications systems. The book also includes a comprehensive analysis of practical quantum cryptography systems implemented in actual physical environments via either free-space or fiber-optic cable quantum channels. This book will be a valuable resource for graduate students, as well as professional scientists and engineers, who desire an introduction to the field that will enable them to undertake research in quantum cryptography. It will also be a useful reference for researchers who are already active in the field, and for academic faculty members who are teaching courses in quantum information science. In addition, much of the material will be accessible to those senior undergraduates who have the requisite understanding of quantum mechanics.
 
Authors: Gerald Gilbert, Michael Hamrick, Yaakov S. Weinstein
Publication date: September 30, 2013
Source:

Making Perfect Takeoffs & Landings in Light Airplanes




Originally published as two separate books, this newly combined edition now brings together comprehensive instruction on taking a plane into the air and bringing it back down to the ground. The book shows pilots how to develop total awareness for the situation, the airplane, and the self—and to convert that awareness into perfect takeoffs and landings. The detailed yet easy-to-follow steps given here ensure pilots have the knowledge they need to go beyond rote-learned reactions and develop excellent flying skills. Each chapter defines a specific takeoff or landing situation and the set of characteristics unique to it. Author Ron Fowler presents the methods—and the logic behind the methods—that allow the pilot to master techniques key to normal takeoffs and landings, crosswind procedures, short- and soft-field operations, night procedures, critical landing situations, slips, tailwheel operations, and more. The tips and techniques shared here are useful to the typical rated private pilot, yet the book addresses the needs of the solo student pilot, the thousand-hour pilot, and the flight instructor as well.

Author: Ron Fowler
Publication date: August 1, 2013
Source:
http://www.amazon.com/Making-Perfect-Takeoffs-Landings-Airplanes/dp/1619540304/ref=sr_1_3?s=books&ie=UTF8&qid=1374963713&sr=1-3&keywords=aviation

Gulfstream G650


The Gulfstream G650 is a twin-engine business jet aircraft produced by Gulfstream Aerospace. Gulfstream began the G650 program in 2005 and revealed it to the public in 2008. The G650 is the company's largest and fastest business jet, with a top speed of Mach 0.925.


General characteristics
Performance
  • Maximum speed: Mach 0.925 (530 kn, 610 mph, 982 km/h)
  • Cruise speed: Mach 0.85 long-range cruise (488 kn, 562 mph, 904 km/h); Mach 0.90 fast cruise (516 kn, 595 mph, 956 km/h)
  • Range: 7,000 nautical miles (8,050 mi, 12,960 km) at long-range cruise; 6,000 nmi (6,906 mi, 11,112 km) at fast cruise
  • Service ceiling: 51,000 ft (15,500 m)
  • Wing loading: 77.7 lb/ft² (3.72 kPa)
  • Cabin pressurization: 10.7 psi (73.8 kPa)
Source and more:
http://en.wikipedia.org/wiki/Gulfstream_G650

Could the Government Get a Search Warrant for Your Thoughts?

Why remain silent if they can just read your mind?

We don't have a mind reading machine. But what if we one day did? The technique of functional MRI (fMRI), which measures changes in localized brain activity over time, can now be used to infer information regarding who we are thinking about, what we have seen, and the memories we are recalling. As the technology for inferring thought from brain activity continues to improve, the legal questions regarding its potential application in criminal and civil trials are gaining greater attention.
Last year, a Maryland man on trial for murdering his roommate tried to introduce results from an fMRI-based lie detection test to bolster his claim that the death was a suicide. The court ruled (PDF) the test results inadmissible, noting that the "fMRI lie detection method of testing is not yet accepted in the scientific community." In a decision last year to exclude fMRI lie detection test results submitted by a defendant in a different case, the Sixth Circuit was even more skeptical, writing (PDF) that "there are concerns with not only whether fMRI lie detection of 'real lies' has been tested but whether it can be tested."
So far, concerns regarding reliability have kept thought-inferring brain measurements out of U.S. (but not foreign) courtrooms. But is technology the only barrier? Or, if more mature, reliable brain scanning methods for detecting truthfulness and reading thoughts are developed in the future, could they be employed not only by defendants hoping to demonstrate innocence but also by prosecutors attempting to establish guilt? Could prosecutors armed with a search warrant compel an unwilling suspect to submit to brain scans aimed at exploring his or her innermost thoughts?
The answer surely ought to be no. But getting to that answer isn't as straightforward as it might seem. The central constitutional question relates to the Fifth Amendment, which states that "no person ... shall be compelled in any criminal case to be a witness against himself." In interpreting the Fifth Amendment, courts have distinguished between testimonial evidence, which is protected from compelled self-incriminating disclosure, and physical evidence, which is not. A suspected bank robber cannot refuse to participate in a lineup or provide fingerprints. But he or she can decline to answer a detective who asks, "Did you rob the bank last week?"
So is the information in a brain scan physical or testimonial? In some respects, it's a mix of both. As Dov Fox wrote in a 2009 law review article, "Brain imaging is difficult to classify because it promises distinctly testimonial-like information about the content of a person's mind that is packaged in demonstrably physical-like form, either as blood flows in the case of fMRI, or as brainwaves in the case of EEG." Fox goes on to conclude that the compelled use of brain imaging techniques would "deprive individuals of control over their thoughts" and be a violation of the Fifth Amendment.
But there is an alternative view as well, under which the Fifth Amendment protects only testimonial communication, leaving the unexpressed thoughts in a suspect's head potentially open to government discovery, technology permitting. In a recent law review article titled "A Modest Defense of Mind Reading," Kiel Brennan-Marquez writes that "at least some mind-reading devices almost certainly would not" elicit "communicative acts" by the suspect, "making their use permissible under the Fifth Amendment." Brennan-Marquez acknowledges that compelled mind-reading would raise privacy concerns, but argues that those should be addressed by the Fourth Amendment, which prohibits unreasonable searches and seizures.
That doesn't seem right. It would make little sense to provide constitutional protection to a suspected bank robber's refusal to answer a detective's question if the thoughts preceding the refusal--e.g., "since I'm guilty, I'd better not answer this question"--are left unprotected. Stated another way, the right to remain silent would be meaningless if not accompanied by protection for the thinking required to exercise it.
And if that weren't enough, concluding that compelled brain scans don't violate the Fifth Amendment would raise another problem as well: In a future that might include mature mind-reading technology, it would leave the Fourth Amendment as the last barrier protecting our thoughts from unwanted discovery. That, in turn, would raise the possibility that the government could get a search warrant for our thoughts. It's a chilling prospect, and one that we should hope never comes to pass.

By John Villasenor
Source:
http://www.theatlantic.com/technology/archive/2013/07/could-the-government-get-a-search-warrant-for-your-thoughts/278111/

Motorcycle futurism: space age dreams come to life on two wheels

 
The world could've been a wonderful place right now. In the 1950s, post-war optimism led many to believe we'd be living in solar-powered pods, commuting to work in atomic-powered flying cars while our handy house robots dressed the Christmas tree.
Of course, that's the same optimism that took us to the moon, not to mention laid the groundwork for the entire tech industry. But what would the optimists of the '50s have dreamed up for the motorcycle of the future? Perhaps Randy Grubb has the answer.
 
By Matt Brian
Source and see:

Most IT pros assume big brother is spying on corporate data

It’s not paranoia if someone really is out to get you. That’s an exaggeration of the findings of a new survey of IT pros, the majority of whom just assume that governments are snooping into their corporate data.
More specifically, 62 percent of respondents polled at the big Infosec Europe conference said they think the government is looking at their stores of corporate data. What’s notable about that is that the show (and the survey) took place in April, two months before the blockbuster Edward Snowden disclosures about the U.S. National Security Agency’s data gathering operations. Of course, there were earlier indications of massive data gathering from three former NSA officials-turned-whistle blowers.
The survey results were released Friday and showed that well over half of the 300 IT pros responding simply expect Big Brother to be peering at their stuff. Over half of the respondents work for big enterprises (with more than 5,000 employees) in the financial services, retail, healthcare and insurance businesses. The survey was sponsored by Voltage Security, which is using it to promote the need to protect sensitive financial, customer or employee data as well as corporate intellectual property, for its entire life span.
According to a Voltage statement, the only way to provide the necessary levels of security to guard against data loss, either through surveillance, a malicious attack, or an inadvertent disclosure is through a data-centric security program. The same holds true, presumably, for government-sanctioned surveillance.
The PRISM and Tempora government data collection efforts, by the U.S. and U.K. respectively, are being parlayed by any number of interested parties to further their goals. For example, encryption companies say the only way to prevent snooping by government operatives or others is to fully encrypt all data — assuming the government doesn’t have the keys. And E.U.-based telcos, hosting providers and cloud companies are using outrage over NSA data gathering to aggressively promote the use of E.U.-based clouds built by E.U.-based companies.
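As a concrete, if simplified, illustration of the encrypt-it-yourself idea, the sketch below uses the open-source Python cryptography package (an assumption for illustration; the article does not name any particular tool):

# Minimal illustration of "encrypt the data yourself and keep the keys":
# ciphertext handed to a cloud or telco is unreadable without the local key.
# Uses the third-party `cryptography` package (pip install cryptography);
# this is a generic sketch, not Voltage Security's product or approach.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # stays on-premises, never uploaded
cipher = Fernet(key)

record = b"customer: Jane Doe, card: 4111-1111-1111-1111"
ciphertext = cipher.encrypt(record)  # this is all the provider ever stores

# Anyone without the key (a provider, or someone compelling the provider)
# sees only the opaque token; the data owner can still recover the record.
print(ciphertext[:32], b"...")
print(cipher.decrypt(ciphertext))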
All of this controversy will be front and center at Structure: Europe in London, where several sessions will focus on cloud security post-PRISM.

By Barb Darrow
Source:
http://gigaom.com/2013/07/26/most-it-pros-assume-big-brother-is-spying-on-corporate-data/

Black Hat USA 2013

July 27-August 1 2013
Caesars Palace, Las Vegas, NV

Black Hat USA is the show that sets the benchmark for all other security conferences. As Black Hat returns for its 16th year to Las Vegas, we bring together the brightest in the world for six days of learning, networking, and skill building. Join us for four intense days of Training and two jam-packed days of Briefings.

Source and details:
http://www.blackhat.com/us-13/

Spy agencies ban Lenovo PCs on security grounds

Computers manufactured by the world’s biggest personal computer maker, Lenovo, have been banned from the “secret” and “top secret” networks of the intelligence and defence services of Australia, the US, Britain, Canada, and New Zealand, because of concerns they are vulnerable to being hacked.
Multiple intelligence and defence sources in Britain and Australia confirmed there is a written ban on computers made by the Chinese company being used in “classified” networks.
The ban was introduced in the mid-2000s after intensive laboratory testing of its equipment allegedly documented “back-door” hardware and “firmware” vulnerabilities in Lenovo chips. A Department of Defence spokesman confirmed Lenovo products have never been accredited for Australia’s secret or top secret networks.
The classified ban highlights concerns about security threats posed by “malicious circuits” and insecure firmware in chips produced in China by companies with close government ties. Firmware is the interface between a computer’s hardware and its operating system.
Lenovo, which is headquartered in Beijing, acquired IBM’s PC business in 2005.
IBM continues to sell servers and mainframes that are accredited for secret and top-secret networks. A Defence spokesman said Lenovo had never sought accreditation.
The Chinese Academy of Sciences, a government entity, owns 38 per cent of Legend Holdings, which in turn owns 34 per cent of Lenovo and is its largest shareholder.

Malicious modifications to Lenovo’s circuitry


AFR Weekend has been told British intelligence agencies’ laboratories took a lead role in the research into Lenovo’s products.
Members of the British and Australian defence and intelligence communities say that malicious modifications to Lenovo’s circuitry – beyond more typical vulnerabilities or “zero-days” in its software – were discovered that could allow people to remotely access devices without the users’ knowledge. The alleged presence of these hardware “back doors” remains highly classified.
In a statement, Lenovo said it was unaware of the ban. The company said its “products have been found time and time again to be reliable and secure by our enterprise and public sector customers and we always welcome their engagement to ensure we are meeting their security needs”.
Lenovo remains a significant supplier of computers for “unclassified” government networks across western nations, including Australia and New Zealand’s defence departments.
A technology expert at the Washington-based Brookings Institution, Professor John Villasenor, said the globalisation of the semiconductor market has “made it not only possible but inevitable that chips that have been intentionally and maliciously altered to contain hidden ‘Trojan’ circuitry will be inserted into the supply chain.
“These Trojan circuits can then be triggered months or years later to launch attacks,” he said.

Hardware back doors can be very hard to detect


A security analyst at tech research firm IBRS, James Turner, said hardware back doors are very hard to detect if well designed.
They are often created to look like a minor design or manufacturing fault, he said, and to avoid detection they are left latent until activated by a remote transmission.
“Most organisations do not have the resources to detect this style of infiltration. It takes a highly specialised laboratory to run a battery of tests to truly put hardware and software through its paces,” Mr Turner said. “The fact that Lenovo kit is barred from classified networks is significant, and something the private sector should look at closely.”
Professor Villasenor said malicious circuitry known as “kill-switches” can be used to stop devices working and to establish back doors. French defence contractors reportedly installed kill-switches into chips that can be remotely tripped if their products fall into the wrong hands.
AFR Weekend has been told the electronic eavesdropping arms of the “five eyes” western intelligence alliance, including the National Security Agency in the US, GCHQ in the UK, and the Defence Signals Directorate in Australia, have physically connected parts of their secret and top secret computer networks to allow direct communications between them. This means that security bans on the use of products within the secret networks are normally implemented across all five nations. Two commonly used suppliers for these classified networks are Dell and Hewlett-Packard.
The ban on Lenovo computers also applies to Britain’s domestic and foreign security services, MI5 and MI6, and their Australian equivalents: the Australian Security Intelligence Organisation and the Australian Secret Intelligence Service.

Not connected with foreign counterparts


In contrast to the other agencies, ASIO’s top secret network, called “TSNet”, is compartmentalised and not connected with foreign counterparts because of its counter-intelligence role.
All these secret-level defence and intelligence networks are “air-gapped”, which means they are physically separated from the internet to minimise security risks. ASIO, ASIS, and DSD are colloquially known as Channel 10, The Other DFAT and The Factory.
An academic expert on computer hardware implants, Professor Farinaz Koushanfar at Rice University’s Adaptive Computing and Embedded Systems Lab, said the NSA was “incredibly concerned about state-sponsored malicious circuitry and the counterfeit circuitry found on a widespread basis in US defence systems”.
“I’ve personally met with people inside the NSA who have told me that they’ve been working on numerous real-world cases of malicious implants for years,” she said.
“But these are all highly classified programs.”
Australia’s defence department runs three networks managed by the Chief Information Officer Group: the Defence Restricted Network; the Defence Secret Network; and the Top Secret Network.
The DRN is not classified and is linked to the internet via secure gateways. The DSN and TSN are air-gapped and off limits to Lenovo devices. An official with clearance to access all three networks can switch between them using a data-diode device called the Interactive Link, connected to a single computer. Previously officials used multiple desktops connected to individual networks.

Anti-China trade sentiment


In 2006 it was disclosed that the US State Department had decided not to use 16,000 new Lenovo computers on classified networks because of security concerns.
The change in procurement policy was attributed to anti-China trade sentiment after Lenovo’s acquisition of IBM’s PC business.
Some experts argue that blocking specific companies from classified networks is not a panacea for security threats given the global nature of supply chains.
Many western vendors have semiconductor fabrication plants, or “foundries”, based in China, which exposes them to the risk of interference.
Huawei Technologies made the same argument in response to the Australian government’s decision to exclude it from the National Broadband Network. Huawei says a better approach would be to evaluate all products in a single forum overseen by security agencies.
The Lenovo revelations follow allegations in The Australian Financial Review last week by the former head of the CIA and NSA, Michael Hayden, that Huawei spies for the Chinese government. Huawei officials and China’s Australian embassy strenuously denied these claims.

By Christopher Joye, Paul Smith and John Kerin
Source:
http://www.afr.com/p/technology/spy_agencies_ban_lenovo_pcs_on_security_HVgcKTHp4bIA4ulCPqC7SL

Director Peter Jackson Liveblogs Last Day of Hobbit Trilogy Shooting


Image: Peter Jackson's Facebook

To mark the last day of shooting on the Hobbit trilogy, Peter Jackson decided to liveblog a full day of work. Readers on Facebook watched the director bid his cat goodbye in the Wellington, New Zealand dawn–and joined him for a very full day of shooting, as he and splinter-unit director Christian Rivers rushed to wrap up complex fight scenes, talk through plans for music with composer Howard Shore, and prepare to dive into the editing room next week.
Jackson also used the liveblog not only to post selfies (see above) but also to introduce readers to some pivotal but less prominent figures on the production end of the films (script supervisors, editors, and assistants), as well as to his own good-luck rituals: an unchanging wardrobe, the well-worn armchair that stands in for a traditional canvas director’s chair, and “cup after cup of green tea.”
Finally, after a 20-hour day, The Hobbit was done shooting, and a weary Jackson headed home–to a houseful of jubilant teenagers, and, of course, his patient cat. “A long day. A great day. Thank you all for being part of it,” he closed. “Now for some sleep!”
The second installment of The Hobbit, The Desolation of Smaug, is due out in December 2013, with the third and final chapter of the trilogy to follow in 2014.

By Rachel Edidin
Source:
http://www.wired.com/underwire/2013/07/peter-jackson-liveblog-hobbit/

Pinterest Allows Users to Opt-Out of Being Tracked

In Silicon Valley there are hundreds of companies that track people’s habits with the hopes of offering more intrusive advertising. There are, in comparison, very few Valley start-ups that give people the opportunity to opt out of that tracking.
On Friday, Pinterest, which allows users to share photographs and other media on custom “pinboards,” joined the short list of companies that do give people that option.
Pinterest is doing this by honoring the Do Not Track setting available in certain Web browsers, which allows people to avoid cookies that collect personal information as well as any third-party cookies, including those used for advertising.
In May 2012 Twitter began offering this feature to people who use the social network. But the Do Not Track functionality will work only if a Web site agrees to acknowledge it.
As for people who do not select the Do Not Track feature, Pinterest will be watching over their shoulders more than it has in the past. As Twitter did in 2012, Pinterest introduced a new feature that it says will help surface better content to users.
At the same time it announced the Do Not Track option, the company added a new “board suggestions” component to its site. It will figure out the right recommendations by tracking which Web sites someone has visited that include a “Pin” button.
For example, if you visit a cooking Web site that displays the Pinterest Pin, and then go to Pinterest’s Web site, you will see recommendations for cooking-related pinboards.
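Mechanically, Do Not Track is just an HTTP request header, DNT: 1, that a participating browser sends with each request. A minimal sketch of how a site could honor it before recording a visit for recommendations, written here with the Flask framework and a hypothetical record_visit_for_recommendations helper rather than Pinterest’s actual code, might look like this:

from flask import Flask, request

app = Flask(__name__)

def record_visit_for_recommendations(user_id, page):
    # Hypothetical helper: store the visit so it can later feed board suggestions.
    pass

@app.route("/pin-button")
def pin_button():
    # A browser with Do Not Track enabled sends the header "DNT: 1".
    if request.headers.get("DNT") == "1":
        return "Pin button served; no browsing activity recorded."
    record_visit_for_recommendations(request.cookies.get("user_id"), request.referrer)
    return "Pin button served."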
In a blog post on the company’s Web site, Pinterest said: “We’re excited to offer everyone a more personal experience, but we also understand if you’re not interested. We respect Do Not Track as an option for people who want to turn off this collection of browsing activity from other sites.”
Privacy groups lauded the company’s decision to allow people to avoid being tracked online.
“It’s good to see some prominent companies come forward and adopt these standards,” said Kurt Opsahl, a senior staff lawyer with the Electronic Frontier Foundation, a nonprofit privacy group, in a phone interview. “By doing so they are saying ‘we’re going to respect people’s privacy preferences.’”
Joseph Lorenzo Hall, a senior staff technologist with the Center for Democracy & Technology, a nonprofit policy group in Washington, said it is important for more companies to follow suit and provide people with the ability to avoid being tracked across the Web.
“Including Twitter, Pinterest is another major first party that has decided to listen to desires of users and offer them this choice,” Mr. Hall said. He noted that the effort behind Do Not Track remains the same: “Provide users simple and usable ways to signal that they don’t want opaque third-parties creating profiles of their online behavior.”
The Do Not Track initiative has recently been embroiled in controversy of its own as advertisers feel threatened by the technology. But that hasn’t stopped Pinterest from giving people an option.
“While consensus around the technical specs remains elusive, people are making a choice when they turn on Do Not Track,” said Mike Yang, general counsel of Pinterest, in an e-mail. “We’re going to respect that choice.”

By Nick Bilton
Source:
http://bits.blogs.nytimes.com/2013/07/26/pinterest-allows-users-to-opt-out-of-being-tracked/?ref=technology&_r=0

Information Consumerism: The Price of Hypocrisy

Even the best laws will not lead to a safer internet. To avert the catastrophe, we need a sharper picture of the information apocalypse that awaits us in a world where personal data is traded.

By Evgeny Morozov
Source and read:
http://www.faz.net/aktuell/feuilleton/debatten/ueberwachung/information-consumerism-the-price-of-hypocrisy-12292374.html

Turkey's Top 500 Industrial Enterprises for 2012 Announced

Turkey's biggest companies, according to the İSO 500 list

To see the full list of companies, in which Sanko Tekstil İşletmeleri Sanayi ve Ticaret A.Ş. ranks 55th, Çimko Çimento ve Beton San. Tic. A.Ş. ranks 200th, and Süper Film Ambalaj San. ve Tic. A.Ş. ranks 230th:

http://www.iso.org.tr/tr/web/BesYuzBuyuk/Turkiye-nin-500-Buyuk-Sanayi-Kurulusu--ISO-500-raporunun-sonuclari.html

Social Mobilization and the Networked Public Sphere: Mapping the SOPA-PIPA Debate



The Berkman Center for Internet & Society is pleased to announce the release of a new publication from the Media Cloud project, Social Mobilization and the Networked Public Sphere: Mapping the SOPA-PIPA Debate, authored by Yochai Benkler, Hal Roberts, Rob Faris, Alicia Solow-Niederman, and Bruce Etling.
From the abstract: In this paper, we use a new set of online research tools to develop a detailed study of the public debate over proposed legislation in the United States that was designed to give prosecutors and copyright holders new tools to pursue suspected online copyright violations. Our study applies a mixed-methods approach by combining text and link analysis with human coding and informal interviews to map the evolution of the controversy over time and to analyze the mobilization, roles, and interactions of various actors.
This novel, data-driven perspective on the dynamics of the networked public sphere supports an optimistic view of the potential for networked democratic participation, and offers a view of a vibrant, diverse, and decentralized networked public sphere that exhibited broad participation, leveraged topical expertise, and focused public sentiment to shape national public policy.
We also offer an interactive visualization that maps the evolution of a public controversy by collecting time slices of thousands of sources, then using link analysis to assess the progress of the debate over time. We used the Media Cloud platform to depict media sources (“nodes”, which appear as circles on the map with different colors denoting different media types). This visualization tracks media sources and their linkages within discrete time slices and allows users to zoom into the controversy to see which entities are present in the debate during a given period as well as who is linking to whom at any point in time.
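The mechanics behind such a map can be sketched in a few lines: build a directed link graph for each time slice and see which sources attract the most in-links. The snippet below is an illustrative reconstruction using the networkx library with invented sample data, not the Media Cloud code itself:

import networkx as nx

# Each time slice holds (linking_source, linked_source) pairs harvested from
# hyperlinks in stories published during that period (sample data is invented).
slices = {
    "2011-11": [("techdirt.com", "wikipedia.org"), ("reddit.com", "techdirt.com")],
    "2012-01": [("wikipedia.org", "eff.org"), ("reddit.com", "eff.org"),
                ("techdirt.com", "eff.org")],
}

for period, links in sorted(slices.items()):
    graph = nx.DiGraph(links)
    # In-link counts give a rough measure of which sources anchor the debate in
    # this slice; PageRank or other centrality measures would work as well.
    ranked = sorted(graph.in_degree(), key=lambda pair: pair[1], reverse=True)
    print(period, ranked[:3])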
The authors wish to thank the Ford Foundation and the Open Society Foundation for their generous support of this research and of the development of the Media Cloud platform.
About Media Cloud
Media Cloud, a joint project of the Berkman Center for Internet & Society at Harvard University and the Center for Civic Media at MIT, is an open source, open data platform that allows researchers to answer complex quantitative and qualitative questions about the content of online media. Using Media Cloud, academic researchers, journalism critics, and interested citizens can examine what media sources cover which stories, what language different media outlets use in conjunction with different stories, and how stories spread from one media outlet to another. We encourage interested readers to explore Media Cloud.
Source: