When technology betrays us

This article was taken from the July 2011 issue of Wired magazine.

A few seconds before the Pembroke-Swansea special came barrelling down the railway tracks to crush her car, Paula Ceely sensed something was wrong. Shortly after nightfall, the 20-year-old college student had got out of her car in the pouring rain to open a gate blocking the road ahead. Ceely had used a borrowed TomTom mobile GPS unit to navigate the nearly 240 kilometres of rural road from Redditch, Worcestershire, to her boyfriend's parents' house in Carmarthenshire. It was her first visit.

Judging by the illuminated GPS display on the dashboard device, Ceely was just a few miles shy of her final destination, and the road ahead should have been clear. When Ceely started opening what she thought was a farmer's access gate, she did not realise there were railway tracks underfoot until the train, blowing its whistle, slammed into the tiny Renault Clio behind her. "I could feel the air just pass me," Ceely told the BBC shortly afterwards, "and then my car just did a 360-degree turn on the tracks and was knocked to the other side."

Ceely is not alone. In late 2006 and early 2007, a mini-epidemic of GPS-related mishap stories was making headlines worldwide: a 43-year-old man in Bremen, Germany, turned left when instructed and drove his Audi right on to a tramway; another 20-year-old woman in England followed her dashboard GPS and drove her Mercedes SL500 down a closed road outside the village of Sheepy Magna and into the swollen River Sence; and a man in Australia turned off a highway prematurely, driving through a construction site before stopping his SUV on the concrete steps of a new building.

It's not that consumer-grade dashboard GPS systems are, collectively, at fault. Something else was happening when these commercially available GPS-enabled devices started hitting the larger population -- something more fundamental. Instead of lifting our heads, looking around and thinking for ourselves, some of us no longer saw the world as human beings have for thousands of years -- and simply accepted whatever our GPSes showed us.

In order to reach the masses, technology vendors have taken shortcuts. Software wizards whisk us through otherwise complex configuration settings, interfaces have fewer and fewer options for advanced settings, and consumer goods are produced to be magic boxes whose internal components don't involve the end user. Along the way, we've introduced some unintended consequences.

But what if our dashboard GPS systems deliberately misled us? In spring 2007, Andrea Barisani and Daniele Bianco showed a video at the CanSecWest security conference in Vancouver, British Columbia, in which the GPS unit in Barisani's 2006 Honda Civic displayed a text alert warning of a terrorist threat near his home in Trieste, Italy. Other rogue messages the two Italian researchers had injected into consumer GPS devices included more innocuous notifications such as "Bullfights Ahead".

This alert information doesn't come from the GPS satellites orbiting overhead; rather, traffic alerts are sent locally via RDS-TMC, built on the same FM radio data protocol that stations use to display song names and details on dashboard entertainment screens. It didn't take long before someone figured out how to manipulate this protocol. Since roadside traffic alerts are neither authenticated nor encrypted, anyone with the right equipment and knowledge of the signal used by the dashboard device can forge them.

The reverse is also true: someone could block a genuine emergency message in a denial-of-service attack. Thus, anyone with a low-power radio transmitter who knows the frequency the GPS unit's traffic receiver listens to can broadcast information -- true or false -- to passing travellers.
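To see why such forgery is feasible, consider what a TMC-style traffic message actually carries. The sketch below models the basic fields in Python; the field widths follow the ALERT-C standard, but the values are invented, and a real transmission packs these fields into raw RDS group bits rather than a friendly object.

```python
# Illustrative model of the fields in an RDS-TMC (ALERT-C) traffic message.
# The 11-bit event and 16-bit location widths follow the standard; the
# values below are invented. Real encoding packs these fields into raw
# RDS group bits and is more involved than shown here.
from dataclasses import dataclass

@dataclass
class TrafficMessage:
    event: int     # 11-bit event code (accident, closure, even a bullfight)
    location: int  # 16-bit code looked up in a national location table
    extent: int    # how far along the road the event stretches
    duration: int  # how long the receiver should keep displaying it

    def __post_init__(self):
        assert 0 <= self.event < 2 ** 11, "event codes are 11 bits"
        assert 0 <= self.location < 2 ** 16, "location codes are 16 bits"

# Note what is absent: no sender identity, no signature, no encryption.
# A receiver simply trusts whichever FM signal is strongest.
print(TrafficMessage(event=1234, location=45678, extent=1, duration=0))
```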

Not only can people send false information to our devices, they can also obtain personal data from us without our knowledge. Apple devices use nearby Wi-Fi signals to fix their physical location. In 2008, a team of researchers in Zurich, Switzerland, found ways in which this Wi-Fi location network could be compromised. The iPad, iPhone and iPod Touch query the nearest wireless access points and transmit that information to a database, where it is correlated with a physical position (latitude and longitude). The Swiss researchers, however, fed this service incorrect information, telling it that an iPhone was in New York City when it was still in Zurich.
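The general shape of such a lookup -- and of the Zurich spoof -- is easy to sketch. In the snippet below, the service endpoint, payload format and access-point addresses are all hypothetical stand-ins; real positioning services use their own proprietary APIs.

```python
# A minimal sketch of a Wi-Fi positioning lookup and how it can be fooled.
# The service URL and payload format are hypothetical placeholders.
import json
import urllib.request

def locate(bssids):
    """Ask a (hypothetical) positioning service where these APs live."""
    payload = json.dumps({"wifiAccessPoints": [{"mac": b} for b in bssids]})
    req = urllib.request.Request(
        "https://positioning.example.com/v1/locate",  # placeholder endpoint
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # e.g. {"lat": 40.7128, "lng": -74.0060}

# The Zurich trick: report access points recorded in Manhattan while the
# device sits in Switzerland, and the service places you in New York.
manhattan_bssids = ["00:11:22:33:44:55", "66:77:88:99:aa:bb"]  # harvested earlier
print(locate(manhattan_bssids))
```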

But what if this vulnerability could be used with a more ominous intent? Two years earlier, security researcher Terry Stenvold published similar findings in 2600, a popular hacker magazine.

Stenvold found that he could steal someone else's hardware specifications -- for example, the unique ID of a mobile phone or the unique hardware ID of a laptop -- then upload that information to a location service and have the service tell him that person's current location. Here technology could be used surreptitiously to track, for example, an ex-partner's current location.

Already, third parties can capture our location information and store it for an indefinite period. Have we considered the long-term consequences of this? How might a random trip to a seedy part of town look ten years later? What if it wasn't random? With enough data, what hidden patterns of obsessive behaviour might emerge? Or what if we could spoof our current location to make it appear that we are always at work when we are really not? Should we trust such location data?

"I don't trust many hardware devices. It's scary," says Joe Grand, president of San Francisco-based product design and development company Grand Idea Studio. "People using products today don't often think about what the device is actually doing. The product is helping you do whatever it is you want, but it might also be watching you or doing something nefarious."

In the summer of 2009, Grand figured out how to get unlimited free parking in San Francisco. Working with fellow researchers Jacob Appelbaum and Chris Tarnovsky, Grand studied the 23,000 "smart" parking meters being installed around the city as part of a $35 million (£21 million) pilot project. Grand and Appelbaum wandered around San Francisco with a portable oscilloscope and special SIM card that fits into the parking meter's smartcard socket.

Their set-up -- essentially a digital form of eavesdropping -- allowed them to monitor and capture all electronic communication between the card and the meter. In just three days, using little more than a pen and paper, Grand managed to decipher enough of the communication between the smartcard and the parking meter to figure out how value was deducted from the card. In the following days, he created his own smartcard and programmed it to behave exactly like a legitimate San Francisco card -- the only difference being that Grand could set the value to whatever he wanted.

Grand created a counterfeit card worth $999.99. He also found he could freeze that maximum value -- stopping it from decreasing with each use -- and thus make his one-time-use card eternal. This would give him free parking in San Francisco forever. With further research, he says, he could have modified the audit logs or cleared the coin count. "Say you cut me off and steal my parking space," he says. "Now you go put in your money and walk away. I could clear the meter. That's a denial of service. Then you'll get a ticket. And now you have to pay it."

Grand took his concerns to San Francisco's Municipal Transportation Agency. It acknowledged that the system had such flaws, but told him it wasn't interested in defending against high-tech attacks. "Once you allow a system to connect to a network," says Grand, "it's open to a whole different side of attacks. You don't even need to be a hardware hacker. You could be a network hacker or a software hacker. Now you're in a brand-new world."

The more complex a technology becomes, the easier it is to break. Cybercriminals don't necessarily have to know more than we do about a given technology; they just need to know how to defeat it. A group of young carjackers in Indonesia, for instance, confronted with a state-of-the-art biometric-protected luxury car, simply cut off the victim's index finger and used the severed digit's fingerprint to steal the vehicle. In another take on this criminal realm, a streetwise thug in Prague who today uses a laptop with software downloaded from the internet to steal cars is essentially no smarter than the thief who used a screwdriver and a pair of scissors to hot-wire a car ten years ago.

Thanks to a combination of Moore's Law (which holds that the number of transistors on a chip doubles roughly every two years) and the simple passage of time, the costs of hardware attacks have come down dramatically. For example, an individual with a modern dual-core processor in a Dell laptop, loaded with the right software, can defeat a 20-year-old encryption algorithm not in a matter of days, but a matter of minutes.
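The arithmetic behind that claim is simple. The sketch below uses illustrative key-testing rates -- assumptions, not benchmarks -- to show how a thousandfold jump in speed turns a days-long search of a weak 40-bit keyspace into minutes.

```python
# Back-of-the-envelope keyspace arithmetic. Both throughput figures are
# assumptions chosen for illustration, not measured benchmarks.
keyspace = 2 ** 40  # a weak, circa-1991 40-bit export-grade key

rates = {
    "early-1990s desktop": 1_000_000,        # assumed keys tested per second
    "2011 dual-core laptop": 1_000_000_000,  # assumed, with optimised code
}

for machine, rate in rates.items():
    seconds = keyspace / rate
    if seconds >= 86_400:
        print(f"{machine}: about {seconds / 86_400:.0f} days")
    else:
        print(f"{machine}: about {seconds / 60:.0f} minutes")
```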

A cybercriminal can eavesdrop on a public wireless session by performing a "man-in-the-middle" (MitM) attack. This requires setting up a duplicate public access point (AP) using another computer, typically a laptop or smartphone. The attacker adds an antenna to the laptop, then rebroadcasts the settings of the legitimate AP -- network name and all -- so that an unsuspecting café or airport patron logs on to the stronger, fraudulent signal.

Unless your firewall informs you, you won't necessarily know this has happened. You still connect to the internet -- but you do so through the cybercriminal's laptop, and the experience looks no different from normal.

Also dubbed "evil-twin attacks", these MitM attacks allow a cybercriminal to "sniff", or read, any data the victim is sending via the internet, such as the login ID and password for an online banking account or any email sent without encryption.
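One crude countermeasure is to watch for the attack's signature: the same network name suddenly announced by more than one radio. Here is a minimal detection sketch in Python using the scapy packet library; it assumes a wireless card in monitor mode with the (hypothetical) interface name wlan0mon, and note that a duplicated SSID can be perfectly legitimate on a network with many access points.

```python
# A minimal evil-twin spotter using scapy (pip install scapy). Run as root
# with a wireless card in monitor mode; the interface name is an assumption.
from scapy.all import sniff, Dot11, Dot11Beacon, Dot11Elt

seen = {}  # SSID -> set of BSSIDs (radio addresses) broadcasting it

def check_beacon(pkt):
    if not pkt.haslayer(Dot11Beacon):
        return
    ssid = pkt[Dot11Elt].info.decode(errors="replace")
    bssid = pkt[Dot11].addr2
    seen.setdefault(ssid, set()).add(bssid)
    # Several radios sharing one SSID is normal on a large network,
    # but at a small café it deserves a second look.
    if len(seen[ssid]) > 1:
        print(f"'{ssid}' broadcast by {sorted(seen[ssid])}")

sniff(iface="wlan0mon", prn=check_beacon, store=False)
```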

A few years ago, the end user could protect him- or herself by switching to a virtual private network (VPN), which encrypts the connection. Nowadays, attacks are getting more and more sophisticated: by installing malware on your computer, cybercriminals can grab the data before it is encrypted, in what's called a "man-in-the-browser" attack.

The convenience of public Wi-Fi is possible, in part, because of arguably dangerous defaults. If you're just surfing the web, then you aren't risking too much on a public Wi-Fi access point. But if you log on from a free, public internet café, hotel or airport waiting area, then you're risking a lot: wireless signals can be captured, or sniffed, by others with the right equipment.

Let's start with a basic public wireless system. First, access must be convenient, so the identity of the access point must be clear: it may broadcast "Airport Café" for all to see. Often there is no password, to make it easier for random customers to use the network; when there is one, it tends to be obvious or easy to remember. Windows, meanwhile, remembers these commonly used APs. This is convenient if you want your laptop to work instantly with your home network, but there's an obvious dark side to such convenience.

Most people never change their home router's default settings, so there are plenty of "Linksys" and "Netgear" service set identifiers (SSIDs) in the wild. A criminal could broadcast one of these common router names and have your wireless laptop connect to him automatically. Microsoft patched this flaw, but Windows XP will still connect to an ad hoc network if its name matches one on the laptop's internal list of remembered networks.
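You can inspect that remembered list yourself. On Windows, the built-in netsh tool prints every wireless profile the machine is prepared to reconnect to; the sketch below simply wraps it, and its parsing assumes the usual English-language output format.

```python
# List the wireless networks Windows has remembered, via the built-in
# `netsh` tool (Windows only). Parsing assumes English-language output
# of the form "    All User Profile     : CoffeeShopWiFi".
import subprocess

output = subprocess.run(
    ["netsh", "wlan", "show", "profiles"],
    capture_output=True, text=True, check=True,
).stdout

for line in output.splitlines():
    if "Profile" in line and ":" in line:
        print(line.split(":", 1)[1].strip())
```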

How might this be prevented? Another unique identifier assigned to each device is the media access control (MAC) address, composed of six octets of eight bits each. The first three octets usually identify the manufacturer, and the remaining three are specific to the device itself, so a device can additionally be fingerprinted by its MAC address. A router can thus be configured to connect only with specific MAC addresses. This works for a home router, but not for a public one.
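To make the structure concrete, this small sketch splits a made-up MAC address into its manufacturer and device halves, and shows the allow-list logic a home router applies when MAC filtering is enabled.

```python
# Split a MAC address into its two halves, as described above.
# The address and the allow-list are made up for illustration.
def split_mac(mac: str):
    octets = mac.lower().split(":")
    assert len(octets) == 6, "a MAC address has six octets"
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, device = split_mac("00:1A:2B:3C:4D:5E")
print("manufacturer (OUI):", oui)     # first three octets identify the vendor
print("device-specific:   ", device)  # last three identify the unit

# MAC filtering in miniature: the router keeps an allow-list and refuses
# association from any address not on it.
allowed = {"00:1a:2b:3c:4d:5e"}
for mac in ("00:1A:2B:3C:4D:5E", "de:ad:be:ef:00:01"):
    print(mac, "->", "admit" if mac.lower() in allowed else "reject")
```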

Connecting to a wireless network is complex, and router companies would not have much of a market if they did not create shortcuts and simplify the process. In 2006, a report commissioned by the Consumer Electronics Association found that almost one in ten customers had returned a home network router, hub, bridge or modem within the previous year. Of those returned, only 15 percent were truly defective; the rest had been returned because the average consumer simply could not figure out how to make them work.

So what did the router companies do? They created wizards that walk users through the basic steps necessary to connect to their internet service providers. These wizards don't mention all the possible security configurations; manufacturers just want consumers to connect -- and stop calling their customer-service lines.

Beyond Wi-Fi, smartphones offer internet access via standards such as Evolution-Data Optimised (EV-DO), part of the CDMA2000 family, and EDGE. Additionally, for GSM there's the GPRS standard, a 2.5G technology that bridges older 2G and newer 3G networks. By default, many mobile devices today support more than one radio system, among them GPRS/EDGE and 802.11 (aka Wi-Fi), so that end users receive uninterrupted data and internet service no matter where they roam.

One of the first signs of a possible mobile MitM attack, says Paul Henry, a security and forensic analyst at IT security firm Lumension, is a pop-up certificate for a website or portal you regularly visit. "That you're prompted to accept a new certificate on a familiar site should be a red flag," he says. Because we're using a mobile phone, adds Henry, we simply accept the new certificate and carry on with our business, unwittingly allowing messages to be intercepted from that point on.
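Henry's red flag can even be automated. The sketch below records ("pins") a site's certificate fingerprint and complains when it changes; the pinned value shown is a placeholder, to be replaced with the fingerprint observed on a first, trusted visit.

```python
# Certificate pinning in miniature: remember a site's certificate
# fingerprint and warn when it changes. The PINNED value is a placeholder.
import hashlib
import socket
import ssl

def cert_fingerprint(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER certificate
    return hashlib.sha256(der).hexdigest()

PINNED = "0" * 64  # placeholder: store the fingerprint seen on first visit

fingerprint = cert_fingerprint("example.com")
if fingerprint != PINNED:
    print("red flag: certificate changed since last visit:", fingerprint)
```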

Wi-Fi-enabled smartphones create some interesting attack vectors. If you're conducting mobile banking, having someone eavesdrop on an unencrypted banking session could be a real problem. Even a MitM attack within a GPRS network is a credible risk, says Henry. In another scenario, he adds, someone could be connected via Wi-Fi to a local AP inside a corporate campus while also holding a GPRS connection to the outside world, effectively opening the corporate network to mining from afar.

Digital cameras, too, can betray us. Unlike their film counterparts, they automatically imprint a lot of extra data -- the date and time, and sometimes even a preview of the original image -- in the exchangeable image file format (EXIF) metadata embedded within each image file. Some camera manufacturers also use raw image formats, which include a variety of additional data from the camera. Either way, data is collected that you often don't see or think about.

If you never crop or otherwise alter a photo, EXIF poses minimal risk and numerous benefits. But if you crop an ex-partner out of the picture, the embedded preview image -- ex-partner included -- will sometimes remain within the digital file itself. This is like exposing redacted information in a Word document stored on a government website.

Photos uploaded online today, especially those taken with mobile devices, can reveal the date and time, as well as the exact spot where the photographer stood, because GPS data is incorporated into the image file. Years from now, that information might be valuable, say, in recreating a classic photo. But consider the unintended ways this information might be used. What if you tell your boss you're taking a couple of sick days, when photos online reveal that you were actually attending a festival?
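Extracting that geotag takes only a few lines of code. This sketch uses the Pillow imaging library to read the GPS block from a photo's EXIF data; the file name and sample output are placeholders.

```python
# Read a photo's embedded GPS coordinates from EXIF using Pillow
# (pip install Pillow). The file name is a placeholder.
from PIL import Image

GPS_IFD = 0x8825  # standard EXIF pointer to the GPS sub-block

def gps_coordinates(path):
    gps = Image.open(path).getexif().get_ifd(GPS_IFD)
    if not gps:
        return None  # no geotag present
    # Tags: 1 = lat ref ('N'/'S'), 2 = lat, 3 = lon ref ('E'/'W'), 4 = lon;
    # coordinates are stored as (degrees, minutes, seconds) rationals.
    def to_degrees(dms, ref):
        deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
        return -deg if ref in ("S", "W") else deg
    return to_degrees(gps[2], gps[1]), to_degrees(gps[4], gps[3])

print(gps_coordinates("holiday_photo.jpg"))  # e.g. (51.5007, -0.1246)
```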

iPhone, BlackBerry and Android devices automatically take images with geolocation-enabled EXIF. Ben Jackson of independent US security research firm Mayhemic Labs noticed this. Although their primary goal is to "do cool stuff" in vulnerability and malware research, the staff at Mayhemic Labs may be best known for creating a site called ICanStalkU.com, which reposts random tweets containing images along with links to a map of the location, the original tweet, the tweet image and the ability to send a reply to the person.

For his presentation at Next HOPE (Hackers on Planet Earth), a security conference in New York City, Jackson used information gleaned from adult-themed images on SexyPeek.com to illustrate what he could learn about anyone posting almost any image online.

Through public records, he was able to find a name associated with the house at the latitude and longitude where one photo was taken. The owner, however, had no online presence.

Using Google, Jackson discovered more geotagged images that could be traced back to the same BlackBerry 9000 that had posted the original image, some at the same longitude and latitude as the house, but posted under a different name. Using that different name, Jackson found a Facebook account with a birth date, marital status and friends. The man even had a second Facebook account under another name. That this man was keeping secrets was a fact his wife had herself discovered, writing on her Twitter account that her husband "has more secrets than I've ever guessed, guess that's why he thinks I'm always hiding something, cuz he's hiding stuff" (sic).

All of this Jackson learned from just one photo. He has collected thousands of such pictures. At Next HOPE, Jackson made public his database of information gleaned from TwitPic. He admits not knowing what can be done with the information, other than showing just how much information is being leaked by one single image on a public site.

In the physical world, we're adept at sensing danger. Our ears prick up at strange sounds; our skin tingles when something doesn't feel right; we notice subtle body language in a stranger that makes us suspicious. We are hardwired to recognise the authenticity of another human being by a look in the eye or a firm handshake; yet most of our authentication today occurs digitally, by voice, text or email. We don't know when someone tries to extract personal information from us or eavesdrop using our mobile phone. And whenever we do use technology to authenticate a person, too often we invest in simplistic filters or imperfect biometrics that result in many false positives.

We haven't evolved equivalent survival instincts for technology. We make leaps of faith with new technologies based on very few criteria. Devices today are so complex that often we're just happy to get a new product working. We're too intimidated to change any default settings -- but we should.

Manufacturers that simplify their complex technologies only give us the illusion of control, and this in turn opens the door to greater risk.

We believe that a new technology, such as anti-theft circuitry in our cars, somehow trumps all the real-world experience we've gained over the years. Instead, we should be layering our defences, such as parking in well-lit spaces or using a physical lock on the steering wheel or brake pedal. But human nature is such that we prefer convenience over effort. We lock only the outermost doors on our houses, because 90 percent of the threat exists there. We may have sensors that tell us whether our windows have been opened, but they won't tell us whether they have been broken. Similarly, we entrust the security of our cars to a single beep-beep. With keyless entry and remote-ignition cars, physical keys have morphed into a single item that both unlocks and starts the car with a touch of a button. But does this one device make the car any safer from theft?

As a result of misplaced trust in our devices, we're leaving behind a trail of electronic breadcrumbs that, when viewed in the aggregate, may suggest patterns others can exploit. Photocopiers remember our sensitive documents, and photos posted to the internet reveal our location at the moment they were taken. The consequences of having a tollbooth transponder monitor our daily comings and goings escape most of us -- until a divorce lawyer uses that rather bland data to construct a rich narrative about how we were, on certain afternoons between 4 and 6pm, having an affair.

By adding contactless broadcast systems to our worker-access badges, driver's licences and passports, we're speeding up the authentication process -- but we're also creating new kinds of identity theft. Cloning wireless signals is easy. With no authentication and little or no encryption -- or trivial encryption -- I can become you without ever coming into physical contact with you or your papers or effects. Additionally, retailers are embedding RFID tags in the products we buy. While no personal information is revealed and the tags themselves carry only serial numbers, collectively these product tags create a unique electronic signature -- a de facto proxy for the consumer -- that can be tracked from store to store.

There is a dark side, a secret life, to smartphones, MP3 players, digital cameras and new wireless laptops that most of us never glimpse; that is, until something goes awry.

We no longer read the manual before powering on; we demand intuitive interfaces that are up and running right away, while often masking important security settings. Studies show we want complexity, perceiving devices with more capabilities as having more value, even if we don't understand how they work. But how we use devices is only half of the problem; the other half is the hardware itself. We forget that these same devices can fail. Or that they can be made to lie. Or track our every move.

This article was originally published by WIRED UK