Ubiquitous Robot Surveillance

Charlie Stross’s recent speech “How low (power) can you go?” is a fascinating and terrifying glimpse into a future where tiny computerized sensors have become ubiquitous thanks to ever-greater circuit density (Moore’s Law) and energy efficiency (Koomey’s Law). Stross performs back-of-the-envelope calculations for all his projections to keep them somewhat realistic, but in the following I’ll focus on his conclusions.
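To get a feel for the scaling behind those conclusions, here is a rough calculation of my own; the doubling periods (about 1.6 years for Koomey’s Law and 2 years for Moore’s Law) are commonly cited ballpark figures, not numbers taken from the speech:

```python
# Rough scaling sketch, not from the speech. Assumed doubling periods:
# ~1.57 years for energy efficiency (Koomey), ~2 years for density (Moore).

def doublings(years: float, period_years: float) -> float:
    """Number of doublings over a time span, given a doubling period."""
    return years / period_years

span = 2040 - 2012  # the horizon of the projections quoted below

koomey_gain = 2 ** doublings(span, 1.57)  # computations per joule
moore_gain = 2 ** doublings(span, 2.0)    # transistors per unit area

print(f"~{koomey_gain:,.0f}x more computations per joule by 2040")
print(f"~{moore_gain:,.0f}x more transistors per chip by 2040")
```

Even with generous error bars, it is this compounding that makes millimetre-scale, daylight-powered computers plausible.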

So it’s reasonable to assume that a 2040 processor unit of the kind I’m sketching on my used envelope here, with a one square millimetre surface area, could just barely be powered by daylight — but if we increase it to two millimetres on a side it can probably produce sufficient surplus to charge a battery or capacitor for nighttime operation, and to run some significant i/o devices as well. And if one square millimetre doesn’t supply enough electricity, we can always make it three or five millimetres on an edge, and gain an order of magnitude for our calculations.

The reason I picked the one millimetre dimension is simply because, from the eye level of a standing human, a one millimetre square device at ground level is all but invisible. Today we are used to the public sensors around us being noticeable if you know what to look for. In 20 years’ time this may no longer be the case, and the social implications are worth exploring.
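Taking the quoted dimensions at face value, the power budget works out roughly as follows; the insolation and photovoltaic efficiency figures are assumptions of mine, not Stross’s:

```python
# Peak solar harvest for a millimetre-scale sensor.
# Assumed: ~1000 W/m^2 peak insolation, ~20% photovoltaic efficiency.

PEAK_INSOLATION_W_PER_M2 = 1000.0
PV_EFFICIENCY = 0.20

def harvested_power_mw(edge_mm: float) -> float:
    """Peak electrical power (mW) from a square cell with the given edge length."""
    area_m2 = (edge_mm * 1e-3) ** 2
    return PEAK_INSOLATION_W_PER_M2 * PV_EFFICIENCY * area_m2 * 1e3

for edge_mm in (1, 2, 3, 5):
    print(f"{edge_mm} mm edge: ~{harvested_power_mw(edge_mm):.1f} mW peak")
```

A fifth of a milliwatt is tight for round-the-clock operation, which is presumably why the quoted passage reaches for a slightly larger cell plus a battery or capacitor.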

These quasi-invisible devices should each have computing power equivalent to a present-day tablet computer, and they could be produced for mere cents apiece.

So for the cost of removing chewing gum, a city in 2030 will be able to give every square metre of its streets the processing power of a 2012 tablet computer (or a 2002 workstation, or a 1982 supercomputer). […]

Our city of 2032 is emitting as much information in a second as Google processes in an hour today: remarkable, but not outrageous in context.

The obvious purpose of blanketing a city with processors is to monitor everything – for example, the spread of epidemics.

With this level of distributed processing […] we should be able to conduct real-time epidemiological surveillance, tracking disease agents even before they have infected human or animal hosts (by sequencing DNA samples taken from airborne particles). Certainly, with 1.5 billion processors in a mesh network, performing sequence matching on the data from our street-level genome samplers should be practical.
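To make “sequence matching” a little more concrete, here is a toy sketch of the kind of lookup each street-level node could run against a shared table of pathogen signatures; the k-mer length and the sequences are illustrative placeholders, not anything from the speech:

```python
# Illustrative k-mer matching: hash every k-length substring of a sampled
# read and check it against a local table of known pathogen signatures.

from typing import Iterable, Set

K = 21  # assumed k-mer length, a common choice in genomic matching

def kmers(seq: str, k: int = K) -> Iterable[str]:
    """Yield all overlapping k-length substrings of a DNA sequence."""
    for i in range(len(seq) - k + 1):
        yield seq[i : i + k]

def build_signature_table(reference_seqs: Iterable[str]) -> Set[str]:
    """Precompute the k-mer set of known pathogen reference sequences."""
    table: Set[str] = set()
    for ref in reference_seqs:
        table.update(kmers(ref))
    return table

def matches(read: str, signatures: Set[str]) -> int:
    """Count how many k-mers of a sampled read hit the signature table."""
    return sum(1 for kmer in kmers(read) if kmer in signatures)

# Toy usage: one node screening a single sampled read.
signatures = build_signature_table(["ACGTACGTGGCTAGCTAGCTAACGTTAGC"])
print(matches("TTACGTACGTGGCTAGCTAGCTAACGTTAGCAA", signatures))
```

One plausible division of labour is that each node holds the signature table locally and reports only hits upstream, so the mesh carries summaries rather than raw reads.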

Traffic control with automatic rerouting of cars is another beneficial example. One could also apply sensor chips to individual plants in order to monitor their health and improve harvests. And then there are less benign uses…

Anonymity is possible in crowds today, and even the surveillance cameras can’t always break it. In a city with distributed processing and monitoring of everything down to the square metre level, anonymity breaks down because you just can’t cram enough human bodies onto a square metre of sidewalk to blur the combinations of characteristics which identify us to the machines — even without ambient genome sampling.

It has been said that the internet means the death of privacy — but internet-based tracking technologies aren’t useful if you leave your computer at home and switch off your smartphone. In contrast, the internet of things — the city wallpapered from edge to edge with sensors and communicating processors — really does mean the death of privacy. You’d have to lock yourself in a Faraday cage and switch off all the electrical devices near to you in order to regain any measure of invisibility.
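The quoted intuition that a few observed characteristics are enough to pick someone out of a crowd can be made concrete with a quick information-theoretic estimate; the population figures below are rough assumptions of mine:

```python
import math

# Bits of independent identifying information needed to single out one
# person from a population (rough, assumed population sizes).

def bits_to_single_out(population: int) -> float:
    """log2 of the population: identifying bits needed to isolate one member."""
    return math.log2(population)

print(f"World (~8 billion people):       ~{bits_to_single_out(8_000_000_000):.0f} bits")
print(f"Large city (~10 million people): ~{bits_to_single_out(10_000_000):.0f} bits")
print(f"One crowded square metre (~5):   ~{bits_to_single_out(5):.1f} bits")
```

Roughly 33 bits distinguishes anyone on Earth; a grid that already resolves people to a single square metre needs only a couple of additional bits of observed characteristics to finish the job, which is the point of the quoted passage.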

Governments and advertisers alike would be ecstatic about cheap pervasive surveillance devices, and given the aforementioned beneficial uses and the obvious propaganda angle (protection from criminals and terrorists!), it seems unlikely that there would be any widespread resistance to their deployment.
