“Benevolent Deception in Human Computer Interaction” by Eytan Adar, Desney Tan & Jaime Teevan presents the design principle of “benevolent deception” – that is, deliberately misrepresenting a computer system’s functionality to the user, but for the benefit of the user and/or developer. The paper cites many surprising (and amusing) examples of benevolent deceptions in technology, including the following:
- Katrin C. Arabe: “Dummy” Thermostats Cool Down Tempers, Not Temperatures — Nonfunctional thermostats are routinely installed in office buildings, intended to “keep building occupants feeling comfortable and in control.” Anecdotally, the placebos work well for stopping employee complaints about feeling too hot or too cold, even though the actual temperature regulation doesn’t change. They also reduce wear and tear, since the system isn’t constantly being readjusted.
- B.R. Brewer, M. Fagan, R.L. Klatzky & Y. Matsuoka: Perceptual Limits for a Robotic Rehabilitation Environment Using Visual Feedback Distortion — Robots providing variable force resistance help stroke patients regain control of their muscles, and a visual feedback system shows patients the force they are exerting. Brewer et al. introduced “disguised progression” by gradually underreporting the displayed force, but not by so much that the patients would notice the discrepancy. This motivated the patients to exert greater actual force, potentially speeding their recovery. (A minimal sketch of the idea appears after this list.)
- Michael Luo: For Exercise in New York Futility, Push Button — “The city deactivated most of the pedestrian buttons long ago with the emergence of computer-controlled traffic signals, even as an unwitting public continued to push on […]. More than 2,500 of the 3,250 walk buttons that still exist function essentially as mechanical placebos, city figures show. Any benefit from them is only imagined.” Removing the useless buttons would be too expensive, so they stay. The pedestrians gain no benefit, but at least they have a button to push while they wait!
- P.J. Plauger: “Chocolate,” Embedded Systems Programming, 7(3):81–84, March 1994 (not online; cited from Adar et al.) — The first electronic switching system for telephone networks occasionally failed to make the requested connection. Rather than disconnecting or reporting the error, it would simply connect to a wrong number. The callers would believe they had misdialed, not realizing that the phone system was defective. This example is borderline malicious: the connection failure itself is unintentional, but the cover-up irritates customers, who presumably end up paying for a call they never wanted.
- Harry de Quetteville: Fake bus stop keeps Alzheimer’s patients from wandering off — Faced with the problem of residents who kept getting lost and had to be retrieved by the police, a Düsseldorf nursing home put a fake bus stop in front of its entrance. Despite their loss of short-term memory, the Alzheimer’s patients would still recognize the sign and sit down to wait for a bus, allowing the nurses to bring them back in.
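To make the “disguised progression” trick concrete, here is a minimal Python sketch of the idea. It is my own reconstruction, not code from Brewer et al.; the constants and the function name are hypothetical, and the real system calibrated the distortion against measured perceptual limits:

```python
# Hypothetical sketch of "disguised progression" -- not code from the paper.
# The force drawn on the display is a scaled-down version of the measured
# force, and the underreporting grows slowly from session to session.

MAX_DISTORTION = 0.20     # assumed cap, kept below the detection threshold
STEP_PER_SESSION = 0.02   # assumed per-session increase in underreporting

def displayed_force(actual_force: float, session: int) -> float:
    """Return the force value to draw on the visual feedback display."""
    distortion = min(session * STEP_PER_SESSION, MAX_DISTORTION)
    return actual_force * (1.0 - distortion)

# Example: to see a displayed target of 10 N in session 5, a patient must
# actually exert 10 / (1 - 0.10) = 11.1 N, slightly more than they believe.
```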
Deception in Software
Deception in software is commonplace. Hackers and dishonest merchants maliciously deceive users into visiting web pages or clicking on buttons they would rather avoid. On the benevolent side, most progress indicators show a time estimate that the developer knows to be inaccurate, but whose presence assures the user that the computer is still making progress. Other common examples include deliberate “artificial stupidity” to make games more enjoyable, or input handlers that silently snap a click landing just outside a control to the nearby “hot spot” that was presumably the intended target. Artificially slowing down operations can also pacify worried users:
In a system built by one of the authors, a security theater was built into the interface. The system was designed to allow users to negotiate for a price using PDAs. […] Though the particular zero-knowledge protocol was complex and highly secure in the cryptographic sense, it was nonetheless nearly instantaneous and disconcerting for the user (there was no “feeling” that the system was secure). An illusion of complexity was generated by using a slow loop whose only purpose was to replace each entered character with a “*,” gradually covering the full text box.
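The paper does not show the code, but the loop it describes is easy to picture. Here is a minimal Python sketch of that “illusion of complexity”; the function name and the delay value are my own assumptions:

```python
import sys
import time

def mask_input_slowly(text: str, delay: float = 0.15) -> None:
    """Security theater: gradually replace the entered text with '*'s.

    The delay serves no cryptographic purpose; it only makes the
    near-instantaneous protocol *feel* like heavy computation.
    """
    for i in range(1, len(text) + 1):
        sys.stdout.write("\r" + "*" * i + text[i:])  # redraw the "text box"
        sys.stdout.flush()
        time.sleep(delay)  # the deliberate slowdown
    print()
```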
The paper provides many more examples, and defines “a model of deceit framed around the motive, means, and opportunity of benevolent deception.” The discussion is quite instructive, but I must disagree with several remarks on metaphors and skeuomorphs that seem to classify all graphical user interfaces as deceptions. Take, for example, “the internal guts of a file system are not like a desktop, magic, or otherwise.” Certainly they are not… but neither is anything else that most users would ever encounter! In the rest of this post I’ll try to define “deception” more restrictively.
Metaphors Are Necessary
Given the constraints of human physiology, only the operation of simple (electro-)mechanical devices is directly observable. Modern electronic devices always use some form of symbolic intermediation to bridge the gap between the internal system and the user’s mental model. Crucially, this is not because the user is ignorant, but because the internals offer nothing a human could observe directly, and usefully operating such devices without a mental model would be impossible. Only the oldest computers, and modern teaching kits, have lamps and switches that directly correspond to internal states. Modern computer users are enveloped in metaphors.
Regarding the file system example, one might assume that a command line prompt grants “immediate” access, but that is a fallacy. A directory listing may hide system files and certain file properties, just like a graphical view. Information is always translated for the user’s benefit: file sizes are not actually stored as sequences of decimal digits, typeset in 12-point Consolas. On modern multitasking systems, a static directory listing can be more deceptive than a graphical view, as only the latter automatically updates when the directory contents change. All views outside of specialized tools hide the physical storage locations of files, or the fact that they are fragmented into discontinuous cluster chains.
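To underline the point, consider what even a bare-bones, “unmediated” listing must do before the user sees anything. A small Python sketch (Unix dotfile conventions assumed, output format invented):

```python
import os

def plain_listing(path: str) -> None:
    """Even a minimal 'ls' translates heavily before anything is shown."""
    for entry in sorted(os.scandir(path), key=lambda e: e.name):
        if entry.name.startswith("."):
            continue  # hide "system" dotfiles, as a plain ls does by default
        size = entry.stat().st_size     # stored on disk as a binary integer...
        print(f"{size:>10d}  {entry.name}")  # ...rendered here as decimal text
```

The physical cluster layout never appears at all, and the snapshot is stale the moment it is printed: run the function twice, and a file created in between simply pops into existence, unannounced.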
Calling all graphical or skeuomorphic UIs “deceptive” also seems inappropriate because no user is actually “deceived.” Nobody seriously believes that there’s a physical desktop sitting behind the screen, nor that an iPhone has magically morphed into a leather-bound notebook. In order to narrow “deception” to a more useful definition, I first posit that a highly selective metaphorical representation of system internals is always necessary, and does not inherently constitute deception.
Actual deception, then, occurs only if the metaphorical image is manipulated in excess of what is necessary to match the user’s inevitable mental model. Any useful system makes a promise to the user regarding its purpose and capabilities. Deception only occurs when that promise is broken, not when a fanciful metaphor is employed to fulfill it. This definition should cover everything one would intuitively classify as deception, without stretching the term to encompass all modern UI design.