The Design of Everyday Things by Don Norman

This looks like a book about foundational concepts of good design, but in reality it's a deep and intelligent book addressing a tremendous range of topics: psychology, cognition, minding details, being "meta" about rules and procedures, even how to navigate the modern world. One of the most valuable and interesting books I've read all year.

Pair with The Upper Half of the Motorcycle by Bernt Spiegel. 

Notes: [Warning: Long]
0) Norman's Law: the day the product team is announced, the project is already behind schedule and over its budget.

Ch 1: The Psychopathology of Everyday Things
1) "Norman doors" confusing doors, or doors that don't work right. "The design of the door should indicate how to work it without any need for signs, certainly without any need for trial and error."

2) "Two of the most important characteristics of redesign are discoverability and understanding.
* Discoverability: Is it possible to even figure out what actions are possible and where and how to perform them? 
* Understanding: What does it all mean? How is the product supposed to be used? What do all the different controls and settings mean?"

3) "Not all designed things involve physical structures. Services, lectures, rules and procedures, and the organizational structure of businesses and governments do not have physical mechanisms, but the rules of operation have to be designed, sometimes informally, sometimes precisely recorded and specified."

4) "Machines have no leeway or common sense. Moreover, many of the rules followed by a machine are known only by the machine and its designers." 

5) The design is to blame! "When people fail to follow these bizarre, secret rules, and the machine does the wrong thing, its operators are blamed for not understanding the machine, for not following its rigid specifications. ...It is time to reverse the situation: to cast the blame upon the machines and their design. It is the machine and its design that are at fault. It is the duty of machines and those who design them to understand people."


6) Engineers thinking "RTFM": because they think logically, they assume everyone else does too. This is essentially "engineer solipsism," although the author doesn't phrase it this way.

7) The author was called in to help analyze the Three Mile Island accident, and discovered that the plant's control rooms were so poorly designed that error was inevitable. The plant design was at fault, not the operators.

8) Designers must understand both psychology and technology: good design is an interplay of the two domains. Engineers think mostly in logic terms, thus they design for people the way they would like them to be, not for the way they are. Things must be designed based on the assumption that people will make errors.

9) On how every field or new technology requires time before it adopts the principles of good design. This makes me think of Bitcoin: so far there has been a dearth of good design for some aspects of the user interface (like wallets, storage, certain coin brokers, staking, etc.).

10) Human centered design (HCD): a design philosophy starting with a good understanding of people and the needs the design is intended to meet. People are often unaware of their true needs, even unaware of difficulties as they encounter them. Note also how HCD principles involve avoiding specifying the problem for as long as possible in the design process: instead, designers modify the problem definition through repeated iterations.

11) It's interesting to hear this author's deep understanding of engineers and how they think: "Great designers produce pleasurable experiences. Experience: note the word. Engineers tend not to like it; it is too subjective. But when I ask them about their favorite automobile or test equipment, they will smile delightedly as they discuss the fit and finish, the sensation of power during acceleration, their ease of control of shifting or steering, or the wonderful feel of the knobs and switches on the instrument. Those are experiences."

12) Discoverability: when we interact with a product we need to figure out how to work it, what it does, how it works, what operations are possible. Discoverability results from the appropriate application of five fundamental psychological concepts: affordances, signifiers, constraints, mappings, and feedback. And then a sixth principle, the most important of all: the conceptual model of the system.

13) Affordance refers to how the object could be used by the agent. A chair affords support, sitting, and lifting, but if a weak person cannot lift a chair, the chair does not afford lifting for that person. Thus the affordance is jointly determined by the object, its qualities, and the abilities of the agent interacting with it. This is a relational definition: affordance is not a property, it is a relationship.

14) Glass: glass can be used to "see through" but also to "block passage," which in this case is an anti-affordance ("the prevention of passing through"). Affordances and anti-affordances need to be signaled by a signifier if they can't be obviously perceived: see, for example, people trying to walk through a closed glass picture window or door.

15) Signifiers: the signaling component of an affordance. Examples: a plate mounted on a door, a knob, or a slot into which something can be inserted. "Affordances determine what actions are possible. Signifiers communicate where the action should take place. We need both."

16) "When external signifiers--signs--have to be added to something as simple as a door, it indicates bad design." "Whenever you see hand lettered signs pasted on doors, switches, or products, trying to explain how to work them, what to do and what not to do, you are also looking at poor design."

17) "Creative designers incorporate the signifying part of the design into a cohesive experience. For the most part, designers can focus upon signifiers."

18) Mapping: the relationship between a control and the device being controlled (a bank of light switches and the lights they control, for example).

19) Feedback: feedback should be timely and informative. Poor feedback (like blinking lights or random auditory cues) can be worse than no feedback at all because it is distracting, uninformative, and in many cases irritating (examples: the dishwasher that beeps at 3:00 a.m. to tell you it's done; backseat drivers, whose remarks are often correct but so numerous as to be a distraction; machines that give too much feedback are like backseat drivers, and so we ignore or disable the feedback).

20) Conceptual models: usually highly simplified explanations of how something works: files, folders and icons on a computer screen help you create the conceptual model of documents and folders and apps waiting to be opened, even though there are no literal folders in the computer. "Without a good model we operate by rote, blindly." See the author's example of the GE refrigerator that had misleading dials that conveyed a totally incorrect conceptual model.

21) Paradox of technology: it simplifies life by adding functions to devices, but complicates life by making the device harder to learn and use; see also the challenge of a design incorporating so many disciplines, goals and priorities; see also the collection of needs involved in producing a given design: company needs to make money, it needs to be manufactured at scale, people need to use it and like it, etc.

Ch 2: The Psychology of Everyday Actions
22) What happens when things go wrong when people use a given design? How do we detect when the product/design isn't working and then how do we know what to do?

23) On how people select and then evaluate their actions.

24) On bridging two gulfs: the "gulf of execution" and the "gulf of evaluation": see the author's filing cabinet experience where the drawer wouldn't open. 


25) Seven stages of action: 
Form the goal
Plan the action
Specify an action sequence
Perform the action sequence
Perceive the state of the world
Interpret the perception
Compare the outcome with the goal

26) Getting to the ultimate need being satisfied ("root cause analysis"): these stages of action provide a guideline for developing new products, but you need observational skills to detect where there might be an unsatisfied need; radical ideas or radical new product categories come about when someone really reconsiders the underlying goals and does a root cause analysis. 

27) See also Harvard professor Theodore Levitt's famous quote: "People don't want to buy a quarter-inch drill. They want a quarter-inch hole!" The author thinks Levitt stops too soon: maybe they don't want the hole; maybe they don't want the bookshelf; maybe they don't even want books that require a bookshelf, etc.--examples of going deeper into root cause analysis.

28) On declarative memory versus procedural memory: many of the brain's functions occur below the level of consciousness (and of course we're not even aware of that either!). We have many misconceptions about how we think. [Readers will see many familiar ideas from Daniel Kahneman's book Thinking, Fast and Slow here.]

29) A simple model of mental processing (from Don Norman's book Emotional Design) is to think of three different levels of processing, all working together in concert: visceral, behavioral, and reflective.
* The visceral level is the most basic level: lizard brain/affective responses/fast processing.
* The behavioral level is the home of learned skills, we are usually aware of our actions but unaware of the details. Here, for designers, "the most critical aspect of the behavioral level is that every action is associated with an expectation."
* The reflective level is the home of conscious cognition, where deep understanding develops, where reasoning and conscious decision-making take place. Reflection is slow, often occurring after the events have happened.

30) Note the intriguing phrase "attractive things work better" which gets to the idea that even when a product has usability problems at the behavioral level, if we have strongly positive visceral responses, our reflective level may weigh the visceral response strongly enough to overlook behavioral level problems. Note also how the strong reflective value associated with a well-known brand may overwhelm our negative judgment on a product.

31) Note how we ascribe causation to things that occur together, often blame the wrong things, and often repeat a behavior even after it fails the first time. As an example, this has given rise to so-called "panic bars" on doors: when people in a panic (trying to escape a fire, perhaps) push repeatedly against the door, it opens. This is great design with appropriate affordances.

32) Also: "Modern systems try hard to provide feedback within 0.1 second of any operation, to reassure the user that the request was received. This is especially important if the operation will take considerable time. The presence of a "filling hourglass" or "rotating clock hands" is a reassuring sign that work is in progress... More systems should adopt these sensible displays to provide timely and meaningful feedback of results."

33) Learned helplessness: self-blame when we cannot use a design or device, arising from the incorrect assumption that something cannot be done because of our own inability. Just a few instances of failure in straightforward situations can cause us to generalize to every technological object, every math problem, until we conclude that we suck at math or technology.

34) On doing the reverse, "fail often, fail fast": recognize that failure teaches, and that failure is an essential part of exploration and creativity.

35) The author's direct advice for designers:
* Do not blame people when they fail to use your products properly.
* Take people's difficulties as signifiers of where the product can be improved. 
* Eliminate all error messages from electronic or computer systems. Instead, provide help and guidance.
* Make it possible to correct problems directly from help and guidance messages. Allow people to continue with their task: don't impede progress--help make it smooth and continuous. Never make people start over.
* Assume that what people have done is partially correct, so if it is inappropriate, provide guidance that allows them to correct the problem and be on their way.
* Think positively, for yourself and for the people you interact with.

36) The world of design is plagued with the idea that a person is at fault when something goes wrong. "Humans err continually... System design should take this into account."

37) "Today, we insist that people perform abnormally, to adapt themselves to the peculiar demands of machines, which includes always giving precise, accurate information. Humans are particularly bad at this, yet when they fail to meet the arbitrary, inhuman requirements of machines, we call it human error. No, it is design error."

38) The Nest Labs thermostat as a good example of a product with an excellent design: the user has an accurate conceptual model of the thermostat as well as the home.

39) The seven stage/question model of the action cycle: 
1: what do I want to accomplish?
2: what are the alternative action sequences?
3: what action can I do now?
4: how do I do it?
5: what happened?
6: what does it mean?
7: is this okay? Have I accomplished my goal?
Anyone using a product should always be able to determine the answer to all seven questions. This puts the burden on the designer to ensure that at each stage the product provides the information required to answer the question.

40) Use of feedforward via signifiers, constraints, and mappings to show what actions are possible, and use of feedback to show the impact of an action: both need to be presented in a form readily interpreted by people using the system.

41) This leads to seven fundamental principles of design:
1: Discoverability: the design makes clear what actions are possible as well as the current state of the device.
2: Feedback: the design offers full and continuous information about the results of actions and the current state of the product or service. After an action it is easy to determine the new state.
3: Conceptual model: the design projects all information needed to create a good conceptual model of the system, leading to understanding and a feeling of control.
4: Affordances: proper affordances exist to make the desired actions possible.
5: Signifiers: effective use of signifiers ensures both discoverability and that feedback is well communicated.
6: Mappings: the relationship between controls and their actions follows the principles of good mapping.
7: Constraints: the design provides physical, logical, semantic, and cultural constraints that guide actions and ease interpretation.

42) "The next time you can't immediately figure out the shower control in a hotel room or have trouble using an unfamiliar television set or kitchen appliance, remember that the problem is in the design. Ask yourself where the problem lies. At which of the seven stages of action does it fail? Which design principles are deficient?"

Ch 3: Knowledge in the Head and in the World
43) On the difference between "knowing" a penny (knowing precisely what a penny looks like) and "using" a penny (by buying something).

44) Four reasons why precise behavior can emerge from imprecise knowledge:
1) knowledge is both in the head and in the world; both count as knowledge
2) great precision is not required to distinguish an appropriate choice
3) the world has many natural constraints that restrict possible behaviors and actions
4) knowledge of cultural constraints and conventions exists in the head, these are learned artificial restrictions on behavior that reduce the set of likely actions

45) Illiterate people can get by and hide their inability; people with hearing deficits learn to use other cues; we can copy the behavior of others around us; etc. One can hide one's ignorance with amazing ability.

46) Likewise the designer can put "knowledge" into a device itself, making it more intuitive and self-evident how to use it (see the photo below note 58, also see note 63).

47) Note how we often navigate reality with a bare minimum of information, and how this produces problems: see for example the 10-franc coin or the Susan B. Anthony dollar, both of which were confused with similar-sized coins of different denominations, and both of which triggered irritation and confusion, as examples "of design principles interacting with the messy practicality of the real world. What appears good in principle can sometimes fail when introduced to the world."

48) Constraints simplify the requirement for memorization/remembering: see epic poems, blank verse, rhyme, etc., all examples of constraints that help prod memory. "Most of us do not learn epic poems. But we do make use of strong constraints that serve to simplify what must be retained in memory."

49) On codes and passwords: "Many codes, such as postal codes and telephone numbers, exist primarily to make life easier for machines and their designers without any consideration of the burden placed upon people... All the arbitrary things we need to remember add up to unwitting tyranny."

50) The more complex the password requirements, the less secure the system: people write the passwords down and usually put them on a Post-it right on their monitor.

51) "Make something too secure, and it becomes less secure." See staffers at Google using a brick to leave open a secure door so the conference covers could use the bathroom.

52) Safest identification methods combine two factors: something you have plus something you know (or something you are, like a fingerprint, plus a password).

53) Short-term versus long-term memory and implications for design: short-term memory is fragile and limited; long-term memory, while not as limited, is vulnerable to reconstruction errors, the planting of false memories, slowness of retrieval, and the time required to rehearse and strengthen a memory.

54) Two categories for knowledge in the head: 1) memory for arbitrary things and 2) memory for meaningful things.

55) Conceptual models that are "good enough" to produce correct behavior or a desired situation in the real world, even though the model may not be technically correct or scientifically accurate.

56) Reminding and prospective memory (memory for the future), such as the task of "remembering to do some activity at a future time." Often it's wise to transfer some of the memory burden out of your head and onto the world (with notes, calendar reminders, reminding services, an alarm, etc.).

57) Note the two aspects to a reminder: the signal and the message. Tying a string around your finger is a signal but carries no message of what needs to be remembered; writing a note to yourself provides the message but doesn't remind you to look at it. The ideal reminder has both components. Interesting thoughts here!

58) Design trade-offs between knowledge in the world and knowledge in the head, see photo:


59) Transactive memory: information in multiple "heads" where you collectively arrive at information with a group, like a group of friends remembering a restaurant. See also Daniel Wegner's concept of the cybermind and transactive memory, see his books in the "To Read" section below.

60) See also devices like the smartphone: they bring real benefits, but also the fragility of depending on technology. Take away a calculator and many people can't do arithmetic; take away your GPS and no one can find their way anywhere--even in their own cities.

61) Natural mapping: see for example the mapping of stove controls with stove burners. "With a good natural mapping, the relationship of the controls to the burner is completely contained in the world; the load on human memory is much reduced. With a bad mapping, however, a burden is placed upon memory, leading to more mental effort and a higher chance of error." A good example of putting information "in the world" in an easy to grasp way and not requiring that information to be "in your head." 

62) Point of view based mapping: see for example clickers on slide projectors, using a mouse to move through text versus using a touch screen, or pilot attitude display using the "inside-out perspective" (his own perspective) versus the "outside-in perspective" (the perspective of someone else on a flat horizon observing his plane). The designer has to consider the point of view of the user.

Ch 4: Knowing What To Do: Constraints, Discoverability, and Feedback
63) "This chapter focuses upon the knowledge in the world: how designers can provide the critical information that allows people to know what to do, even when experiencing an unfamiliar device or situation."

64) The toy Lego motorcycle example: "Constraints are powerful clues, limiting the set of possible actions. The thoughtful use of constraints in design lets people readily determine the proper course of action, even in a novel situation."

65) Four kinds of constraints:
1) physical constraints
Batteries don't have a physical constraint on orientation, but they could be designed to allow only one way of being placed in the slot: "why not design a battery with which it would be impossible to make an error?" (See also the legacy problem: why inelegant designs persist, the collective use of standards, etc.)

2) cultural constraints
"Cultural issues are at the root of many of the problems we have with new machines: there are as yet no universally accepted conventions or customs for dealing with them."

3) semantic constraints
"Semantic constraints are those that rely upon the meaning of the situation to control the set of possible actions." The red light clearly goes on the rear of the Lego motorcycle for example, however these semantic constraints or meanings may change over time.

4) logical constraints
In the case of the Lego motorcycle, the blue light had to go on top because it was the only piece left, thus it was logically constrained. Good natural mappings work by providing good logical constraints, like a logical relationship between the spatial layout of controls and the components they represent.

66) Doors, particularly cabinet doors, with contradictory mechanisms and unclear logic; see also switches: one popular small airplane has identical-looking switches for flaps and for landing gear right next to one another, leading to frequent (and expensive) errors. See also banks of incomprehensible light switches, and the photo below of the author's very creative way to map multiple switches in a living room, including a natural anti-affordance of tilting the display so people wouldn't put drinks on the controls. !!!


67) Forcing functions: placing a physical constraint such that failure at one stage prevents the next step from happening. They come in three forms: interlocks, lock-ins, and lockouts.

68) Interlocks force operations to take place in proper sequence: for example, a microwave oven disconnects the power when the door is opened; a car requires the brake to be depressed before you can put it in gear; see also the dead man's switch.

69) A lock-in keeps an operation active, preventing someone from prematurely stopping it; an example would be a computer application where you cannot exit without saving your work--or at least dealing with a message prompt asking you whether you want to save your work.

70) Lock-ins also appear when companies try to lock in customers by making all their products compatible with each other but incompatible with the competition.

71) Lockouts prevent someone from entering a space that is dangerous, or prevent an event from occurring. Examples: covers for electric outlets, locks on cabinet doors, special caps on drug containers.
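To make those three forms concrete, here's a minimal sketch in Python (the classes and method names are my own hypothetical illustrations, not the book's): each forcing function is just a guard that refuses to proceed until its precondition holds.

```python
class CarTransmission:
    """Interlock: force operations into the proper sequence."""
    def __init__(self):
        self.brake_depressed = False

    def shift_into_gear(self):
        if not self.brake_depressed:
            raise RuntimeError("Depress the brake before shifting")
        print("In gear")


class Editor:
    """Lock-in: keep the task active until it is safe to stop."""
    def __init__(self):
        self.unsaved_changes = True

    def quit(self):
        if self.unsaved_changes:
            # Don't let the user exit and silently lose their work.
            if input("Save changes before quitting? [y/n] ") == "y":
                self.unsaved_changes = False
        print("Exiting")


class MedicineBottle:
    """Lockout: prevent a dangerous action from starting at all."""
    def open(self, push_down_while_turning=False):
        if not push_down_while_turning:
            raise RuntimeError("Child-resistant cap: push down while turning")
        print("Open")


car = CarTransmission()
car.brake_depressed = True   # satisfy the interlock first
car.shift_into_gear()        # now allowed
```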

72) People often disable forcing functions because they're a nuisance in normal usage. Thus "the clever designer has to minimize the nuisance value while retaining the safety features of the forcing function" to guard against tragedy.

73) Cultural conventions, customary conventions, and examples of "violations of convention" which can be very disturbing to users. See for example "destination control elevators" which are far more efficient but disturbing in how they are different from traditional elevators. Another example of violations of conventions would be the imposition of the metric system. 

74) "On the whole, consistency is to be followed. If a new way of doing things is only slightly better than the old, it is better to be consistent."

75) Several pages of discussion of faucets and hot and cold water, conventions with round knobs versus blade knobs, which knob controls which, how to control the flow, showers and how they function, various mapping problems and conventions that are followed or violated as the case may be. This guy clearly has burned a lot of time trying to figure out inscrutable showers, and I feel his pain: it's something that can happen in any (every?) hotel room...

76) The design principle of desperation: "if all else fails, standardize." This way, at least people only have to learn once. "Standards simplify life for everyone. At the same time, they tend to hinder future development... Nonetheless, when all else fails, standards are the way to proceed."

77) Sound as a signifier: the click when a bolt slides home, the sound of a muffler when it has a hole, a rattle when things aren't secured, a tea kettle when the water comes to a boil, the rise in pitch when a vacuum cleaner gets clogged, the click when the toast pops up, etc. "Experienced mechanics can diagnose the condition of machinery just by listening."

78) Adding sounds to electric vehicles to increase safety: what sound would you want? Adding the sounds of a gas engine would be an example of a "skeuomorphic" design, "incorporating old familiar ideas into new technologies, even though they no longer play a functional role." Derided by purists, although it does ease the transition from the old to the new.

Ch 5: Human Error? No, Bad Design (This chapter is more technical)
79) Good quote here: "When a bridge collapses, we analyze the incident to find the causes of the collapse and reformulate the design rules to ensure that form of accident will never happen again. When we discover that electronic equipment is malfunctioning because it is responding to unavoidable electrical noise, we redesign the circuits to be more tolerant of the noise. But when an accident is thought to be caused by people, we blame them and then continue to do things just as we have always done... We design equipment that requires people to be fully alert and attentive for hours, or to remember archaic, confusing procedures even if they are only used infrequently, sometimes only once in a lifetime. We put people in boring environments with nothing to do for hours on end, until suddenly they must respond quickly and accurately. Or we subject them to complex, high-workload environments, where they are continually interrupted while having to do multiple tasks simultaneously. And we wonder why there is failure."

80) "Blame and punish; blame and train. The investigations and resulting punishments feel good: 'we caught the culprit.' But it doesn't cure the problem: the same error will occur over and over again."

81) Root cause analysis; note also that most accidents do not have a single cause.

82) Toyota's "Five Whys": keep searching for the reason even after you have found one; keep asking why. This leads you to questions beyond why the error occurred: what circumstances led to it, why those circumstances arose, and so on.

83) "If the system lets you make the error, it is badly designed. And if the system induces you to make the error, then it is really badly designed." But it's much easier to replace a person (once we've concluded "human error" is at fault, naturally) than it is to redesign a whole system.

84) People are creative, constructive, exploratory; machines are exact, precise, rote, dull and logical. And yet we ask humans to conform to the input requirements of machines. This is a common structural problem with many systems and it's exacerbated in times of interruption or stress.

85) Note also the occurrence of deliberate violations, which are often ignored in the error literature but play an important role in many accidents. Many times rules are written with a goal of legal compliance (an interesting example of a principal/agent problem) rather than an understanding of the work requirements; some rules are so extensive that if workers followed them all, they couldn't get their jobs done.

86) Two types of errors: slips and mistakes, widely used in the study of error, a topic of extreme importance:
* Slips: "A slip occurs when a person intends to do one action and ends up doing something else." Two classes of slips: action-based (the wrong action is performed: I pour milk into my coffee then put the coffee cup into the fridge, in other words the correct action applied to the wrong object) and memory-lapse (memory fails so the intended action is not done or the results are not evaluated: I forget to turn off the gas burner on my stove after cooking dinner).
* Mistakes: "Mistakes occurs when the wrong goal is established or the wrong plan is formed. From that point on, even if the actions are executed properly they are part of the error, because the actions themselves are inappropriate--they are part of the wrong plan... It is the plan that is wrong."

87) Three categories of mistakes: 
* rule-based (the person followed the wrong rule)
* knowledge-based (the person had erroneous or incomplete knowledge)
* memory lapse (when there is forgetting at the stages of goals, plans, or evaluations)

88) See the "Gimli Glider" Boeing 767 Air Canada emergency landing where a knowledge-based mistake was made where the weight of fuel was computed in pounds instead of kilograms... but also a memory lapse mistake was made where a mechanic failed to complete troubleshooting because of a distraction.

89) Slips paradoxically tend to occur more frequently to skilled people than to novices, because slips typically result from a lack of attention to the task.

90) Classification of slips: 
* capture slips: instead of the desired activity, a more frequently or recently performed activity gets done instead: it "captures" the activity. Designers must avoid procedures that have identical opening steps but then diverge... sequences should be designed to differ from the very start.

* description-similarity slips: the error here is to act upon an item similar to the target item (e.g., rolling up your shirt into a ball and tossing it into the toilet instead of the laundry basket). Here "designers need to ensure that controls and displays for different purposes are significantly different from one another. A lineup of identical-looking switches or displays is very apt to lead to description-similarity error." Airplane cockpits make controls shape-coded so they look and feel different.

* mode error slips: mode errors occur when a device has different states ("modes") in which the same controls have different meanings. These errors creep into our lives when we have a single control serve multiple purposes. The design might be simpler and much less expensive and easier to use but "this apparent simplicity masks the underlying complexity of use."

* memory lapse slips: memory-lapse failures happen often because we're interrupted, or because a machine-based system requires more steps than can be held in working memory. Examples: making copies and walking off with the copy but leaving the original inside the machine; forgetting a child; losing a pen because you're interrupted in the middle of writing something (borrowing someone's pen and not giving it back is also a type of capture error); leaving your bank card in an ATM (such a frequent error that ATMs now have a forcing function requiring you to remove the card before delivering the money). Design solutions: minimize the number of steps, add vivid reminders of steps that need to be completed, or use forcing functions like the ATM example (sketched below); even a chain on the end of a pen prevents people from walking off with it.
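A minimal sketch of that ATM forcing function in Python (the ATM class and its method names are hypothetical, for illustration only): the designer reorders the steps so the goal, getting the cash, cannot complete until the easily forgotten step, taking the card, is already done.

```python
class ATM:
    def eject_card(self):
        print("Card ejected")

    def wait_until_card_removed(self):
        input("Please remove your card, then press Enter... ")

    def dispense_money(self):
        print("Cash dispensed")


def withdraw(atm: ATM):
    # Forcing function against memory-lapse slips: the step people
    # forget (taking the card) is made a prerequisite of the step they
    # came for (taking the cash), so no one walks away without it.
    atm.eject_card()
    atm.wait_until_card_removed()
    atm.dispense_money()


withdraw(ATM())
```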

91) Other mistake categories:
* Rule-based mistakes
* Knowledge-based mistakes
* Memory-lapse mistakes

92) Also relevant: social or institutional pressures on accidents or environments. A group may arrive at a mistaken diagnosis of a problem and then interpret all information from then on through the prism of this mistaken diagnosis. Also there might be considerable economic pressure to keep a system running (like a plant, or an airplane), despite problems.

93) On the use of checklists: powerful tools to increase the accuracy of behavior and reduce error, particularly slips and memory lapses. See Atul Gawande's book The Checklist Manifesto. See also the paradox that adding more people to check a task makes it less likely that it will be done right. See also how certain professions (uh, surgeons, arrogant docs, etc.) resent or resist checklists as an insult to their professional competence.

94) On checklist design: it must be iterative, always being refined, so that it covers key items yet isn't burdensome to perform. Many who object to checklists are actually objecting to badly designed lists.

95) On reporting errors: social pressures, institutional pressures, people may not want to reveal errors made by their staff, etc.

96) On "explaining away" errors: most major accidents are preceded by warning signs, equipment malfunctions, or unusual events. "Often there is a series of apparently unrelated breakdowns and errors culminate and a disaster. Why didn't anyone notice? Because no single incident appeared to be serious."

97) In hindsight, events seem logical: we must distinguish the explanations available in hindsight from what could actually have been known with foresight. "The contrast in our understanding before and after an event can be dramatic."

98) Great quote here on this hindsight bias problem: "The accident investigators, working with hindsight, knowing what really happened, will focus on the relevant information and ignore the irrelevant. But at the time the events were happening, the operators did not have information that allowed them to distinguish one from the other... The investigators have to imagine themselves in the shoes of the people who are involved and consider all the information..."

99) "Designing for error": making a product well-designed for when things go wrong, not only for when things go right or when the device is used perfectly.

100) On creating "sterile periods" as an error-limiting device to reduce interruption-based error. On interruptions and how costly they are see how in aviation (the FAA requires a "sterile cockpit configuration" whereby pilots are not allowed to discuss any topic not directly related to the control of the airplane during specific critical periods, this eliminates many interruptions which typically were a major problem during critical phases of flying like landing and takeoff). See also conventions of stopping a conversation when doing something difficult while driving (like merging onto a highway).

101) See also warning bells, lights, and noises, all of which go off at once during emergencies, adding to the confusion and stress, all competing with each other to be heard. "The design of warning signals is surprisingly complex."

102) Adding constraints to block errors: segregating and changing the shape of various fluid containers in a car; also changing the colors of the different fluids to prevent putting the wrong fluid into the wrong place.

103) The "undo" command: "The best systems have multiple levels of undoing, so it is possible to undo an entire sequence of actions."

104) Code scanning to prevent medication errors: automated checks ensure that the prescription matches the filled bottle, and nursing staff confirm the medication against the tag worn around the patient's wrist to make sure it's the right patient, etc.; the system can also be designed to flag repeated administrations of the same medication.

105) The Swiss cheese model of how errors lead to accidents: "In well-designed systems, there can be many equipment failures, many errors, but they will not lead to an accident unless they all line up precisely. Any leakage--passageway through a hole--is mostly blocked at the next level... This is why the attempt to find 'the' cause of the accident is usually doomed to fail." This leads to various fallacies about avoiding accidents: basically we think that if we had replaced one of the "pieces of Swiss cheese," the accident wouldn't have happened. See also the "if only" explanation ("if only I hadn't decided to take a shortcut"), even though this was not the cause of the accident.

106) "The Swiss cheese metaphor suggests several ways to reduce accidents":
* Add more "slices of cheese."
* Reduce the number of holes (or make the existing holes smaller).
* Alert the human operators when several holes have lined up." Very interesting! Basically: increase design redundancy and layers of defense, but the third item also suggests a more abstract monitoring solution (see note 107 for a sad example of this).
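Under the simplifying assumption that the layers of defense fail independently, the metaphor's arithmetic is plain multiplication; a tiny sketch (my own, with made-up numbers) of why both adding slices and shrinking holes help:

```python
def p_accident(layers: int, p_hole: float) -> float:
    # An accident requires a hole in every layer of defense at once;
    # with independent layers, the probabilities multiply.
    return p_hole ** layers

# More slices of cheese (layers) or smaller holes (p_hole) both drive
# the aligned-failure probability down multiplicatively.
for layers in (1, 2, 3, 4):
    print(layers, "layer(s):", p_accident(layers, p_hole=0.01))
```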

107) "One of my favorite examples in aviation is of a pilot who, after experiencing low oil-pressure readings in all three of his engines, stated that it must be an instrument failure because it was a one-in-a-million chance that the readings were true. He was right in his assessment, but unfortunately, he was the one. In the United States alone there were roughly 9 million flights in 2012. So, a one-in-a-million chance could translate into nine incidents." Jesus.

108) Paradoxes of automation:
* Handles dull dreary tasks but fails with complex ones
* When it fails, it often fails without warning and a human is out of the loop

Ch 6: Design Thinking
109) On solving the correct problem: one of the author's rules in consulting is to "never solve the problem I am asked to solve." Invariably the problem given is not the actual problem, not the root problem, but rather a symptom.

110) He gives his MBA class a problem to solve and then asks them, "How do you know you solved the correct problem?" Everyone is puzzled, and asks why anyone would ever give them the wrong problem. This is a GREAT example of a ludic fallacy: the point of course is that in the real world problems do not come in nice, neat packages, and usually it is too easy to see only the surface problem.

111) "Good designers never start by trying to solve the problem given to them: they start by trying to understand what the real issues are." Even then, instead of solving that problem they continue to consider a wide range of potential solutions. This is what the author calls design thinking. All great innovators practice it, not just designers.

112) On being iterative, expansive, and always resisting the temptation to jump immediately to a solution for the stated problem. 

113) Human centered design [HCD]: the process of ensuring that people's needs are met, that the resulting product is understandable and usable, that it accomplishes the desired tasks, and that the experience of use is positive and enjoyable. 

114) On the Double Diamond Diverge-Converge model of design (see photo below)


115) The double diamond model involves diverging to expand the scope of the problem and examine all the fundamental issues that underlie it, then converging on a single problem statement, then expanding the space of possible solutions, then finally converging on a proposed solution.

116) HCD involves four steps, sometimes described as a spiral method, repeated over and over, getting closer to the desired solution:
1) observation (for example of would-be customers in their natural environment)
2) idea generation (avoiding criticism, considering even crazy ideas or obviously wrong ideas because of insights that could be behind them; also questioning everything, the author is particularly fond of stupid questions which often turn out to be profound with less obvious answers than expected)
3) prototyping (using the Wizard of Oz method, mimicking a huge powerful system long before it can be built)
4) testing (use the prototypes with small groups of, say, five people to test out, observe and even record their experiences and then iterate from there)

117) On the design concept of "fail frequently, fail fast": rational executives never understand this aspect of design ("Why would you want to fail?").

118) On activity-centered design versus human-centered design: activity-centered design focuses on the activity, not the person. For example, rice cookers around the world cook rice in very similar ways, so the conceptual model of the product is built around the conceptual model of the activity. This works when people's activities tend to be similar across the world.

119) Note that iterative design processes (to develop and modify designs) let you go back and modify a design flexibly, whereas a "linear" or "waterfall" design process makes it difficult or impossible to go back. Iterative methods are designed to defer the formation of rigid specifications. The approach has trade-offs: it doesn't scale well to large projects or across hundreds or thousands of developers.

120) Once again, recall Don Norman's Law of Product Development: the day the product development process starts, it is behind schedule and above budget.

121) On various constraints in design: 
* products have multiple conflicting requirements: price, the end user isn't the one making the buying decision, etc.
* manufacturing, service, and salespeople are in a way also "users" of the product in the sense that they have to deal with the output of the design team, so designers (and the design itself) need to accommodate them as well.
* There's no such thing as an average person, so designers need to think about "physical anthropometry": the various dimensions of the human body. Also, what happens when you average a left-hander with a right-hander?
* The stigma problem: devices to aid people with particular difficulties often fail because they are rejected by their intended users--who don't want to advertise their infirmities. See however Sam Farber's company OXO and his tools designed for someone with arthritis, which were advertised as better for everyone. See also inclusive design or universal design: building enough flexibility into a product that everyone can use it.

122) On living with complexity: "Complexity is essential: it is confusion that is undesirable."

123) On the necessity and establishment of standards: see the everyday clock or the QWERTY keyboard. Note, however, that standards can be so slow to develop that the technology outruns them; or you standardize too soon and lock into a primitive technology; or you standardize too late and there are too many standards to agree on one (early digital cellphone transmission protocols are a good example: TDMA/CDMA/GSM, etc.).

124) On designs that deliberately make things difficult: a difficult-to-operate door for a school with handicapped children to keep them from leaving without adult supervision. "Most things are intended to be easy to use, but aren't. But some things are deliberately difficult to use--and ought to be." See for example doors designed to keep people in or out, security systems, dangerous equipment, secret doors, safes, medicine bottles.

Ch 7: Design In the World of Business (this chapter covers real world problems and constraints on design)
125) Featuritis, creeping featurism: matching the competitors' features, versus resisting the urge to match every competitor's features and instead focusing on your product's specific strengths.

126) Incremental product innovation versus radical product innovation

127) Technology is simultaneously rapid and slow: most of daily life is dictated by conventions that are centuries old, many of which no longer make sense. 

128) Most modern technologies follow a time cycle of 
* fast to be invented, 
* slow to be accepted, 
* even slower to fade away and die. 
For example: touch displays existed for almost 30 years before becoming widely available, in part because it took decades to transform the research technology into components that were inexpensive and reliable enough for everyday use.

129) "A rule of thumb is twenty years from first demonstrations in research laboratories to commercial product, and then a decade or two from first commercial release to widespread adoption."

130) Typewriters and the QWERTY keyboard: so arranged not because the original typists were too fast and jammed the machines (this is an urban legend), but because the mechanical type bars needed to approach each other from wide angles, thus minimizing the chance of collision and getting tied up. In fact, the QWERTY arrangement helps speed typing because letter pairs are usually typed with different hands. Note also that the Dvorak keyboard is indeed superior to the QWERTY keyboard, but not by as much as most people assume; the improvement is not enough to counteract the legacy costs. "'Good enough' triumphs again."

131) She also "chord keyboards" like the ones stenographers use, where keys are pressed simultaneously (each combination is called a "chord") to code digits, punctuation and phonetic sounds of English. The problem is there's a very steep learning curve and all the information needs to be "in the head"; thus it's not like a keyboard that you can walk right up to it and use right away because the keys are visible.

132) On how technology makes us smarter but also dumber: even Socrates complained about books, arguing that reliance on written material diminishes not only our memory but our very need to think, and thus was far inferior to learning through discussion.

133) This chapter is the weakest and most repetitive of the book. The author is trying to wrap up the subject and is struggling to do so.

134) The author and his publisher put a lot of effort into making interactive versions of three of his books. It was a failure: they were produced on a computer system called HyperCard, which Apple developed but eventually stopped supporting, so these interactive books will not run on any existing machine. Note, however, that today anyone can record video or voice and do simple editing.

135) On the moral obligations of design: "We are surrounded with objects of desire, not objects of use." Usability is not the primary criterion in the marketing of home and office items in the consumer economy; these things are good for business but bad for the environment. See also planned obsolescence: products built with a limited lifespan, yearly fashions in women's clothing, style changes in vehicles.

136) Fascinating: "A story tells of Henry Ford's buying scrapped Ford cars and having engineers disassemble them to see which parts failed and which were still in good shape. Engineers assumed this was done to find the weak parts and make them stronger. Nope. Ford explained that he wanted to find the parts that were still in good shape. The company could save money if they redesigned these parts to fail at the same time as the others." (!!!)

137) Design is successful only if the final product is successful. People need to buy it, use it, and enjoy it; it needs to be manufacturable and profitable; it has to satisfy all these needs at once.

138) On designing "small": the rise of small efficient tools, small 3D print shops, self-publishing, YouTube-type video sites where people can teach or offer content... the author dreams of "a renaissance of talent."


Other books by Don Norman: 
The Design of Future Things
The Psychology of Everyday Things (the 1988 edition of The Design of Everyday Things)
**Living With Complexity
Emotional Design
Learning and Memory
**Things That Make Us Smart
Memory and Attention: An Introduction to Human Information Processing (textbook)

To Read:
J.J. Gibson: The Ecological Approach to Visual Perception
Charles Perrow: Normal Accidents
Alfred Bates Lord: The Singer of Tales
Daniel Wegner: The Mind Club
Daniel Wegner: The Illusion of Conscious Will
James Reason: Human Error
James Reason: A Life in Error
Youngme Moon: Different: Escaping the Competitive Herd
**Tim Brown and Barry Katz: Change by Design
Jan Chipchase and Simon Steinhardt: Hidden in Plain Sight
J.D. Lee and A. Kirlik: The Oxford Handbook of Cognitive Engineering
Daniel Schacter: The Seven Sins of Memory
Baruch Fischhoff: Judgment and Decision Making
