Facets of Decision Making
Making good decisions seems to be the core of success. Yet when I observe corporate life, it often seems to me that making decisions at all is the key to success. Overwhelmed by information and stifled by unrealistic expectations, people feel unable to make any decision and rather wait for fate to decide. So if things go wrong, at least they cannot be blamed.
Over the years I have researched and experienced decision making from different angles:
- By reading about how people make decisions. Decision-making research comes mostly from economics, but also psychology/cognitive science, (interaction/UI) design, philosophy and others.
- By constructing (or rather, trying to construct) robots and computer programs that make intelligent, or at least acceptable, decisions.
- By observing how people around me make decisions and experiencing how I go about them myself.
In this blog series I want to join these perspectives into a more coherent picture. Practical decision making is rarely informed by science, and AI fully ignores empirical results in other disciplines (just as other areas of science seem to ignore each other). Obviously such a blog series cannot be exhaustive. I concentrate on the aspects that I see underrepresented in the literature and public opinion.
When you make a decision, you choose what should be done or which is the best of various possible actions. [Collins]
A decision involves picking one of several options, and the choice will be put into some kind of action. A typical experimental task asks participants which of several hypothetical job candidates they would employ. In such an experiment the choice has no consequences, but in real life the chosen candidate would be hired. The choice of whether or not to eat the marshmallow in front of me is also a decision. Other names for decision-making are theory of choice or action selection.
Picking an option means that you first need to identify or define options. In the definition above this step is not explicitly mentioned, but as we will see it is a crucial one that is often more difficult than settling on an option. The term problem solving covers this more explicitly, but I like to avoid the word "problem" as it is rather associated with well-defined mathematical problems. In real life when we decide what to have for lunch, we don't feel like solving a problem.
Now, what differentiates a good decision from a bad decision? Here it starts to get tricky. Or not, if we just choose to disregard reality.
Normative Decision Theory
Normative decision theory simply ignores reality and defines rationality based on outdated philosophical assumptions. Outdated because these assumptions have been scientifically shown to be wrong (looking out of the window now and then would do the trick as well). Unfortunately, it is taken as the default in many branches of science (e.g. economics, artificial intelligence) and seems to be stuck in people's heads as the "right" way to make decisions.
Normative decision theory treats every decision like a mathematical problem, with given alternatives and a known set of mechanisms, such as rules of formal logic or probabilities. Based on these assumptions, there are different ways to define "rationality", i.e. optimal choice. One typical mechanism is maximizing the expected utility of actions. For example, if I am faced with the decision of whether to devour that yummy piece of chocolate in front of me, I would need to consider the consequences of each potential action (eating or not eating) expressed as a number (let's take values between -1 and 1), while taking into account the probability of the outcome to occur.
- I may enjoy the chocolate, assuming a pleasure value of 0.5. With a full record of all my chocolate-eating experiences I might know that the probability of me enjoying the chocolate is 0.98. We may complicate the situation slightly with a full probability distribution of how much I might enjoy this particular piece of chocolate, getting pairs of pleasure values and probabilities. In case of not eating the chocolate I know that I will not have any pleasure, say 0 with probability 1. But again, I might assume a probabilistic model, because I could feel negative pleasure values in the near future knowing that I denied myself this chocolate.
- I may have a bad conscience regarding my health if I eat the chocolate. I leave the data retrieval to the reader.
- The piece of chocolate could get stuck in my throat. My numerical outcome would be as bad as can be, -1. I have never experienced such an event happening, but I am aware that it could potentially happen. So I might assume (!) a low probability of, say, 0.00000001. Alternatively I could try to get data on how often this has happened to others. However, when deciding whether to do this research I would need to consider my time investment, the probability of really finding such a number, possible costs of database services, etc.
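The calculus above can be sketched in a few lines. The pleasure and choking numbers are the ones quoted in the bullets; the bad-conscience values are my own assumption for illustration:

```python
# Toy expected-utility comparison for the chocolate decision.
# Each entry is an independent (utility, probability) outcome dimension;
# the bad-conscience pair (-0.3 with probability 0.6) is an assumed value.
def expected_utility(outcomes):
    return sum(utility * probability for utility, probability in outcomes)

eu_eat = expected_utility([
    (0.5, 0.98),         # pleasure of eating
    (-0.3, 0.6),         # bad conscience about my health (assumed)
    (-1.0, 0.00000001),  # chocolate stuck in my throat
])
eu_dont_eat = expected_utility([(0.0, 1.0)])  # no pleasure, with certainty

decision = "eat" if eu_eat > eu_dont_eat else "don't eat"
```

Note how much machinery, and how many made-up numbers, even this trivial decision requires once we insist on the normative framework.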
Let's get back to reality. In real-life decisions our options are not predefined, reality is not based on logic and we do not have full probability distributions. Normative decision theory cannot provide an answer. One way to define the quality of a real-life decision is to put it into action and evaluate the outcome. But here we encounter several problems:
- We don't know anything about alternative results. For example, the train you take home from work stops due to technical problems. You may have different choices of how to get home (wait for the train to be fixed, wait for the next train, take the bus, take a taxi, call someone to pick you up). In such situations I could never figure out afterwards whether my decision was good. I simply don't know whether the alternative bus was stuck in a traffic jam, when the train was finally fixed, etc.
- We have to account for good or bad luck. Some really stupid decisions, such as driving on the wrong side of the road, can result in no harm and temporarily even pleasure, while other reasonable decisions may go wrong, such as opening a hotel just before the COVID crisis.
- Some decisions may have far-reaching outcomes, so that we cannot afford to just try.
Instead of evaluating a decision with respect to a single outcome, we could try to define a statistical measure of possible outcomes. For example, a financial investment could be evaluated not by its one-time result, but by simulating different trajectories of how the market might have evolved. Summing over different outcomes we can see whether the investment was safe, i.e. producing positive results in most of the possible worlds, or risky, producing high results in some possible worlds but huge losses in others.
Like in normative decision theory, we need some model of our domain to be able to do such a simulation of different worlds. Some domains are well modeled, such as physical processes to simulate materials. Sometimes we may not need specific values, such as the development of stock markets, but we do need to understand the process. If we have such an understanding and the time and resources to build a simulation, it can certainly help to give insights into the consequences and important parameters of a decision. But in general, we have neither the resources nor the knowledge to do this.
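As a minimal sketch of such a possible-worlds evaluation, here is a Monte Carlo simulation of an investment. The return model (independent, normally distributed yearly returns with assumed mean and spread) is purely illustrative; it is exactly the kind of domain model we usually do not have:

```python
import random

def simulate_worlds(n_worlds=10_000, years=10, mean=0.05, sd=0.2, seed=1):
    """Final value of 1 unit invested, in each simulated possible world."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_worlds):
        value = 1.0
        for _ in range(years):
            value *= 1.0 + rng.gauss(mean, sd)  # one year's return
        finals.append(value)
    return finals

worlds = simulate_worlds()
# "Safe" vs. "risky" shows up in the distribution, not in a single outcome:
share_profitable = sum(v > 1.0 for v in worlds) / len(worlds)
worst, best = min(worlds), max(worlds)
```

The spread between `worst` and `best` tells us more about the quality of the decision than the single trajectory that happened to occur.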
My personal favorite is to look at the process of how we reach decisions. The third part of this blog series is dedicated to principles and mechanisms for making good decisions. If you follow such a (very roughly specified) process, your decisions will most of the time lead to decent results.
March  provides a list of even more views on good decisions. The main point for me is that in real life there is no such thing as an optimal (i.e. the one and only best) decision.
- Bounded rationality, ambiguity, and the engineering of choice. The Bell Journal of Economics, 9(2), pp. 587 – 608, 1978.
- Dilemmas in a General Theory of Planning. Policy Sciences, 4, 1973.
- Fooled by Randomness. Penguin Books. 2007.
We could not even settle on a definition of good decisions, so don't expect a clear answer to whether people make good decisions. But the question is important, as it can provide self-confidence or self-consciousness to humans as decision-makers in general. In corporate settings and society as a whole, there seems to be a tacit assumption that computers would be better decision-makers than humans. They cannot get drunk, so they must be better drivers. They cannot be tired, so they cannot overlook warning signals. They have no feelings, so they cannot be biased.
Daniel Kahneman and Gary Klein represent two distinct research traditions. In a joint paper  they summarize their different viewpoints:
For historical and methodological reasons, HB [heuristics and biases, Kahneman] researchers generally find errors more interesting and instructive than correct performance; but a psychology of judgment and decision making that ignores intuitive skill is seriously blinkered. Because their intellectual attitudes developed in reaction to the HB tradition, members of the NDM [naturalistic decision making, Klein] community have an aversion to the word bias and to the corresponding concept; but a psychology of professional judgment that neglects predictable errors cannot be adequate.
In addition to these rather personal motives, I see the difference in these and other research traditions in the types of problems they approach.
Artificial Intelligence started to look at processes that require intelligence in humans, such as playing chess, solving logic puzzles or doing math. It took the field a few decades to discover that human everyday activities are much harder to understand and master than the problems that are difficult for humans. But also other fields like to work with well-defined simplified problems, for example to provide clear experimental conditions. Such artificial problems are characterized by
- a clear problem statement (or goal)
- a well-defined set of alternatives
- a well-defined set of mechanisms relating actions to results
- a correct or optimal decision or solution
Such artificial problems happen to be the only ones that can be solved with normative decision theory, because they provide the prerequisites assumed by the field. For such problems it is therefore easy to define a baseline for a correct decision, as it is defined as part of the problem. They are the background for the heuristics and biases program of Kahneman and Tversky. In their studies, participants were presented with math problems that were formulated in a rather day-to-day fashion. Participants often gave the "wrong" answer that is not compatible with mathematical principles such as logic or probability theory.
Over the years, a long list of biases has been accumulated, which has led to the perception that humans are bad decision makers. Fraser and Smith "caution researchers and practitioners in referring to well known biases and errors" :
[T]here is considerable variation across subjects and considerable variation among experimental results. [...] In other cases, the existence of a behavior has been established but significant doubts have been raised about whether the subject has made an error. [...] In other cases, it has been established that the behavior occurs and that subjects make an unreasonable interpretation of the problem as stated by the experimenter, but their interpretation is reasonable for a more realistic version of the problem. [...] In other cases, the behavior occurs and can be called an error, but the behavior seems to occur only under some circumstances.
Just because artificial problems can be solved by methods of normative decision theory does not mean that this is the only way to approach them. Artificial problems can be very useful for targeted research. Newell and Simon  used such problems in thinking-aloud experiments to gain insights into the mental processes of participants, showing among other things that a large part of the solution process involves revising mental representations. Shaping the problem in different ways is also one of the methods in Polya's "How to Solve It" . The purpose of the book is to provide guidelines to teachers and students on how to approach math textbook problems. Even though the problems are formal and have well-defined solutions, the process described by Polya reads like an instruction to Design Thinking (a term that probably did not exist when the book was first published in 1945)—a technique for solving real-life problems (see below).
Simon  started to raise awareness that not using normative decision theory can be advantageous for humans. He claims that human solutions are usually "good enough", arguing that the time and resources invested in retrieving an "optimal" solution are often not justified. Simon agrees with Kahneman and Tversky that we use heuristics, or rules of thumb, as a means of making decisions efficiently. According to his concept of "satisficing" solutions, people find a decent trade-off between solution quality and the consumption of time or mental resources.
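Satisficing can be sketched as a simple stopping rule: accept the first option that clears an aspiration level instead of evaluating them all. The lunch options and ratings below are made up for illustration:

```python
def satisfice(options, evaluate, aspiration):
    """Return the first option whose value clears the aspiration level."""
    for option in options:
        if evaluate(option) >= aspiration:
            return option  # good enough: stop searching here
    return None  # nothing satisfices; one might lower the aspiration and retry

# Lunch example: take the first place rated "good enough" (>= 3 of 5).
ratings = {"canteen": 2, "bakery": 4, "bistro": 5}
choice = satisfice(["canteen", "bakery", "bistro"], ratings.get, 3)
```

The bistro may be "better", but the search stops at the bakery: the saved search effort is part of the trade-off Simon describes.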
While Kahneman and Tversky, and to some extent also Simon, consider heuristics an efficient but suboptimal means of deciding, Gigerenzer et al. have shown that heuristic-based decisions are better in real-life situations than methods of normative decision theory . Gigerenzer and his collaborators diplomatically differentiate "small worlds" and "large worlds". Their small worlds correspond to what I have called artificial problems, while in large-world problems we do not know the mechanisms relating actions to results. The distinction is a gradual one, as some mechanisms may be known or partly known while we ignore others. But I find the few everyday tasks that are simple enough to fall into the small-world/artificial domain not worth considering.
The main finding of Gigerenzer and colleagues is that the more complex the task and environment, the more it makes sense to simplify the problem by ignoring information. They call this the less-is-more effect. At first this idea may look foolish, and most of us intuitively do the opposite: the more a problem stresses us out, the more we try to base our decision on as much information as we can get. But we have to remember that we are talking about problems where we do not fully understand the mechanism of action consequences. Trying to apply statistics, for example, means we need to know the exact probability distributions. If we have to guess those distributions, we can just as well guess the result without running made-up data through formulas.
Katsikopoulos et al.  describe tallying rules and fast-and-frugal trees as techniques for making decisions in professional contexts and how to construct them. They provide examples from medicine, politics, finance and military operations. They show how simplified heuristics do better than methods of normative decision theory in situations that do not meet the necessary assumptions.
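To make these two techniques concrete, here is a sketch with invented cues, exits and thresholds. Katsikopoulos et al. derive the cue order and exit structure from domain data, which this toy example does not:

```python
# Fast-and-frugal tree: check one cue at a time; every level offers an exit.
# The triage cues below are invented for illustration.
def triage(unconscious, chest_pain, short_of_breath):
    if unconscious:
        return "emergency"   # first cue exits immediately if positive
    if not chest_pain:
        return "routine"     # second cue exits on the negative branch
    if short_of_breath:
        return "emergency"
    return "urgent"

# Tallying: count favorable cues with equal weights, compare to a threshold.
def tally(cues, threshold):
    return sum(cues) >= threshold

decision = triage(unconscious=False, chest_pain=True, short_of_breath=False)
invest = tally([True, False, True, True], threshold=3)  # 3 of 4 cues positive
```

Both rules deliberately ignore cue weights and interactions; that is exactly what makes them transparent and robust when the underlying mechanisms are unknown.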
The techniques coming from Gigerenzer's school of thought remove the need for perfect information about the mechanisms of actions. But they do assume a given set of alternatives and a clear problem statement. In many cases we lack this luxury as well. Rittel and Webber  coined the term "wicked problems". They use the word "tame problem" for what we have called "artificial problem". Wicked problems are defined in the context of social policy, dealing with far-reaching issues such as which rules to establish during a pandemic. Wicked problems are also never really solved; any decision will trigger new problems.
Rittel and Webber  "suggest that the social professions were misled somewhere along the line into assuming they could be applied scientists—that they could solve problems in the way scientists can solve their problems. The error has been a serious one." I would object insofar as I think scientists (except mathematicians) should treat their problems as wicked problems rather than using normative decision theory. After all, we pay for science funding to understand reality. And even though Rittel and Webber were thinking about large social-policy problems, their warning against confusing them with tame problems can be transferred to any situation that does not meet the criteria for artificial problems (so, basically, all).
Now that we have done away with all the criteria of artificial problems, how do we even know that we have a problem, if we cannot define any of the characteristics of a classical artificial problem? Human beings seem to have an awareness of problems, even though their scope is rather defined by gut feeling. "The formulation of a wicked problem is [sic] the problem!" 
So how can we solve such problems-that-are-not-really-problems? Design Thinking provides guidance and tools: iterating through cycles of problem understanding, option generation, option testing and deciding (settling on an option). Unfortunately, Design Thinking has degenerated into an empty buzzword in which managers hope to find a magic formula for making corporate decisions. Design Thinking only works when people understand that they are trying to "solve" a wicked problem, which means there is no well-defined goal, there are no clear measures of success and no "right" solution. It is all about an unfolding process to clarify and improve situations that we cannot even formulate as problems, but where we feel something needs to be done.
If we take the characteristics of wicked problems, but scale them to very short periods of time, we get the class of problems that the field of Naturalistic Decision Making has been focusing on. Klein  and colleagues have examined how firefighters, military decision makers and hospital staff make decisions in time-critical situations.
The main technique they identified was a matching of the current situation to situations from the past. In time-critical situations we cannot sit down with sticky notes to generate options and have a group vote to pick one (as one might do in a Design Thinking session). Instead we rely on our memory.
The tricky part is to identify the memories that will help in the specific situation. If a pilot decides whether to make an emergency landing, it won't help to know that today is Thursday and to recall situations from other Thursdays. Instead the pilot must recall situations from training sessions or other flying experience, possibly matching parameters such as weather conditions and sensor readings.
And this is where experience comes in: obviously we must have experienced other situations so that we can recall them. But experts also change their way of representing a situation. In a famous study, de Groot  showed that experienced chess players represent situations on a chessboard in a more abstract, strategic way than novices. In his experiment, participants were shown a chessboard for a few seconds, which was then removed. The participants then had to reconstruct the pieces they had seen on the board on a blank chessboard. If participants were shown a situation from a real game of chess, experts did much better in reconstructing the situation than novices. And if they did misplace pieces, they would move a group of pieces so that the strategic situation would not change. In contrast, novices placed the pieces without any context and might just put a piece on an adjacent square. When the pieces were placed randomly on the chessboard, experts did not do better than novices in reconstructing the setup.
The situations above cover explicit decision-making moments. Even though we cannot always define a clear problem, in all these situations the actors are aware they are making a decision, and possibly an important one. But all of us make hundreds of decisions every day that we are not aware of. I became aware of them when I tried to make robots behave "normally", i.e. move through space without hitting furniture or people, or grasp objects.
I do not know whether the exact same mechanisms are at work in the brain when we decide which hand to use for grasping a glass of water, when we decide to read a book or when we decide how to invest our money. But I believe that all these smaller and larger decisions follow similar basic mechanisms 1) because I cannot draw a line between them, there are gradual changes from unconscious everyday decisions into momentous life-changing ones, and 2) treating small motoric decisions the same way as the more important ones that have been studied in psychology has proved useful to me for controlling robots .
Somewhere in between the very small decisions in robotics and the more obvious ones discussed in the sections above, Barbara and Frederick Hayes-Roth  have observed in a thinking-aloud experiment how people plan errands; Tenbrink and Seiffert have done a similar experiment about the planning of itineraries . Both show how people switch between mental layers of abstraction. Contrary to techniques developed for artificial problems, people do not plan in a top-down fashion, but rather alternate between planning and sequential decisions. The same observation was made by Newell and Simon with participants working on artificial problems .
Another aspect of everyday decisions is habits. Personal habits or organizational procedures also play a role in professional decisions, and habits seem to be related to the experience-based mechanisms identified in Naturalistic Decision Making. Habits are also related to heuristics. They are a shortcut to a decision, in this case relying on former situations. Just like heuristics they can lead to bad or "irrational" decisions , but they can also be seen as a quick "satisficing" solution. Habits seem to be largely driven by input from our environment  rather than by "solving problems".
The reasons for making decisions seem not always (or maybe even rarely) to be specific problems we need to solve. Most decisions we make over a day happen unconsciously, and even when we engage in conscious problem-solving activities, we rarely find a well-defined problem. If at all, we can work on transforming a situation into something like a (well-defined) problem. The main body of scientific literature, however, starts with a problem and uses variants of methods from normative decision making. It is high time science stopped working on a special case that hardly ever occurs in reality, and instead had a closer look at the situations and methods of real decisions.
Coming back to the question of whether people make good decisions: When we look at the number of decisions we make every day, the simple fact of our survival indicates that we are generally good decision-makers. We usually fail in situations where normative decision theory can be applied, i.e. artificial problems. We also feel like we were making bad decisions in situations of high uncertainty or with important consequences. But we should always keep in mind that in reality there is no optimal solution. We have built ourselves a world in which some of our natural decision-making methods fail. But instead of changing the methods, we might think about changing the world.
- Conditions for Intuitive Expertise. American Psychologist, 64(6), pp. 515 – 526, 2009.
- A catalog of errors. International Journal of Man-Machine Studies, 37, pp. 265 – 307, 1992.
- Human Problem Solving. Prentice Hall. 1972.
- How to solve it. Princeton University Press. 2014.
- A Behavioral Model of Rational Choice. The Quarterly Journal of Economics, 69(1), pp. 99 – 118, 1955.
- Simple Heuristics That Make Us Smart. Oxford University Press. 1999.
- Less-is-more effect. https://en.wikipedia.org/w/index.php?title=Less-is-more_effect&oldid=994925754 (retrieved 18 Feb 2022)
- Classification in the Wild: The Science and Art of Transparent Decision Making. The MIT Press. 2020.
- Dilemmas in a General Theory of Planning. Policy Sciences, 4, 1973.
- Sources of Power: How People Make Decisions. MIT Press. 1999.
- Thought and Choice in Chess. Mouton. 1965.
- A Modular Approach of Decision-Making in the Context of Robot Navigation in Domestic Environments. In: 3rd Global Conference on Artificial Intelligence (GCAI), pp. 134 – 147. Miami, Fl., USA, 2017. [pdf from HAL]
- A Cognitive Model of Planning. Cognitive Science, 3(4), pp. 275 – 310, 1979.
- Conceptual layers and strategies in tour planning. Cognitive Processing, 12, pp. 109 – 125, 2011.
- Predictably irrational: The hidden forces that shape our decisions. Harper Perennial. 2010.
- Lapses of attention in everyday life. 1984.
- A new look at habits and the habit-goal interface. Psychological Review, 114(4), p. 843, 2007.
What makes decisions difficult?
Revisiting the types of decisions described in Part 2, the easiest class of decisions are artificial problems. Paradoxically, most people consider them difficult, because such problems are often described in formal or half-formal (and very often pseudo-formal) language. Given that we hardly ever face such problems in our everyday experience, this type of problem and its mathematical formulations take some practice to get comfortable with. But from then on, the only thing that stops us from mechanically solving the problem may be computing power. The way to the solution is clear.
All "real" problems are harder, and it is their not being artificial that makes them hard.
- Lacking a clear problem statement (or goal). We often do not know "where to start" to tackle an issue; we cannot even clearly state it. In groups, we often talk past each other, simply because we have different understandings of what we are working on. Decision processes in reality are rarely ever finished. One decision follows another, and our focus of attention wanders from one "problem" to another. But it is hard to draw a line between them.
- Lacking a well-defined set of alternatives. Generating alternatives becomes an additional step in the decision-making process. Possible errors can now be due to either having considered the wrong alternatives or having chosen the wrong one. This makes self-improvement and correcting mistakes harder.
- Lacking a well-defined set of mechanisms relating actions to results. We may have too little information; for example, we do not fully understand all the processes in the human body and therefore cannot compute the result of treatments. We may also have too much information, leaving us with the hard task of figuring out which of the variables we know are important for the task at hand. We can measure thousands of values of a human body, but lacking a full understanding of their working mechanisms, we often do not know which values are relevant to confirm or disconfirm some diagnosis. While math problems usually provide exactly the information necessary to solve them, in real life we have to figure out which information we need, and possibly take action to get more information or to test the value of a piece of information. To make things worse, the mechanisms of the real world tend to change over time. A virus transforms, the social dynamics in a company change, our own preferences are not the same today as they were yesterday.
- Lacking a correct or optimal decision or solution. We can never be sure whether we took "the best" decision. In politics the same action can be interpreted as a full success ("we lowered the unemployment rate") or a complete failure ("you triggered an inflation"). And nobody can prove any of this to be connected to the actions taken at all, because we never know how the world would have evolved without them (maybe the unemployment rate would be even lower or the inflation more severe). We try to learn from mistakes, but this is quite hard if we never know for sure whether we have made a mistake.
When it comes to decision mechanisms, we re-encounter the opposition of artificial and real problems. The decision mechanisms I describe here have been observed in human decision making for non-artificial problems, but there are more formalized analogues used on artificial problems. It is important to understand the differences, because the informal real-world mechanisms work well, while the reduced formalized versions only work for artificial problems.
Representation and Hierarchy
There is a general consensus that difficult problems need to be broken down into smaller subproblems. For example, I could break down my "problem" of planning my next vacation into booking a hotel, booking a flight, planning excursions, etc. I could further break down each task into smaller steps, thus forming a hierarchy of ever smaller problems. Methods developed for artificial problems assume
- the subproblems to be solvable independent of one another;
- a "command and control" way of interaction between the hierarchies, where the upper level passes a (well-defined) problem to the lower level and expects a clear answer or set of actions to be performed;
- (sometimes) a fixed number of levels.
This way of taking apart problems is not just the standard in robotics, artificial intelligence and management, it has also found its way into software engineering in the form of the waterfall model. This approach assumes that software development is a linear process with requirements analysis and architecture design followed by programming and testing. Each step refines information taken by the previous step. So the requirements document will describe certain boundaries of what the new software is supposed to do. The architecture that is then defined, must stay within those boundaries, refining aspects to a more technical level. Once the architecture has been defined, code for each operation is added (and can be added independently to different modules).
As clean and proper as this approach may sound, it does not work! There is no doubt that human beings use different ways of representing a problem, some of which are more coarse, others more fine-grained. It also makes sense to group aspects of a problem into some kind of subproblems. But:
- The subproblems are hardly ever (fully) independent of one another. Take the simple vacation example. Booking a hotel and a flight are coupled via the date. If I see that flights are very expensive or hotels fully booked on my envisioned traveling dates, I may change the dates, but I have to do it in both subtasks. Such dependencies often overwhelm the decision process and one gets the feeling of "whoa, where do I even start?". The only answer I have is to take an iterative approach (see also the section on Dynamics): start on one subtask without any final decisions, work on the next, check the overall solution, get back to the subtasks.
- Human thinking does not follow a "command and control" hierarchy. This is not possible, because we never solve artificial problems, so we can hardly expect the subproblems to be artificial ones. But more important is the temporal interaction when working on different levels of abstraction: in everyday thinking, humans switch constantly between abstract and specific. The study of Barbara and Frederick Hayes-Roth  mentioned in Part 2 (of participants planning errands) demonstrates this nicely:
- "OK. Break up town into sections. We'll call them northwest and southeast."
People use abstract representations to break down a problem.
- "Oh, real bad. Don't want to buy the groceries now because groceries rot. You're going to be taking them with you all day long. Going to have to put the groceries way towards the end."
People do not blindly fill abstract plans with actions. If the lower level requires something else, they break out of the framework provided by the abstract level.
- "We're looking good. We've knocked off a couple of secondaries that really we hadn't planned on, but because of the locations of some stores that are in the way that could be convenient"
People do not work in a top-down way. We use opportunities on lower levels of abstraction and adapt our abstract plan to the actions taken.
- Levels of abstraction can be arbitrarily deep. And they change! This is best observable in the representation of categories. When I used to see a bird, my brain would recognize "bird". For a year or so I have been looking up any bird I encountered, and now when I see one, my brain starts to respond with "blackbird", "tomtit" or "redstart". If I were to dig deeper into ornithology, my brain would respond with even more fine-grained concepts. The same is true when we work on a "problem". When I start to write a new program, I may have things on my mind such as "arrange main boxes on the screen", "create input fields", etc. When I take on each of these tasks, I will understand them better and break them up into smaller units. For example the input fields may turn into different widgets: for some inputs I will use checkboxes, for others text fields, or text fields that only accept numbers. I will also have to think about the arrangement of the widgets, etc. Once I have nearly finished a topic, with only a few details left to settle, it will go back to a more abstract description.
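The iterative approach to coupled subtasks suggested above (for the vacation example with hotel and flight linked through the date) can be sketched in a few lines. All prices here are invented for illustration:

```python
# Toy sketch of the iterative approach: instead of finalizing one subtask
# first, tentatively work through each candidate date, check the overall
# solution, and only then commit. All prices are made up.
flight_price = {"May 05": 420, "May 12": 180, "May 19": 260}
hotel_price = {"May 05": 90, "May 12": None, "May 19": 110}  # None = booked out

def total_cost(date):
    """Overall check: combined cost of both subtasks, or None if one fails."""
    if hotel_price[date] is None:
        return None
    return flight_price[date] + hotel_price[date]

costs = {d: total_cost(d) for d in flight_price}
feasible = {d: c for d, c in costs.items() if c is not None}
best_date = min(feasible, key=feasible.get)
print(best_date, feasible[best_date])  # May 19 370: the cheapest flight has no hotel
```

The point of the sketch is the coupling: the cheapest flight (May 12) is useless because its hotel subtask fails, which we only see when checking the overall solution.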
How do these insights help us in making better decisions? For me the point is to structure problems, but not to over-structure, and to abandon a structure when we find that it doesn't serve us. When advising students doing their theses I observed two extremes (and a lot in between):
- Doers: These students would sit down and start programming the minute they had been told their topic. After a week, they would present me a more or less finished program (some really good, others not so much) and use the rest of the time to tweak this initial solution.
- Thinkers: These students produced a lot of paper, sticky notes, concepts and analyses. They would usually run into time problems, because they never started to implement any solution and therefore had nothing to test.
So good decision making means finding the sweet spot between planning/analyzing/understanding and doing. The tricky part is finding this sweet spot. We are often caught in one of the two (or possibly more) thinking modes, and it is hard to know when it is time to switch. The best answer I have is experience and self-observation. The tradition of Naturalistic Decision Making emphasizes the matching of current situations to previous ones. And I believe we not only match parameters of a situation; we can also train ourselves to develop a good process in which we are aware of our different abstraction levels and switch to the most appropriate mode. We also have to accept that this switching decision (like any other decision) will not be "optimal". But there are better and worse ways to do it.
I have described the less-is-more effect advocated by Gigerenzer and colleagues in Part 2. It states that we often make better decisions if we consciously ignore information. Let's have this again: we make better decisions if we consciously ignore information. It is true, it has been empirically studied, and it makes a lot of sense, because much of the information at our hands is either irrelevant to the task or so incomplete that we cannot rely on it (e.g. we cannot do statistics with a single measurement).
I made this the motto of my life. Yet even in everyday situations, I catch myself putting unnecessary pressure on my short-term memory or my ability to do combinatorics. Even when cleaning the toilet I find myself optimizing my actions: "put the cleaner in; while I wait for it to work, I can start with the shower, but first I need some clear water to do the mirror, and then I shouldn't forget to do the drain before the shower tray, otherwise that will get dirty again...". But no matter how much I plan, I will always overlook certain details (I like to put away the toilet brush to clear the floor, before having cleaned the toilet), or forget the order in which I wanted to perform the tasks. You can argue that this is just a problem of my limited human mind (and I'd better get a computer to help me), and this may be true for toilet cleaning. But in most situations, we simply do not know everything from the start. When I design a piece of software, there is no way for me to know exactly how users will react to it, which trouble some library will give me or whether some web component will be declared deprecated. I sometimes think that our mind is limited on purpose, to keep us from optimizing too much. How often have I wished for a larger short-term (or any other type of) memory. But being limited, I won't even try to do combinatorics in my brain, and this is appropriate, because the world will have changed before I have figured out the solution. We also know from computer science that there are problems for which no known algorithm is both optimal and fast for all problem sizes (the NP-complete problems; to be fair, the impossibility is not proven but the famous open P vs. NP question, yet it is generally accepted).
So instead of throwing ever more complicated algorithms and decision-making methods at complex problems, we must accept that there is no optimal solution and that our only chance of finding a decent one is simplification. Simplification can be the aforementioned reduction of information; it can also be reliance on experience and habits, or starting to implement a (seemingly) half-finished software specification.
Again this is not a clear and easy rule; again we have to make tricky decisions about how to make our decisions. Not all simplifications are beneficial. For example, when thinking about insurance, having no data on the probability that I could crash into someone else's car does not mean that I should ignore the possibility of it happening. Taleb argues that we should focus on the consequences of an outcome rather than its probability. Knowing I could face financial ruin by destroying someone's Porsche, I'd rather pay the smaller amounts for the insurance. So a good simplification is to focus on the consequences rather than the probabilities; a bad one would be to ignore the consequences. But I can offer no general rule for always choosing the right type of simplification.
Another question is how much thinking effort we should spend before settling on an action (compare the problem of doers vs. thinkers above). Both phenomena, thinking too much and thinking too little, have been observed. I like to advocate the Design Thinking approach with its explicit ideation process, i.e. taking the time to state explicit alternatives and place them side by side. Since in real life the options are not given, it makes sense to explicitly create them and make sure we have not overlooked promising ideas. Often I have found good solutions after squeezing some really stupid ideas out of my brain, just to get on the right track of thinking. Ideation does not only generate alternatives, it also helps us to restructure the problem representation.
On the other hand, our first hunch is often remarkably good. The examples from the Naturalistic Decision-Making literature show how people can make very good decisions without ideation, just trusting their intuition and experience. And I can confirm this too: often after an ideation session, I have fallen back to my first idea (somehow with a bad conscience about doing so).
As a solution to this contradiction, I can only offer the same answer as before: being aware of the dilemma, relying on experience and self-observation, and accepting that we sometimes overthink and at other times underthink.
At the risk of repeating myself: The world changes. This includes "variables" like the position of the cup I put down half an hour ago (someone may have moved it), the mechanisms of the world (I may have a new housemate who puts things away in different places than I would), my (sub)tasks and goals (I may not need my cup any more because I found that I ran out of tea), or my own preferences (I may simply decide not to have another cup of tea).
A popular approach to dealing with uncertainty in variables is simulation. It has a long tradition in weather forecasting and stock market analysis, and it is becoming more and more popular in the business world. The main prerequisite for simulation is a good understanding of the mechanisms of the world. We do not necessarily need a problem description, so the technique sits somewhere between artificial problems and real-world problems. And I think the value of simulation depends on how it is used and how flexibly it is implemented. Simulations help to obtain a gut feeling for the important and not-so-important factors of a decision, thus helping to find appropriate simplifications. But simulations can be downright dangerous if numeric results are taken at face value.
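A minimal Monte Carlo sketch can illustrate this use of simulation for gut feeling rather than exact prediction. The numbers below are invented, not a real risk model; the point is only how a rare, heavy consequence hides behind a harmless-looking average:

```python
import random

# Simulate yearly accident damage with made-up numbers: a hypothetical 1%
# accident probability, and damage amounts that are rare but heavy-tailed.
random.seed(0)  # fixed seed so the sketch is reproducible

def yearly_damage():
    if random.random() < 0.01:  # hypothetical accident probability
        return random.choice([2_000, 10_000, 80_000])  # possibly ruinous
    return 0

runs = [yearly_damage() for _ in range(100_000)]
mean = sum(runs) / len(runs)
worst = max(runs)
print(f"mean yearly damage: {mean:.0f}, worst case: {worst}")
# The mean looks affordable; the worst case is the one to insure against.
```

Reading only the mean at face value would suggest skipping the insurance; the distribution of outcomes tells the opposite story.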
In a changing world, nature did well to equip us with decision procedures that cope naturally with the dynamics of the world. "[...] one cannot understand, then solve", because everything about the problem is changing, even the problem itself. All we can do is appreciate the dynamics of the world (they also bring opportunities!) and use the techniques already described: iterating between thinking and doing, observing and correcting.
The problem with this approach is just that we don't like to find our own mistakes. For example, when we encounter an error in a program we write, we instantly feel bad: first of all, we made some kind of mistake in the code (reminding us that we are fallible), and second we will need time to fix the bug. Herbert Klaeren, retired professor of computer science at the University of Tübingen, used to tell his students to rejoice when they detect an error in their program. These are the "good" bugs, the known ones that you can do something about. The "bad" bugs are those you don't know and that will crash the program in production. And I think this is exactly how we should feel about any kind of "bug" we encounter in our decision-making processes.
In organizations, iterative processes are often unsupported or even prevented by incentive systems and rigid reporting procedures. And if corporate culture overemphasizes harmony, problems may not be named and therefore not corrected. Gharajedaghi describes such misconceptions in corporate culture and proposes structures suitable for coping with the dynamics of the world.
Resisting the Optimization Culture
There is one more factor that I think makes decisions hard: the common belief in optimization and standard procedures. The decision mechanisms described above are only rough guidelines that have to be filled in with common sense, constant practice, self-observation and the acceptance that we cannot always make great decisions. What managers (and most other people) dream of are fixed procedures that can be followed step by step like a cooking recipe, always yielding an optimal solution.
In this blog series and previous ones (Goals Considered Harmful, Agile, Design and Buddhism) I have argued enough why such procedures cannot exist. But even knowing in my heart that my decision procedures are the way to go and that reality knows no such thing as optimal results, I find it hard to follow my own methods and to convince others to do the same. Optimization and waterfall-model thinking run so deep in our culture (or species?) that it is hard to break out of them. So an important step towards becoming a better decision-maker is training oneself to leave these thinking patterns behind and to become aware of the trade-offs we have to face in real life.
- A Cognitive Model of Planning. Cognitive Science, 3(4), pp. 275–310, 1979.
- Sources of Power: How People Make Decisions. MIT Press. 1999.
- Simple Heuristics That Make Us Smart. Oxford University Press. 1999.
- The Black Swan. Penguin Books. 2010.
- From thinking too little to thinking too much: a continuum of decision making. WIREs Cognitive Science, 2(1), pp. 39–46, 2011.
- Dilemmas in a General Theory of Planning. Policy Sciences, 4, 1973.
- Systems Thinking: Managing Chaos and Complexity. Elsevier. 2011.
I have often heard the argument that we should invest in AI so that we can get rid of error-prone, subjective and emotional decisions that humans make. For example in discussions about autonomous driving, people seem to assume that computers drive like humans, just better. I usually try to explain that an autonomous car is rather like Excel or a browser, just a lot more complex (and the word "crash" needs to be taken literally, with oneself in the middle of it). To better understand the possibilities and limitations of computers, let us get back to some of the concepts presented in Part 2.
Computers are constructed to do math, so we can expect them to be good at solving artificial problems. All algorithms in Artificial Intelligence are based on the concept of state spaces. A state is a representation of problem-relevant properties of the world. So for an autonomous car a state would contain things like the boundaries of the road, the presence of traffic signs, and objects such as cars, pedestrians etc. For an industrial machine, states may contain temperature and properties of the material to be handled.
A state space is the set of all possible states of a domain. State spaces can be modeled as discrete entities or continuous space, they can be infinitely large. Based on state spaces, we can define artificial problems:
- a clear problem statement (or goal) given by a goal state, a goal condition or an objective function (i.e. optimization criterion)
- a well-defined set of alternatives given by a set of possible actions, often annotated with costs
- a well-defined set of mechanisms relating actions to results given by a function mapping a state and an action to a follow-up state; this function can be defined probabilistically, so when we know the current state and evaluate a proposed action, we will get a set of possible follow-up states, each annotated with a probability
- a correct or optimal decision or solution is implied by the problem definition
To solve an artificial problem, we can simply use the rules defined by normative decision theory. The only obstacle to finding optimal solutions may be computational power. For large state spaces, applying these rules in their strictest form may simply take too long to compute, even with the fastest computers we have. In these cases we have to live with approximations. The goal of the process is still to find the optimum or reach the specified goal, but we know that we will only get a solution close to the optimum.
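To make the notion of an artificial problem concrete, here is a deliberately tiny example invented for this post: the state space is the integers, the actions and the transition function are exact, and the goal state is given. With all four components specified, a standard search procedure finds a provably optimal solution:

```python
from collections import deque

# States are integers; two actions with exact transition functions.
ACTIONS = {"+1": lambda s: s + 1, "*2": lambda s: s * 2}

def solve(start, goal):
    """Breadth-first search: returns a shortest action plan from start to goal."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan  # optimal: BFS explores shorter plans first
        for name, act in ACTIONS.items():
            nxt = act(state)
            if nxt <= goal and nxt not in seen:  # both actions only increase the state
                seen.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None

print(solve(2, 11))  # → ['*2', '+1', '*2', '+1']
```

Note how every ingredient (states, actions, transitions, goal) had to be written down by a human before the computer could do anything.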
Computers can help to solve real-life problems that happen to be close to artificial ones. This can work well in factories, where environment and processes can be relatively well controlled. But in general, as discussed in Part 2, most real-life tasks are more elusive: we cannot specify any of the components of artificial problems. The more we move into everyday life, the less we can and should trust artificial formulations of real-world problems.
"[...] you may agree that it becomes morally objectionable for the planner to treat a wicked problem as though it were a tame one, or to tame a wicked problem prematurely, or to refuse to recognize the inherent wickedness of social problems." This does not only concern social problems, but all types of decisions that cannot easily be represented as artificial problems. And for those that can be represented, the main work is done by humans, not computers.
"Take an optimization model. [...] But setting up and constraining the solution space and constructing the measure of performance is the wicked part of the problem."
Now if we try to make computers make real-life decisions, we have a very fundamental problem: How can we tell a computer about the decision problem? If we assume we have no explicit goal, no description of the mechanisms and no enumeration of alternatives, how would a computer know what we want it to do? We need to give it some kind of input.
As we observed in previous parts of this blog series, humans start with some gut feeling of what needs to be done. The decision process consists mostly in transforming this gut feeling into more tangible forms, without necessarily getting to a level of specificity that would be necessary for a mathematical formulation. For making the kinds of decisions that people make every day, a computer would need a full understanding of the world around it, it would need the same types of gut feeling, it would need to communicate with humans and transform different versions of gut feelings to finally get to a conclusion and act. This is currently completely out of reach for computers, and maybe it should stay that way. If we put gut feeling into computers, they may become just as error-prone, subjective and emotional as we are, and then what would we have gained?
We have looked at the extremes: artificial and wicked problems. But we have seen in Part 2 that there is a continuum with some problems lying in between. At least for some classes of problems, could we not transfer some of the techniques that humans use for their decisions into computers?
Heuristics can mean lots of different things. They are often introduced as "rules of thumb". In psychology they may explain very specific behavior such as the conjunction fallacy, or they may describe more generic strategies such as the take-the-best heuristic. And sometimes "heuristic" is used as a catch-all term for "any behavior we don't understand".
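The take-the-best heuristic is simple enough to write down directly. In the sketch below (cities and cue values are invented for illustration), two options are compared on cues ordered by validity, and the decision falls on the first cue that discriminates; everything after it is consciously ignored:

```python
def take_the_best(a, b, cues):
    """Decide between a and b on the first discriminating cue."""
    for cue in cues:  # cues ordered from most to least valid
        if a[cue] != b[cue]:
            return "a" if a[cue] else "b"
    return "guess"  # no cue discriminates

# Invented example: "which city is larger?" judged from binary cues.
city_a = {"has_airport": True, "is_capital": False, "has_university": True}
city_b = {"has_airport": True, "is_capital": True, "has_university": False}
cues = ["has_airport", "is_capital", "has_university"]
print(take_the_best(city_a, city_b, cues))  # → b (the third cue is never looked at)
```

The "less is more" character is visible in the loop: once a cue discriminates, all remaining information is deliberately left unread.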
Heuristics have been picked up in AI. In contrast to psychology, we have a clear definition: "A heuristic function, also simply called a heuristic, is a function that ranks alternatives in search algorithms at each branching step based on available information to decide which branch to follow."  These heuristics are a compromise between the original idea of letting computers do the work and the pragmatic approach to help the computer do so. Heuristics can speed up the search by using additional knowledge, which, of course, must be provided by a human in addition to the problem statement. Heuristics can also be a tool to get approximate solutions.
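A heuristic in this AI sense can be shown with a small, invented example: A* search on a toy grid, where the Manhattan distance to the goal ranks the alternatives at each branching step. The extra knowledge (that straight-line progress is never an overestimate) is exactly what a human must provide on top of the problem statement:

```python
import heapq

def astar(start, goal, walls, size=5):
    """A* on a size x size grid; returns a shortest path avoiding walls."""
    def h(pos):  # the heuristic: optimistic estimate of the remaining cost
        return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]
    done = set()
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in done:
            continue
        done.add(pos)
        x, y = pos
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in walls:
                heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None

path = astar((0, 0), (4, 4), walls={(1, 1), (2, 1), (3, 1)})
print(len(path) - 1)  # number of moves on a shortest path
```

Because this heuristic never overestimates, the speed-up comes for free: the solution is still optimal, just found with fewer expanded states.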
In my own work I used some concepts of heuristics from psychology and combined them with the classical AI approach. To some extent this can lead to more robust solutions that depend less on specific parameters in the problem specification. But there is one basic obstacle when trying to replicate human heuristics in computers: knowledge representation and memory organization. Heuristics often rely on an implicit attribution of importance through the order or strength in which alternatives are retrieved from memory, like the take-the-first heuristic, which describes how decision-makers use the first viable option they can think of. But a hard disk is not organized like human memory: every possible solution is prespecified and as likely as any other.
Gigerenzer and colleagues have advocated the use of fast-and-frugal trees as a means of consciously using heuristics for professional decisions. Such trees can easily be put into a computer as simple if-then rules. But where is the AI here? Humans have done the work.
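As a sketch of what such a tree looks like as if-then rules: the structure below follows the fast-and-frugal pattern (every cue but the last has exactly one exit), while the cues and thresholds are invented for illustration, not a validated decision tree:

```python
def triage(patient):
    """A fast-and-frugal tree as plain if-then rules (invented cues)."""
    if patient["chest_pain"]:
        return "emergency"  # cue 1 can decide on its own
    if not patient["short_of_breath"]:
        return "send home"  # cue 2 exits on the other side
    # the last cue decides both ways
    return "emergency" if patient["age"] > 70 else "observe"

print(triage({"chest_pain": False, "short_of_breath": True, "age": 75}))  # → emergency
```

The tree is transparent and fast to apply, but choosing and ordering the cues is entirely human work; the program only replays it.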
Naturalistic Decision Making advocates the Recognition-Primed Decision Model. As the name suggests, the main idea is to rely on experience: recognizing the present situation as similar to previous ones and possibly adapting the solution that worked before.
This idea was explored early on in AI by Sussman in his Hacker program. He used it to solve problems in a blocks world, such as building a tower out of given blocks. His system contained the basic ingredients of the Recognition-Primed Decision Model: a matching of situations to previous solutions, a critics mechanism that checks and adapts the solution, and an implicit learning mechanism that stores actions and their outcomes. As impressive as this system is, in the end it comes down to a heuristic for saving computation power when solving artificial problems, because all the matching, checking and adaptation rely on rules of formal logic. The matching of situations, and the understanding of the mechanisms of the world, need to be modeled carefully by humans and must be fully customized to the domain.
The same ideas have been carried on in the Case-Based Reasoning community. But in the end we always get back to the point where people have to do the work by carefully defining the state space, defining matching rules and considering possible changes to action plans. None of this comes close to the flexible way in which people match situations to prior experience, adapt solutions and gradually develop an understanding of a situation.
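A bare-bones sketch of the retrieval step makes the division of labor visible. The cases and features below are invented; the part the text calls the real work (choosing features and the matching rule) is done here by the human author, not by the program:

```python
# Stored cases: (situation features, solution that worked before).
cases = [
    ({"season": "summer", "budget": "low", "group": "family"}, "camping trip"),
    ({"season": "winter", "budget": "high", "group": "couple"}, "ski resort"),
    ({"season": "summer", "budget": "high", "group": "solo"}, "island hotel"),
]

def retrieve(situation):
    """Reuse the solution of the case sharing the most feature values."""
    def similarity(case):
        features, _ = case
        return sum(features[k] == situation[k] for k in situation)
    return max(cases, key=similarity)[1]

print(retrieve({"season": "summer", "budget": "low", "group": "couple"}))  # → camping trip
```

All the flexibility sits in the hand-written similarity function; the computer only counts matches on a representation a human already fixed.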
How Computers Can Help
When it comes to computer decision-making we always end up in the same dilemma: computers need clear specifications of a problem, and they are unable to imitate the mental transformations of problems that humans perform. There has been some research on how to transform representations, but it always comes back to the point that the real work has been done by the programmer, not the computer.
Instead of solving problems for us, I think computers can serve us better by amplifying our most basic inborn decision-making mechanism: transforming our internal, often elusive, representations until they start to make sense. Computers can amplify the tools that humans have been using for centuries in the form of pondering, communicating, and writing.
I was involved in the prototypical implementation of Sort It, a tool for writing down ideas or solution alternatives, using tags to provide some order. The main point of Sort It is that the content can be re-arranged easily: tags can be renamed and merged, they can be assigned to and deleted from single items or groups, and it includes an automatic transformation of hierarchical levels. Using these kinds of transformations has often helped me to better understand a task at hand. The data I enter in Sort It usually has a short life span. When I have understood the problem well enough I go on using more formal tools such as spreadsheets, or I don't need any other tools, because I have made my decision.
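A hypothetical sketch can convey the kind of cheap re-arrangement meant here. This is not Sort It's actual implementation or API, just an illustration of why tag operations make restructuring so cheap: items carry tags, and tags can be renamed or merged in one sweep without touching the items one by one:

```python
# Invented items and tags, purely for illustration.
items = {
    "compare hosting costs": {"money", "infra"},
    "ask team about framework": {"people", "tech"},
    "check library licenses": {"legal", "tech"},
}

def rename_tag(old, new):
    """Rename a tag across all items in one sweep."""
    for tags in items.values():
        if old in tags:
            tags.discard(old)
            tags.add(new)

def merge_tags(sources, target):
    """Merge several tags into one, restructuring the whole view at once."""
    for src in sources:
        rename_tag(src, target)

merge_tags({"infra", "tech"}, "technology")
print(sorted(items["check library licenses"]))  # → ['legal', 'technology']
```

Each such operation changes the view of the whole collection at once, which is what makes playing with the representation cheap enough to actually do it.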
Working on tools such as Sort It provides an opportunity to better understand the processes involved in human decision making. We are still very far from understanding them to a point where we could fully reproduce them in computers, but by building tools for people, we can enter a positive feedback loop: provide useful tools that help people make better decisions (or at least feel better about their decisions), thereby understand human decision processes better, and build even better tools.
- Dilemmas in a General Theory of Planning. Policy Sciences, 4, 1973.
- Heuristic (computer science). https://en.wikipedia.org/w/index.php?title=Heuristic_(computer_science)&oldid=1071826355
- A Unifying Computational Model of Decision Making. Cognitive Processing, 20(2), pp. 243–259, 2019. [pdf from HAL]
- Recognition primed decision. https://en.wikipedia.org/w/index.php?title=Recognition_primed_decision&oldid=1067910815
- The virtuous nature of bugs. In: AISB'74: Proceedings of the 1st Summer Conference on Artificial Intelligence and Simulation of Behaviour, pp. 224–237. 1974.
- Case-based reasoning. https://en.wikipedia.org/w/index.php?title=Case-based_reasoning&oldid=1070872850