
Goals Considered Harmful

Goals are a widespread concept in management and everyday life. Goals are also an important concept in Artificial Intelligence, and I have spent a lot of time trying to figure out how to formulate and use goals to make "computers do interesting things that are not in their nature". In doing so I have realized that goals are the wrong approach, not only for AI systems, but also for our personal and professional decisions.

Part 1: The Harmfulness of Goals

02.07.2021
Defining goals as conditions is a modeling process, often an unconscious one. Like any type of modeling, it abstracts away details, in many cases going so far that the underlying purpose is lost. Goals can make people dissatisfied, or even dishonest. Overall, goals are a manifestation of the traditional western belief in static environments and plannable tasks.

Goal-directed behavior seems to be the very essence of humankind. While most animals seem to live from moment to moment, humans have the ability to guide their actions towards desirable future states, so that we not only secure our immediate food supply but can also achieve more sophisticated things such as building pyramids.

It is not surprising, then, that we are surrounded by goals: we aspire to run 10 km in less than an hour, we want to have saved enough money by the end of the year to buy the car of our dreams, we may even want the hedge in the garden to have grown to two meters next year to be safe from the view of passersby. Management theories are built on strategic and operational goals; the whole discipline of controlling is devoted to defining and monitoring goals.

Goals are also a basic concept of Artificial Intelligence. It seems obvious that intelligent machines must encompass the (possibly) unique human trait of goal-directed behavior, and this is directly connected to the definition and achievement of goals. Decades of trying to define and organize goals in AI got me thinking about our concept of goals in general, and I have come to believe that it is harmful well beyond AI.

Goals as Conditions

The Oxford Advanced Learner's Dictionary defines a goal as something that you hope to achieve. AI is more specific: a goal is defined as a set of states. States are the basic mechanism of AI to describe the world. Actions (your own or those of others) transform the world from one state into another. In complex state spaces it is unwieldy or impossible to enumerate the goal states, so instead they are defined by a condition. The condition on(mug,table) defines the set of states in which the mug is on the table, leaving all other variables open, like whether the dishwasher is running.
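
To make this concrete, here is a minimal sketch in Python; the state representation and all names are purely illustrative assumptions, not taken from any particular AI system. It shows states as sets of fluents and a goal as a condition over states:

```python
# A minimal sketch, not the API of any real planner: a state as a set of
# fluents, a goal as a condition over states.
from typing import Callable, FrozenSet

State = FrozenSet[str]          # e.g. frozenset({"on(mug,table)", "running(dishwasher)"})
Goal = Callable[[State], bool]  # the condition implicitly defines the set of goal states

def on_mug_table(state: State) -> bool:
    # Goal condition on(mug,table): true in every state containing that fluent,
    # regardless of anything else (dishwasher running or not).
    return "on(mug,table)" in state

s1 = frozenset({"on(mug,table)"})
s2 = frozenset({"on(mug,table)", "running(dishwasher)"})
print(on_mug_table(s1), on_mug_table(s2))  # True True -- both are goal states
```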

This definition of goals works nicely in classical AI problems such as playing chess. States of checkmate can be clearly defined and they are indeed what the game is about. In real life this gets trickier. Or even in not-so-real life, when you try to program a simulated household robot. If you give it the innocent goal condition on(mug,table), assuming that the robot is able to determine whether the mug is on the table (in simulation this is no big deal), you may be surprised when you venture to give your robot two consecutive goals to achieve. Having fulfilled the goal of putting the mug on the table, the robot will directly move on to its next goal, maybe on(plate,table), with the mug still in its gripper! Sure, you will go on to adapt your goal to on(mug,table) ∧ open(gripper). Again, you will be disappointed. The robot will open its gripper, but its arm is still in the same position, so it will knock over the mug when moving on to the next action (and the goal condition may still be fulfilled, since we did not specify that the mug should be standing). In good engineering manner, we will go on to define the goal as on(mug,table) ∧ open(gripper) ∧ retracted(arm). This kind of works, but it looks clumsy, because the robot will fully retract its arm between two actions, while it could sometimes use its arm to move directly to the next one.
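
The escalating conjunction can be sketched in the same illustrative style; the fluents and function names are again hypothetical, and the point is only that each patch adds another conjunct without ever capturing the real intention:

```python
# Continuing the illustrative sketch: every patch just adds another conjunct.
def goal_v1(state): return "on(mug,table)" in state
def goal_v2(state): return goal_v1(state) and "open(gripper)" in state
def goal_v3(state):
    # on(mug,table) AND open(gripper) AND retracted(arm): "works", but forces
    # a full arm retraction between any two actions.
    return goal_v2(state) and "retracted(arm)" in state

# The state the robot actually ends in after satisfying goal_v1:
# the mug is "on" the table, but still in the gripper.
end_state = frozenset({"on(mug,table)", "holding(mug)"})
print(goal_v1(end_state))  # True -- condition fulfilled, yet not what we meant
```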

This example may appear to be specific to robotics or AI. However, we encounter the same problems in other domains. If you tell your purchasing department to buy at the lowest possible prices, your employees will do exactly that, because their bonus and career prospects depend on it. What you really mean is to buy at the lowest possible prices for reasonable quality that meets business requirements, but this is much harder to measure and formalize, so it is usually not done. This problem is well known in management, and many authors have pointed out the need for more meaningful goal requirements in companies to make divisions more aware of the overall business goals. But when you do that, you end up with nothing specific to measure performance against. All this is a modeling process, in which companies try to compress complex reality into a few measurable numbers. Any modeling process loses detail, and that may be fine, but often the whole purpose of an undertaking is completely destroyed by the attempt to model it as measurable goals.

Thresholds

Let's assume for a minute that we could formalize our goals accurately into measurable quantities. Turning any quantity into a goal condition means we have to provide a threshold or range of target values, for example to have sold 50,000 products of a certain kind or to sell 10% more products this year than we sold last year.
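
As a tiny illustration (the numbers and names below are invented for this sketch), a measurable quantity only becomes a goal condition once we pick a threshold:

```python
# Illustrative numbers only: a measurable quantity becomes a goal condition
# the moment we pick an (arbitrary) threshold.
last_year_units = 45_000

def sales_goal_met(units_this_year: int) -> bool:
    # "sell 10% more than last year" -- why 10%? why a year?
    return units_this_year >= 1.10 * last_year_units

print(sales_goal_met(49_000), sales_goal_met(50_000))  # False True
```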

First of all, note that such numbers are completely arbitrary. Why 10% more? If our culture had agreed on working in the duodecimal system, we would probably have chosen 12%. Sometimes such numbers can come from the outside, such as sales numbers of a competitor or benchmarks. Those numbers may give us an idea of what is realistically possible, but it does not mean that they are achievable in our own unique setup in a specified time (again an arbitrary choice). Thresholds can also come from initial assumptions, like the productivity of a machine promised by its vendor. Such numbers are important to remember and compare against as a way of learning, so that next time we may make better judgements based on more realistic assumptions. But setting them as a fixed goal can be overambitious, or make us lazy if we stop improving simply because we have achieved the goal.

The more problematic point, however, is what happens when we fail to achieve our arbitrarily defined conditions. Sit down and cry? Such goals do not help us improve; they only make us indifferent if we achieve them or unhappy if we do not. Goals are often connected with pressure. This can be purely psychological, as in a Scrum sprint where the team tries to get all tasks done just because they were initially planned [1]. It can also be more tangible in the form of bonus payments or career advancements. When people are pressured, they start to do stupid things. I might start to run faster than my body allows and get injured just to achieve my 10-km-in-under-an-hour goal. Often people start to lie, be it by manipulating the numbers that are measured or by using dishonest means to get the numbers right.

I think this phenomenon was the root cause of the Dieselgate scandal. Engineers were pressured to build diesel engines that met the legal threshold values while staying within a given cost boundary. If the goal is unachievable, people find a way to make the numbers work, while missing the original (in this case contradictory) intentions.

One can argue that missing a goal is an indication that somebody did not do their job well enough. Sometimes this may even be the case: people may be incompetent or lazy. But most of the time, the world around us simply does not behave as we would have wished. We live in a complex world that we can control only to a tiny degree. Goals as conditions are a manifestation of western rationalist philosophy, which ignores the fuzziness and uncertainty around us. By the way, the same philosophy has been keeping AI from making any real progress for the last 60 years [2].

Milestones

The belief in predictable outcomes also manifests itself in project milestones. Again we find an analogous concept in AI, where goals are traditionally decomposed into subgoals (this works best in a logic-based representation; in today's AI, goals are based on simpler numerical representations from which subgoals cannot be derived automatically).
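
As a rough, simplified sketch of that difference (the representation below is purely illustrative): a conjunctive, logic-style goal can be split into subgoals by its syntax, while a scalar reward over states offers nothing to split:

```python
# Purely illustrative: a conjunctive, logic-style goal decomposes into subgoals
# by its syntax; a numerical reward is a single opaque number per state.
conjunctive_goal = ["on(mug,table)", "open(gripper)", "retracted(arm)"]
subgoals = [[literal] for literal in conjunctive_goal]  # one subgoal per conjunct

def reward(state) -> float:
    # reward-style formulation: one number, no subgoals to read off
    return 1.0 if all(lit in state for lit in conjunctive_goal) else 0.0
```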

For defining milestones, we first need a pretty good idea of the overall project goal. This may not be expressed exactly in numbers, but the tacit assumption is that the outcome of the project is defined from the beginning. Even though agile development is considered the standard for modern software development, people still cling to the traditional idea of a fixed project goal. We must get used to the idea that any project outcome is a moving target. Projects building well-known things (like a house or an online banking app) somewhat approximate the ideal of a fixed outcome, but even those simpler projects come with uncertainty: you may find, while building the house, that there is an old tree with a rare species of birds nesting in it that cannot be felled as planned, or a new standard for authorization in online banking apps may appear while you program. So better think of project outcomes as evolving with the project. The more complex and unexplored the topic, the less predictable the outcome will be.

Milestones add assumptions about the temporal dimension: again, arbitrary numbers for how long it may take to achieve some intermediate goal whose relevance we do not even know at the outset of the project.

But if we take project goals and time planning away, what is left of the project? Is a project not exactly defined by its goals and duration? My answer is that we should have an idea about the purpose of a project and its first step. You may object that I just avoided the word goal by replacing it with purpose. So staying within the goal framework, the purpose may be an initial assumption about the project outcome or (informal) goal, but from the beginning it should be clear that this assumption will change with the project.

When reviewing proposals for scientific projects, I have observed that applicants seem to find it a lot easier to specify a project goal and milestones three years into the future than to say what their first specific step will be. There is not much uncertainty in the first step (other than the starting date of the project and maybe the exact team that will be working on it), so why are people unable to formulate it? I think one reason is that applicants spend too much time trying to define the elusive goal of their project (it will evolve anyway) and then adapt their work programme to this badly defined goal. Why not try it the other way around: start with a specific step and see where it leads us?

 

Part 2: Beyond Goals

09.07.2021
The key to making informed decisions without goals is to develop a mindset for processes rather than states. Roughly defined and constantly questioned priorities can guide actions more effectively and realistically than rigid goals.

As I have argued in the first part, capturing goals in conditions is a modeling process. And like any modeling process, it omits detail, often important detail, about the underlying intention. It also adds detail by setting arbitrary thresholds, either by quantifying the outcome or by defining time schedules. All this leads to dissatisfaction or even dishonesty when trying to achieve the goals; it also binds cognitive resources and time for defining goals rather than doing the actual work. In sum, our common concept of goals is a mirror of our cultural assumptions of a deterministic world containing puzzles that need to be solved or optimized.

One can raise (at least) the following objections against my argumentation:

  1. Goals are the basis of goal-directed behavior, how are we to achieve anything without setting ourselves goals?
  2. Defining goals as conditions is a very narrow view, many goals are much less formal.
  3. How are we to control the quality of projects and teams without goals?
Let us take them one by one.

Beyond Goal Conditions: Process over State

If we put aside goals, we experience a kind of emptiness. If it is not goals that guide our actions, how are we to achieve results that go beyond spontaneous, unorganized actions? Varela, Thompson and Rosch [1] describe a similar experience of groundlessness regarding the concept of self. Their answer is a middle way that they call enaction. The whole dilemma between an objective outer world with an independent self and the subjectivist view that we construct our world only mentally can be resolved by focusing on the interaction of world and self.

Adapted to our goal discussion, we can resolve the dilemma by shifting our attention from states to processes. Goals focus on states; they are defined as states. But achieving great things is a process that unfolds as we go along. Varela, Thompson and Rosch borrow Antonio Machado's wonderful metaphor of laying down a path in walking. We should take each step on this path consciously with regard to our current priorities.

Instead of trying to achieve goals, we should remember our original motivation and adapt it when necessary. So when we start any type of project, be it a business transformation, software development or a private endeavour, we usually have some initial motivation and assumptions about the path we are starting to unfold. Especially in longer, business-type projects, it makes perfect sense to write down these motivations and assumptions. But we should neither formulate nor use them as a test for success, and we should not cling to them. Writing those things down is more like a letter to our future selves to remind us of our initial state of mind.

In the unfolding of a project, we should constantly be aware of our initial mindset, and treat it as raw material that we can and should adjust and refine during the process. As the project and the world evolve together, we will see whether our initial assumptions hold. If they do, we can stick with our original motivation; if they turn out to be slightly off, we may adapt our motivation; if they turn out to be fully wrong, we may decide to stop the project.

The First Step: Self-Observation

When I voice my opinion about goals and offer a more process-oriented view, the typical answer I get is: Sure, of course it is more about priorities. When I say goal, I mean your priorities. Typically, the very next moment, they start to formulate goal conditions without even noticing.

Substituting priorities or motivations for goals is not about exchanging words; it is about embracing the dynamics of the world and changing our habits. I do not know whether setting goals as conditions is a basic human reflex or a specific trait of the culture I live in. I just know that goal conditions are a deeply cemented part of my mindset and of everybody's around me. To overcome the problems of goals as conditions, we first have to be honest with ourselves. We do not have to do this publicly; it is a personal journey to observe the goals we set ourselves and the goals others set for us.

Once we observe our habit of goal formulation as conditions, we can start to think about alternatives (see also my blog series on Agile, Design and Buddhism). This can start small, for example when I catch myself setting goals such as running 10 km in less than an hour. Why 10 km? Why an hour? Why running at all? I may shift my priorities to spending more time on training or to design my training in a way that focuses on speed more than stamina. Or I can decide to train less and live with the fact that I cannot run as fast as others.

This all sounds nice for private amusements, but in a professional environment, must we not always be ahead of the competition? On an abstract level I agree: any business should fulfill the goal condition of earning more money than it spends (even this is debatable in times of venture capital funding). But for everything else we have the freedom to set our own priorities. Benchmarks help to see where one stands compared to others. Whether the same numbers, or even the same way of gathering numbers, make sense is a conscious decision. Numbers help us as an orientation as long as we do not overemphasise them. Any measurement is an abstraction that takes away information, helping us focus but also limiting the information content. As long as we are aware of that, numbers can be a useful tool to determine our priorities and actions.

Trust

Goals are often part of contracts and job assessments. Even in science, agreements on objectives nowadays determine tenure. So when a young scientist starts out, the first thing she does is set a goal for how many papers she is to get published in three or five years. If she reaches the goal, she will have a job for life; if not, she is forty without a career. I have written enough about the problems of such goals; you can imagine for yourself how human beings deal with this situation.

Getting back to the arbitrariness of goals, we cannot expect them to give us any useful information about the quality of the work done. Most of the time, the world simply evolves differently than expected. Evaluating the contribution of people and teams is a very delicate task that should be based on insight and common sense, requiring a full understanding of what was done by whom under which circumstances. Trying to measure people by numbers may make some (little) sense in the context of factories; it makes absolutely no sense for knowledge workers.

My alternative for evaluating performance will be unsatisfactory to most managers: trust. In the current mindset, companies treat people like unwilling donkeys that have to be incentivised and controlled in order to do anything. In bad working conditions this may even be true. But if you put people in an adequate environment, interact with them appropriately, and let them do their job, they will work without being constantly pressured and measured. The will to work may not always be enough; there is also skill to be considered. But overall, it would do us good to change the default perspective from having to control everything and everyone to one where we trust people to make an effort and trust things to work out well if we keep our minds open and adjust to the situation.
