Erwan Hernot

Managing With AI: How It Will Impact The Decision Making Process (2)

AI will, sooner rather than later, become the manager's assistant. How will this happen? Following a first article (find it here: ??????), let's look at the decision-making process again from another point of view.

Decision making models

As decision makers, we are mainly described by three models:

1. The rational-actor model. An individual's preferences are represented by a mathematical utility or payoff function defined over a set of known possible actions, and the individual chooses the action that maximizes the function's value. Not fully rational in practice, though: in a game, that choice may require beliefs about the actions of other players.

2. The bounded-rationality model holds that an individual's rationality is limited by the information she has, the cognitive limitations of her mind, and the finite amount of time she has to make a decision. Bounded rationality shares model 1's view that decision making is a rational process, but it adds the condition that people act on the basis of limited information.

3. Intuitive reasoning. Cognitive science no longer separates cognition from emotions. Emotions are specific calculations, evolved to point out dangers or opportunities relevant to the person, and they mobilize the whole body.

So how can artificial intelligence help? AI is very powerful at analysis, but less effective at problematization or creativity. It will automate decisions that fit the rational-actor model. For the other two, AI could act as a decision-making assistant, proposing decisions (based on prediction) for managers to make.
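The rational-actor model in point 1 can be sketched in a few lines: a utility function over known actions, and a chooser that picks the maximizer. The actions and payoff values below are invented purely for illustration.

```python
# Minimal sketch of the rational-actor model: the decision maker holds
# a utility (payoff) function over a set of known actions and always
# chooses the action that maximizes it.

def rational_choice(utilities):
    """Return the action with the highest utility (ties broken arbitrarily)."""
    return max(utilities, key=utilities.get)

# Hypothetical payoffs for three courses of action
utilities = {
    "expand_product_line": 120.0,
    "cut_prices": 95.0,
    "do_nothing": 40.0,
}

print(rational_choice(utilities))  # -> expand_product_line
```

This is exactly the kind of decision AI can automate outright; the other two models need the human judgment the article describes.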

Defining the problem

Defining a problem starts with documenting it, and AI can help with that task: capturing information is the most mature AI technology today. Functions like image recognition, speech recognition, search and clustering help a manager go from gathering data to finding facts. Fact finding is a partnership between AI and the manager. AI crunches numbers and analyses structured and unstructured data; the manager goes the extra mile to the source (i.e. people in the organization) for a more qualitative view of the problem. For instance, AI could search existing policies and procedures, and the manager could place them in the company's political landscape to understand how they affect the issue. She could add her own knowledge of the field and identify the unwritten rules or practices that form the context of the issue. In parallel, AI's algorithms would have discovered more hidden patterns. At this step, AI could help the manager stay objective, because it would not be swayed by the people involved: with AI, the manager focuses on data, not on opinions or personalities. AI would quickly separate the information that is required from the information that is not. It would compartmentalize the facts as it gathered them and let the manager delve deeper only where necessary, helping her stay on track and avoid becoming overwhelmed.
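The "clustering" capability mentioned above can be illustrated with a toy example: grouping short free-text reports by word overlap. The reports and the 0.2 similarity threshold are invented for the sketch; a production system would use text embeddings and a proper clustering algorithm rather than raw word overlap.

```python
# Toy sketch of clustering unstructured text: group short reports
# whose word sets overlap enough (Jaccard similarity) into one bucket.

def tokens(text):
    """Lowercased word set of a report."""
    return set(text.lower().split())

def jaccard(a, b):
    """Overlap of two word sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def cluster(reports, threshold=0.2):
    """Greedy single-pass clustering: join the first cluster whose
    representative report is similar enough, else start a new cluster."""
    clusters = []
    for report in reports:
        t = tokens(report)
        for c in clusters:
            if jaccard(t, tokens(c[0])) >= threshold:
                c.append(report)
                break
        else:
            clusters.append([report])
    return clusters

reports = [
    "late delivery from supplier A",
    "supplier A delivery delayed again",
    "customer complaint about billing error",
]
for group in cluster(reports):
    print(group)
```

The two supplier reports land in one cluster and the billing complaint in another, which is the "gathering data to finding facts" step in miniature.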

Eliminating bias?

Defining a problem continues with answering the question "What is happening?" AI already helps here with functions like natural language understanding, optimisation and prediction. This capability is slightly less mature than the information-capturing capability above. Still, AI can help explain the data by highlighting correlations, helping the manager understand the connections between ideas and form a clear picture of reality. AI can go beyond that, provided biases have been kept out of the algorithms. The manager has her own biases when trying to represent reality: she does not always reconstruct that reality faithfully, she tends to rebuild it by inferring. From data to information and then to knowledge there is a leap: the manager has to determine whether she can separate correlation from causality. In addition to AI, for that purpose, the manager could ask her people to challenge the problem definition and the assumptions behind it, because if she identifies the wrong problem, the solution she picks is unlikely to succeed. AI will help her detect inconsistencies in her own reasoning (or the team's: groupthink, for example), looking for inferences mistaken for deductions. Consider the following proposition as a deduction: you went to a business school, work in a team dedicated to financial swaps, and were seen with a person you referred to as a client; I deduce that you are a trader. An inference is less concrete: you said you were a trader on swaps, and from that I infer that you are intelligent, work long hours in a big bank and are obsessed with money. Think of a deduction as taking a lot of information and distilling it down to one fact. An inference is the opposite: you take one fact and extrapolate it out into several conclusions.
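The correlation-versus-causality leap can be made concrete. Below, hypothetical monthly figures for ad spend and sales correlate almost perfectly, yet the number itself says nothing about which (if either) drives the other; that judgment stays with the manager.

```python
# Pearson correlation computed by hand: the kind of signal AI surfaces.
# The ad-spend and sales figures are invented for illustration.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

ad_spend = [10, 12, 15, 18, 20]   # hypothetical monthly ad budget
sales    = [100, 115, 140, 170, 185]  # hypothetical monthly sales

print(round(pearson(ad_spend, sales), 3))  # near-perfect correlation
```

A correlation close to 1.0 is exactly the pattern an algorithm would flag; whether ad spend causes sales, sales fund ad spend, or a third factor moves both is the causal question only the problem definition can settle.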

Inferential activity

This inferential activity is so pervasive that the manager is not fully aware of it. Yet it lies at the root of every one of her judgments, decisions and forecasts; through it, she builds reality and uses her knowledge. AI will let the manager go further than her usual inferential activity, pushing her not to stop at the first, obvious conclusion. Further facts may support an alternative conclusion or even invalidate the original one. In any case, the manager is almost always working with incomplete or bad information. She understands that deciding is going to be a leap; with AI, she can make sure it is a smaller, validated leap rather than a big unknown one. AI would systematically apply the logical and probabilistic laws designed to ensure the validity of her reasoning (the fit between her conclusions, forecasts and judgments on the one hand and reality on the other). Until now, only critical thinkers were able to check their own reasoning. AI will protect the manager from the usual biases in her decision-making process:

  1. Recency bias: the tendency to be overly swayed by the most recent piece of data the manager has received, or by the last argument presented to her.

  2. Representativeness heuristic: it shows up either as neglecting part of the data that frames a problem, or as focusing on only another part of the information; in both cases, it expresses a bias.

  3. Individualizing information (that is, describing an individual story) has more impact on the human brain, but not on AI, than statistical information. It appears richer in meaning than statistical information, which is most often numerical and whose meaning is not immediately available.
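Recency bias (item 1 above) has a simple numerical face: a recency-weighted average overweights the latest data point compared with the plain mean. The sales figures and the 0.5 decay factor below are invented for the illustration.

```python
# Sketch of recency bias: one bad recent month drags a
# recency-weighted view far below the plain average.

def plain_mean(xs):
    """Unweighted average: every observation counts equally."""
    return sum(xs) / len(xs)

def recency_weighted(xs, decay=0.5):
    """Most recent value gets weight 1, the one before gets `decay`,
    the one before that decay**2, and so on."""
    weights = [decay ** i for i in range(len(xs))][::-1]
    return sum(w * x for w, x in zip(weights, xs)) / sum(weights)

monthly_sales = [100, 100, 100, 100, 60]  # one bad recent month

print(plain_mean(monthly_sales))        # 92.0
print(recency_weighted(monthly_sales))  # pulled well below 92 by the 60
```

A manager reacting to the weighted figure would judge the situation far worse than the full history supports, which is exactly the distortion an unbiased AI summary can counter.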

Answering the question “What do we do?”

In this part, AI would help the manager identify and analyze alternative courses of action relative to the time and resources they consume and the risks they entail, based on decision criteria. These criteria are the principles, guidelines or requirements used to make a decision; they can include detailed specifications and scoring systems, such as a decision matrix supported by AI. Evaluating these alternatives often requires input from multiple sources, such as peer executives, accountants, forecasters and customer focus groups, and it is the manager's job to add those inputs to the data gathered by AI. Because there is a limit to how much she knows, conversations with her people will reveal and evaluate possibilities based on everyone's collective knowledge and will result in more effective decisions. For example, finding a solution to declining sales might require the manager to obtain feedback from the sales force and to supply AI with product performance data, a market analysis and customer satisfaction surveys. After defining potential solutions with AI and collecting information about each alternative, the manager would be ready to evaluate the pros and cons. Before pulling the trigger and deciding, she would gauge the importance of any missing data, a task that only humans can do, having the big picture in mind. As she analyzed her options in light of the facts before her, she would compare what she knows with what she might like to know, then ask herself and her team: what kind of information are we now lacking? How difficult and time-consuming would it be to gather that information? Is any data missing due to tactical efforts by people who might want to manipulate the decision? How material is it likely to be, that is, how likely is it to change our ultimate decision?
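The weighted decision matrix mentioned above can be sketched in a few lines. The criteria, their weights, and the alternative scores below are all hypothetical; in practice the team would supply its own criteria and scoring.

```python
# Minimal weighted decision matrix: score each alternative on each
# criterion, multiply by the criterion's weight, and rank the totals.

# Hypothetical criteria and weights (weights sum to 1.0)
criteria_weights = {"cost": 0.4, "time": 0.2, "risk": 0.4}

# Hypothetical alternatives, each scored 1-5 per criterion
alternatives = {
    "retrain_sales_force": {"cost": 3, "time": 2, "risk": 4},
    "cut_prices":          {"cost": 2, "time": 5, "risk": 2},
    "new_marketing_push":  {"cost": 4, "time": 3, "risk": 3},
}

def weighted_score(scores):
    """Weighted sum of one alternative's criterion scores."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

ranking = sorted(alternatives,
                 key=lambda a: weighted_score(alternatives[a]),
                 reverse=True)
for name in ranking:
    print(name, round(weighted_score(alternatives[name]), 2))
```

The matrix only ranks what it is given; deciding which criteria matter, how to weight them, and what the scores miss remains the manager's judgment call.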

Answering the question “Why?”

You might also want AI to answer this big question. The AI technology here would be understanding, the most familiar AI capability people refer to. But this function really requires cognition: many inputs from many different sources, the ability to draw on many experiences, to ponder them and to conceptualise them into models that can be applied to different scenarios and uses. This is something the human brain is extremely good at, and something AI, to date, simply cannot do. All of the previous examples of AI capabilities have been very specific. Understanding requires general AI (strong AI, or reaching the singularity), and this simply doesn't exist yet outside of our brains. HAL or Jarvis (Iron Man's assistant ;)) won't be corporate colleagues any time soon…

