Erwan Hernot

Learning Leaders Are Key To Artificial Intelligence Success

In the past, things were almost simple: executives focused on using machines to automate specific workflow processes. These processes were linear, stepwise, sequential, standardized, repeatable and measurable. Knowledge was acquired, almost, once and for all. Over the years, processes were optimized through various time-and-motion analyses. Companies have now exhausted the performance gains from this mechanistic automation. The age of the standard business process is over; companies can no longer aim simply to replicate the best-in-class process of an industry leader. Enter artificial intelligence, also called cognitive, technologies.

Integrating AI is easier said than done

But big companies have difficulty integrating AI into their businesses (and into their people's minds). They have a set of legacy IT systems and well developed business processes (for instance, planning or capital allocation processes) that drive and inform their activities. Take an insurance company that has developed a new ranking model to identify the most likely buyers of a particular service; integrating its recommendations into customer relationship management systems and processes, and into the behavior of salespeople, is not likely to be easy. Or take experts. Since artificial intelligence involves the management of knowledge, powerful and autonomous knowledge workers may see it as threatening. If these experts don't trust or don't like using the new artificial intelligence-based system, it will simply fail. As a consequence, cognitive technology projects are not just about technical change but also about changes in organizational culture, processes, behaviors and attitudes. Managers are the key. But not all of them.

The new power is in a new partnership: human and AI

In this area, some managers are a serious obstacle. They tend to rely heavily on hierarchical authority, backed up by data they don't necessarily share, rather than exploring the human possibilities empowered by AI. For them, an intelligent agent is a competitor rather than a partner; training it is a threat. They don't get the exploratory mindset the times require, feel insecure ("how can I keep my power?") and so grab onto whatever controls are available. The first consequence is overcentralization in areas where decision making should have been delegated AND backed up with algorithms. The second is overformalization in other areas, expecting systems to control what direct decision making cannot. The third is poor or nonexistent AI experiences involving employees. These managers are at once too controlling and too disconnected from their staff and from technological evolution. They still promote a simplistic, outdated organizational model in which leaders dream up strategy, devise a corporate structure to support it and install systems to make sure employees toe the line.

Learning leaders vs static managers

Too bad for them: leading-edge companies are organized around agility instead of formal authority, platforms instead of silos, networks instead of organizational charts, and meaning instead of top-down information. Add the fact that AI is better than the average manager at planning, searching, retrieving information and recognizing known patterns. As it takes over these routine tasks, companies will need AI-compatible managers: I call them learning leaders. To exploit AI's full potential, learning leaders have begun to embrace a new vision of business processes as more fluid and adaptive, and they tailor these new processes to the idiosyncrasies of their own business. In essence, they are moving beyond rigid lines towards organic teams that partner humans with advanced artificial intelligence systems. This partnership with AI is conceived in evolutionary terms, because the system's performance and functionality will keep evolving, as will the team's knowledge. As cognitive technologies are developed, learning leaders think through how work will be done with a given new application, focusing specifically on the division of labor between humans and artificial intelligence.

Intellectual humility

Some cognitive projects will involve 80% machine-based decisions and 20% human ones; others the converse. Systematic (re)design activity is necessary to determine how humans and machines will reinforce each other's strengths and compensate for each other's weaknesses. The details of how humans and machines will collaborate on key tasks have to be discovered, negotiated and revisited case by case, and with the teams involved. Intellectual humility is also needed here: the ability to acknowledge that what managers know is sharply limited. They are then more apt to see that the world is always changing and that the future will diverge from the present. In other words, they recognize the power of exploration. Learning leaders rely on personal learning: they constantly examine and change their attitudes and identity. The advice they offer is data-driven and improves over time with more data; it also draws on human intelligence, because cognitive technologies are, for the foreseeable future, limited to narrow intelligence.
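To make that division of labor concrete, here is a minimal sketch, assuming a hypothetical case-routing setup (the names, scores and 0.8 threshold below are illustrative, not drawn from any real project): a single confidence threshold decides which cases the system handles on its own and which are escalated to a human reviewer. Shifting that threshold is exactly the kind of adjustment a team negotiates and revisits over time.

```python
# Hypothetical sketch of a human/machine division of labor.
# One confidence threshold decides who handles each case;
# moving it shifts the 80/20 split between machine and human.

from dataclasses import dataclass


@dataclass
class Case:
    case_id: str
    score: float  # stand-in for a model's confidence on this case


def route(case: Case, threshold: float = 0.8) -> str:
    """Send confident cases to the machine, uncertain ones to a human."""
    return "machine" if case.score >= threshold else "human"


cases = [Case("C1", 0.93), Case("C2", 0.55), Case("C3", 0.81)]
for c in cases:
    print(c.case_id, "->", route(c))
# C1 -> machine, C2 -> human, C3 -> machine
```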

Tapping into general intelligence

AI indeed performs below the average human at creativity, identifying new patterns, logical reasoning and problem solving. Humans, by contrast, have general intelligence, which means they can think abstractly. They can plan for things that might happen and solve problems at a general level without nailing down all the details. Humans can innovate and develop thoughts and notions that are not based directly on past experience, which, on the other hand, is exactly how AI "learns". Algorithms are trained by machine learning techniques; they can't really think in the human sense of the word. AI just takes in data and guesses what that data corresponds to. It can do this by taking in bits of data and comparing them at an incredibly fast pace. But it can only recognize things it has seen in its training data set.
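As a small illustration, assuming a deliberately toy setup (a two-example nearest-neighbour classifier, not any specific product), the sketch below shows what "recognizing only what it has seen" means in practice: every new input is forced into one of the labels present in the training data, no matter how far it is from anything the model was trained on.

```python
# Toy nearest-neighbour "recognizer": trained on two known patterns, it
# assigns every new input to one of the labels it has already seen,
# however badly the input fits. It has no notion of "something new".

import math

training_data = [
    ((1.0, 1.0), "cat"),
    ((8.0, 8.0), "dog"),
]


def predict(point):
    """Return the label of the closest training example."""
    closest = min(training_data, key=lambda example: math.dist(example[0], point))
    return closest[1]


print(predict((1.2, 0.9)))    # 'cat': close to a pattern seen in training
print(predict((50.0, -3.0)))  # still answers 'dog': nothing outside the
                              # training set can ever be recognized as new
```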

This management of human intelligence, led by this new breed of managers, is difficult. Egos will need to be flexible, to say the least. Implicit theories entrenched in their inner being will have to be unveiled, and beliefs suspended, so they can be put under scrutiny and compared with relevant data. Then, when framing a problem, AI will help them conceive alternatives to their usual thinking. For all this to happen, learning leaders design environments and teams where people are curious, alert and engaged. No high technology can take teams in this direction; only the good old-fashioned low technology of management skills and personal, social and intense learning can!
