
THE EXPLAINER

Intermittent posts on buying and selling enterprise software, construction software, AI-enabled applications and more.

Enterprise AI: Why Keep Humans In The Loop?

Balancing human review against artificial intelligence outputs in enterprise AI software

When do we defer to artificial intelligence (AI), and when do we hold the reins?


The answer, as is the case with most complex matters, is ... it depends.


Can We Understand the AI Output?

In simple enterprise software AI processes--processes that may really be rebranded algorithmic ones--human involvement can be removed more easily. Barriers to complete automation may in some cases come down to establishing a comfort level with end users. A construction estimating tool may have to earn the trust of the estimators who use it, serving as a guide-on-the-side until it proves itself over time.


Letting a user review the output of AI and judge its reliability is a common pattern in many environments. Harder to manage are tools, even purely algorithmic ones, that weigh more variables than a human professional could consider at once. Dynamic scheduling, business rules engines, and estimating processes that factor in the market value of a finished project, or hundreds of variables in project configuration, require more faith in the outcome.


It may be analogous to a chess computer that can analyze potential outcomes from opening to endgame, while the human can think two or three moves ahead at best. If the AI sacrifices a pawn, we may think we've fooled it, much to our chagrin later.


Deference to AI?

But in many cases, humans may struggle to stay in the loop on AI processes when they really should, according to AI guru Rohan Paul. Paul cites a pattern of "cognitive surrender": humans, struggling to mindfully review AI outputs, drift toward passivity.






Adding Friction to AI Processes

One of my heroes, the inimitable Bob Sutton (yes, this is "The Asshole Guy"), advocates deliberately adding human review back into AI processes. His recent work focuses on removing friction from human-mediated processes, with some exceptions. The goal is to make it easier for people to do the right things, but harder to do the wrong things.



If You Believe In Things You Don't Understand

... you will suffer. So says Stevie Wonder, anyway. And this may contain a key additional concept for enterprise leaders grappling with the application of AI.


Some AI systems, like large language models, are non-deterministic--ask one the same question on two different days and you'll get two different answers. In an attempt to emulate human thinking, LLMs plug the holes in their understanding with nonsense--just like people. AI for more mission-critical environments will be, if not fully algorithmic, at least fully predictable and auditable in how it makes decisions. Even if a human cannot follow the decision in real time, given our limited computing abilities, we can reverse-engineer it afterward and understand it.


Did the AI make a decision that creates liability? You need to show how that happened, what rules were given to the AI, and, increasingly, what values or philosophy the AI started off with as operating assumptions.
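To make that concrete: in practice, auditability often starts with capturing the decision, its inputs, and the rules given to the AI in a single record at the moment the decision is made. Here is a minimal sketch of what such a record might look like; the names (DecisionRecord, "estimator-v2", the sample rules) are hypothetical illustrations, not any particular product's schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    """One auditable AI decision: inputs, rules, and output captured together."""
    model_id: str
    inputs: dict
    rules: list   # business rules / operating assumptions given to the AI
    output: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        # Serialize the full record so the decision can be reconstructed later
        return json.dumps(self.__dict__, sort_keys=True)

# A hypothetical estimating decision, logged with the rules in force at the time
record = DecisionRecord(
    model_id="estimator-v2",
    inputs={"sq_ft": 12000, "region": "southeast"},
    rules=["never bid below cost", "apply 8% contingency"],
    output="bid: $2.4M",
)
print(record.to_json())
```

The point is not the specific fields but the discipline: if the rules and assumptions travel with every decision, a liability question becomes a lookup rather than a forensic exercise.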


We've entered a time in history when our technology outstrips our mental capacity to perform the work ourselves, yet we must still be able to parse each decision and ensure its validity.


How are you dealing with this emerging need?


