Emergent Function Analysis May Help Understand Unpredictable AI Behavior -- Ted Shelton, Bain & Company

By Lane F. Cooper, Editorial Director, BizTechReports

What actually happened could not have been scripted better by Hollywood. As part of an experiment, a cloud-based AI chatbot was tasked with accessing an online resource protected by a CAPTCHA gateway -- which, of course, is specifically designed to keep robots out. To solve the puzzle, the AI engaged a human through a virtual assistant service and tricked that person into getting the chatbot through the gateway.

How?

The chatbot lies. When challenged, it not only denies that it is a chatbot but goes on to explain -- falsely and without human intervention -- that it is, in fact, a person with a visual impairment: can the virtual assistant please help? The virtual assistant buys the story, solves the puzzle and gives the chatbot access.

Explaining Unexpected AI Activity

It is a remarkable story on many levels, contributing to the breathless debates engulfing all things AI. It is also challenging the entire technology community to develop analytical frameworks for understanding the growing number of unanticipated and sometimes disturbing AI behaviors.

In a podcast interview, Ted Shelton, an expert partner at global consultancy Bain & Company, suggests that such a paradigm might be derived by integrating a pair of concepts that have helped put biological phenomena into context: 1) emergent behavior and 2) gain-of-function.

"While these two concepts have been popularized within the field of biology, they have also emerged as part of the discussion among artificial intelligence researchers as ways of explaining things that they see happening," explains Shelton.

Emergent behavior refers to the way simple rules, applied at large scale, can produce unexpected outcomes. The resulting collective behavior of the "system of systems" can be complex and unpredictable.

"An example might be avian murmuration -- the intricate aerial patterns birds produce in the air. Individual birds can't make a pattern, nor can a small number of birds. But when you see a large flock of birds interact to form these beautiful formations in the air, this is an example of emergent behavior," says Shelton.

Gain-of-function, on the other hand, helps explain how an organism can acquire a new capability. In biology, this can happen through natural selection...or as a result of lab experiments. The gain-of-function topic has received particular attention within the scientific research community exploring the origins of COVID-19.

"There is a lot of debate about whether research into modifying a coronavirus was the source and origin of the pandemic," Shelton says, adding that a similar dynamic may be at play within the AI community. 

The combination of these concepts may contribute to understanding unexpected behaviors in AI.

"A big part of AI safety initiatives revolves around testing systems to understand what they're capable of...and to recognize when there is some emergent behavior in those systems," says Shelton.

The point, he suggests, is that when these AI systems are tested at scale, researchers may also be altering them. It raises the question: are they inadvertently opening the door to gain-of-function?

"We're putting queries and suggestions into those large language models. So we may, in fact, be training these systems to do the very things we want to avoid having them do," he explains.

Pressing Pause on AI Development?

As a result, Shelton has coined a new concept: Emergent Function. The notion offers a lens for analyzing the unexpected and sometimes disturbing directions that AI chatbot behavior can take. It also provides an alternative to the calls from respected voices across the technology community to pause further development for six months.

The hope is that a pause would give responsible players in the industry time to review the current state of AI research and perhaps create an opportunity to put in place guardrails to guide the evolution of the technology going forward.

Such a move, however, immediately raises a series of issues for discussion.

"The first is that the cat is already out of the bag. A moratorium may merely allow bad actors -- whether it's hackers, criminals or nation states -- to continue to move ahead while the rest of us stay still," he says. 

Society may be better served focusing on the drivers of emergent function and developing responses to the inevitable disruptions that AI is already introducing. AI, after all, is not the first transformative technology to present new challenges...and opportunities.

"We [as a species] have to show some adaptability. There is going to be a new world. We have to be curious, creative, critical thinkers in finding our way to adapt to this new world that's being created. There are guidelines that we need [to establish]. We should have consumer protections and regulations as we have with other technologies," says Shelton. 

For instance, doesn't it make sense to consider making it a crime for chatbots to contact humans without self-identifying as an AI? These and similar actions would provide a basis for comparing and contrasting constructive AI activity with anti-social and destructive applications.

###

Editor’s Note: To listen to the full podcast interview with Ted Shelton, CLICK HERE.