The Army Research Laboratory’s plan to use human brains to train machines
The human brain is what makes us adaptable and widespread, a singularly adept instrument for helping humans survive and thrive. Even as artificial intelligence progresses rapidly, people still outpace robots in the crucial split-second decisions of military conflict. Slowly but surely, though, the gap is narrowing, and training robots’ targeting capabilities on human brain responses may help close it.
When humans make decisions or respond to particular stimuli, our brains emit what’s known as a P300 response. That response can be measured, and today such measurements are used primarily with patients who have some form of neurodegenerative disease or disability. For example, a P300 speller is a device that lets a person input text or commands by thought alone, based on their P300 reactions to certain letters.
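To make the idea concrete, here is a minimal sketch of how a P300-like response buried in noisy EEG can be recovered by averaging many epochs time-locked to a stimulus. The signal parameters (amplitudes, noise level, sampling rate) are illustrative, not taken from the article or from any real recording:

```python
import numpy as np

# Illustrative only: simulate EEG epochs and recover a P300-like peak
# near 300 ms by averaging. All numbers below are assumptions.
rng = np.random.default_rng(0)
fs = 250                       # assumed sampling rate, Hz
t = np.arange(0, 0.8, 1 / fs)  # 800 ms epoch after stimulus onset

def make_epoch(target: bool) -> np.ndarray:
    """One noisy EEG epoch; target stimuli carry a positive peak near 300 ms."""
    noise = rng.normal(0, 5.0, t.size)  # background EEG noise (µV)
    if target:
        p300 = 8.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
        return noise + p300
    return noise

# A single epoch is dominated by noise; averaging many time-locked epochs
# cancels the noise and leaves the event-related response visible.
target_avg = np.mean([make_epoch(True) for _ in range(100)], axis=0)
nontarget_avg = np.mean([make_epoch(False) for _ in range(100)], axis=0)

peak_time = t[np.argmax(target_avg)]
print(f"target peak near {peak_time * 1000:.0f} ms")
```

This averaging step is the standard first move in P300 work; a speller or classifier then decides, epoch by epoch, whether the response is present.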
The potential of P300 responses outside medicine is great, and it has not gone unnoticed. For a number of years, the U.S. Army Research Laboratory (ARL), the U.S. Army’s corporate research laboratory, has been exploring scientific methods for applying P300 signals to several military projects through its Cognition and Neuroergonomics Collaborative Technology Alliance. One of these projects is a neural net that can learn from P300 responses. Such a net would allow the ARL to better train AI in areas such as targeting, threat recognition, and situational awareness, and could even enable more immersive human training. At some point, soldiers may wear electroencephalogram (EEG) sensors, which measure brain responses, allowing ARL researchers to feed their P300 responses into a neural net in real time.
In a paper on the subject presented at the annual Intelligent User Interfaces conference, held in Cyprus last March, researchers from the ARL and DCS Corp, a professional services firm that works with the Department of Defense, documented how they fed datasets of human brain waves into a neural network to teach the neural network to recognize when a human is deciding what to target.
“We were interested in building a neural network that can learn from laboratory data, so, in the lab, you have someone sit down, and [you] show them pictures rapidly, or you have them look around at a small scene, and from that we want to train a neural network,” says Stephen M. Gordon, one of the paper’s authors and a contractor with DCS Corp. “So, when they fixated on a stimulus, an object of interest, something that was salient in the environment, something that was relevant to their task…Could we record that? Could we do that without using any specific training data?”
This kind of research, however, and the projects it leads to, deserve further scrutiny. “While this sort of research applied militarily may increase the ability of AI battlefield weapons to be as capable, proportionate, and protective of noncombatants as humans are — which is good — the wider question is: Do we want to live in a world suffused with cheap, effective, difficult-to-attribute, and remorseless lethal autonomous weapons? We should all be thinking about whose interest that’s in,” says Anthony Aguirre, co-founder of the Foundational Questions Institute, which tackles new frontiers and innovative ideas integral to a deep understanding of reality.
Neural nets require data to train on, and generally, the more closely that data matches the task you want the net to perform, the better it will do. One goal of the ARL program is to see how far researchers can generalize this data while still retaining the net’s performance. The motivation is that identifying a target in the real world is incredibly difficult for computers: they rely on structured data, while the real world is chaotic, spontaneous, and full of ever-shifting variables. An enemy combatant popping out from behind a building, for example, will be hard for a neural net to recognize reliably, particularly amid other shifting stimuli such as gunfire or other soldiers. That’s where P300 responses may be able to help.
This is why the ARL program is looking for a way to generalize P300 responses across individuals. By examining the neural impulses of multiple people and using a neural network to study how their responses are triggered, the computer can start to piece together various scenes and their commonalities. In essence, it’s a bit like assembling a puzzle from the P300 responses of many individuals until the neural net can evaluate a battlefield situation much as a human would. If that puzzle is a team of Navy SEALs outfitted with sensors that monitor their eye movements and brain waves, for example, the net could draw on and generalize from the entire team’s responses and perspectives, without requiring the SEALs to be in a lab.
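As a rough illustration of cross-subject generalization (and not the ARL’s actual method), the following sketch pools simulated P300-style epochs from several “subjects” whose responses peak at slightly different times, builds a pooled target template from them, and then classifies epochs from a subject the template has never seen. Every name and parameter here is invented for illustration:

```python
import numpy as np

# Assumption-laden sketch of generalizing across individuals: pool training
# data from several simulated subjects, test on a held-out subject.
rng = np.random.default_rng(1)
t = np.arange(0, 0.8, 1 / 250)  # 800 ms epochs at an assumed 250 Hz

def subject_epochs(latency: float, n: int = 60):
    """Epochs for one subject; per-subject latency models individual variation."""
    X, y = [], []
    for _ in range(n):
        target = rng.random() < 0.5
        sig = rng.normal(0, 5.0, t.size)
        if target:
            sig += 8.0 * np.exp(-((t - latency) ** 2) / (2 * 0.05 ** 2))
        X.append(sig)
        y.append(target)
    return np.array(X), np.array(y)

# Four "training" subjects with slightly different P300 latencies,
# plus one held-out subject the model has never seen.
train = [subject_epochs(lat) for lat in (0.28, 0.30, 0.31, 0.33)]
X_test, y_test = subject_epochs(0.32)

# Template = mean target-minus-nontarget response pooled across subjects.
X_tr = np.vstack([X for X, _ in train])
y_tr = np.concatenate([y for _, y in train])
template = X_tr[y_tr].mean(axis=0) - X_tr[~y_tr].mean(axis=0)

# Classify held-out epochs by projecting them onto the pooled template.
scores = X_test @ template
pred = scores > scores.mean()
accuracy = (pred == y_test).mean()
print(f"held-out subject accuracy: {accuracy:.2f}")
```

Because the template averages over many individuals, it tolerates the latency shift of the unseen subject, which is the same intuition behind generalizing P300 responses beyond any single person in a lab.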
The applicability goes beyond targeting to potentially even more mutually adaptive human-AI systems, whether in an airplane cockpit or in surfacing the most noteworthy clips from thousands of hours of satellite footage for analysts.
“If I told you to do a task, and then I did the same task, we’d probably do it in slightly different ways, based off past experiences or how we were trained. If AI could leverage the uniqueness of certain individuals and be mutually adaptive in the sense that you have a particular way of doing something, and AI infers that,” says Vernon Lawhern, a civilian scientist at ARL. “We are trying to look at some aspects, such as how AI systems infer a person’s state, and how to use that state to modulate different behaviors in some closed-loop system.”
Obstacles to such systems remain, and the program is still in its initial steps. At its core, this type of project translates human experience into data a neural net can learn from. In many ways, humans are the most adept sensors in the world, digesting and adapting to multitudes of stimuli every second. Transferring this capability to neural nets opens up a range of possibilities, some with astounding potential, but it may also invite fewer restraints should such a system be integrated with weapons. As Aguirre notes, such a program “may increase the ability of AI battlefield weapons to be as capable, proportionate, and protective of noncombatants as humans.” If so, would further regulation of autonomous weapons systems still be needed? These are the questions this research must grapple with as it continues to develop.
“Science isn’t all about answering questions; it creates them. So, we’ve answered one, and we’ve created one or two more,” says Gordon. The military continues to integrate AI into a variety of systems. Even the CIA has 137 pilot projects directly related to artificial intelligence. The push will continue to make systems more effective, and, for better or worse, the Cognition and Neuroergonomics Collaborative Technology Alliance at the ARL will have an important role to play.