Artificial intelligence (AI) technology, and especially “deep learning,” is increasingly being applied to real-world defense tasks such as analyzing reconnaissance images.
To rely on information detected by such sophisticated technology, defense planners and commanders need to understand how deep learning works. Yet precisely because it is such a powerful technology, fully understanding it turns out to be a very challenging task.
Deep Learning and Image Recognition
The technology of deep learning, as Will Knight reports at MIT Technology Review, is inspired by the complex architecture of the human brain. The AI software simulates multiple layers of neurons and synapses, mimicking the learning process of the human mind.
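The layered idea can be illustrated with a minimal sketch: each simulated layer is just a weighted sum of its inputs followed by a nonlinear “activation,” and stacking layers gives the network its depth. The sizes, random weights, and the `layer` helper below are illustrative assumptions, not any particular system’s design.

```python
import numpy as np

def layer(x, weights, bias):
    """One simulated layer: a weighted sum of inputs, then a
    nonlinear activation standing in for a neuron firing."""
    return np.maximum(0.0, x @ weights + bias)  # ReLU activation

rng = np.random.default_rng(0)

# A toy 3-layer network: 8 input features -> 16 -> 16 -> 1 score.
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
w3, b3 = rng.normal(size=(16, 1)), np.zeros(1)

x = rng.normal(size=(1, 8))          # one 8-feature "image"
h = layer(layer(x, w1, b1), w2, b2)  # pass through hidden layers
score = h @ w3 + b3                  # raw output score, shape (1, 1)
```

Real deep-learning systems use far larger networks and learned (not random) weights, but the layered structure is the same.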
A classic example of the deep-learning process is teaching the system to recognize a cat. The system is provided with a large library of imagery, some of which includes images of cats. In a “tuning” or training process, the system attempts to identify whether or not there is a cat in an image, and records a positive score whenever its human trainers confirm that there is in fact a cat in the image.
Because the system can swiftly crunch through vast numbers of training images, it can soon learn to identify cats with high reliability.
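The tuning loop described above can be sketched in a few lines. This is a deliberately simplified stand-in: a single-layer classifier trained by gradient descent on synthetic feature vectors (the two clusters, learning rate, and iteration count are all assumed for illustration), rather than a real image pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "image library": 200 feature vectors, half labeled
# cat (1) and half not-cat (0), drawn from two shifted clusters.
n, d = 200, 10
cats = rng.normal(loc=1.0, size=(n // 2, d))
other = rng.normal(loc=-1.0, size=(n // 2, d))
X = np.vstack([cats, other])
y = np.array([1.0] * (n // 2) + [0.0] * (n // 2))

w, b = np.zeros(d), 0.0
lr = 0.1
for _ in range(200):                        # training iterations
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(cat)
    grad_w = X.T @ (p - y) / n              # gradient of the log loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                        # nudge the weights toward
    b -= lr * grad_b                        # better predictions

accuracy = np.mean((p > 0.5) == (y == 1))
```

Each confirmed label nudges the weights a little; after many passes over many examples, the classifier separates the two groups reliably, which is the essence of the “tuning” process.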
Identifying cats is rarely a vital security issue. But the same technology can be used for other image recognition tasks — for example, to scan thousands or millions of satellite images for evidence of tanks or missile launchers. Hence, the interest of defense planners in deep-learning technology.
How Do You Recognize a Cat?
But this is where things get challenging. While deep-learning technology can be taught to recognize a cat — or a missile launcher — it cannot tell us how it arrives at that identification. Says Knight, “it isn’t clear whether the system may be focusing on the whiskers, the ears, or even the cat’s blanket in an image.”
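One common diagnostic for the whiskers-or-blanket question is occlusion sensitivity: hide one region of the image at a time and measure how much the model’s score drops. Regions whose masking hurts the score most are the ones the model was relying on. The sketch below uses a hypothetical toy scorer (assumed here purely for illustration) whose score depends only on the top-left quadrant; this is one technique researchers use, not a description of any specific DARPA or vendor method.

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4):
    """Slide a blank patch over the image and record how much the
    model's score drops when each region is hidden."""
    base = score_fn(image)
    h, w = image.shape
    drops = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0  # occlude region
            drops[i // patch, j // patch] = base - score_fn(masked)
    return drops

# Toy stand-in model: its score depends only on the top-left
# quadrant of a 16x16 image (an assumed, illustrative scorer).
weights = np.zeros((16, 16))
weights[:8, :8] = 1.0
score_fn = lambda img: float(np.sum(img * weights))

img = np.ones((16, 16))
drops = occlusion_map(img, score_fn)
# The largest score drops cluster in the top-left quadrant,
# revealing which pixels this model actually relies on.
```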
Sometimes this is not a problem. Perhaps the AI system can pass its results on to human intelligence analysts, who can double check to make sure the AI is correct in its identification of, say, missile launchers. But in an urgent crisis situation, there might be no time to wait for direct human confirmation — the launchers need to be identified and engaged now.
In that situation, planners and commanders need to know exactly how much confidence they can place in an AI system. Which means learning how the AI learns, and understanding its (simulated) “thought” processes.
Understanding how deep learning works is so challenging that the Defense Advanced Research Projects Agency (DARPA) has no fewer than 13 different research projects underway, using a range of different techniques to improve our understanding of the deep-learning process.
That also explains why Northrop Grumman — a traditional leader in autonomous systems and defense robotics — is at the forefront of research into understanding deep learning.