The military has used electro-optical and infrared (EO/IR) sensors for decades to track stationary and moving targets. Now, these sensor technologies are being combined with artificial intelligence for a variety of military and commercial applications. Handing visual processing to artificial intelligence lets these systems recognize images faster and more accurately.
Cutting-edge AI
Here are some cutting-edge examples of how AI is being used for visual processing:
- At Toronto’s Princess Margaret Cancer Centre, researchers have combined artificial intelligence with image processing to create technology that can determine the best radiotherapy treatment plan for an individual patient, according to DOTmed News.
- At Stanford, computer scientists have combined deep learning, a form of AI, with visual processing to create an algorithm that can diagnose skin cancer as accurately as board-certified dermatologists. They hope to build an app that would give people who can’t travel access to remote diagnoses. (A minimal sketch of the underlying technique follows this list.)
- Neurala, a startup with the tagline “brains for bots,” has already used AI to process images from low-end cameras to help a Mars rover explore the red planet, according to IEEE Spectrum. Now it wants to bring that same technology to home robots, toys and self-driving cars.
- Military applications have expanded beyond acquiring and tracking weapons targets, according to Military & Aerospace Electronics. Naval experts, for example, are looking to EO/IR sensors to provide situational awareness. Different assets in the field can transmit what they’re seeing to a central command; then, that consolidated view can be broadcast back out to the field. The U.S. Navy currently has an experimental Situational Awareness System that uses two EO/IR sensors placed at the front and back of an aircraft carrier’s deck, as well as video cameras and a laptop-based controller.
- In the same vein, Northrop Grumman’s LITENING G4 combines sensors, laser imaging and advanced image processing to provide combat ID, range recognition and situational awareness.
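The Stanford result rests on transfer learning: rather than training a vision network from scratch, you start from one pretrained on millions of general images and retrain only its final layer for the diagnostic task. The sketch below shows that pattern in PyTorch; the ResNet-18 backbone and the two-class benign/malignant labels are illustrative assumptions, not the published Stanford model.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # hypothetical labels: 0 = benign, 1 = malignant

# Start from a network pretrained on ImageNet and freeze its backbone,
# so only the small new classifier head is trained on medical images.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace final layer

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def training_step(images, labels):
    """One gradient step on a batch of (N, 3, 224, 224) image tensors."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing the backbone is what makes this practical with small medical datasets: the pretrained layers already know how to see edges, textures and shapes, and only the final decision layer has to learn the new task.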
Smarter Vision
In these examples and in general, it is artificial intelligence that makes sense of multiple inputs and lower-resolution images. While the optics and sensors in these implementations are impressive, it’s the software that turns all the incoming data into something usable.
Many multisensor systems rely on a process called sensor fusion to gain a complete view of the surroundings, and that situational awareness is crucial for military operations. As All About Circuits explains, many sensing technologies besides EO/IR can help create an accurate view of surroundings.
LiDAR, radar, sonar, GPS and physical sensors such as accelerometers can fill in for one another. Cameras, for example, struggle in low light and bad weather, while LiDAR provides its own illumination and keeps working in the dark. LiDAR, however, is expensive, so developers of autonomous vehicles use both, along with an array of other devices.
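One common way to let sensors fill in for one another is inverse-variance weighting: each sensor reports a value along with an uncertainty, and the fused estimate leans toward whichever sensor is most trustworthy under current conditions. The Python sketch below is a minimal illustration of the idea; the sensor names and numbers are hypothetical.

```python
import math

def fuse(estimates):
    """Inverse-variance fusion of (value, variance) pairs.

    Returns the fused value and its variance; sensors that report
    a large variance contribute little to the result.
    """
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * v for w, (v, _) in zip(weights, estimates)) / sum(weights)
    return fused, 1.0 / sum(weights)

readings = [
    (10.4, 4.0),   # camera: degraded in low light, so high variance
    (10.1, 0.05),  # lidar: precise, provides its own illumination
    (9.8,  0.5),   # radar: robust to weather, coarser range resolution
]
distance, variance = fuse(readings)
print(f"fused range: {distance:.2f} m (sigma ~ {math.sqrt(variance):.2f} m)")
```

Here the lidar dominates the answer because it is the most certain, but if conditions degraded it, the camera and radar would automatically carry more weight.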
Sensor Fusion
Fusing all these inputs together is no simple feat. Different sensors may report conflicting data, and occasionally a sensor fails outright. So a sensor fusion system must have enough intelligence to decide, in a split second, which inputs to trust.
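A minimal sketch of that trust decision, in Python: gate out readings that disagree sharply with the consensus before fusing the rest. Real systems use far more sophisticated fault detection; the median-absolute-deviation test here is an illustrative stand-in.

```python
import statistics

def gate_and_fuse(readings, threshold=3.0):
    """Discard readings far from the consensus, then average the rest.

    readings: list of (sensor_name, value) pairs. A reading is rejected
    when it deviates from the median by more than `threshold` times the
    median absolute deviation.
    """
    values = [v for _, v in readings]
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1e-9
    trusted = [(n, v) for n, v in readings if abs(v - med) <= threshold * mad]
    fused = sum(v for _, v in trusted) / len(trusted)
    return fused, trusted

# The radar value below simulates a faulty or conflicting sensor.
fused, trusted = gate_and_fuse([("camera", 10.2), ("lidar", 10.0), ("radar", 42.0)])
print(fused, [name for name, _ in trusted])  # -> 10.1 ['camera', 'lidar']
```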
Autonomous platforms, whether self-driving cars or unmanned aircraft systems such as the RQ-4 Global Hawk, must be able to avoid obstacles and respond almost instantly to changing conditions.
For example, the automated driving solution from Delphi and Mobileye combines cameras, real-time mapping, radar and LiDAR, according to Auto Connected Car News. Instead of consulting a fixed database of driving rules, however, the system employs artificial intelligence that can actually learn to drive better with experience.
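Learning with experience, rather than following fixed rules, is the hallmark of reinforcement learning: a policy improves from reward feedback over many trials. The toy Q-learning sketch below illustrates the concept only; the states, actions and rewards are hypothetical and bear no relation to the actual Delphi/Mobileye system.

```python
import random
from collections import defaultdict

ACTIONS = ["steer_left", "hold", "steer_right"]  # illustrative action set
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2            # learning rate, discount, exploration

q_table = defaultdict(float)  # (state, action) -> learned expected return

def choose_action(state):
    """Epsilon-greedy: mostly exploit what experience taught, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def learn(state, action, reward, next_state):
    """Standard Q-learning update: nudge the estimate toward observed outcomes."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (
        reward + GAMMA * best_next - q_table[(state, action)]
    )
```

Every pass through `learn` folds one more observed outcome into the policy, which is what it means for the system to drive better with experience.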
Computer-vision systems that combine multiple sensors with a layer of intelligence mimic the way the human body takes in information from the sensory organs and processes it in the brain. Will these systems ever be as good as the human body and brain?
In 2015, software beat humans in an image classification test, according to the Computer Vision Foundation. And Tech Times reports that Google’s self-driving cars are safer than human-piloted cars. By those measures, they’re already better.