How AI Will Transform Anti-Submarine Warfare

New Navy projects seek to capture more data about the oceans’ depths — then train computers to out-think human captains.

In just about every submarine movie, there’s a scene where the heroes, aboard one sub, engage the villains, in another, in some sort of deep-sea shootout. Neither side knows exactly where the other is, and the savvier captain, usually the good guy, turns that ambiguity to deadly advantage. A new program seeks to apply artificial intelligence to ocean data and thereby help submarine operators understand where their adversaries are, what they’re doing, and what they can see.

Even today’s best sonar technology doesn’t give a sub captain a very good sense of the battlespace, says Jules Jaffe, a research oceanographer at the Scripps Institution of Oceanography at the University of California San Diego who is embarking on the new U.S. Navy program.

“What the submariners get is a low-dimensional picture. So if you are towing an array, you get information like bearing and sometimes frequency information,” Jaffe said at the Defense One / Nextgov Genius Machines Summit here on Tuesday. 

There’s a lot of potentially valuable data that a towed sonar array doesn’t capture, because it collects only one kind of measurement, and only at a single point in the ocean. If you could collect and properly analyze sound and wave data from other points in the ocean, you could develop a much better sense of what an adversary is doing.
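
To make the intuition concrete: a single towed array gives you a line of bearing to a contact, but no range; a second listening point turns those bearings into a position. Below is a minimal sketch of that cross-bearing idea in Python. The sensor positions, compass convention, and function name are all illustrative, not anything from the Navy program.

```python
import numpy as np

def cross_fix(positions, bearings_deg):
    """Least-squares intersection of bearing lines from several sensors.

    positions    -- (N, 2) array of sensor (x, y) locations
    bearings_deg -- N compass-style bearings (degrees, 0 = +y, clockwise)
    Returns the (x, y) point closest to all of the bearing lines.
    """
    positions = np.asarray(positions, dtype=float)
    theta = np.radians(bearings_deg)
    # Unit vector along each bearing line (compass convention).
    d = np.stack([np.sin(theta), np.cos(theta)], axis=1)
    # Normal to each line; the target satisfies n . (x - p) = 0.
    n = np.stack([-d[:, 1], d[:, 0]], axis=1)
    b = np.sum(n * positions, axis=1)
    fix, *_ = np.linalg.lstsq(n, b, rcond=None)
    return fix

# Two sensors 10 km apart, each reporting only a bearing:
print(cross_fix([(0.0, 0.0), (10.0, 0.0)], [45.0, 315.0]))
# -> approximately [5., 5.]: one extra listening point turns bearings into a fix.
```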

Last December, the Office of Naval Research, or ONR, asked for white papers that explore “analytic techniques linking physical oceanographic variability with acoustic propagation, including field efforts to collect relevant data sets,” and “Analysis of large oceanographic and acoustic data sets, including the development and use of artificial intelligence and machine learning techniques.”

ONR has selected more than 30 projects to fund under the Task Force Ocean research program, which an official said would total more than $60 million spread over the next three years. “These will have various start dates, but all should be underway by October 2019,” said an ONR spokesperson in an email.

Now the office plans to fund about 15 research groups under a new program that largely aims to develop better ocean-sound sensors. It also funds Jaffe’s work, which aims to turn all that new undersea sound-propagation data into 3-D pictures. Submarine crews could use those pictures to see immediately where they are in relation to other objects, and to figure out how to move to get a better picture or to hide their activities.

“I visualize more of a three-dimensional environment where the submariner could understand where they are, geologically [in terms of ocean topography], geographically, what the radiated noise levels were…which is something they [meaning the U.S. Navy] are very concerned about because they don’t want their adversary to know they can hear them,” he said.

But Jaffe is also looking to build an AI engine that can do better than human submarine captains and crews. And he’ll train it on data that comes from the submarine captains and crews themselves.

“If I know this adversary is located along a certain bearing angle, then where might I go to optimize my ability to localize them? Those are decisions that submariners are making all the time,” he says. “They’re really playing a game. They want to find the bad guy without the bad guy knowing that we know where they are.”
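
A toy version of that game-playing decision can be written down directly: given an estimated line of bearing, score candidate headings by how much a short leg in that direction would change the bearing to the contact, since a bigger change gives better cross-bearing geometry. Everything in the sketch below (the 20 km assumed range, the 5 km leg, the function name) is invented for illustration; it is a cartoon of the trade-off, not the Navy’s method.

```python
import numpy as np

def best_next_leg(own_pos, contact_bearing_deg, candidate_headings_deg, leg_km=5.0):
    """Pick the heading that most changes the bearing to a contact.

    A bigger bearing change between legs gives better cross-bearing
    geometry for localizing the contact. For illustration only, assume
    the contact sits 20 km down the current line of bearing.
    """
    theta = np.radians(contact_bearing_deg)
    contact = own_pos + 20.0 * np.array([np.sin(theta), np.cos(theta)])

    def bearing_change(hdg):
        h = np.radians(hdg)
        new_pos = own_pos + leg_km * np.array([np.sin(h), np.cos(h)])
        rel = contact - new_pos
        new_bearing = np.degrees(np.arctan2(rel[0], rel[1]))
        # Signed difference wrapped to [-180, 180), then magnitude.
        return abs((new_bearing - contact_bearing_deg + 180) % 360 - 180)

    return max(candidate_headings_deg, key=bearing_change)

# Contact bears 045; a leg perpendicular to the line of bearing wins:
print(best_next_leg(np.array([0.0, 0.0]), 45.0, [0, 45, 90, 135, 315]))  # -> 135
```

Running straight down the bearing changes nothing; cutting across it changes the bearing fastest, which is exactly the kind of maneuver crews make to refine a solution.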

If you can take that data and use reinforcement learning (essentially, showing the software lots of examples of submarine captains executing missions), you could train an AI that would outperform a human crew on some of these decisions.

“We can watch them while they are making these decisions and then the reinforcement algorithm will learn what they’re doing in order to minimize ambiguities and understand what the results are,” he says. 
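
For readers unfamiliar with the term, a bare-bones reinforcement-learning loop looks like this: an agent tries actions, collects rewards, and gradually learns a policy that maximizes them. The toy below uses plain Q-learning to teach an agent on a one-dimensional track to close on a hidden contact. The grid, rewards, and contact location are all made up for illustration; the actual research would learn from recorded crew decisions in a far richer state space.

```python
import random

# Toy Q-learning: an agent on a 1-D track learns to close on a hidden
# contact. A cartoon of the reinforcement-learning idea, nothing more.
N_CELLS, CONTACT = 12, 9
ACTIONS = (-1, +1)                       # move left or right one cell
Q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.95, 0.1       # learning rate, discount, exploration

for episode in range(2000):
    s = random.randrange(N_CELLS)
    while s != CONTACT:
        # Epsilon-greedy: mostly exploit the current value estimates.
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda a: Q[(s, a)]))
        s2 = min(max(s + a, 0), N_CELLS - 1)
        r = 1.0 if s2 == CONTACT else -0.01  # small cost per move
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy points toward the contact from any cell:
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_CELLS)]
print(policy)   # +1 left of cell 9, -1 right of it
```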

It’s similar to the way researchers at DeepMind used reinforcement learning to train an algorithm to play the very difficult game of Go better than the most accomplished human players.