DARPA seeks to leverage rapid advances in AI to train algorithms on data and make computer analysis more reliable
The Pentagon’s science and technology research arm is launching a vigorous push into a new generation of artificial intelligence, intended to integrate more advanced “machine learning,” introduce greater “adaptive reasoning” and even help computers interpret more subjective phenomena.
It is called the "AI-Next" effort, a Defense Advanced Research Projects Agency program to leverage rapid advances in AI to train algorithms on data and make computer analysis more reliable for human operators, agency Director Steven Walker recently told a small group of reporters.
DARPA scientists explain the fast-evolving AI-Next effort as improving the ability of AI-oriented technology to provide much more sophisticated “contextual explanatory models.”
While humans will still be needed in many instances, the 3rd Wave can be described as introducing a new ability not only to provide answers and interpretations but also to use “machine learning to reason in context and explain results,” DARPA Deputy Director Peter Highnam said.
In short, the AI-Next initiative, intended to evolve into a 3rd Wave, can explain “why” it reached a given conclusion, something which offers a breakthrough in the computer-human interface, he added.
“When we talk about the 3rd wave, we are focused on contextual reasoning and adaptation. It requires less data training,” Highnam said.
This not only makes determinations more reliable but also greatly expands the ability to make more subjective interpretations by understanding how different words or data sets relate to one another.
By and large, a computer can only draw upon information it has been given. While it can ingest seemingly limitless amounts of data almost instantaneously, AI-driven analysis can falter if elements of the underlying stored data change for some reason. It is precisely this predicament which the 3rd Wave is intended to address.
“If the underlying data changes then your system was not trained against that,” Highnam explained.
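In concrete terms, the limitation Highnam describes can be seen by training a simple statistical classifier on one data distribution and then scoring it on data that has shifted. The sketch below is purely illustrative, using synthetic data and the open-source scikit-learn library, not any DARPA or military system.

```python
# Illustrative only: a model trained on one data distribution degrades
# when the underlying data changes. Synthetic data, generic classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Two classes drawn from Gaussians; `shift` moves class 1's mean."""
    x0 = rng.normal(loc=0.0, scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 2))
    X = np.vstack([x0, x1])
    y = np.array([0] * n + [1] * n)
    return X, y

# Train against the original distribution.
X_train, y_train = make_data(500, shift=0.0)
model = LogisticRegression().fit(X_train, y_train)

# Evaluate on unchanged data, then on data whose underlying distribution moved.
for shift in (0.0, -1.5):
    X_test, y_test = make_data(500, shift=shift)
    print(f"shift={shift:+.1f}  accuracy={model.score(X_test, y_test):.2f}")
```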
For instance, 3rd Wave adaptive reasoning will enable computer algorithms to discern the difference between “principal” and “principle” by analyzing surrounding words and determining context.
This level of analysis naturally brings much greater reliability and nuance, as it can give humans a much deeper grasp of the detailed information they seek.
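A very simple way to picture this kind of contextual disambiguation is to score each candidate spelling against the words around it, as in the sketch below; the context-word lists are hypothetical placeholders for what a trained language model would actually learn.

```python
# Toy illustration of context-based word disambiguation (the
# "principal" vs. "principle" example). The cue lists are hypothetical
# stand-ins for patterns a trained language model would learn.
CONTEXT_CUES = {
    "principal": {"school", "loan", "interest", "chief", "dancer"},
    "principle": {"moral", "ethical", "physics", "guiding", "scientific"},
}

def disambiguate(sentence: str) -> str:
    """Pick the spelling whose typical context words best match the sentence."""
    words = set(sentence.lower().split())
    scores = {w: len(words & cues) for w, cues in CONTEXT_CUES.items()}
    return max(scores, key=scores.get)

print(disambiguate("the school ____ approved the loan"))    # principal
print(disambiguate("it violates a guiding ethical ____"))   # principle
```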
“That is the future - building enough AI into the machines that they can actually communicate, share data and network at machine speed in real time,” Walker said.
Yet another example of emerging advanced AI would be an ability to organize hours of drone-collected video very quickly and flag moments of relevance for human decision makers. This dramatically increases the speed of human decision-making, a factor which could easily determine life or death in combat.
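One crude way to picture this kind of triage is to flag video frames where the scene changes sharply, as in the sketch below. It uses simple frame differencing with the OpenCV library and a hypothetical file name, standing in for the far more capable learned models the article describes.

```python
# Sketch of flagging "moments of relevance" in drone video by frame
# differencing, a crude stand-in for learned relevance models.
# Assumes OpenCV is installed and a local video file exists.
import cv2
import numpy as np

def flag_relevant_frames(path: str, threshold: float = 12.0):
    """Return (frame_index, change_score) for frames with large scene change."""
    cap = cv2.VideoCapture(path)
    prev, idx, flagged = None, 0, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            score = float(np.mean(cv2.absdiff(gray, prev)))
            if score > threshold:
                flagged.append((idx, score))
        prev, idx = gray, idx + 1
    cap.release()
    return flagged

# Hypothetical file name; prints the frames an analyst might review first.
for frame_idx, score in flag_relevant_frames("drone_feed.mp4")[:10]:
    print(f"frame {frame_idx}: change score {score:.1f}")
```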
“In a warfighting scenario, humans have to trust it when the computer gives them an answer...through contextual reasoning,” Highnam said.
Given these emerging 3rd Wave advances, making more subjective decisions will increasingly be a realistic element of AI’s functional purview. For this reason and others, DARPA is working closely with the private sector to fortify collaboration with Silicon Valley and defense industry partners as a way to identify and apply the latest innovations.
DARPA’s 1st, 2nd & 3rd Waves of AI
The 3rd Wave, described in DARPA materials as bringing “contextual explanatory models” and a much higher level of machine learning, is intended to build upon the 1st and 2nd Waves of DARPA’s previous AI progress.
The 1st Wave, according to available DARPA information, “enables reasoning over narrowly defined problems.” While it does bring certain elements of learning capability, it is described as having a “poor level of certainty.”
This points to the principal challenge of AI, namely fostering an ability to generate “trust” or reliability that the process through which it discovers new patterns, finds answers and compares new data against volumes of historical data -- is accurate. Given this challenge, certain existing models of AI integration might have trouble adjusting to changing data or determining sufficient context.
The 2nd Wave enables “creating statistical models and training them on big data,” but has minimal reasoning, DARPA materials explain. This means algorithms are able to recognize new information and often place it in a broader context in relation to an existing database.
The 2nd Wave, therefore, can often determine the meaning of previously unrecognized words and information by examining context and performing certain levels of interpretation. AI-enabled computer algorithms in this phase are able to analyze words and information accurately by placing them in context with surrounding data and concepts.
With this 2nd Wave, however, DARPA scientists explain that there can be limitations regarding the reliability of interpretation and the ability to respond to new information in some instances. Highnam described this as a limited ability to retrain when new information changes the underlying data. Therefore, this Wave is described in DARPA materials as having “minimal reasoning.”
Can AI Make Subjective Determinations?
Raytheon, for example, is currently pursuing a collaborative research deal with the Navy to explore prognostics, condition-based maintenance and training algorithms to perform real-time analytics on otherwise complex problems. It is a 6-month Cooperative Research and Development Agreement (CRADA) to explore extensive new AI applications, company developers said.
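As a rough illustration of what prognostics and condition-based maintenance algorithms do, the sketch below trains a generic regression model to estimate remaining useful life from synthetic sensor features; the feature names, data and model choice are hypothetical and do not reflect Raytheon's or the Navy's actual work.

```python
# Illustrative prognostics sketch: predict remaining useful life (RUL)
# of a component from sensor features. Synthetic data, generic model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 1000

# Hypothetical sensor features: vibration, temperature, operating hours.
vibration = rng.uniform(0.1, 2.0, n)
temperature = rng.uniform(40, 120, n)
hours = rng.uniform(0, 5000, n)
X = np.column_stack([vibration, temperature, hours])

# Synthetic ground truth: RUL shrinks with wear, plus measurement noise.
rul = 6000 - hours - 800 * vibration - 5 * (temperature - 40) + rng.normal(0, 100, n)

model = GradientBoostingRegressor().fit(X, rul)

# Score a new, hypothetical reading in "real time".
new_reading = np.array([[1.4, 95.0, 3200.0]])
print(f"Predicted remaining useful life: {model.predict(new_reading)[0]:.0f} hours")
```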
Raytheon developers were naturally hesitant to specify any particular problems or platforms they are working on with the Navy, but did say they were looking at improved AI to further enable large warfighting systems, weapons and networks.
Todd Probert, Raytheon’s Vice President of Mission Support and Modernization, told Warrior Maven in an interview that their effort is working on initiatives which complement DoD’s current AI push.
“Part of deploying AI is about gaining the confidence to trust the AI if operations change and then break it down even further,” Probert said. “We are training algorithms to do the work of humans.”
Interestingly, the kinds of advances enabled by a 3rd Wave bring the prospect of engineering AI-driven algorithms to interpret subjective nuances. For instance, things like certain philosophical concepts, emotions and psychological nuances influenced by past experience might seem to be the kind of thing computers would not be able to interpret.
While this remains true in many ways, as even the most advanced algorithms do not yet parallel human cognition or emotion, AI is increasingly able to make more subjective determinations, Probert said.
Probert explained that advanced AI is able to process certain kinds of intent, emotions and biases through an ability to gather and organize information related to word selection, voice recognition, patterns of expression and intonations as a way to discern more subjective phenomena.
Also, if a system has a large enough database, perhaps including prior expressions, writing or other material related to the incoming information, it can place new words, expressions and data within a broader, more subjective context, Probert explained.
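As a toy illustration of how word selection can be turned into a subjective signal, the sketch below scores short messages against a small cue lexicon; the cue words and weights are hypothetical stand-ins for the far richer trained models Probert describes.

```python
# Toy sketch of inferring a subjective signal (hostile vs. conciliatory
# intent) from word choice. The lexicon and weights are hypothetical
# placeholders, not any fielded model.
HOSTILE_CUES = {"attack": 2.0, "destroy": 2.0, "threat": 1.5, "target": 1.0}
CALM_CUES = {"negotiate": 1.5, "peace": 1.5, "cooperate": 1.0, "agree": 1.0}

def intent_score(text: str) -> float:
    """Positive scores lean hostile, negative lean conciliatory."""
    words = text.lower().split()
    hostile = sum(HOSTILE_CUES.get(w, 0.0) for w in words)
    calm = sum(CALM_CUES.get(w, 0.0) for w in words)
    return hostile - calm

for msg in ("we will attack the target at dawn",
            "we agree to negotiate and cooperate"):
    print(f"{intent_score(msg):+.1f}  {msg}")
```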
AI & Counterterrorism - Torres AES
Other industry partners are using new levels of AI to fortify counterterrorism investigations and cyber forensics. For example, a US-based global security firm supporting DoD, the US State Dept. and friendly foreign governments, Torres Advanced Enterprise Solutions, employs advanced levels of AI to uncover otherwise obscured or hidden communications among terrorist groups, transnational criminals or other US adversaries.
While many of the details of this kind of AI application, company developers say, are naturally not available for security reasons, Torres cyber forensics experts did say advanced algorithms can find associations and “digital footprints” linked to bad actors or enemy activity using newer methods of AI.
As part of its cyber forensics training of US and US-allied counterterrorism forces, Torres prepares cyber warriors and investigators to leverage AI. For instance, Torres trains US-allied Argentinian and Paraguayan counterterrorism officials who often look to crack down on terrorist financial activity in the more loosely governed “tri-border” area connecting Paraguay, Argentina and Brazil.
“The system that we train builds in AI, yet does not eliminate the human being. AI-enabled algorithms can identify direct and indirect digital relationships among bad actors,” said Jerry Torres, Torres AES CEO.
For instance, AI can use adaptive reasoning to discern relationships between locations, names, email addresses or bank accounts used by bad actors.
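To give a sense of what this kind of relationship discovery can look like in practice, the sketch below builds a small entity graph and queries it for indirect links. All names, accounts and addresses are hypothetical, and the graph library (networkx) is simply a convenient open-source tool, not something Torres has said it uses.

```python
# Sketch of relationship discovery: entities (people, accounts, emails,
# locations) become graph nodes, observed shared use becomes edges, and
# indirect links fall out of path queries. All data below is hypothetical.
import networkx as nx

G = nx.Graph()
G.add_edge("person_A", "email_x@example.com")
G.add_edge("person_B", "email_x@example.com")   # shared email address
G.add_edge("person_B", "bank_acct_123")
G.add_edge("person_C", "bank_acct_123")         # shared bank account
G.add_edge("person_C", "location_tri_border")

# An indirect "digital footprint" linking person_A to person_C.
path = nx.shortest_path(G, "person_A", "person_C")
print(" -> ".join(path))
```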
To illustrate some of the effective uses of AI for these kinds of efforts, Torres pointed to a proprietary software tool called Maltego, used for open-source intelligence analysis and forensics.
"AI can be a great asset in which our defensive cyber systems learn about the attackers by increasing the knowledge base from each attack, and launching intelligent counter attacks to neutralize the attackers, or feign a counter attack to get the attacker to expose itself. AI is critical to countering attackers," Torres added.
The software uses AI to find relationships across a variety of online entities, including social media accounts, domains, groups, networks and other areas of investigative relevance.
The Growing Impact of AI
AI has advanced quickly to unprecedented levels of autonomy and machine learning wherein algorithms are instantly able to assimilate and analyze new patterns and information, provide context, and compare them against vast volumes of data. Many now follow the seemingly countless applications of this throughout military networks, data systems, weapons and large platforms.
Computer autonomy currently performs procedural functions, organizes information, and brings processing speed designed to enable much faster decision-making and problem solving by humans performing command and control. While AI can provide vast amounts of relevant analysis in short order, or almost instantaneously, human cognition is still required in many instances to integrate less “tangible” variables, solve dynamic problems or make more subjective determinations.
When it comes to current and emerging platforms, there is already much progress in the area of AI; the F-35’s “sensor fusion” relies upon early iterations of AI, Navy Ford-Class carriers use greater levels of automation to perform on-board functions, and Virginia-Class Block III attack submarines draw upon touch screen fly-by-wire technology to bring more autonomy to undersea navigation.
Other instances include the Army’s current experimentation with IBM’s AI-enabled Watson computer which, among other things, can be used to wirelessly perform real-time analytics on combat-relevant maintenance details on Stryker vehicles. In a manner somewhat analogous to this, a firm called C3IOT uses AI-empowered real-time analytics to perform condition-based maintenance on Air Force F-16s.
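A minimal sketch of what real-time, condition-based maintenance analytics can look like is shown below: a rolling statistical baseline over a sensor stream, with readings flagged when they drift far outside it. The telemetry values are synthetic and the approach is a generic illustration, not IBM Watson's or C3IOT's actual methods.

```python
# Minimal sketch of streaming condition-based maintenance analytics:
# flag a sensor reading when it departs sharply from its recent rolling
# baseline. Synthetic telemetry, purely illustrative.
from collections import deque
import statistics

class TelemetryMonitor:
    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.readings = deque(maxlen=window)
        self.z_threshold = z_threshold

    def update(self, value: float) -> bool:
        """Return True if the new reading looks anomalous vs. recent history."""
        anomalous = False
        if len(self.readings) >= 10:
            mean = statistics.fmean(self.readings)
            stdev = statistics.pstdev(self.readings) or 1e-9
            anomalous = abs(value - mean) / stdev > self.z_threshold
        self.readings.append(value)
        return anomalous

# Hypothetical engine-temperature stream ending in a spike.
monitor = TelemetryMonitor()
stream = [90 + i * 0.05 for i in range(60)] + [140.0]
for t, reading in enumerate(stream):
    if monitor.update(reading):
        print(f"t={t}: reading {reading:.1f} flagged for maintenance review")
```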
“Despite higher levels of autonomy, in the end a human will make the decision, using computers as partners. We see the future as much less having machines do everything but rather humans and machines working together to fight the next battle,” Highnam explained.
Ultimately, Highnam said: “Future warfare will be about speed - turning information into knowledge faster.”