
Pairing Humans and AI for Enhanced National Security

By Allen Badeau, Vice President and CTO, NCI Information Systems, Inc.


What role will artificial intelligence (AI) play in homeland security and defense? AI is rapidly emerging as one of our most essential national security tools, with far-reaching applications for cybersecurity, intelligence and military readiness.

But the rise of the machines doesn’t mean that humans are any less important. On the contrary, the greatest value of AI may lie in its potential to augment human abilities. AI tools are available now that can help the people protecting our country detect and respond to emerging threats, make better decisions and prepare for evolving security challenges.

Scaling Human Performance with AI

When most people hear “AI,” they think of sci-fi scenarios featuring fully autonomous, self-aware machines. While some forms of AI are capable of independent decision-making and action within the scope of their programming, many of the most exciting applications of AI today pair computers and people to amplify the power of both. At NCI, we are leveraging machine learning, robotic process automation (RPA) and other AI technologies to scale human performance; in other words, to help people do more with less effort.

NCI’s approach keeps humans at the center of AI development and deployment. Our work with government agencies ranges from military training exercises to detection of fraud, waste and abuse in billing data. Now, we have teamed up with Tanjo, Inc. to infuse new machine learning and data analytics capabilities into NCI’s AI platform, Shai (Scaling Humans with Artificial Intelligence). Combining Shai with the Tanjo Enterprise Brain and Tanjo Automated Personas (TAP) will open up new possibilities for advanced document categorization, threat detection and development of complex AI personas for national security applications.

A human-centric approach to AI has several applications for homeland security and defense.

Threat Detection and Response

AI makes it possible to comb through millions of data points to look for trends and patterns not apparent to the human eye. With machine learning, algorithms develop statistical models using information gleaned from training data sets. These models are then applied to detect specific patterns in real-world data.
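
To make that train-then-apply pattern concrete, here is a minimal sketch in Python using scikit-learn. The indicator features, data values and labels are invented for illustration only; they are not drawn from any NCI system.

```python
# Minimal sketch of the train-then-apply pattern described above.
# Feature values and labels are hypothetical placeholders.
from sklearn.ensemble import RandomForestClassifier

# Training set: each row is a set of observed indicators,
# each label marks whether that pattern reflected a real threat.
training_features = [
    [0.1, 0.0, 3],   # e.g., [failed logins/min, off-hours activity, countries seen]
    [0.9, 1.0, 14],
    [0.2, 0.0, 2],
    [0.8, 1.0, 11],
]
training_labels = [0, 1, 0, 1]  # 0 = benign pattern, 1 = threat pattern

# Learn a statistical model from the training data...
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(training_features, training_labels)

# ...then apply it to new, real-world observations.
new_observation = [[0.7, 1.0, 9]]
print(model.predict_proba(new_observation))  # probability the pattern is a threat
```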

In the security realm, this means training AI using machine learning to detect patterns in data that indicate an attack or emerging threat. The most obvious and immediate application is for cybersecurity. AI can monitor vast amounts of digital traffic and detect even small signals indicating a possible cyber attack. The same methods can be applied to detect threats in other types of data, such as intelligence data, social media traffic or other available data sources.

In some cases, the AI may also be programmed to respond directly to certain types of attacks. In others, a “human-in-the-loop” approach, which escalates the data to a human expert for verification and response, is preferable. While humans do not have the information processing ability of AI, some types of decisions are better left to humans who can put the data into a broader context and apply moral, ethical, political or psychological judgment.
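
A simple way to picture that triage logic is a rule that automates only the responses it is highly confident about and routes everything else to a person. The alert categories and thresholds below are illustrative assumptions, not a production policy.

```python
# Sketch of "human in the loop" triage: automated response only for
# high-confidence, routine detections; everything else goes to an analyst.
# Thresholds and categories here are illustrative assumptions.
AUTO_RESPONSE_THRESHOLD = 0.95
AUTOMATABLE_CATEGORIES = {"known_malware_signature", "port_scan"}

def triage(alert_category: str, threat_probability: float) -> str:
    if alert_category in AUTOMATABLE_CATEGORIES and threat_probability >= AUTO_RESPONSE_THRESHOLD:
        return "auto_respond"         # e.g., block the source address
    if threat_probability >= 0.5:
        return "escalate_to_analyst"  # a human verifies and decides on the response
    return "log_only"                 # keep for trend analysis

print(triage("port_scan", 0.99))              # auto_respond
print(triage("unusual_data_transfer", 0.80))  # escalate_to_analyst
```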

Threat Profiling and Modeling

In addition to detecting current threats, machine learning can be applied to develop profiles of possible future threats. For example, what patterns of behavior indicate that a person may be an insider threat? What indicators can we use to determine whether a region is gearing up for serious conflict or simply engaging in routine political posturing?

AI can help us understand how different behaviors, characteristics and indicators are correlated with specific types of threats. Developing models of what these threats look like as they are emerging will help the humans involved better understand, recognize, respond to and potentially avert these threats.
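
One way such a model supports human understanding is by exposing which indicators it weights most heavily. The sketch below fits a logistic regression on hypothetical indicator data and prints the learned weights; the indicator names and values are invented for illustration.

```python
# Sketch of learning which indicators correlate with a threat profile.
# Indicator names and data are hypothetical placeholders.
from sklearn.linear_model import LogisticRegression

indicators = ["after_hours_access", "bulk_downloads", "policy_violations"]
X = [
    [1, 0, 0],
    [1, 1, 2],
    [0, 0, 0],
    [1, 1, 3],
    [0, 1, 1],
]
y = [0, 1, 0, 1, 0]  # 1 = case later confirmed as an insider threat

model = LogisticRegression().fit(X, y)

# Positive weights suggest indicators associated with the threat profile,
# which human analysts can then review and validate.
for name, coef in zip(indicators, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```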

We can also develop AI “personas” based on specific threat profiles. These personas are given capabilities, goals and even “personalities.” The personas can then be thrown into simulated scenarios. By running thousands of simulations with different personas, we can start to predict how people with similar characteristics may behave under different circumstances in the real world.
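
At its simplest, this kind of persona-driven simulation can be sketched as a small Monte Carlo loop: define a persona’s traits, run a scenario many times, and look at the distribution of outcomes. The traits and scenario logic below are invented purely to show the shape of the approach.

```python
# Sketch of running many simulations over simple AI "personas".
# Persona traits and scenario logic are invented for illustration only.
import random
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    aggression: float   # 0..1, willingness to escalate
    caution: float      # 0..1, sensitivity to being detected

def run_scenario(persona: Persona, rng: random.Random) -> bool:
    """Return True if the persona attempts a hostile action in this scenario."""
    pressure = rng.random()   # random situational pressure
    scrutiny = rng.random()   # random level of oversight in the scenario
    return pressure * persona.aggression > scrutiny * persona.caution

rng = random.Random(42)
personas = [Persona("opportunist", 0.8, 0.3), Persona("careful insider", 0.6, 0.7)]

for p in personas:
    attempts = sum(run_scenario(p, rng) for _ in range(10_000))
    print(f"{p.name}: acted in {attempts / 10_000:.1%} of simulated scenarios")
```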

Decision Support

AI can reduce the administrative and cognitive burden for analysts and others working to support military missions and security initiatives. Using AI tools such as RPA and machine learning, we can automate much of the information gathering and initial analysis needed for effective decision making.

For example, we can train an AI assistant to help an analyst responsible for interviewing people as part of a security investigation. The AI can pull together all of the documentation the analyst needs and perform the first level of analysis. This could involve categorizing documents and other digital information, identifying suspicious patterns that require further investigation, and prioritizing who should be interviewed and what questions should be asked.
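
As a rough illustration of that first-pass triage, the sketch below scores and ranks a handful of documents so an analyst sees the most suspicious material first. A real system would use trained models rather than a hand-built term list; the terms, weights and file names here are hypothetical.

```python
# Sketch of first-pass document triage for an investigation:
# score each item and rank what a human should review first.
# The terms, weights, file names and contents are hypothetical.
SUSPICIOUS_TERMS = {"wire transfer": 3, "offshore": 2, "delete this": 5, "cash": 1}

def score_document(text: str) -> int:
    text = text.lower()
    return sum(weight for term, weight in SUSPICIOUS_TERMS.items() if term in text)

documents = {
    "email_041.txt": "Please delete this after reading. Wire transfer confirmed.",
    "memo_007.txt": "Quarterly budget review scheduled for Thursday.",
    "chat_113.txt": "Route it through the offshore account in cash.",
}

# Rank documents so the analyst sees the most suspicious material first.
ranked = sorted(documents, key=lambda name: score_document(documents[name]), reverse=True)
for name in ranked:
    print(f"{name}: score {score_document(documents[name])}")
```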

Automating the more routine and time-consuming parts of the investigative process allows analysts to focus their time and energy more efficiently on areas that require human judgment and expertise. An AI-augmented review process saves time for busy analysts, enables more rapid response, and reduces the cost of investigations.

Training

Training is another area where we can leverage AI to get better results with less time and expense. Real-world training exercises are expensive and time consuming. However, getting hands-on experience in applying skills and responding to realistic scenarios is an essential part of learning. This is especially important for people who will be placed in complex, high-pressure situations, such as emergency responders, soldiers and contractors working in battlefield conditions, and the people on the front lines of homeland security initiatives.

With AI, we can build complex, realistic virtual training scenarios that closely mimic the situations trainees will face in the real world. These virtual training worlds can be populated with dozens, hundreds or even thousands of AI personas filling specific roles. These roles may include teammates, adversaries, superiors, direct reports, civilian victims or any other type of player the scenario requires. Unlike the static “non-player characters” of yesterday’s video games, which had a limited repertoire of responses, these AI personas can be programmed to respond in complex, realistic and even unpredictable ways. This provides a more meaningful and effective training experience for human trainees.

AI-based virtual training can be used to expose trainees to experiences that are impossible, unethical or prohibitively expensive to simulate in the real world. Trainees can run through variations of the training as many times as required to cement the skills or decision making abilities we want to enhance. This type of virtual training will facilitate military and workforce readiness by getting personnel up to speed quickly and better preparing them for the challenges they will face in their missions.

Preparing for an AI-Driven Security Landscape

These AI technologies exist now and are already being deployed in some agencies. Over the next few years, we can expect AI’s role in national security and defense to continue to grow and evolve.

As we expand AI’s role in our security processes, it will be important to monitor its use and impact to avoid potential mistakes and misuse. AI is programmed by humans and is therefore subject to human biases. Biases in the algorithms or poor-quality training data can compromise results or lead to injustices, such as profiling that unfairly targets people based simply on race, ethnicity, country of origin, religion or sexual orientation. Human judgment, review and correction will be required to ensure that results produced by AI are both accurate and fair.
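
One basic form of that human review is periodically comparing how often the AI flags people from different groups. The sketch below computes flag rates per group on a hypothetical review sample; a large gap is a prompt for human investigation of the model and its training data, not proof of bias on its own.

```python
# Sketch of a simple bias check: compare how often the model flags
# individuals across groups. Data and group labels are hypothetical.
from collections import defaultdict

# Each record: (group, model_flagged)
review_sample = [
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
for group, flagged in review_sample:
    counts[group][0] += int(flagged)
    counts[group][1] += 1

for group, (flagged, total) in counts.items():
    print(f"{group}: flagged {flagged / total:.0%} of cases")
```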

The American AI Initiative has made development and deployment of AI a national priority, and for good reason. AI capabilities will be a decisive factor in military readiness and homeland security in the coming years. It is vitally important that we continue to nurture AI capabilities and expertise within American companies and our workforce. Doing so will enable us to maintain a strategic advantage in the development of new AI tools for security and defense, while ensuring that those tools reflect our values and priorities as a nation.

Allen Badeau is Vice President and CTO of NCI Information Systems, Inc. and the Director of the NCI Center for Rapid Engagement and Agile Technology Exchange (NCI CREATE). NCI is a leading provider of enterprise solutions and services to U.S. defense, intelligence, health and civilian government agencies.
