CHIMERAS: Rethinking Human-AI Teamwork in National Security Screening

Dr. Nancy Cooke, Arizona State University professor of Human Systems Engineering, works at the intersection of machine learning, cognitive science, and national security. Along with other researchers at Arizona State University, she is charting new territory in how humans and artificial intelligence can function together, not just as tools and users, but as teammates. The project, known as CHIMERAS (Collaborative Human-AI Interdependence Models for Explainable, Resilient, and Accurate Screening), is led by Dr. Cooke, a nationally recognized leader in cognitive engineering, and Myke Cohen, the doctoral student pursuing CHIMERAS as his dissertation. Funded by the Department of Homeland Security through the Center for Accelerating Operational Efficiency (CAOE), CHIMERAS takes aim at a pressing challenge: how to improve performance on high-stakes security screening tasks through better human-AI interaction.

Balancing Innovation and Application and Moving Beyond the “Black Box”

Security screening at airports and borders often involves fast-paced decisions under pressure, whether matching a traveler’s face to a document or identifying prohibited items in a carry-on bag. While AI is increasingly used to assist in these tasks, it’s not always clear how—or when—it helps. CHIMERAS focuses on improving that interaction by evaluating how AI can be integrated in ways that complement human strengths rather than complicate them.

“CHIMERAS is truly about effective teaming of humans and machines,” said Cooke. “This research builds on over eight decades of science directed at human capabilities and limitations and the implications for the design of technology that humans will use.”

Using two experimental platforms—Facewise for face-matching and AIID for automated baggage screening—Cooke’s team is testing how different combinations of decision-making sequences and explainable AI features impact outcomes. With support from CAOE and in partnership with Aptima, Inc., the project blends rigorous experimental design with real-world insight to identify what works—and what doesn’t—in high-pressure operational settings.

Teaming With Intelligence

CHIMERAS doesn’t view AI as a tool to replace humans, but as a teammate in a joint task. Rather than asking whether the AI gets the decision “right,” the research asks how the interaction between a person and an AI system evolves over time—and whether those interactions lead to stronger, more consistent results.

As Cohen explained at the project’s kickoff meeting, the team is also refining the underlying AI models to perform at or above human levels. In earlier studies, some models lagged behind Transportation Security Officers (TSOs), limiting the potential for productive collaboration. In CHIMERAS, those algorithms are being reworked to ensure the AI pulls its weight in the partnership.

The team will also evaluate how human-machine decision-making emerges as a system-level property, not just a sum of its parts. “We want to see whether these joint systems show different strengths and weaknesses than humans or AI systems alone,” Cooke said.

Dr. Cooke brings decades of leadership in both research and government advisory roles to CHIMERAS. As Senior Scientific Advisor to ASU’s Center for Human, AI, and Robot Teaming, she’s long advocated for technology designs that reflect how humans actually work—under stress, with limitations, and in collaboration.

“This kind of work is exciting because it contributes to foundational research in human-machine teaming, but also provides solutions to problems faced by TSA and other agencies,” she said.

It’s an ethos that runs through Cooke’s work and her advice to emerging researchers: “Identify a national security problem that attracts your interest and assemble a multidisciplinary team to address it.”

One of the driving forces behind CHIMERAS is Myke Cohen, a Human Systems Engineering PhD student whose leadership and vision helped bring the project to life. Cohen not only led the proposal for CHIMERAS but has taken the helm of the project itself—an experience that reflects both his academic commitment and future aspirations in academia. “Myke Cohen… has been integral to the CAOE work that our team (including Erin Chiou and Mickey Mancenido) has accomplished over the years,” said Cooke. “Starting in my lab in the fall of 2020, Myke worked remotely for the first year from his home in the Philippines due to the pandemic. He changed his sleep cycle to keep up with the Zoom meetings.” That early determination paid off. Since then, Cohen has published, presented at conferences—including international venues—and become a standout collaborator in Cooke’s lab. “His contributions and development as a PhD student have been exceptional,” she added. “We wish him the best in his career, but will miss him terribly when he graduates.”

Security Screening as a Test Case: From Research to Resilience

The screening tasks chosen for CHIMERAS are far from hypothetical. Face matching and baggage inspection are vital to Transportation Security Administration operations, and they reflect broader challenges facing decision-makers in high-stakes settings. These scenarios involve uncertainty, time constraints, and the need for rapid but reliable conclusions—conditions that are ripe for studying the dynamics of human-AI interdependence.

Rather than rely on retrospective data alone, the project gathers real-time performance metrics from both crowdsourced participants and expert users. Future phases will include in-person testing at airport field sites, with data collection designed to inform practical design recommendations for decision-support tools used by DHS personnel.

While the term “explainable AI” often implies that systems must justify their decisions to humans, CHIMERAS goes further, showing that how and when AI recommendations are delivered may matter just as much as what is said. The research acknowledges that humans don’t just need clarity—they need AI systems that fit into their workflows, match their pace, and adapt to their needs.

Dr. Cooke’s work is driven by the belief that AI should enhance—not complicate—human performance. “We can’t do this with one discipline,” she said. “Humans should be at the center. But we need people who are well versed in AI and robotics and technology in general, as well as the end user.”

With CHIMERAS, her team is setting the stage for smarter, more resilient screening environments—and, in the process, redefining what it means to work side-by-side with artificial intelligence.