CAOE Seminar Series: Perils and Promises of Artificial Intelligence for the Homeland Security Enterprise


Artificial Intelligence (AI) has taken the world by storm with applications in self-driving cars, medical diagnostics, biometric analysis, and more. However, the use of AI is not without its own drawbacks. As such, it is critical to understand the current state of the art in AI, where the field is going, and what can (or cannot) currently be accomplished with Artificial Intelligence. In this seminar series, we invite AI luminaries to discuss current advances in Artificial Intelligence and offer insights into the perils and promises of such technology for the Homeland Security Enterprise.

Upcoming Seminars

AI’s Challenge of Understanding the World

Speaker: Melanie Mitchell, Professor at the Santa Fe Institute
Date: October 23, 2024
Time: 12:00 PM (EDT) / 9:00 AM (MST)

Abstract:
The AI research community is deeply divided on whether current AI systems genuinely “understand” language and the physical or social contexts it represents. In this seminar, Professor Melanie Mitchell will survey the ongoing debate surrounding AI’s ability to understand. She will explore what constitutes humanlike understanding and discuss methodologies for evaluating AI’s understanding and intelligence.

Speaker Bio: Melanie Mitchell is a Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction and analogy-making in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her 2009 book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award, and her 2019 book Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus and Giroux) was shortlisted for the 2023 Cosmos Prize for Scientific Writing.

Past Seminars

Natural Language Processing in the Age of Generative Artificial Intelligence: Challenges, Risks, and Opportunities

Speaker: Dr. Bonnie Dorr, Professor and Director of the Natural Language Processing & Culture (NLP&C) Laboratory
Date: October 4, 2024
Time: 12:00 PM (EDT) / 9:00 AM (MST)

Abstract:
This talk presents challenges, risks, and opportunities for Natural Language Processing (NLP) applications, focusing on the future of NLP in the age of Generative Artificial Intelligence (GenAI). A case is made for moving beyond “just the words of the language” to support more reliable and transparent output, i.e., representation-driven NLP that adopts the power of GenAI but incorporates explanatory internal structures to capture principles that apply across human languages. Hybrid approaches of this kind combine linguistic generalizations with statistical and neural models to handle implicitly conveyed information (e.g., beliefs and intentions), while supporting “explainable” outputs that enable end users to understand how and why an AI system produces an answer. Representative examples of GenAI output are provided to illustrate areas where more exploration is needed, particularly with respect to task-specific goals.

Speaker Bio: Bonnie J. Dorr is a Professor in the Department of Computer and Information Science and Engineering at the University of Florida, where she directs the Natural Language Processing & Culture (NLP&C) Laboratory. She is also an affiliate of the Florida Institute for National Security, a former program manager of DARPA’s Human Language Technology programs, and Professor Emerita at the University of Maryland. Dorr is a recognized leader in artificial intelligence and natural language processing, specializing in machine translation and cyber-aware language processing. Her research explores neural-symbolic approaches for accuracy, robustness, and explainable outputs. Applications include cyber-event extraction for detecting and mitigating attacks, detecting influence campaigns, and building interpretable models. She is an NSF PECASE recipient, a Sloan Fellow, and a Fellow of AAAI, ACL, and ACM.

Thinking Clearly About AI Capabilities and Risks

Speaker: Arvind Narayanan, Professor at Princeton University and Director of the Center for Information Technology Policy
Date: September 4, 2024
Time: 12:00 PM (EDT) / 9:00 AM (MST)

Abstract:
AI holds both exciting potential and concerning risks. However, the capabilities and dangers of AI are often overstated. In this seminar, Professor Arvind Narayanan will address the significant and growing gap between AI performance on benchmarks and performance in real-world applications. He will discuss how, although scaling has improved AI models, this trend may not continue indefinitely. Furthermore, he will argue that sci-fi scenarios of AI risk are not grounded in reality and will explore the effective strategies we already have for defending against AI misuses, such as disinformation and bioterrorism.

Speaker Bio: Arvind Narayanan is a professor of computer science at Princeton University and the director of the Center for Information Technology Policy. He is a co-author of the book AI Snake Oil and a newsletter of the same name, widely read by researchers, policymakers, journalists, and AI enthusiasts. He previously co-authored two widely used computer science textbooks: Bitcoin and Cryptocurrency Technologies and Fairness in Machine Learning. He led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. His work was among the first to show how machine learning reflects cultural stereotypes. Narayanan is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE).