CAOE Seminar Series: Perils and Promises of Artificial Intelligence for the Homeland Security Enterprise


Artificial Intelligence (AI) has taken the world by storm, with applications in self-driving cars, medical diagnostics, biometric analysis, and more. The technology is not without drawbacks, however, so it is critical to understand the current state of the art in AI, where the field is heading, and what can (and cannot) currently be accomplished with it. In this seminar series, we invite AI luminaries to discuss current advances in the field and to offer insights into the perils and promises of this technology for the Homeland Security Enterprise.

Upcoming Seminars

Check Back Soon!

Past Seminars

The Future of Discovery Assistance

Speaker: Dr. Ed H. Chi, VP of Research at Google DeepMind
Date: Wednesday, December 11, 2024
Time: 12:00 PM (EST) | 10:00 AM (MST) | 9:00 AM (PST)

Abstract:

Our field has shifted from traditional machine learning techniques, mostly based on pattern recognition, to sequence-to-sequence models. The future of universal personal assistance for discovery and learning is upon us. How will the multimodal image, video, and audio understanding and the reasoning abilities of large foundation models change how we build these systems? I will shed some initial light on this topic by discussing three trends: first, the move to a single multimodal large model with reasoning abilities; second, fundamental research on personalization and user alignment; third, the combination of System 1 and System 2 cognitive abilities into a single universal assistant.

Speaker Bio: Dr. Ed H. Chi is VP of Research at Google DeepMind, leading machine learning research teams working on large language models (from LaMDA to the launch of Bard/Gemini) and neural recommendation agents. With 39 patents and roughly 200 research articles, he is also known for his research on user behavior in web and social media. As the Research Platform Lead, he helped launch Bard/Gemini, a conversational AI experiment. His research has also delivered significant improvements to YouTube, News, Ads, and the Google Play Store, with more than 930 product landings and roughly $9B in annual revenue since 2013.

Prior to Google, he was Area Manager and Principal Scientist at Xerox Palo Alto Research Center’s Augmented Social Cognition Group, where he researched how social computing systems help groups of people remember, think, and reason. Ed earned his three degrees (B.S., M.S., and Ph.D.) in 6.5 years from the University of Minnesota. Inducted as an ACM Fellow and into the CHI Academy, he has also received a 20-year Test of Time award for research in information visualization. He has been featured and quoted in the press, including The Economist, Time Magazine, the LA Times, and the Associated Press. An avid golfer, swimmer, photographer, and snowboarder in his spare time, he also holds a black belt in Taekwondo.

Can Chatbots Help Us Better Understand People?

Speaker: Kristina Lerman, Senior Principal Scientist at the Information Sciences Institute and Research Professor in the USC Viterbi School of Engineering’s Computer Science Department
Date: November 21, 2024
Time: 12:00 PM (EST) | 10:00 AM (MST) | 9:00 AM (PT)

Abstract:
Large Language Models powering modern chatbots have demonstrated impressive capabilities in conversational AI and task-solving. However, researchers also worry about the risks of harm, such as propagating historical biases embedded in the data used to train chatbots. This talk offers a novel perspective: rather than treating bias as a risk to be minimized or eliminated, I show how to leverage it to gain insights into people. First, I demonstrate that it is possible to skew a chatbot’s ideological perspective on a given topic with minimal training data. Surprisingly, altering its perspective on one topic makes the chatbot sound like a partisan on other topics as well. Next, I show that this manipulation strategy can be used to align chatbots to human populations, effectively creating “digital replicas” that mirror their language, style, and biases. These digital replicas can be “surveyed” by prompting population-aligned chatbots to reveal the perspectives, beliefs, and even psychological states of diverse populations. While this work opens new possibilities for studying human behavior and social dynamics at scale, it also raises critical questions about fidelity, validation, and the ethical development of AI systems capable of emulating human perspectives.
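
To make the idea of “surveying” a digital replica more concrete, the Python sketch below (emphatically not Dr. Lerman’s method or code) prompts a persona-conditioned chatbot with survey questions and records its answers. The persona text, question list, and model name are hypothetical placeholders, and the OpenAI client is used only as a familiar example interface.

    # Illustrative sketch only: "surveying" a persona-conditioned chatbot.
    # This is not Dr. Lerman's method or code; the persona, questions, and
    # model name are hypothetical placeholders.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # System prompt standing in for a population-aligned "digital replica".
    persona = (
        "Answer as a typical member of a particular demographic group "
        "(placeholder for a model fine-tuned to mirror that population)."
    )

    questions = [
        "On a scale of 1-5, how much do you trust national news media?",
        "How optimistic are you about the economy over the next year?",
    ]

    # "Survey" the replica by prompting it and recording its answers.
    for question in questions:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": persona},
                {"role": "user", "content": question},
            ],
        )
        print(question)
        print("->", response.choices[0].message.content)

A system prompt is only the simplest stand-in here; the abstract describes alignment via minimal training data, which would replace the persona prompt with a fine-tuned model, and which is what raises the fidelity and validation questions the abstract mentions.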

Speaker Bio: Kristina Lerman is a Principal Scientist at the Information Sciences Institute, a unit of the University of Southern California (USC), and a Research Professor in the USC Viterbi School of Engineering’s Computer Science Department. An expert in complex multi-agent systems, Dr. Lerman has had her research on social data and other topics funded by the National Science Foundation (NSF), the Defense Advanced Research Projects Agency (DARPA), the Air Force Office of Scientific Research (AFOSR), and the Army Research Office (ARO). Her current work revolves around deciphering the structure and dynamics of social media and crowdsourcing platforms such as Twitter, Reddit, and Stack Exchange. Among her goals: understanding the role of networks and platforms’ content curation algorithms in shaping collective behavior, discovering the structure of user-generated communities, predicting emerging trends and group behavior, and identifying the role of cognitive constraints in online interactions. Her projects explore network-based and machine learning-based approaches to harvest concept hierarchies from social metadata and automate semantic annotation on the social web. Dr. Lerman’s research includes statistical text analysis, semantic modeling of data, and mathematical modeling of multi-agent systems, as well as social networks and social computing. She has published a variety of journal articles, book chapters, and refereed conference and workshop proceedings, and has written numerous workshop papers and technical reports. She also holds a patent involving document-based data extraction. Dr. Lerman teaches a course on social media analytics in the USC Computer Science Department. She briefly worked for a tech startup prior to joining ISI. She earned her Bachelor of Arts in physics from Princeton University and her Ph.D. in physics from the University of California, Santa Barbara.

AI’s Challenge of Understanding the World

Speaker: Melanie Mitchell, Professor at the Santa Fe Institute
Date: October 23, 2024
Time: 12:00 PM (EDT) | 9:00 AM (MST)

Abstract:
The AI research community is deeply divided on whether current AI systems genuinely “understand” language and the physical or social contexts it represents. In this seminar, Professor Melanie Mitchell will survey the ongoing debate surrounding AI’s ability to understand. She will explore what constitutes humanlike understanding and discuss methodologies for evaluating AI’s understanding and intelligence.

Speaker Bio: Melanie Mitchell is a Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction and analogy-making in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her 2009 book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award, and her 2019 book Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus and Giroux) was shortlisted for the 2023 Cosmos Prize for Scientific Writing.

Natural Language Processing in the Age of Generative Artificial Intelligence: Challenges, Risks, and Opportunities

Speaker: Dr. Bonnie Dorr, Professor at the University of Florida and Director of the Natural Language Processing & Culture (NLP&C) Laboratory
Date: October 4, 2024
Time: 12:00 PM (EDT) | 9:00 AM (MST)

Abstract:
This talk presents challenges, risks, and opportunities for Natural Language Processing (NLP) applications, focusing on the future of NLP in the age of Generative Artificial Intelligence (GenAI). A case is made for moving beyond “just the words of the language” to support more reliable and transparent output, i.e., representation-driven NLP that adopts the power of GenAI but incorporates explanatory internal structures to capture principles that apply across human languages. By adopting hybrid approaches, it is possible to combine linguistic generalizations with statistical and neural models to handle implicitly conveyed information (e.g., beliefs and intentions) while supporting “explainable” outputs that enable end users to understand how and why an AI system produces an answer. Representative examples of GenAI output are provided to illustrate areas where more exploration is needed, particularly with respect to task-specific goals.
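
As a loose illustration of what an explainable, hybrid pipeline could look like (a sketch under our own assumptions, not Dr. Dorr’s system), the Python snippet below passes a stubbed generative model’s output through an explicit rule layer that flags hedged language conveying beliefs or possibilities, attaching human-readable explanations to the answer. Every name in it is hypothetical.

    # Loose sketch of a hybrid, representation-driven pipeline: a stubbed
    # generative model's output passes through an explicit rule layer that
    # records which checks fired, yielding an explainable answer. All names
    # are hypothetical; this is not Dr. Dorr's system.
    from dataclasses import dataclass, field

    @dataclass
    class ExplainedAnswer:
        text: str
        explanations: list[str] = field(default_factory=list)

    def generative_model(prompt: str) -> str:
        """Stand-in for a GenAI call; a real system would query an LLM."""
        return "The committee believes the deadline may slip to Friday"

    def hybrid_answer(prompt: str) -> ExplainedAnswer:
        answer = ExplainedAnswer(text=generative_model(prompt))
        # Explicit, human-readable rules playing the role of linguistic
        # generalizations layered on top of the neural output.
        if "believes" in answer.text or "may" in answer.text:
            answer.explanations.append(
                "Hedged language detected: the output conveys a belief or "
                "possibility, not an asserted fact."
            )
        if not answer.text.endswith("."):
            answer.explanations.append("Output is not a complete sentence.")
        return answer

    result = hybrid_answer("When is the deadline?")
    print(result.text)
    for note in result.explanations:
        print("why:", note)

The point of the toy rule layer is that each check is inspectable, so the “why” of an answer can be surfaced alongside the answer itself rather than buried in model weights.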

Speaker Bio: Bonnie J. Dorr is a Professor in the Department of Computer and Information Science and Engineering at the University of Florida, where she directs the Natural Language Processing & Culture (NLP&C) Laboratory. She is also an affiliate of the Florida Institute for National Security, a former program manager of DARPA’s Human Language Technology programs, and Professor Emerita at the University of Maryland. Dorr is a recognized leader in artificial intelligence and natural language processing, specializing in machine translation and cyber-aware language processing. Her research explores neural-symbolic approaches for accuracy, robustness, and explainable outputs. Applications include cyber-event extraction for detecting and mitigating attacks, detecting influence campaigns, and building interpretable models. She is an NSF PECASE recipient, a Sloan Fellow, and a Fellow of AAAI, ACL, and ACM.

Thinking Clearly About AI Capabilities and Risks

Speaker: Arvind Narayanan, Professor at Princeton University and Director of the Center for Information Technology Policy
Date: September 4, 2024
Time: 12:00 PM (EDT) | 9:00 AM (MST)

Abstract:
AI holds both exciting potential and concerning risks. However, AI’s capabilities and dangers are often overstated. In this seminar, Professor Arvind Narayanan will address the significant and growing gap between AI performance on benchmarks and in real-world applications. He will discuss how, although scaling has improved AI models, this trend may not continue indefinitely. Furthermore, he will argue that sci-fi scenarios of AI risk are not grounded in reality, and he will explore the effective strategies we already have for defending against AI misuses such as disinformation and bioterrorism.

Speaker Bio: Arvind Narayanan is a professor of computer science at Princeton University and the director of the Center for Information Technology Policy. He is a co-author of the book AI Snake Oil and a newsletter of the same name, widely read by researchers, policymakers, journalists, and AI enthusiasts. He previously co-authored two widely used computer science textbooks: Bitcoin and Cryptocurrency Technologies and Fairness in Machine Learning. He led the Princeton Web Transparency and Accountability Project, which uncovered how companies collect and use our personal information. His work was among the first to show how machine learning reflects cultural stereotypes. Narayanan is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE).