Workshop on Metacognitive Prediction of AI Behavior

This event was sponsored by the Army Research Office and was held on November 13-15, 2023, in Scottsdale, AZ.

As artificial intelligence becomes more prevalent in military systems, improved characterization of such systems will, in turn, become important to ensure that they are safe and reliable in supporting the warfighter.  However, while AI systems, often built on supervised machine learning or reinforcement learning, have produced excellent results across a variety of applications, the reasons behind their failure modes and anomalous behavior are generally not well understood.  Metacognition, reasoning about an AI system itself, is a key avenue to understanding the behavior and performance of machine learning systems.  Recently, a variety of methodologies have been explored in the literature, including stress testing of robotic systems [1], model introspection [2], model certification [3], and performance prediction [4].  Moreover, researchers across multiple disciplines, including computer science, control theory, mechanical engineering, human factors, and business, have approached these problems from different angles.  The objectives of the workshop are as follows:

Specific topics to be covered include, but are not limited to:

Christian Lebiere (Carnegie Mellon University): An architectural approach to metacognition
Sergei Nirenburg (Rensselaer Polytechnic Institute): Mutual Trust in Human-AI Teams Relies on Metacognition
Hua Wei (Arizona State University): Trustworthy Decision Making in the Real World through Uncertainty Reasoning
Ufuk Topcu (University of Texas): Multi-Modal, Pre-Trained Models in Verifiable Sequential Decision-Making
Visar Berisha (Arizona State University): A Theoretically-Grounded Framework for Assured ML in High-Stakes Domains
Chandan Reddy (Virginia Tech): Bridging Symbolic and Numeric Paradigms: Unified Neuro-symbolic Models for Mathematical Understanding and Generation
Paulo Shakarian (Arizona State University): Metacognitive AI through Error Detection and Correction Rules
Nikhil Krishnaswamy (Colorado State University): Reasoning About Anomalous Object Interaction Using Plan Failure as a Metacognitive Trigger
Tianlong Chen (Massachusetts Institute of Technology): Metacognitive Intervention for Accountable LLMs through Sparsity
YooJung Choi (Arizona State University): Tractable Probabilistic Reasoning for Trustworthy AI


Yu Su (Ohio State University): Language agents: a critical evolutionary step of artificial intelligence
Soroush Vosoughi (Dartmouth College): An Overview of the Social and Factual Failures of Large Language Models
Swarat Chaudhuri (University of Texas): Symbolic Reasoning using Language Model Agents
Linyi Li (University of Illinois Urbana-Champaign): Towards certifiably trustworthy deep learning at scale
Yezhou Yang (Arizona State University): Robust and compositional concept grounding for lifelong agent (visual) learning
Leon Reznik (Rochester Institute of Technology): mLINK: Machine Learning Integration with Network and Knowledge
Michael Mahoney (University of California, Berkeley): Weight Analysis for Principled Diagnostic Analysis of Metacognitive Models
Gavin Strunk (Scientific Systems Company Inc.): Uncertainty Quantification's Role in Metacognition
Asim Roy (Arizona State University): From CNNs to Symbolic Explainable Models to Natural Protection Against Adversarial Attacks
Mark Riedl (Georgia Tech): Human-Centered Explainable AI
Robert Gutzwiller (Arizona State University): Bridging psychological metacognition into human-AI interactions


Sarath Sreedharan (Colorado State University): A Human-aware Approach to Metacognition
Taylor Johnson (Vanderbilt University): Metacognition in Autonomous Cyber-Physical Systems with Neural Network Verification, Repair, and Monitoring
Zhe Xu (Arizona State University): Interpretable and Data-Efficient Learning for Autonomous Systems
Ransalu Senanayake (Arizona State University): Interrogating Learning-based Autonomous Agents