1st Workshop on Metacognitive Prediction of AI Behavior (METACOG-23)
This event was sponsored by the Army Research Office and held on Nov. 13-15, 2023 in Scottsdale, AZ.
As artificial intelligence becomes more prevalent in military systems, improved characterization of such systems will, in turn, become important to ensuring that they are safe and reliable in supporting the warfighter. However, while AI systems, often built on supervised machine learning or reinforcement learning, have produced excellent results across a variety of applications, the reasons behind their failure modes or anomalous behavior are generally not well understood. Metacognition, reasoning about an AI system itself, is a key avenue to understanding the behavior and performance of machine learning systems. Recently, a variety of methodologies have been explored in the literature, including stress testing of robotic systems [1], model introspection [2], model certification [3], and performance prediction [4]. Moreover, researchers across multiple disciplines, including computer science, control theory, mechanical engineering, human factors, and business, have explored these problems from different angles. The objectives of the workshop were as follows:
- Create a taxonomy of various approaches to metacognition of AI systems
- Understand the requirements for various metacognitive approaches
- Summarize recent results obtained in the study of AI metacognition
- Enumerate current applications for which AI metacognitive techniques have been applied
- Understand the relationship between AI metacognition and human operators
Specific topics to be covered include, but are not limited to:
- Explainable performance prediction of black-box AI systems
- Stress testing of reinforcement learning systems
- Using metacognition to increase operator trust in AI systems
- Applications of AI metacognition to robotic and vision systems
Invited talks:

| Speaker | Affiliation | Talk Title |
| --- | --- | --- |
| Yu Su | Ohio State University | Language agents: a critical evolutionary step of artificial intelligence |
| Soroush Vosoughi | Dartmouth College | An Overview of the Social and Factual Failures of Large Language Models |
| Swarat Chaudhuri | University of Texas | Symbolic Reasoning using Language Model Agents |
| Linyi Li | University of Illinois Urbana-Champaign | Towards certifiably trustworthy deep learning at scale |
| Yezhou Yang | Arizona State University | Robust and compositional concept grounding for lifelong agent (visual) learning |
| Leon Reznik | Rochester Institute of Technology | mLINK: Machine Learning Integration with Network and Knowledge |
| Michael Mahoney | University of California, Berkeley | Weight Analysis for Principled Diagnostic Analysis of Metacognitive Models |
| Gavin Strunk | Scientific Systems Company Inc. | Uncertainty Quantification's Role in Metacognition |
| Asim Roy | Arizona State University | From CNNs to Symbolic Explainable Models to Natural Protection Against Adversarial Attacks |
| Mark Riedl | Georgia Tech | Human-Centered Explainable AI |
| Robert Gutzwiller | Arizona State University | Bridging psychological metacognition into human-AI interactions |
| Sarath Sreedharan | Colorado State University | A Human-aware Approach to Metacognition |
| Taylor Johnson | Vanderbilt University | Metacognition in Autonomous Cyber-Physical Systems with Neural Network Verification, Repair, and Monitoring |
| Zhe Xu | Arizona State University | Interpretable and Data-Efficient Learning for Autonomous Systems |
| Ransalu Senanayake | Arizona State University | Interrogating Learning-based Autonomous Agents |