
Adaptive Human–AI Systems Design

I believe the future of Agentic AI lies not simply in stronger models, but in proactive, context-aware systems capable of representing human knowledge, beliefs, intentions, and norms so they can anticipate coordination breakdowns and adapt before performance degrades. Effective human–AI collaboration requires more than responsiveness; it requires systems that understand interaction dynamics as they unfold.


My work approaches human–AI collaboration as a dynamic system. Rather than optimizing humans and AI in isolation, I model the joint human–AI interaction state, examining how trust, coordination, workload, and resilience evolve over time. Using dynamical systems theory, information-theoretic approaches, and advanced multilevel and longitudinal modeling, I design real-time measurement frameworks intended for integration within agentic architectures. These frameworks specify how AI systems can monitor and interpret interaction dynamics in complex, high-stakes environments.
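As a toy illustration of what one such information-theoretic measure might look like (the function names, the histogram mutual-information estimator, and the synthetic signals below are all illustrative, not a description of any deployed pipeline), a rolling coordination score can be computed over paired human and AI time series:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram (plug-in) estimate of mutual information I(X;Y) in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over X
    py = pxy.sum(axis=0, keepdims=True)   # marginal over Y
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def rolling_coordination(human, ai, window=50):
    """Score each sliding window of the paired human/AI signals."""
    return [
        mutual_information(human[t:t + window], ai[t:t + window])
        for t in range(len(human) - window + 1)
    ]

# Synthetic example: an AI signal that noisily tracks the human signal.
rng = np.random.default_rng(0)
human = rng.normal(size=300)
ai = human + rng.normal(scale=0.5, size=300)
scores = rolling_coordination(human, ai)
```

A real framework would replace the synthetic signals with behavioral or psychophysiological streams and correct the estimator for small-sample bias; the point here is only the shape of the computation, a windowed statistic over the joint interaction state rather than over either agent alone.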


Grounded in behavioral, communication, and psychophysiological data collected through experimental and field studies, my work develops measures that detect emerging misalignment, overload, or coordination breakdown before failure occurs. The goal is not evaluation alone but architectural specification, defining the coordination and performance metrics that agentic systems should embed to adapt intelligently within real-world workflows.
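A minimal sketch of the "detect degradation before failure" idea, assuming a streaming coordination score like the one above (the EWMA baseline, thresholds, and class name are illustrative choices, not my validated detector), is a drift monitor that alarms on sustained drops below a learned baseline:

```python
class DriftMonitor:
    """Flag sustained degradation in a streaming coordination score.

    Maintains an exponentially weighted moving average (EWMA) baseline
    and raises an alarm when the score stays below (mean - k * sigma)
    for `patience` consecutive steps, i.e. before a hard failure.
    """

    def __init__(self, alpha=0.1, k=2.0, patience=3):
        self.alpha, self.k, self.patience = alpha, k, patience
        self.mean = None
        self.var = 0.0
        self.streak = 0

    def update(self, score):
        if self.mean is None:        # first observation seeds the baseline
            self.mean = score
            return False
        below = score < self.mean - self.k * self.var ** 0.5
        self.streak = self.streak + 1 if below else 0
        # Update the EWMA mean/variance after testing, so each score is
        # compared against the pre-update baseline.
        delta = score - self.mean
        self.mean += self.alpha * delta
        self.var = (1 - self.alpha) * (self.var + self.alpha * delta * delta)
        return self.streak >= self.patience
```

Because the baseline itself adapts, the alarm fires on the transition into degraded coordination rather than on a fixed absolute threshold, which is the property an embedded agentic monitor needs in nonstationary workflows.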


My research spans both reductionist and systems-based approaches. I use controlled experimental methods to isolate mechanisms of trust, coordination, and performance when precision is required. At the same time, I conduct context-rich experimental and field studies to capture how these mechanisms operate within dynamic, interdependent teaming environments. I view reductionist and systems approaches as complementary rather than competing, each necessary for designing agentic systems that function reliably in the real world.


I operate at what I consider the interaction architecture layer of agentic systems, specifying how AI agents should monitor, interpret, and recalibrate in response to evolving human states. As autonomy increases, preserving effective human–AI coordination is not a usability feature. It is a systems requirement.
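The monitor-interpret-recalibrate loop at this layer can be caricatured as a policy mapping monitored human state to an interaction mode. The thresholds, state fields, and modes below are hypothetical placeholders; a deployed policy would be learned and validated against the coordination metrics described above:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()   # agent acts, reports afterwards
    SUPERVISED = auto()   # agent proposes, human approves
    MANUAL = auto()       # agent assists only on request

@dataclass
class HumanState:
    trust: float      # 0..1, e.g. estimated from calibration probes
    workload: float   # 0..1, e.g. estimated from task tempo or physiology

def recalibrate(state: HumanState) -> Mode:
    """Map the monitored human state to an interaction mode (toy rules)."""
    if state.trust < 0.3:
        return Mode.MANUAL        # low trust: hand control back to the human
    if state.workload > 0.8:
        return Mode.AUTONOMOUS    # overload: take routine work off the human
    return Mode.SUPERVISED        # default: keep the human in the loop
```

Even in this caricature, the architectural point holds: the agent's level of autonomy is an output of continuous state monitoring, not a fixed configuration.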

All rights reserved © 2025 Matthew Scalia