Orchestrating LLMs for Complex Financial Reasoning with Multi-Agentic Workflow

Dr. Sambit Sahu (Capital One, VP of AI Foundations)・Dr. Shi‑Xiong (Austin) Zhang (Capital One, Sr. Director, LLM Core & Agentic AI)

ABSTRACT

This talk highlights the forefront of Artificial Intelligence and Machine Learning in the financial services industry. We will introduce MACAW—the Multi-Agent Conversational Assistant Workflow—an LLM-based framework we developed to build conversational assistants capable of both answering complex questions and executing actions on behalf of users. MACAW leverages self-reflection, planning, and precise API generation to meet user needs. Within Financial Services, MACAW has been integrated into customer-facing applications, most notably the Customer Assist platform. Here, it fundamentally redefines customer interaction—moving beyond simple dialogue handling to delivering dynamic, API-grounded, business-logic-aware solutions that continuously learn and adapt.
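
As a rough, non-authoritative illustration of the plan, act, and self-reflect loop described in the abstract, the sketch below shows one way a multi-agent conversational assistant might decompose a user request into grounded API calls and gate each result with a reflection check. All names here (plan_steps, reflect, TOOL_REGISTRY, and the tools themselves) are hypothetical placeholders for illustration, not MACAW's actual interfaces.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical sketch of a plan -> API call -> execute -> self-reflect loop.
# Every name below is illustrative; MACAW's real components are not public.

@dataclass
class Step:
    tool: str             # name of the API/tool to invoke
    args: Dict[str, str]  # arguments proposed by the planner
    result: str = ""      # filled in after execution

def plan_steps(user_request: str) -> List[Step]:
    """Stand-in for an LLM planner that decomposes a request into tool calls."""
    # In practice an LLM would produce this plan; it is hard-coded here for illustration.
    return [
        Step(tool="get_balance", args={"account": "checking"}),
        Step(tool="schedule_payment", args={"amount": "120.00", "payee": "electric"}),
    ]

def reflect(step: Step) -> bool:
    """Stand-in for LLM self-reflection: accept or reject a step's result."""
    return "error" not in step.result.lower()

# Illustrative tool registry that grounds the agent in concrete APIs.
TOOL_REGISTRY: Dict[str, Callable[..., str]] = {
    "get_balance": lambda account: f"balance for {account}: $2,340.12",
    "schedule_payment": lambda amount, payee: f"scheduled ${amount} to {payee}",
}

def run_workflow(user_request: str) -> List[str]:
    """Plan the request, execute each grounded API call, and keep only reflected-on results."""
    transcript: List[str] = []
    for step in plan_steps(user_request):
        step.result = TOOL_REGISTRY[step.tool](**step.args)  # execute the grounded API call
        if reflect(step):                                    # self-reflection gate
            transcript.append(step.result)
        else:
            transcript.append(f"retry needed for {step.tool}")
    return transcript

if __name__ == "__main__":
    for line in run_workflow("Pay my electric bill if I have enough money"):
        print(line)
```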

Dr. Sambit Sahu

VP of AI Foundations, Capital One

Dr. Sambit Sahu is the VP of AI Foundations at Capital One, where he leads the research and development of Large Language Models and multi-agent reasoning to solve complex business challenges. Prior to his current role, he was a Senior Engineering Manager at Alexa AI at Amazon, where he led efforts to address scaling issues in LLM pre-training and the Alexa runtime. Dr. Sahu also worked as a research scientist and research manager at IBM's Watson Research Center and is an adjunct associate professor of Computer Science at Columbia University. His past accomplishments include creating a mobility analytics platform at IBM that led to smarter transit implementations in several cities. He holds a PhD in Communication Networks and Distributed Systems from the University of Massachusetts at Amherst.

Dr. Shi‑Xiong (Austin) Zhang

Sr. Director, LLM Core & Agentic AI, Capital One

Dr. Shi-Xiong (Austin) Zhang is the Sr. Director of the LLM Core & Agentic AI team at Capital One, where he leads LLM pretraining and architecture research. Prior to his current role, he was a Principal Researcher and lead of the multi-modal LLM team at Tencent AI Lab. Austin Zhang obtained his PhD from the University of Cambridge in 2014. Following his PhD studies, he served as a Sr. Speech Scientist at Microsoft, where he built Microsoft speech recognition models serving billions of customers in 200+ languages. Dr. Zhang's scholarly contributions were honored with a Best Paper Award at Interspeech, and he received the "IC Greatness Award" at Microsoft for his crucial role in developing the "Personalized Hey Cortana" system for Windows 10. He frequently gives keynotes and tutorials and has served as an Area Chair (ICASSP, Interspeech, ASRU) and as an elected co-chair of the IEEE Speech and Language Technical Committee (SLTC).