
Research

CARE Lab


The CARE Lab (Cognition, Adaptation & Reasoning for Embodied AI) conducts research at the intersection of robotics, multimodal AI, and intelligent agent design. The lab focuses on how embodied systems perceive, reason, and interact in real-world environments, with students contributing through exploratory research in multimodal memory, adaptive dialogue, and socially aware robot control.

Ongoing Projects

Thrust 1: Intelligent Physics-Based and Embodied AI Systems

ARC — Autonomous Robotics Control & Driving

Building on my prior work in adaptive control and autonomous systems, ARC investigates automated and learning-based approaches for tuning PD/PID controllers on mobile robots. Using platforms such as piCar robots, Pololu 3pi+ robots, and Unitree quadruped robots, this project explores optimization, extremum-seeking control, and reinforcement learning to improve stability, energy efficiency, and navigation accuracy under real-world constraints.
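
As a rough, illustrative sketch only (not the project's actual codebase), the Python snippet below shows the shape of an automated gain-tuning loop: a discrete PID update, a simulated first-order plant standing in for a real robot, and a naive random search standing in for the extremum-seeking or reinforcement-learning tuners. All function names and the plant model are placeholders; on hardware, the cost would come from measured tracking error and energy use.

# Minimal illustrative sketch: automated PID gain tuning on a simulated plant.
import random

def pid_step(err, state, kp, ki, kd, dt):
    """One discrete PID update; state = (integral, previous error)."""
    integral, prev_err = state
    integral += err * dt
    deriv = (err - prev_err) / dt
    u = kp * err + ki * integral + kd * deriv
    return u, (integral, err)

def tracking_cost(kp, ki, kd, setpoint=1.0, dt=0.02, steps=500):
    """Simulate a first-order plant x' = -x + u and return integrated |error|."""
    x, state, cost = 0.0, (0.0, 0.0), 0.0
    for _ in range(steps):
        err = setpoint - x
        u, state = pid_step(err, state, kp, ki, kd, dt)
        x += (-x + u) * dt          # crude Euler integration of the plant
        cost += abs(err) * dt
    return cost

def tune_gains(iters=200, seed=0):
    """Naive random-search tuner: keep the best (kp, ki, kd) found so far."""
    rng = random.Random(seed)
    best = (1.0, 0.0, 0.0)
    best_cost = tracking_cost(*best)
    for _ in range(iters):
        cand = tuple(max(0.0, g + rng.gauss(0, 0.3)) for g in best)
        c = tracking_cost(*cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost

if __name__ == "__main__":
    gains, cost = tune_gains()
    print("tuned (kp, ki, kd):", gains, "cost:", round(cost, 4))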

Figure: piCar robot

Figure: robot dog (Unitree quadruped)

NEXUS — Next‑Generation Embodied User‑Centric Systems

NEXUS advances modular embodied AI architectures that integrate perception, planning, and safe human interaction, personalizing a humanoid robot, the 1X Neo platform, to stakeholders' needs and preferences. The work emphasizes reproducible system design, simulation-to-hardware transfer, and rigorous evaluation, extending my earlier embodied robotics research into public-facing and service-robot contexts.

Figure: 1X Neo robots

Thrust 2: Agentic and Multi‑Expert AI Systems

This thrust extends my prior work in multimodal social robotics and LLM-integrated systems to agentic AI architectures that combine foundation models with planning, real-time reasoning, and persistent memory.

MENTOR-AI — Multi-Expert Neural Teaching Orchestrator for Reasoning-Driven AI

MENTOR-AI is an AI-powered instructional agent, optionally embodied as a virtual avatar, designed to support interactive teaching and question answering in computer science courses. The system integrates LLMs with retrieval-augmented generation, structured course content (e.g., zyBooks), and multimodal interfaces to explain concepts, answer student questions, and adapt responses based on each student's interaction history. MENTOR-AI enables research on multi-expert reasoning, dialogue, and trustworthy deployment of agentic AI in real educational settings.
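
For intuition only, the sketch below outlines a bare-bones retrieval-augmented answering loop of the kind MENTOR-AI builds on; the keyword-overlap retriever, the toy course snippets, and the call_llm placeholder are stand-ins for the actual embedding models, course corpus (e.g., zyBooks content), and deployed LLM.

# Minimal retrieval-augmented generation (RAG) sketch for an instructional agent.
# Illustrative only: retrieval is bag-of-words cosine similarity, and call_llm
# is a hypothetical placeholder for whatever model the real system uses.
import math
from collections import Counter

COURSE_CHUNKS = [
    "A stack is a LIFO data structure supporting push and pop in O(1).",
    "A queue is a FIFO structure; elements are enqueued at the back.",
    "Binary search runs in O(log n) on a sorted array.",
]

def bow(text):
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(question, k=2):
    q = bow(question)
    ranked = sorted(COURSE_CHUNKS, key=lambda c: cosine(q, bow(c)), reverse=True)
    return ranked[:k]

def call_llm(prompt):
    # Placeholder: a real deployment would call the hosted LLM here.
    return f"[LLM answer grounded in]\n{prompt}"

def answer(question, history=()):
    context = "\n".join(retrieve(question))
    prompt = (
        "Use the course material below to answer the student.\n"
        f"Material:\n{context}\n"
        f"Recent interactions: {list(history)}\n"
        f"Question: {question}"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    print(answer("What is the running time of binary search?"))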


Thrust 3: Multimodal and Generative AI for Human‑Centered Applications

MEDI‑CARE — Medical AI Assistant for Post‑Treatment Follow‑Up

MEDI‑CARE explores explainable, privacy‑aware LLM‑based assistants that support patients after clinical procedures, extending my earlier healthcare robotics research into conversational and decision‑support AI.

Figure: model selected for MEDI-CARE

CAST‑AI — Cognitive Agents for Social Media and Trending News

CAST‑AI investigates generative, multimodal agents for embodied and virtual news delivery, examining how expressive AI communication affects trust, engagement, and information dissemination in socio‑technical systems.

Figure: AI system selected for CAST-AI

Thrust 4: AI‑Driven Learning, Gamification, and Workforce Development

PLAY-AI — Personalized Learning AI through Gamification

PLAY-AI studies gamified learning systems that integrate AI-driven assessment and adaptive feedback into interactive gameplay. This work directly connects AI research with pedagogy and inclusive student engagement.

Figure: Jackbox Trivia Murder Party (strategy game theme)

JSBC-AI

A central line of collaboration has been the James Silberrad Brown Center for Artificial Intelligence (JSBC-AI), a pioneering interdisciplinary hub for applied and socially grounded AI research at SDSU.

Through this center, I worked closely with faculty in the MIS department on robotics and socially intelligent AI systems. In this context, I recruited students for the center and co-advised undergraduate and master’s students on projects involving multimodal AI pipelines, human–robot interaction, and socially grounded data workflows.

The collaborative environment fostered by JSBC-AI directly shaped my research trajectory at SDSU by accelerating interdisciplinary exchange, student research engagement, and the translation of embodied AI research into applied socio-technical contexts. These efforts resulted in peer-reviewed publications in leading information systems venues such as ICIS and HICSS, and provided students with hands-on experience spanning system design, implementation, evaluation, and dissemination.

Figure: AI robot developed by SDSU researchers to help people with mental health concerns

Data Privacy

with Dr. Hajar Homayouni

I collaborated with faculty in the Department of Computer Science, including Dr. Hajar Homayouni, on projects focused on federated learning, privacy-preserving AI, and secure AI infrastructure. Through this work, I co-advised student teams developing federated and multimodal generative frameworks for sensitive domains such as healthcare, producing peer-reviewed publications in security and software quality venues, including QRS and NDSS. These projects emphasized reproducible experimentation, privacy evaluation, and rigorous validation, further extending my prior work on responsible and trustworthy AI systems.
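
As a simplified illustration of the federated setup (not the frameworks developed in these projects), the sketch below runs federated averaging for a one-parameter linear model: each client computes a local update on its own data, and only the resulting weights, never the raw records, are aggregated by the server. All data and function names here are hypothetical.

# Minimal federated averaging (FedAvg) sketch: clients keep raw data local and
# share only model weights with the server. Illustrative only.
def local_sgd(weights, data, lr=0.1, epochs=5):
    """One client's local update for a 1-D linear model y = w*x (squared loss)."""
    w = weights
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

def fedavg_round(global_w, client_datasets):
    """Server aggregates client updates weighted by local dataset size."""
    total = sum(len(d) for d in client_datasets)
    updates = [(local_sgd(global_w, d), len(d)) for d in client_datasets]
    return sum(w * n for w, n in updates) / total

if __name__ == "__main__":
    # Raw data never leaves the clients; only scalar weights are exchanged.
    clients = [
        [(1.0, 2.1), (2.0, 3.9)],   # client A (local data, y roughly 2x)
        [(1.5, 3.2), (3.0, 6.1)],   # client B
    ]
    w = 0.0
    for _ in range(10):
        w = fedavg_round(w, clients)
    print("global weight after 10 rounds:", round(w, 3))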
