Multi-modal AI Lab

In collaboration with the AI for Engineering Sciences course of the University of Oxford

MULTI-MODAL AI & EMBODIED AGENTS

at the intersection of Digital Twins, VR simulations, Generative AI, and Cognitive Robotics

We pioneer next-generation solutions that combine language, vision, sensor data, and simulation to enhance safety and decision-making in complex, high-risk environments. From construction to utilities and industrial operations, we build intelligent systems that understand and model real-world conditions, helping organizations reduce risk, train more effectively, and act with confidence.

We automate the creation of domain-specific knowledge models—such as digital knowledge graphs, training content, and procedural checklists—directly from existing documentation, expert input, and on-site imagery. These structured representations power downstream services like VR-based simulations, real-time monitoring assistants, and learning copilots, ensuring that knowledge flows dynamically to where it’s needed most.
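To make the idea concrete, here is a minimal sketch, in Python, of how procedural triples extracted from documentation could be stored as a small knowledge graph and compiled into an ordered checklist. The task names, the `precedes` and `requires` predicates, and the `checklist` function are hypothetical illustrations for this sketch, not our production pipeline.

```python
from collections import deque
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    """A single knowledge-graph edge: (subject, predicate, object)."""
    subject: str
    predicate: str
    obj: str

# Hypothetical triples of the kind that might be extracted from
# maintenance documentation or expert input.
triples = [
    Triple("isolate_power", "precedes", "open_panel"),
    Triple("open_panel", "precedes", "inspect_wiring"),
    Triple("isolate_power", "requires", "lockout_tagout_kit"),
    Triple("inspect_wiring", "requires", "insulated_gloves"),
]

def checklist(triples):
    """Order tasks with Kahn's topological sort over 'precedes' edges,
    attaching required equipment from 'requires' edges to each step."""
    succ, indeg, needs = {}, {}, {}
    for t in triples:
        if t.predicate == "precedes":
            succ.setdefault(t.subject, []).append(t.obj)
            indeg[t.obj] = indeg.get(t.obj, 0) + 1
            indeg.setdefault(t.subject, 0)
        elif t.predicate == "requires":
            needs.setdefault(t.subject, []).append(t.obj)
    queue = deque(sorted(n for n, d in indeg.items() if d == 0))
    order = []
    while queue:
        task = queue.popleft()
        order.append(task)
        for nxt in succ.get(task, []):
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                queue.append(nxt)
    return [(step, needs.get(step, [])) for step in order]

for i, (step, kit) in enumerate(checklist(triples), 1):
    extras = f"  (requires: {', '.join(kit)})" if kit else ""
    print(f"{i}. {step}{extras}")
```

Running the sketch prints the steps in dependency order with the required equipment attached to each, which is the same structured representation a VR simulation or monitoring assistant could consume downstream.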

High-risk, knowledge-intensive domains demand more than static training—they require systems that adapt to real-world complexity. By combining immersive VR environments with AI-driven customization, we enable experiential learning that mirrors on-the-job scenarios while adjusting to user proficiency, risk context, and task goals. Our systems don’t just simulate—they assist, offering contextual prompts, procedural guidance, and performance feedback. This integrated approach boosts retention, builds competence, and supports ongoing learning in safety-critical settings.

Our partnership with the Digital Twins Course of the University of Oxford places us in a unique position to bridge cutting-edge research and practical application.