IBM’s Hybrid Blueprint Secures Future Of Gen AI In Automotive

T Murrali
04 Jul 2025
07:00 AM
3 Min Read

By infusing security and transparency across the AI stack, the company empowers the automotive industry to scale Gen AI with confidence—ensuring trust, safety, and compliance in a rapidly evolving mobility ecosystem.


As software-defined vehicles evolve and AI-powered features become mainstream, securing Generative AI (Gen AI) in the automotive space is no longer optional—it’s essential. This calls for a multi-pronged approach: encrypting data from end to end, validating and explaining AI models, tightly controlling access, and safeguarding over-the-air updates. At the same time, onboard AI inference must be optimised to ensure low latency and high safety, all while adhering to evolving regulatory and ethical frameworks.

Amid these complexities, IBM is helping automotive clients build trust in Gen AI through its hybrid-by-design strategy—a blend of cloud and on-premises infrastructure that delivers flexibility, scalability, and above all, security.

Speaking to Mobility Outlook, Rishi Aurora, Managing Partner, IBM Consulting India & South Asia, said, “Data privacy is critical in the automotive domain, especially with sensitive vehicle and customer data. IBM’s architecture, reinforced by robust Security Operations Centers (SOCs), actively monitors, detects and responds to threats in real time—fortifying the entire AI pipeline.”

Equally important is explainability. IBM’s tools for AI governance help automakers track model performance, decode decision-making logic, and maintain fairness and accountability. By embedding security and transparency into every layer of the AI stack, the company enables the automotive sector to confidently scale Gen AI while safeguarding trust, safety and compliance in a fast-changing mobility landscape.

Can Gen AI Become A Watchdog For Cybersecurity In Connected Mobility?

In the high-stakes world of connected mobility, cyber threats can ripple across systems with real-world consequences. According to Aurora, Gen AI has the potential to transform the way cybersecurity is managed in the automotive ecosystem—by enhancing human insight rather than replacing it.

Rishi Aurora and Biswajit Bhattacharya

“Gen AI can analyse vast amounts of data from various sources, such as security logs, threat feeds, and dark web forums, to identify patterns, trends, and emerging threats,” he said. It can then synthesise this intelligence into actionable threat reports, giving security teams a strategic edge in anticipating breaches before they happen.

What sets Gen AI apart is its ability to auto-generate dynamic, context-aware response playbooks. These detailed guides can outline real-time containment steps, malware eradication protocols, and system recovery procedures—drastically reducing reaction time during a security incident and limiting potential damage.
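
To make this concrete, here is a minimal, illustrative sketch of how a security team might turn alert context into a prompt for a playbook-drafting model. It is not IBM's implementation; the alert fields and the llm callable (a stand-in for whichever Gen AI service a team actually uses) are assumptions made for the example.

    # Illustrative sketch only: turning alert context into a prompt for a
    # playbook-drafting model. The `llm` callable is a placeholder for whatever
    # Gen AI service a team actually uses; nothing here reflects IBM's stack.
    from typing import Callable

    def build_playbook_prompt(alert: dict) -> str:
        """Assemble a context-aware prompt from a security alert."""
        return (
            "You are an automotive SOC assistant. Draft an incident response playbook.\n"
            f"Vehicle subsystem: {alert['subsystem']}\n"
            f"Indicator of compromise: {alert['ioc']}\n"
            f"Severity: {alert['severity']}\n"
            "Cover: containment steps, malware eradication, and system recovery."
        )

    def generate_playbook(alert: dict, llm: Callable[[str], str]) -> str:
        """Send the prompt to the model and return its draft for human review."""
        return llm(build_playbook_prompt(alert))

    if __name__ == "__main__":
        sample_alert = {
            "subsystem": "telematics unit",
            "ioc": "anomalous CAN traffic",
            "severity": "high",
        }
        # Stand-in model so the sketch runs end to end without an API key.
        echo_model = lambda prompt: "[draft playbook based on]\n" + prompt
        print(generate_playbook(sample_alert, echo_model))

Any draft produced this way would still be reviewed by analysts before execution.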

However, Aurora stressed that AI should augment—not replace—conventional defences or human expertise. In a domain where safety is non-negotiable, Gen AI must serve as an intelligent co-pilot, enabling faster, more informed decisions while reinforcing the broader cybersecurity posture of next-gen mobility systems.

Tackling Catastrophic Forgetting In Autonomous Mobility

As Gen AI evolves to support autonomous vehicles in real-world deployments, a critical challenge emerges—catastrophic forgetting. This phenomenon occurs when a neural network, upon learning new tasks or data, loses its ability to recall previously learned information. In the context of autonomous driving, this isn’t just a technical limitation—it’s a safety risk.

“Catastrophic forgetting is especially problematic in sequential learning,” explained Biswajit Bhattacharya, Lead Client Partner & Automotive Industry Leader, IBM Consulting India & South Asia. “A vehicle might forget how to handle rare but essential scenarios, with dangerous consequences on the road. This underlines the urgency of developing methods that allow these vehicles to remember and apply various scenarios effectively,” he said.

As these vehicles scale across regions, they must consistently retain operational knowledge—be it navigating a snowy road in Norway or a chaotic intersection in Mumbai. Re-training models from scratch with each new deployment isn’t just resource-heavy; it’s unrealistic.

To address this, researchers are developing smarter learning strategies. Replay methods reintroduce past data during training to retain old knowledge. Dynamic architectures, like Progressive Neural Networks, expand with new tasks while preserving existing ones. Meanwhile, regularisation-based techniques strike a balance between learning the new and safeguarding the old.
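
As a simple illustration of the replay idea, the sketch below rehearses a few previously seen scenarios alongside each new one during training. It is a toy example, assuming PyTorch and random tensors as stand-ins for driving scenarios; the model size, buffer limit and data are invented for the example and reflect no real autonomy stack.

    # Toy sketch of replay-based continual learning; illustrative only.
    import random
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    replay_buffer = []   # (features, label) pairs from earlier scenarios
    BUFFER_SIZE = 1000

    def remember(x, y):
        """Keep a bounded random sample of past scenarios for rehearsal."""
        replay_buffer.append((x, y))
        if len(replay_buffer) > BUFFER_SIZE:
            replay_buffer.pop(random.randrange(len(replay_buffer)))

    def train_step(new_x, new_y, replay_k=8):
        """Train on the new scenario while rehearsing a few remembered ones."""
        xs, ys = [new_x], [new_y]
        for old_x, old_y in random.sample(replay_buffer, min(replay_k, len(replay_buffer))):
            xs.append(old_x)
            ys.append(old_y)
        loss = F.cross_entropy(model(torch.stack(xs)), torch.stack(ys))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        remember(new_x, new_y)
        return loss.item()

    # Toy usage: each "scenario" is a random feature vector with a class label.
    for step in range(100):
        train_step(torch.randn(16), torch.tensor(random.randrange(4)))

Regularisation penalties or task-specific network columns, the other two families mentioned above, could be layered onto the same loop.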

In a world where autonomous vehicles must adapt continuously while upholding safety and reliability, solving catastrophic forgetting is mission-critical. It’s not just about learning—it’s about remembering wisely. “The future of autonomous vehicles hinges on the success of these techniques,” he added.

Managing Rule Complexity In Dynamic Environments

One of the persistent challenges in deploying AI in dynamic domains like automotive cybersecurity is the upkeep of rule-based systems. These frameworks often require frequent expert intervention, driving up the total cost of ownership and making long-term viability difficult to justify in fast-evolving settings.

In such environments, where policies and operating conditions change rapidly, static rules lose relevance quickly. Continuous updates are essential, but they demand significant time, effort, and domain expertise. To address this, Bhattacharya said, “A strategic blend of automation, knowledge management, and clarity on value delivery becomes vital.” When done right, “it’s possible to strike a balance between expert involvement and operational efficiency,” he concluded.

Also Read:

How IBM Fuels India’s Shift To Software-Defined, AI-Driven Mobility
