Mohan Veloo, CTO for APAC at F5, spearheads the company’s AI campaign. In a freewheeling discussion with CXO Media and APAC Media, he analyses the open-source advantage and how it can democratise artificial intelligence (AI) and foster more inclusive adoption.
Can you share the significance of DeepSeek’s open-source release and what it means for AI leadership in APAC?
DeepSeek’s open-source release is a powerful statement—it breaks the assumption that you need $100–200 million and a Silicon Valley zip code to build competitive AI. By making their models open and accessible, DeepSeek challenges the Western concentration of AI innovation and signals a broader shift: Asia is stepping into a leadership role in this new era.
It’s not just about access—it’s about influence. The US built global digital power with open technologies like Linux, Kubernetes, and PostgreSQL. China and others have adapted and built on these foundations effectively. With DeepSeek, we’re now seeing Asia contribute not just as adopters, but as originators. If the region continues to invest in high-quality open-source models, it could help shape the future trajectory of AI globally.
How can open-source AI democratise access to technology and drive inclusion?
AI will soon be as fundamental as the internet. The question isn’t whether we’ll use AI—it’s who gets to benefit from it. Historically, the most powerful models have been closed, controlled by a few large players. Open-source efforts—like Meta’s Llama, Mistral’s models, and now DeepSeek—are shifting that dynamic. By putting cutting-edge tools in the hands of startups, researchers, and local innovators, they lower the barriers to entry.
This creates competitive pressure, pushing even closed providers to be more inclusive. Open-source AI acts as a counterweight, ensuring AI doesn’t become a luxury reserved for the few. It enables broader participation, which is key to driving real inclusion in the AI economy.
Can you share more on the risks and security implications of open-source AI, including potential threats such as model poisoning and shadow AI?
Like any powerful tool, open-source AI is a double-edged sword. Transparency is great for innovation and trust, but it also opens up new threat vectors. Two big risks stand out: model poisoning, where attackers tamper with training data to bake vulnerabilities into the model itself; and shadow AI, where unsanctioned AI tools run outside governed environments, often under the radar of IT and security teams.
That said, we have seen mature open ecosystems like Linux and Kubernetes thrive despite similar concerns. The key is a strong, security-conscious community and proactive governance. Open-source doesn’t mean insecure—it just means we need to stay one step ahead.
How does cybersecurity evolve in response to AI-driven threats?
Cybersecurity is now a dynamic, AI-powered arms race. Both defenders and attackers are using AI to outmanoeuvre each other in real time. We are seeing AI enhance everything from anomaly detection to automated response. But as defences get smarter, so do the threats. That’s why adaptive security strategies—backed by layered defence and continuous learning—are critical.
At F5, we focus on securing the digital nervous system of AI apps—the APIs. Our tools mitigate risks from data leakage to misuse, ensuring that AI workloads run securely, reliably, and at scale.
How is F5 adapting to the increasing adoption of AI-powered applications?
We are seeing the Jevons Paradox play out in real time: the easier it gets to build AI, the more it gets used, and the more infrastructure and security it demands.
That’s where F5 comes in. As AI-driven apps scale up, they need secure, high-performance infrastructure that can keep up. We’re helping customers manage AI workloads with smart load balancing, API protection, real-time threat detection, and performance optimisation, all backed by infrastructure that’s AI-ready. AI is changing how apps are built, and we are evolving with it.
Rajneesh De