Artificial Intelligence is no longer the stuff of science fiction—it is already shaping our elections, our classrooms, our farms, and even our private conversations. Yet here’s what we often overlook: AI is not just a technical revolution; it is a human one. And like any human endeavor, it will only be as trustworthy as the way we talk about it.

This is the heart of the message I will bring to the 2025 AI Fest on August 12 at the Iloilo Convention Center, where I am honored to be one of the plenary speakers. My session, "Responsible AI: Harnessing Innovation for the Common Good through Effective Communication and Safeguarding Against Malicious Use," will stand alongside contributions from respected experts: Dr. Benito Teehankee of De La Salle University, Dr. Jon Fernandez Jr. of Ateneo de Manila University, Engr. Rowen Gelonga of the Department of Science and Technology, Mr. Gavin Lim of Black Tactical Unit and The Void Deck Film Singapore, and Prof. André Carlos Ponce de León de Carvalho of the University of São Paulo.

We’ve spent years debating how to make AI fair, accountable, transparent, and safe. These principles are the backbone of every responsible AI framework. But they ring hollow unless the public actually understands them. Trust does not grow in silence; it grows in dialogue.

Bridging Code and Community

Communication—strategic, transparent, and inclusive—is not an afterthought. It is the infrastructure that holds responsible AI together. Without it, we risk building technology faster than we build public understanding, creating a dangerous gap where misinformation can thrive. And in a society already navigating an overload of digital noise, that gap can be exploited quickly and at scale.

I have seen the transformative power of communication firsthand. Between 2023 and 2025, in a series of talks at West Visayas State University, I watched students shift from fearing AI to cautiously embracing it. The turning point wasn’t a new algorithm—it was conversation. When we explained AI in their language, grounded in their lived experiences, fear gave way to informed optimism.

Deepfakes and the New Misinformation Threat

We are also entering an era where deepfakes and disinformation will challenge every institution. In the Philippines, fabricated videos and altered voice clips of well-known media personalities and even prominent billionaires have been used to promote fraudulent investment schemes, questionable nutraceutical products, and unverified supplements. These manipulated materials, widely shared on social media, exploit the credibility of familiar faces to draw unsuspecting individuals into financial scams and health risks.

Such incidents remind us that responsible AI is not just about building the right systems, but also about building the public’s capacity to detect and resist manipulation. Misinformation thrives in environments where people feel excluded from the conversation. When citizens cannot easily verify information or lack the literacy to understand how AI works, they become vulnerable to being misled. This is why communication is not just a defensive tool—it is a proactive safeguard that strengthens public resilience against malicious use of technology.

A Framework for Public Trust

Responsible AI needs a strategic communication framework that works at three interconnected levels. The first is foundational literacy, which includes curriculum-aligned AI education and accessible media toolkits to raise public fluency. The second is participatory design, where AI solutions are co-created with communities to ensure that technology reflects lived realities. The third is feedback governance, which establishes channels for citizens to help shape and monitor AI policy.

These layers align with global best practices, such as the OECD’s Principles on Artificial Intelligence, which emphasize transparency, accountability, and inclusive participation. More importantly, they recognize that the public must be part of AI’s evolution, not merely a passive recipient of its effects. When communication is embedded at every stage, from design to deployment, AI stands a greater chance of being trusted, adopted, and used responsibly.

Building the Future Together

Policymakers must invest in AI communication infrastructure alongside research and development. Universities and media institutions should embed AI communication into their curricula. Developers should work with communicators from the very start, and civil society must act as translators of AI for the grassroots.

This is not “soft” work—it is hard infrastructure for a future in which AI serves people, not the other way around. Just as engineers design for safety and efficiency, communicators must design for clarity, trust, and engagement. Both are essential for AI’s long-term legitimacy.

AI will not wait for us to catch up. The race is on, not only to build better machines but to build a public that can trust, challenge, and ultimately own the technologies shaping its future. The bridge between those two goals will not be built with code alone. It will be built with words people can understand, trust, and act upon.

Ken Lerona is a business consultant with over 20 years’ experience in marketing and branding. He conducts talks and workshops for private and government organizations and advises on innovation, business strategy, and reputational risk management. Connect with him on LinkedIn: www.linkedin.com/in/kenlerona