(Written with the use of AI)

Artificial Intelligence (AI) is no longer just a futuristic concept; it’s here, influencing everything from healthcare to governance. This article itself was written with the help of a popular Large Language Model (LLM) AI, showing how, when used responsibly, AI can be a tool for good. However, recent incidents in the Philippines highlight how AI can also be used maliciously—and it’s time we acknowledge the risks. The real threat is not AI itself, but how it’s misused to manipulate and deceive the public.

AI Propaganda in Philippine Politics

A recent case involving a senator who shared an AI-generated post filled with false claims is a clear example of the danger we face. If even public figures are being misled by AI-generated content, what about the average citizen? This highlights the larger issue of AI illiteracy—the lack of understanding about how AI works and how it can be used to manipulate information. Kate Crawford, an expert on AI ethics, points out, “AI is not neutral; it reflects the power dynamics of those who create it.” This means that AI’s power can easily be misused, and people who don’t understand AI are vulnerable to manipulation.

Improving AI Literacy

The key to preventing the malicious use of AI is AI literacy. This is not just about understanding how AI works—it’s about learning how to evaluate and question the information that AI produces. Education should be the first step. Schools and universities need to teach students not just how AI works, but also how to critically assess AI-generated content. Likewise, the media must help the public understand the potential dangers of AI, especially its role in spreading false information. Maria Ressa, Nobel laureate and media advocate, has warned that social media platforms, powered by AI algorithms, amplify fake news, creating a dangerous cycle. As she put it, “If we don’t educate ourselves, we will lose the ability to distinguish fact from fiction.”

Instead of relying on strict regulation, we need a more proactive approach: the academe, media, and tech companies must work together to educate society about AI. Journalists need training to better identify AI-generated misinformation and report responsibly.

AI for Good

Despite its risks, AI holds immense promise for good. In agriculture, AI can help predict weather patterns and optimize crop yields. In healthcare, it can assist doctors in diagnosing diseases more accurately. These are just a few examples of how AI can enhance our lives if used responsibly. But for AI to truly benefit society, we need to ensure its use is ethical and transparent. Tech companies must be held accountable for how their AI tools are deployed. Accountability alone isn't enough, though; education and open communication are just as important.

Malicious AI Use

In the Philippines, AI-generated disinformation is already a threat. From fake health advice to manipulated images of public figures, AI is being used to spread false narratives. In 2024, AI-generated deepfake videos surfaced, misleading the public and causing harm. These incidents highlight how easily AI can be used to manipulate perceptions and influence opinions. We need to address this head-on, not just through regulation but through education and media responsibility.

Taking Action

AI is not the enemy. It's a tool, and like any tool, it can be used for both good and ill. The problem isn't AI itself, but how we choose to use it. Inaction has allowed those with malicious intent to get ahead. We need to act now to ensure that AI is used responsibly: educating the public, promoting transparency, and holding tech companies accountable for the tools they create. The longer we wait, the greater the risk that AI will be used against us. It's time to take responsibility.

Ken Lerona is a business consultant with over 20 years of marketing and branding experience. He conducts talks and workshops for private and government organizations and consults on innovation and reputational risk management. Connect with him on LinkedIn at www.linkedin.com/in/kenlerona.