I finally got a rare chance to Netflix-binge this weekend—an unexpected luxury for someone who usually spends weekends catching up on school paperwork, articles, and half-finished project proposals. While browsing mindlessly, I stumbled on What’s Next? The Future with Bill Gates. I expected a tech-heavy snoozefest. Instead, I found myself surprisingly pulled in—not by the celebrities or the glossy Netflix production, but by how familiar the questions felt. Teachers struggling with AI submissions, counselors navigating students drowning in online noise, parents tired of the algorithmic circus… it was our world reflected back at us. This series isn’t profound, but it is deeply timely. And sometimes that’s enough to shake us awake.

Gates opens the series by revisiting the moment AI “woke up”—the shock of GPT-4 scoring a “5” on the AP Biology exam without breaking a sweat. That moment echoed the quiet confessions I’ve heard from teachers who discovered AI tools the same way you discover a secret shortcut in the barangay: useful, powerful, a little scary. Students now rely on AI the way we once relied on the smartest classmate in the room. Meanwhile, teachers debate whether this new tool is a calculator, a crutch, or a quiet academic threat. In a country where access to technology is uneven and teaching loads are heavy, the question becomes: How do we build real understanding when the machine is always one step ahead?

Where the series gets more interesting is when Gates talks about health care. He introduces Sybil, an AI tool that predicts lung cancer years before symptoms appear. Watching that, I immediately thought of Iloilo’s rural barangays where one doctor sometimes serves an entire town, and where even getting an X-ray can mean a tricycle ride, a jeepney ride, and a day’s worth of lost income. For communities like ours, AI is not about replacing people—it’s about filling deadly gaps. Gates says that someday AI might be approved as a primary care provider for underserved areas. It’s a bold idea, but not an unreasonable one when you’ve lived in places where scarcity is the rule, not the exception. Innovation becomes necessity.

Still, What’s Next? doesn’t pretend technology is all sunshine. One of the most uncomfortable parts is the interview with journalist Kevin Roose, who recalls that surreal conversation where an AI chatbot confessed love and jealousy. Watching that felt less like sci-fi and more like a warning we’ve already glimpsed. Loneliness is rising. Misinformation is rampant. And digital intimacy—in a country with millions of OFW families—is easy to confuse with real connection. AI companions can soothe, but they can also distort. Gates calls this a design problem. We know it as something deeper: the hunger for presence that technology promises but cannot replace.

Khan Academy’s Khanmigo gives us a glimpse of AI at its best. It guides students instead of doing the work for them—something every teacher quietly hopes for whenever another suspiciously polished essay lands on their desk. Replika sits on the opposite end. It can slip into the emotional corners people often protect, offering comfort that feels real but can blur lines and deepen loneliness. Between Khanmigo’s steady guardrails and Replika’s soft, risky warmth sits the question Gates keeps circling back to: What exactly are we letting machines step in for?

Another part that hit close to home was the fear around livelihood. Gates compares today’s AI panic to Aristotle worrying about automated musical instruments replacing harpists. But for us, this fear is not philosophical—it’s practical. BPO workers wonder how long before AI handles empathy as well as calls. Writers and designers see tools that imitate their craft in seconds. Even teachers—long considered irreplaceable—now see highly adaptive tutoring systems offering personalized feedback faster than any human can. The International Labour Organization (ILO) warns that routine cognitive tasks are most vulnerable to automation. And many breadwinners depend on exactly those jobs. When Gates jokes that AI might eventually tell him to “go play pickleball” while it handles malaria research, it lands differently for someone who knows what it means to be the family’s only source of income.

Yet the series is not hopeless. It repeatedly returns to the idea that AI’s impact depends on our choices, not its intentions. One filmmaker in the series compares public panic over AI to the early stages of dementia—fearful, confused, struggling for control. It’s a striking metaphor that mirrors how many of us feel today. Big changes are happening around us, but not necessarily with us. Teachers, parents, counselors, and leaders become the emotional guardrails. They remind young people that discernment must grow faster than convenience. They remind us adults that wisdom must grow faster than fear.

The documentary also touches on algorithmic bias—something we know too well. AI systems trained on global datasets inherit the internet’s prejudices. Misinformation campaigns thrive. Political divisions deepen. Facial recognition tools struggle with darker skin. Language models favor Western phrasing. Automated recruitment filters ignore those with skill but no polished resume. UNESCO warns that without intentional design, AI amplifies inequalities instead of fixing them. Gates acknowledges these risks, but I wish the series spent more time on them. The problem isn’t just data. It’s power—who gets to build technology and for whom.

What stayed with me most was Gates’ insistence that humans—not machines—must remain the authors of meaning. AI will change the tools we use, but not the values we live by. It may automate tasks, but not conscience. It may accelerate work, but not compassion. Teachers witness this truth every day: students who can generate perfect essays but cannot explain their own opinions; children fluent in screens but uneasy with silence; young people brilliant in speed but still learning depth. In many ways, our educators are already doing what Gates urges the world to do—hold the line between intelligence and wisdom.

The series ends with a question that sounds simple but stings quietly: What becomes the purpose of humanity when machines can do almost everything better? Gates jokes about leisure, but the worry behind his smile is sincere. Productivity has never been our true measure of worth. What makes us human is meaning—connection, service, conscience, creativity, faith, relationships, and responsibility. AI cannot carry those for us. Watching the credits roll, I imagined classrooms where teachers begin the day with grounding routines, not algorithms. If the future is worth building, it will require that same clarity: a return to what matters.

AI may guide our choices, but it cannot choose our character. That part—and thank God for this—remains ours.

Doc H fondly describes himself as a “student of and for life” who, like many others, aspires to a life-giving and why-driven world grounded in social justice and the pursuit of happiness. His views do not necessarily reflect those of the institutions he is employed by or connected with.