Legal Updates

ARTIFICIAL INTELLIGENCE AT WORK: NAVIGATING THE LEGAL LANDSCAPE IN THE PHILIPPINES

The future is for those who are prepared.

We can barely keep up with the rapid development of AI tools in the open-source community. Agentic AI. Deep research. Vibe coding. MCP. Reflective models. AI in mass HR systems. Artificial Intelligence (AI) is already influencing who gets hired, who gets monitored, and who gets left behind. The question we must ask—especially for those of us in the legal and regulatory trenches—is this: are our laws evolving as fast as the tech that’s reshaping our jobs? I must emphasize that we lawyers are not spared from this rapid incursion.

Let’s be honest. The Philippine Labor Code is a 1974 vintage. It was not built with algorithms in mind, much less the tools I mentioned earlier. It was designed in a time when determinations and decisions could only be made by people. Today, that’s no longer the case. Imagine a warehouse worker flagged as "underperforming" by a machine that doesn’t understand she took time off to care for a sick child. Or a young applicant getting auto-rejected by a chatbot trained on decades of biased hiring data. Sounds far-fetched? Not really. It’s already happening elsewhere—and it’s only a matter of time before we begin seeing this here too.

To its credit, the National Privacy Commission stepped up last December with Advisory No. 2024-04. It is one of the first serious attempts to put guardrails around AI in the Philippines. The advisory insists on transparency, fairness, and accountability—solid principles, yes, but easier said than enforced when the algorithm is a black box even its creators barely understand.

Here’s the thing: just because AI made the call doesn’t mean the employer is off the hook. Under the NPC’s guidelines, companies must explain what data they’re collecting, how decisions are made, and what recourse is available – and that’s just the privacy piece.

Fairness, in real-world terms, means people shouldn't lose jobs, opportunities, or benefits because a machine determined they are not efficient or effective enough. Or worse, because the AI was trained to make the same mistakes humans already do, only faster. That may not be the progress we should aspire to. That’s a repeat performance with less transparency.

In March 2025, the Department of Labor and Employment held national consultations on AI in the workplace. I followed the discussions closely. You could feel the tension in the room: business leaders eager to innovate, union reps worried about job losses, civil society urging caution. One recurring theme stood out: Who’s responsible when AI gets it wrong?

Let’s say a facial recognition system used for attendance keeps misidentifying workers—maybe because the models weren’t really trained to handle Filipino features. Or a scheduling algorithm cuts hours for caregivers who need flexible time. Can the affected employee sue? Is the employer liable? What about the software vendor? Right now, the answers are murky, and that’s a problem.

Here’s another legal concern: collective bargaining agreements. Most CBAs weren’t written with algorithmic managers in mind. What happens when AI starts dictating schedules, tasks, and evaluations? Do the old rules still apply, or are we heading toward a new kind of labor negotiation—one where unions sit across the table not only from managers but also from machines? That might sound extreme, but at this rate, who knows?

These aren’t just policy concerns. They’re live legal questions. And the longer we delay giving them answers, the greater the risk that we allow automation to quietly chip away at workers’ rights. And there’s more. During one consultation I attended—this time with the IT and Business Process Association of the Philippines (IBPAP) on March 27, 2025—a consistent and glaring question was raised: under what guidelines do we move forward with our AI adoption in the absence of concrete regulations? That question lingered. It still does. Because readiness isn't just about enthusiasm. It's about knowing where the boundaries are—and who draws them.

The private sector often says, and did say during that consultation, that government moves too slowly. Maybe so. But let’s not forget: government policies affect everyone. They’re meant to be cautious, inclusive, deliberate. That said, urgency remains. In fact, it was due yesterday.

Relevant government agencies need to continue acting fast. Not just with aspirational principles, but with clear rules. What counts as transparent AI? How do you audit fairness? What mechanisms are in place for appeal and redress?

AI isn’t inherently good or bad. Like any tool, it reflects the intent and discipline of its makers and users, ideally with guidance from government. If we want it to promote dignity at work, we have to make that a legal requirement, not a marketing tagline.

The law must not lag behind the algorithm. We need to bring our legal frameworks into the present. Not to fight innovation, but to guide it—so it serves people, not replaces or exploits them.

The future belongs to those who prepare. So let’s prepare—urgently, responsibly, and with eyes wide open.

ATTY. PAUL VINCENT W. AÑOVER is a Partner in the Firm. He specializes in Labor and Employment Law, Policy Development, IT-Related Industries, Cooperative Law, Gaming Regulations, Data Privacy and related Cyber Laws, and Non-Bank Financial Institutions. He may be contacted at pvwa@anoverlaw.org.

NB: This piece was written entirely by the writer and reflects his personal insights as a lawyer and his firsthand experience as a policymaker in digital governance. While he used standard editing tools to refine grammar and flow, the arguments, tone, and structure were developed independently.
