
Why Every VUW Student Should Care About AI Safety

Artificial intelligence is no longer a niche topic confined to computer science lecture halls. It's in our phones, our hospitals, our courts, and our classrooms. Large language models write essays, generate code, and summarise legal documents. Computer vision systems monitor traffic, screen medical images, and assess insurance claims.

And yet, most of us — including the people building these systems — are only beginning to grapple with what it means for AI to fail.

What do we mean by AI safety?

AI safety is a broad field, but at its core it asks a simple question: how do we make sure AI systems do what we actually want them to do, without causing harm?

This covers a wide range of concerns:

  • Alignment: Does the AI's behaviour match human intentions?
  • Robustness: Does it work reliably across different conditions, or does it break in unexpected ways?
  • Fairness: Does it treat different groups of people equitably?
  • Transparency: Can we understand why it made a particular decision?
  • Misuse: Could someone use this system to cause deliberate harm?

These aren't hypothetical questions. They're playing out right now in hiring algorithms that discriminate, content recommendation systems that amplify misinformation, and autonomous vehicles that struggle with edge cases.

Why does this matter in Aotearoa?

New Zealand has a unique opportunity — and responsibility — when it comes to AI.

Our public sector is actively adopting AI tools. Stats NZ, ACC, MSD, and other agencies are exploring machine learning for decision-making. The question isn't whether AI will be used in government — it's how.

And in Aotearoa, that "how" has particular weight. Any AI system operating here needs to reckon with:

  • Te Tiriti o Waitangi and the obligations it creates around partnership, protection, and participation
  • Māori data sovereignty — the principle that Māori data should be governed by Māori, and that indigenous knowledge systems must not be extracted or exploited by AI
  • The small population problem — machine learning models trained on global datasets may not reflect the realities of a country of 5 million people with distinct demographics and cultural context

These aren't edge cases. They're central questions for anyone deploying AI in this country.

You don't need to be a coder to care

One of the biggest misconceptions about AI safety is that it's a purely technical problem. It's not.

Sure, alignment research involves deep mathematics and computer science. But the broader challenge of AI governance — deciding what we want AI to do, who gets to make those decisions, and how we hold systems accountable — requires perspectives from law, philosophy, political science, public policy, indigenous studies, and more.

If you're studying law at VUW, you should understand how AI is used in criminal justice and what it means for the right to a fair trial. If you're studying public policy, you should understand how algorithmic decision-making changes the relationship between citizens and the state. If you're studying design, you should understand how interfaces shape people's understanding of AI outputs.

AI safety is everyone's business.

What VicAI is doing about it

Our Safety & Governance team runs reading groups, panel discussions, and research projects focused on responsible AI in Aotearoa. We've hosted discussions on Māori data sovereignty, algorithmic accountability in government, and the ethics of generative AI in education.

We don't pretend to have all the answers. But we believe that students — from every faculty — should be part of the conversation.

The best time to think about AI safety was ten years ago. The second-best time is now.

If this resonates with you, join VicAI and get involved. You don't need to write code. You just need to care.
