
Two AI Conversations in New Zealand. They Have Barely Met.
Two Conversations Running in Parallel
There are two significant AI conversations happening in New Zealand right now. Both involve questions of who controls data, who benefits from AI systems, and what obligations attach to the organisations building them. The two conversations are occurring in the same country, often within the same organisations, and they have barely intersected.
The first conversation is the one most business leaders are participating in: AI adoption, productivity, governance, risk. Boards are asking about AI strategy. CIOs are scoping AI initiatives. HR teams are updating policies. Consultants are busy. There is urgency, and in many organisations, genuine progress.
The second conversation has been building for longer. Māori researchers, iwi data governance bodies, and indigenous rights advocates have spent the better part of a decade constructing a framework for understanding who owns data about Māori communities, who has the authority to use it, and under what conditions. This work is rigorous, internationally recognised, and directly relevant to the AI systems that NZ organisations are now building or procuring.
This article is not an argument for or against either conversation. It is an observation about what happens when organisations building AI systems on New Zealand data do not know that the second conversation exists, and what it asks of them.
The Māori Data Sovereignty Framework
What is Māori data sovereignty?
Māori data sovereignty is the right of Māori to govern the collection, ownership, and application of data about Māori people, communities, and resources. Te Mana Raraunga formalised this as a distinct rights framework beginning in 2015, establishing that iwi data requires iwi governance, not just personal privacy protections.
Te Mana Raraunga was established in 2015 as a Māori data sovereignty network. Its founding work drew on indigenous rights theory, Treaty of Waitangi obligations, and the specific inadequacy of existing privacy frameworks for Māori data. Personal data protections, Te Mana Raraunga argued, were designed for individuals. Māori data is collective. When an iwi's health outcomes, land records, or cultural knowledge are aggregated and analysed, the entity affected is not an individual. It is a people.
From this work came Kāhui Raraunga, the organisation focused on embedding Māori data governance in practice. Kāhui Raraunga has published the Māori Data Governance Model and a Māori AI Governance Framework that provides operational guidance specifically for AI, alongside a set of internationally adopted principles. The CARE Principles (Collective Benefit, Authority to Control, Responsibility, Ethics) were developed through the Global Indigenous Data Alliance as an indigenous counterpart to the FAIR data principles that govern open research data. Where FAIR asks whether data is findable, accessible, interoperable, and reusable, CARE asks: for whose benefit, under whose authority, with what responsibilities, and on what ethical basis?
These are not abstract philosophical questions. AI systems trained on health data, census records, social services data, land information, and cultural materials routinely incorporate Māori data. The question of whether that data was collected, curated, and is now being applied with appropriate Māori governance is one that most AI projects in New Zealand have not been designed to ask.
Karaitiana Taiuru, whose work at taiuru.co.nz specifically addresses AI ethics and Māori rights, has documented the ways in which standard AI governance frameworks fail to account for indigenous data contexts. His analysis of large language models, training data, and Māori cultural knowledge raises questions that go beyond the data minimisation and consent frameworks that most NZ AI policies are built around.
What the National AI Conversation Is Missing
Is New Zealand's national AI strategy engaging with Māori data frameworks?
Not substantively. NZ government AI guidance focuses on productivity, safety, and regulatory alignment with international frameworks. The parallel conversation about indigenous data rights, CARE principles, and iwi governance authority has not been integrated into that policy framing in any operational way.
New Zealand's government has engaged with AI strategy at a policy level: guidance on AI in the public sector, algorithmic transparency principles, and engagement with the OECD AI principles. This is meaningful work. It is also framed almost entirely within a Western liberal regulatory tradition that treats data governance as a matter of individual rights, institutional accountability, and risk classification.
The Māori data sovereignty framework sits in a different tradition. It grounds data rights in collective identity and Treaty obligations, not individual consent forms. It asks questions about who benefits from the insights derived from Māori data, not just who has permission to access the raw records. It holds that iwi have governance authority over data about their communities that pre-exists and sits alongside the Privacy Act.
Te Arawhiti, the office for Māori Crown relations, has produced guidance on AI and Treaty obligations. That guidance exists. It is not widely referenced in corporate AI strategy documents. It is rarely on the agenda when organisations scope their AI governance frameworks.
There is also a practical infrastructure question that rarely surfaces in AI adoption discussions: where is NZ data stored, and who can access it? The US CLOUD Act (Clarifying Lawful Overseas Use of Data Act) gives US authorities access to data held by US-based cloud providers regardless of where the data physically resides. This has specific implications for Māori data held in US-cloud infrastructure. Some iwi have responded by building their own data infrastructure specifically to address this. The organisations whose AI systems depend on the same US cloud providers are generally unaware that this jurisdictional exposure exists.
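As a hedged sketch of how this residency question might surface in a data inventory, assuming hypothetical dataset fields and an illustrative provider list (not a real compliance tool), the key point is that CLOUD Act reach follows the provider's jurisdiction of incorporation, not the physical region of the data centre:

```python
from dataclasses import dataclass

# Illustrative assumption: a shortlist of US-incorporated cloud providers.
US_JURISDICTION_PROVIDERS = {"aws", "azure", "gcp"}

@dataclass
class Dataset:
    name: str
    provider: str             # hosting provider, e.g. "aws"
    region: str               # physical storage region, e.g. "ap-southeast-2"
    contains_maori_data: bool

def cloud_act_exposed(ds: Dataset) -> bool:
    """True if a US-incorporated provider holds the data, whatever the region."""
    return ds.provider in US_JURISDICTION_PROVIDERS

def flag_for_review(datasets: list[Dataset]) -> list[str]:
    """Names of datasets holding Māori data on US-jurisdiction infrastructure."""
    return [ds.name for ds in datasets
            if ds.contains_maori_data and cloud_act_exposed(ds)]

health = Dataset("health-outcomes", "aws", "ap-southeast-2", True)
local = Dataset("iwi-registry", "local-dc", "nz", True)
print(flag_for_review([health, local]))  # ['health-outcomes']
```

Note that the Sydney-hosted dataset is still flagged: storing data in an Australian region of a US provider does not remove the exposure.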
The Risks of Remaining Separate
What risk do NZ organisations face if these two conversations stay separate?
Organisations building AI on data that includes Māori knowledge, cultural context, or community records without iwi governance face trust deficits, reputational exposure, and potential Treaty obligations that will become harder to ignore as AI regulation matures and Māori institutions assert their authority more formally.
The risk is not primarily regulatory, at least not yet. New Zealand does not currently have AI legislation that explicitly references Māori data sovereignty. The Privacy Act 2020 and the Health Information Privacy Code provide some protections, but they do not address collective data rights or iwi governance authority.
The more immediate risk is trust. Public sector organisations, district health boards (now Health New Zealand), universities, and large private sector entities hold significant volumes of data about Māori communities. When those organisations build AI systems on that data without engaging with the relevant iwi governance frameworks, the question of whether they have the right to do so is one that Māori institutions are increasingly positioned, and increasingly willing, to raise.
The second risk is accuracy. AI systems trained on historical data about Māori communities may encode patterns shaped by historical inequities: in health outcomes, in land access, in social services engagement, in education. If those systems are then used to make decisions about Māori communities without Māori governance oversight, they risk embedding and amplifying exactly the disparities that Māori data sovereignty frameworks were designed to address. This is not a theoretical concern; it is the documented outcome of AI systems deployed in health, criminal justice, and social welfare contexts internationally.
The third risk is reputational. New Zealand organisations that are seen to have built AI systems on Māori data without appropriate engagement will face scrutiny. That scrutiny is harder to manage after deployment than before.
Is your AI governance framework covering all the obligations your data carries?
If your organisation is building or procuring AI systems on New Zealand data and you have not assessed the Māori data governance dimension, that is a gap worth addressing before deployment, not after. This is a strategic conversation, not a compliance checklist.
Discuss it with Peter
The EU AI Act Changes the Calculus
New Zealand is not subject to the EU AI Act. But organisations that do business with the EU, hold EU data, or operate in sectors where EU regulatory standards are becoming de facto global standards are finding that "not subject to" is a narrower concept than it first appears.
The EU AI Act requires documented governance practices for AI systems classified as high-risk. Health, education, employment, essential services, and public authority decisions are all within scope. For NZ organisations with any European exposure in these sectors, demonstrating AI governance practices that meet EU standards is becoming a practical requirement, not just a legal one.
The relevant point here is that the documented AI governance practices the EU Act requires for high-risk systems create infrastructure that is equally suited to addressing Māori data governance questions. Data provenance documentation, impact assessment processes, and ongoing monitoring frameworks serve both. Organisations investing in EU AI Act compliance have an opportunity to build governance frameworks that simultaneously address the Māori data sovereignty dimension, if they are aware that the second conversation exists.
As covered in the agentic AI governance guide on this site, governance frameworks built before deployment are substantially easier to construct than frameworks retrofitted after systems are live. That principle applies here as well.
What Organisations Can Do Now
How can NZ organisations bridge AI governance with indigenous data considerations?
Start with a data audit: identify datasets involving Māori knowledge, community records, or cultural context. Engage the relevant iwi authority before system design. Reference the CARE principles alongside FAIR data practices. For government and health organisations, engage with Te Arawhiti's guidance on Treaty obligations in AI.
This is not a call to pause AI adoption. It is a call to build AI adoption on a complete picture of what your data carries and what obligations attach to it. The following three steps are practical starting points for organisations that have not yet addressed the Māori data governance dimension of their AI work.
1. Identify which of your datasets involve Māori communities
This is simpler than it sounds for most organisations, and more significant than they realise. Health data, census-linked data, social services records, land information, and educational records all routinely include Māori communities as a significant population. Any AI system trained or fine-tuned on these datasets is working with Māori data.
The audit question is not just "does this dataset contain data about Māori individuals?" but "does this dataset contain information about Māori communities, cultural practices, or collective resources that would be subject to iwi governance authority?" The distinction matters. Individual-level data is addressed by the Privacy Act. Collective cultural data is addressed by something else.
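The two audit questions above can be sketched as a minimal inventory record. This is an illustrative assumption, not an official audit schema: the field names and framework labels are invented for this article, and the mapping simply makes the distinction in the text concrete.

```python
from dataclasses import dataclass

@dataclass
class DatasetAudit:
    name: str
    individual_maori_data: bool   # personal records about Māori individuals
    collective_maori_data: bool   # community, cultural, or resource data

def applicable_frameworks(audit: DatasetAudit) -> list[str]:
    """Map each audit question to the governance regime it triggers."""
    frameworks = []
    if audit.individual_maori_data:
        # Individual-level data falls under personal privacy law.
        frameworks.append("Privacy Act 2020")
    if audit.collective_maori_data:
        # Collective cultural data is addressed by iwi governance frameworks.
        frameworks.append("iwi governance (Maori Data Governance Model)")
    return frameworks

audit = DatasetAudit("land-records",
                     individual_maori_data=True,
                     collective_maori_data=True)
print(applicable_frameworks(audit))
```

A dataset can trigger both regimes at once, which is exactly the case the standard privacy-only audit misses.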
2. Engage iwi governance before system design
Engagement after deployment is damage control. Engagement during procurement or design is partnership. The difference in outcome is significant, and the difference in cost is smaller than most organisations expect.
For public sector organisations and those in regulated sectors, Te Arawhiti publishes guidance on engaging Māori in AI development. For private sector organisations working in health, insurance, financial services, or property, the relevant iwi authority will depend on geography and community. Kāhui Raraunga can assist with identifying appropriate governance contacts for specific data contexts.
This is not a legal requirement for most organisations yet. It is a responsible practice that is becoming a trust signal, and that will become a harder question to answer retrospectively as Māori institutions mature their own AI oversight capacity.
3. Reference CARE alongside FAIR in your data governance
The FAIR principles (Findable, Accessible, Interoperable, Reusable) are the dominant framework for research and open data governance. They are useful. They are also incomplete for any context where collective rights attach to data.
The CARE principles ask four additional questions: Does the use of this data serve the Collective Benefit of the communities it represents? Does the governance of this data respect the Authority to Control of the relevant communities? Does the organisation using this data accept Responsibility for how it is used and what outcomes it produces? Does the use of this data reflect an ethical commitment to the dignity and rights of the communities involved?
Adding CARE questions to data governance reviews does not require a fundamental redesign of AI governance frameworks. It requires expanding the frame to include questions that most frameworks currently omit. For organisations working in health, education, public services, or any context where Māori communities are significantly represented in the underlying data, that expansion is both appropriate and practical.
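One way to picture that expansion is as an extra checklist layered onto an existing review. The sketch below is a hedged illustration only: the question keys, wording, and the gating rule are assumptions for this article, not a standard API or an official CARE implementation.

```python
# The four CARE questions, paraphrased from the article, keyed for a review form.
CARE_QUESTIONS = {
    "collective_benefit": "Does this use serve the collective benefit of the communities represented?",
    "authority_to_control": "Does governance respect the communities' authority to control the data?",
    "responsibility": "Does the organisation accept responsibility for use and outcomes?",
    "ethics": "Does the use reflect the dignity and rights of the communities involved?",
}

def care_review(answers: dict[str, bool]) -> list[str]:
    """Return the CARE questions not yet answered affirmatively."""
    return [q for q in CARE_QUESTIONS if not answers.get(q, False)]

answers = {"collective_benefit": True, "authority_to_control": False,
           "responsibility": True, "ethics": True}
outstanding = care_review(answers)
print(outstanding)  # ['authority_to_control']
```

An unanswered or negative question does not automatically block a project, but it does name the conversation that still has to happen before deployment.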
The Coordination Failure
The gap between these two conversations is not a failure of intent. Most organisations building AI in New Zealand are not deliberately ignoring Māori data sovereignty frameworks. Either they are unaware that the frameworks exist, or they do not know how to connect them to the practical work of AI governance.
The coordination failure is systemic. The AI adoption conversation is predominantly happening in boardrooms, technology functions, and consulting engagements. The Māori data sovereignty conversation is predominantly happening in academic institutions, iwi governance bodies, and policy circles. The translators between these worlds are few. The occasions when both conversations occur in the same room are rarer still.
What is needed is not a merger of the conversations, or a claim by any one voice to represent both. What is needed is awareness — sufficient for the people making AI governance decisions in NZ organisations to know that the second conversation exists, that it has produced rigorous and practical frameworks, and that those frameworks are relevant to the systems they are building.
The responsible AI practices that distinguish the best NZ organisations from the rest are not only about output verification and data security. They include an understanding of what your data carries: whose communities it represents, what collective obligations attach to it, and what it means to use it well. For New Zealand specifically, that understanding includes the Māori data sovereignty dimension. Organisations that build AI without it are not being irresponsible. They are being incomplete.
For NZ organisations at Rung 3 and above on the AI maturity ladder, building AI systems that connect to actual business processes and data pipelines, this is the governance dimension that most current frameworks are not yet asking them to address. The organisations that address it now will not need to retrofit it later. That is the practical case, separate from the moral one.
The government sector in New Zealand has particular obligations here, and particular opportunities. AI systems built by or for government agencies on New Zealand public data carry Treaty obligations that private sector systems do not. Meeting those obligations requires the two conversations to meet each other. That is a conversation worth starting.
Further Reading
- Te Kāhui Raraunga. Māori Data Governance Model. kahuiraraunga.io
- Te Kāhui Raraunga. Māori AI Governance Framework. kahuiraraunga.io/maoriaigovernance
- Te Mana Raraunga. Māori Data Sovereignty Network. temanararaunga.maori.nz
- Global Indigenous Data Alliance. CARE Principles for Indigenous Data Governance. gida-global.org/care
- Karaitiana Taiuru. AI Ethics & Māori Data Rights. taiuru.co.nz