Our AI Policy
1. Purpose and Applicability
This AI Policy outlines the principles, expectations, and commitments guiding the ethical and effective use of Artificial Intelligence (AI) within Coactive Education. It applies to all staff, contractors, and collaborators involved in the design, deployment, or management of AI systems that impact our work with all stakeholders, including ākonga, kaiako, and tumuaki.
2. Framework for AI Objectives
We use AI to support innovation in educational practice, improve efficiency, and enhance access to quality professional learning and support. All AI objectives must:
- Centre ākonga in design and outcome;
- Enhance collaboration among stakeholders;
- Support equity, inclusion, and culturally responsive practice;
- Be informed by current, credible pedagogical research.
Progress toward these objectives will be reviewed annually, ensuring alignment with our strategic goals and stakeholder needs.
3. Commitment to Compliance and Improvement
We commit to:
- Complying with all relevant AI-related legislation, including privacy and data protection under the Privacy Act 2020;
- Adhering to international AI standards where applicable;
- Continually improving our AI governance and capability through feedback, audits, and staff training.
4. Principles Guiding AI Activities
Our AI use is grounded in the following principles:
- People-first: AI must support learning, wellbeing, and achievement.
- Ethical and fair: We do not use AI to profile or disadvantage individuals.
- Transparent and explainable: Stakeholders must understand how AI affects decisions.
- Collaborative and co-constructed: AI supports, not replaces, human judgement.
- Innovative with integrity: We seek to lead change, not chase trends.
5. Processes for Handling Deviations
All suspected policy breaches, AI errors, or unintended consequences must be reported to our Data and AI Lead. A formal review process will identify the root cause and ensure appropriate corrective and preventive action is taken. Stakeholders impacted by a deviation will be informed in a timely and transparent manner.
6. Alignment with Other Organisational Policies
This policy complements our:
- Privacy and Protective Security Policy
- Health and Safety Policy
7. Documentation and Communication
This policy is publicly available and will be shared internally during onboarding and professional development sessions. Any major updates will be communicated to all staff and partners, with opportunities for feedback.
8. Review and Evaluation
The AI Policy will be reviewed every 12 months or earlier if there are significant changes in AI use, education policy, or compliance standards.
9. Roles and Responsibilities
- Executive Team: Overall accountability for AI governance.
- Data and AI Lead: Oversees implementation, monitors compliance, and coordinates training.
- Project Teams: Ensure AI usage aligns with this policy in their workstreams.
- All Staff: Responsible for ethical use and for raising concerns where necessary.
10. AI Risk Management
We assess AI risks using a standardised framework covering:
- Impact on learners and educators
- Data privacy and security
- Cultural responsiveness
- Systemic bias and fairness
Risk assessments will be integrated into project design and reviewed regularly.
11. Training and Awareness
All staff engaging with AI tools or processes will receive annual training focused on:
- Ethical AI use
- Privacy and data safety
- Practical applications for education
- Recognising and addressing bias
12. Monitoring and Reporting
All AI systems in use will be monitored to ensure they perform as intended, remain safe and fair, and continue to serve the interests of our stakeholders.
13. Incident Response and Accountability
In the event of an AI-related incident (e.g., harmful recommendation, privacy breach), we will:
- Notify affected parties
- Conduct a root cause analysis
- Take remedial steps
- Report transparently to partners and regulators, where required
14. External Communication and Transparency
We maintain an open and transparent stance with our education partners and the public regarding our use of AI. This includes:
- Clearly communicating how AI supports our mahi
- Ensuring AI does not replace human relationships or professional judgement
- Making AI decisions interpretable to end users
