AI Ethics
Quick reference notes for the COMP3007 AI ethics content.
1. Core AI Ethics Principles (Australian AI Ethics Principles)
We may be asked to identify which principle is demonstrated in a short scenario.
Key examples to remember:
- Human, societal and environmental well-being
  - Example: An AI system that helps monitor bushfires to reduce environmental damage.
  - Focus: long-term benefits; minimising harm to people, communities, and the environment.
- Fairness
  - Example: Consulting stakeholders when designing an AI system so diverse perspectives are included.
  - Focus: avoiding bias; ensuring equitable treatment and outcomes.
- Human-centred values
  - Example: CBA’s “Bill Sense” feature, which:
    - Lets users control inputs,
    - Lets them opt in to, or remove, predictions.
  - Focus: autonomy, dignity, user control and empowerment.
- Privacy protection and security
  - Example: Flamingo AI, which:
    - Uses only necessary data,
    - Prevents unauthorised access or use.
  - Focus: data minimisation, security controls, safeguarding information.
- Contestability
  - Example: IAG allows customers to use internal and external complaint processes to challenge claims decisions.
  - Focus: users can challenge, appeal or dispute AI-influenced outcomes.
Other principles (awareness level for MCQs):
- Transparency & explainability – explaining how AI decisions are made.
- Reliability & safety – robust, tested, safe systems.
- Accountability – people and organisations remain responsible for impacts.
2. Voluntary AI Safety Standard – Guardrails
We’ll likely get one multiple-choice question about the 10 Guardrails.
Memorise these examples:
- Guardrail 1 – Accountability, governance & compliance
  - Example: A project with a dedicated team, clear timelines, and risk-management expertise, plus a strategy for regulatory compliance.
  - Keywords: governance, accountability, internal capability, compliance plan.
- Guardrail 4 – Test & monitor AI models and systems
  - Example: An AI model whose performance is regularly monitored based on real-world user interactions and safety metrics.
  - Keywords: testing, evaluation, ongoing monitoring after deployment.
- Guardrail 7 – Contestability processes
  - Example: Creating and communicating a process for impacted people to:
    - Raise concerns,
    - Request remediation,
    - Contest AI decisions.
  - Keywords: challenge, appeal, recourse, complaints.
Remember:
- Scenarios about data protection → Privacy protection & security (the principle, not necessarily Guardrail 7).
- Complaint/appeal flows → Contestability (Guardrail 7 & contestability principle).
3. AI Impact Navigator – Plan–Act–Adapt & 4 Dimensions
3.1 Plan–Act–Adapt Cycle
The Plan–Act–Adapt cycle is a continuous improvement loop:
- Plan – identify potential AI impacts, set goals and safeguards.
- Act – deploy systems with controls and ethical practices in place.
- Adapt – monitor outcomes and update systems based on feedback and evolving risks.
MCQ summary:
“A continuous improvement cycle to guide measurement, action, and learning for AI impacts.”
3.2 The 4 Impact Dimensions
Know which dimension deals with transparency and public trust:
- Social licence & corporate transparency
  - Focus: building public trust and stakeholder confidence through:
    - Openness about AI use,
    - Clarity on data handling and impacts,
    - Transparent decision-making.
Other dimensions (awareness):
- Impacts on people and communities.
- Impacts on the environment.
- Governance, risk and compliance.
4. Quick Q&A Recap
A last-minute checklist:
- Bushfire monitoring system → Human, societal and environmental well-being
- Consulting stakeholders during design → Fairness
- Bill Sense with opt-in/out & control over predictions → Human-centred values
- Flamingo AI limiting data to what’s necessary & preventing misuse → Privacy protection and security
- IAG customers can challenge claims decisions internally & externally → Contestability
- Project with clear governance team & regulatory strategy → Guardrail 1 – Accountability, governance & compliance
- Monitoring model performance with real-world data & safety metrics → Guardrail 4 – Test and monitor AI systems
- Process for impacted people to raise concerns and contest outcomes → Guardrail 7 – Contestability
- Plan–Act–Adapt → Continuous improvement cycle for measuring, acting and learning from AI impacts
- Dimension focused on transparency and trust → Social licence & corporate transparency