Quick reference notes for the COMP3007 AI ethics content.

1. Core AI Ethics Principles (Australian AI Ethics Principles)

We may be asked to identify which principle is demonstrated in a short scenario.

Key examples to remember:

  • Human, societal and environmental well-being
    • Example: An AI system that helps monitor bushfires to reduce environmental damage.
    • Focus: long-term benefits, minimising harm to people, communities, and the environment.
  • Fairness
    • Example: Consulting stakeholders when designing an AI system so diverse perspectives are included.
    • Focus: avoiding bias, ensuring equitable treatment and outcomes.
  • Human-centred values
  • Example: CBA’s “Bill Sense” feature, which:
      • Lets users control inputs,
      • Lets users opt in to, or remove, predictions.
    • Focus: autonomy, dignity, user control and empowerment.
  • Privacy protection and security
    • Example: Flamingo AI:
      • Uses only necessary data,
      • Prevents unauthorised access or use.
    • Focus: data minimisation, security controls, safeguarding information.
  • Contestability
    • Example: IAG allows customers to use internal and external complaint processes to challenge claims decisions.
    • Focus: users can challenge, appeal or dispute AI-influenced outcomes.

Other principles (awareness level for MCQs):

  • Transparency & explainability – explaining how AI decisions are made.
  • Reliability & safety – robust, tested, safe systems.
  • Accountability – people and organisations remain responsible for impacts.

2. Voluntary AI Safety Standard – Guardrails

We’ll likely get one multiple-choice question about the 10 Guardrails.

Memorise these examples:

  • Guardrail 1 – Accountability, governance & compliance
    • Example: A project with a dedicated team, clear timelines, and risk management expertise, plus a strategy for regulatory compliance.
    • Keywords: governance, accountability, internal capability, compliance plan.
  • Guardrail 4 – Test & monitor AI models and systems
    • Example: An AI model whose performance is regularly monitored based on real-world user interactions and safety metrics.
    • Keywords: testing, evaluation, ongoing monitoring after deployment.
  • Guardrail 7 – Contestability processes
    • Example: Creating and communicating a process for impacted people to:
      • Raise concerns,
      • Request remediation,
      • Contest AI decisions.
    • Keywords: challenge, appeal, recourse, complaints.

Remember:

  • Data protection → the Privacy protection and security principle (not Guardrail 7).
  • Complaint/appeal flows → Contestability (Guardrail 7 and the contestability principle).

3. AI Impact Navigator – Plan–Act–Adapt & 4 Dimensions

3.1 Plan–Act–Adapt Cycle

The Plan–Act–Adapt cycle is a continuous improvement loop:

  • Plan – identify potential AI impacts, set goals and safeguards.
  • Act – deploy systems with controls and ethical practices in place.
  • Adapt – monitor outcomes and update systems based on feedback and evolving risks.

MCQ summary:

“A continuous improvement cycle to guide measurement, action, and learning for AI impacts.”

3.2 The 4 Impact Dimensions

Know which dimension deals with transparency and public trust:

  • Social licence & corporate transparency
    • Focus: building public trust and stakeholder confidence through:
      • Openness about AI use,
      • Clarity on data handling and impacts,
      • Transparent decision-making.

Other dimensions (awareness):

  • Impacts on people and communities.
  • Impacts on the environment.
  • Governance, risk and compliance.

4. Quick Q&A Recap

A last-minute checklist:

  1. Bushfire monitoring system → Human, societal and environmental well-being
  2. Consulting stakeholders during design → Fairness
  3. Bill Sense with opt-in/out & control over predictions → Human-centred values
  4. Flamingo AI limiting data to what’s necessary & preventing misuse → Privacy protection and security
  5. IAG customers can challenge claims decisions internally & externally → Contestability
  6. Project with clear governance team & regulatory strategy → Guardrail 1 – Accountability, governance & compliance
  7. Monitoring model performance with real-world data & safety metrics → Guardrail 4 – Test and monitor AI systems
  8. Process for impacted people to raise concerns and contest outcomes → Guardrail 7 – Contestability
  9. Plan–Act–Adapt → Continuous improvement cycle for measuring, acting and learning from AI impacts
  10. Dimension focused on transparency and trust → Social licence & corporate transparency
