Australia’s push for AI-powered road safety enforcement has raised fresh privacy concerns after a new Queensland audit found potential breaches in how traffic camera data is managed. The Mobile Phone and Seatbelt Technology (MPST) Program, which uses AI image recognition to detect drivers holding mobile phones or not wearing seatbelts, flagged more than 137,000 potential offences in 2024. While the system is credited with saving lives, the Queensland Audit Office report highlights ethical and privacy issues that could erode public trust in transport regulation.
Scale of the Program in 2024
In its first full year of widespread rollout, the MPST program gathered massive amounts of data, scanning millions of vehicles across the state.
| Metric | Value |
| --- | --- |
| AI assessments made | 208 million+ |
| Potential offences flagged | 137,000 |
| Fines issued | 114,000 |
| Revenue generated | $137 million+ |
The sheer scale of these numbers makes it clear why the program has become a cornerstone of automated enforcement. However, it is also this volume of surveillance that has put privacy issues under the spotlight.
How the AI Technology Works
The MPST system operates using AI cameras installed at multiple high-traffic locations. Nine devices can run simultaneously, scanning the inside of vehicles to determine whether drivers are holding mobile phones or failing to wear seatbelts.
When AI identifies a possible offence, the image undergoes a two-stage human review: first by the contracted management company, and then by the Queensland Revenue Office. Only after both checks is a fine issued. Officials argue this dual-layer process ensures accuracy and prevents wrongful penalties.
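As a minimal sketch of the dual-layer process described above, the logic amounts to a short-circuiting pipeline: a fine is issued only if both human reviews confirm the AI flag. All names here (`Detection`, `two_stage_review`, the reviewer callbacks) are illustrative assumptions, not part of the actual MPST system.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """An AI-flagged image of a possible offence (hypothetical model)."""
    image_id: str
    offence_type: str  # e.g. "mobile_phone" or "seatbelt"

def two_stage_review(detection, contractor_confirms, qro_confirms):
    """Issue a fine only when BOTH human reviews confirm the AI flag.

    `contractor_confirms` stands in for the reviewer at the contracted
    management company, `qro_confirms` for the Queensland Revenue Office
    reviewer; both callbacks are illustrative.
    """
    if not contractor_confirms(detection):
        return "rejected_at_contractor_review"
    if not qro_confirms(detection):
        return "rejected_at_revenue_office_review"
    return "fine_issued"

# Example: the contractor confirms, but the Revenue Office does not,
# so no fine is issued.
d = Detection("img-001", "mobile_phone")
print(two_stage_review(d, lambda det: True, lambda det: False))
# → rejected_at_revenue_office_review
```

The key design property is that the AI alone never issues a penalty: a flag that fails either human check is discarded before any fine is generated.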
While the review process may limit machine errors in issuing fines, the audit suggests the broader governance framework does not sufficiently address ethical risks, long-term data storage, and privacy safeguards.
Privacy and Ethical Issues Identified
The audit identified several key areas of concern with the program’s reliance on AI surveillance:
- Driver and passenger privacy: Cameras inevitably capture non-offending drivers, passengers, and international visitors, raising questions about how their data is stored and used.
- Governance gaps: There is a lack of comprehensive frameworks for ethical risk assessment, despite official AI governance policies requiring them.
- Facial recognition risks: Concerns remain that reliance on incomplete or biased recognition could lead to wrongful penalties.
- Revenue vs safety: Public debate continues over whether the program prioritizes safety or has become a revenue-generating system.
Audit officials stressed that without greater transparency and stronger oversight, the program risks eroding public trust.
Government Response
Queensland’s Transport Minister Brent Mickelberg accepted the audit findings and committed to strengthening oversight. The Transport and Main Roads Department (TMR) has pledged to implement an AI Strategic Roadmap by 2028, introducing new ethical governance policies and stricter frameworks for accountability.
Authorities assured the public that final infringement notices are always issued by human officers, not by AI systems alone. They stressed the technology’s role is limited to identifying possible offences quickly, leaving legal enforcement to people.
Balancing Road Safety with Civil Rights
AI advocates defend the cameras by pointing to their effectiveness. Distracted driving, particularly from mobile phones, remains a leading cause of accidents. By catching tens of thousands of offenders in a single year, authorities argue the system makes roads safer.
However, civil rights groups stress that without careful limits, such surveillance can gradually undermine freedoms. Surveillance systems that began as targeted safety tools could morph into constant monitoring. To guard against this, they argue for stronger legislation around data handling, usage transparency, and independent audits.
Public and Legal Perspectives
Public trust is one of the biggest challenges for the MPST program. While many drivers back the crackdown on phone use, some worry about the sheer volume of personal data being collected.
Legal scholars warn that the success of intelligent enforcement depends on public confidence in the fairness and accuracy of AI technology. Without it, the public may question whether these tools exist primarily for safety or for government revenue collection.
For long-term acceptance, experts recommend community forums, public awareness campaigns, and regular third-party reviews to prove impartiality and build support.
Looking to the Future
The Queensland Audit Office’s recommendations focus on immediate improvements in governance and data transparency. Key suggestions include:
- Developing a comprehensive public data management framework
- Engaging privacy and civil liberties experts in oversight roles
- Expanding community engagement around objectives and safeguards
- Enhancing human-in-the-loop processes for higher accountability
The ultimate goal, according to the government, is a balance between life-saving AI enforcement and civil rights protection. By 2028, the AI Strategic Roadmap aims to create policies that can serve as a model for other states in Australia.