Imagine if the DMV replaced its human road test examiners with robots, claiming they were 80% accurate and only occasionally made mistakes.
You probably wouldn’t feel comfortable handing over the keys for your driving test, right?
Not because technology can’t help, but because removing human judgment from decisions that directly affect public safety raises serious ethical concerns.
Permitting and plan review decisions are no different. Accuracy isn’t just a performance metric; it’s an ethical responsibility tied directly to public safety in the built environment.
Ethical AI in permitting and plan review isn’t about expecting AI to be accurate 100% of the time. It’s about understanding its limits and designing systems that actively manage risk.
So what does ethical AI software actually look like? And how can you tell if a system was responsibly designed?
The reality of AI accuracy in permitting & plan review
Before discussing ethics, it’s important to understand how even the most well-intentioned AI systems can introduce risk.
Why? Because most AI systems rely on multi-step chains of reasoning. Even when each individual step is almost always accurate, errors compound from step to step, so overall accuracy ends up lower than any single step’s.
For example, imagine using AI plan review software to calculate room areas from PDFs. The system might follow this process:
- Locate walls and rooms
- Find the scale on the drawing
- Extract numerical scale values
- Measure pixel dimensions and convert using scale
- Calculate area
Now assume the AI is 90% accurate at each step.
Individually, that sounds great.
But when the errors compound across all five steps? You get 0.9⁵, or roughly 59% accuracy overall.
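To make the compounding effect concrete, here’s a quick back-of-the-envelope sketch in Python. The five steps and the 90% figure are the illustrative numbers from above, not measurements from any real system:

```python
# Hypothetical numbers: a five-step pipeline, each step 90% accurate on its own.
step_accuracies = [0.90] * 5  # locate rooms, find scale, extract values, convert, calculate area

overall = 1.0
for accuracy in step_accuracies:
    overall *= accuracy  # the final answer is only right if every step before it was right

print(f"Per-step accuracy: {step_accuracies[0]:.0%}")  # -> 90%
print(f"End-to-end accuracy: {overall:.0%}")           # -> 59%
```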
And that’s where ethics enters the conversation.
Rather than pretending AI accuracy limitations don’t exist, ethical AI systems address them directly by incorporating human input into the process.
Why ethical AI systems keep humans in the loop by design
Because AI accuracy has practical limits, ethical systems are built to keep humans in the loop (HITL), requiring review, guidance, and sign-off throughout the process.
Returning to the example above, imagine three of the five steps now require human confirmation, making those steps effectively 100% accurate.
Overall accuracy improves to 81%, because errors can now come only from the two remaining automated steps: 0.9² = 81%.
That’s a meaningful difference. And in real-world permitting workflows, the gains can be even more dramatic when human experts are empowered to guide, correct, and validate AI output.
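Here’s the same back-of-the-envelope arithmetic with human checkpoints added, under the assumption from above that a human-confirmed step is effectively 100% accurate:

```python
# Same illustrative pipeline: five steps at 90% accuracy each,
# but three of them now require human confirmation before proceeding.
PER_STEP_ACCURACY = 0.90
TOTAL_STEPS = 5
HUMAN_CONFIRMED_STEPS = 3  # assumption: a confirmed step is effectively 100% accurate

# Only the remaining automated steps can still introduce errors.
automated_steps = TOTAL_STEPS - HUMAN_CONFIRMED_STEPS
overall = PER_STEP_ACCURACY ** automated_steps

print(f"End-to-end accuracy with humans in the loop: {overall:.0%}")  # -> 81%
```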
The problem is, not every system was designed this way.
How to tell if an AI system is ethical and well designed
If you’re evaluating AI software for permitting or plan review, these questions can quickly reveal whether ethics were part of the design:
→ How easy is it to blindly accept what the AI says?
If the answer is “very,” that’s a red flag. Ethical AI systems create friction by keeping humans actively in control.
→ Can I override the AI at any moment?
A trustworthy system always allows the human expert to make the final call.
→ Does the system show exactly what the AI did and assumed?
Ethical AI must be explainable. Hidden conclusions are dangerous.
→ Can I correct the AI’s assumptions before it draws conclusions?
Intervening only after the conclusions are drawn defeats the purpose.
→ Is friction intentionally built into critical steps?
If a system encourages rubber-stamping rather than thoughtful review, it’s not an ethical AI system. It’s a liability.
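To ground these questions, here’s a minimal, hypothetical sketch of what an explicit confirmation gate might look like for the scale-extraction step from earlier. Every name here is invented for illustration; it only shows the pattern of surfacing the AI’s evidence and giving the human the final say:

```python
from dataclasses import dataclass

@dataclass
class ScaleSuggestion:
    """Everything the AI did and assumed, surfaced rather than hidden."""
    value: str         # e.g. the scale string read from the title block
    source_text: str   # the exact drawing text the value was extracted from
    confidence: float  # shown to the reviewer; never acted on silently

def confirm_scale(suggestion: ScaleSuggestion) -> str:
    """Deliberate friction: nothing downstream runs until a human confirms or corrects."""
    print(f"AI read scale {suggestion.value!r} from {suggestion.source_text!r} "
          f"(confidence {suggestion.confidence:.0%})")
    answer = input("Accept this scale? [y to accept, or type a correction]: ").strip()
    return suggestion.value if answer.lower() == "y" else answer  # the human makes the final call
```

The specifics don’t matter; what matters is that the assumption is visible, correctable before any area is calculated, and confirmed by a person rather than rubber-stamped.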
Ethical AI is about more than just compliance
Ultimately, if AI is introduced into the permitting process without adequate safeguards, the risk isn’t just missed code violations; it’s the complete erosion of public trust.
The future of permitting will absolutely involve AI—but the only acceptable future is one where the technology is safe, transparent, accountable, and built around the human experts that protect communities.
Ethical AI doesn’t replace human expertise. It amplifies it by reducing repetitive work, catching oversights, enforcing consistency, and speeding up low-risk tasks.
But only when it is designed to partner with humans, not replace them.