8 Questions to Ask AI Plan Review Vendors Before You Buy
When you evaluate AI plan review software, most demos will look impressive. You’ll see dashboards, hear about accuracy rates, and review case studies from other jurisdictions.
However, the questions that determine whether a solution will work in practice often don’t come up until months into implementation, or after you’ve gone live and discovered the tool doesn’t fit naturally within your process.
If you’re in the evaluation phase, these questions help differentiate AI plan review solutions built for government work from those built for other use cases and then retrofitted for public sector use.
1) What’s the turnaround time when an applicant uploads a PDF?
When an applicant uploads a PDF, how long does it take before they receive results they can act on? If the answer is “real-time” or “within minutes,” the solution was built with permit workflows in mind. Applicants can address issues before submitting, which means higher-quality applications reach reviewers.
Anything longer (hours or business days) can be a sign that the system was built for a different use case, such as automated code compliance during the design phase, when architects submit BIM or IFC files for review. Those tools are valuable for design professionals, but a compliant design doesn’t mean a permit-ready application. Automated code compliance tools don’t check for application completeness, zoning compliance, or the many other items plan reviewers verify during the permitting process.
Key takeaway: Real-time or near-instant results indicate the solution was built for permitting. Longer turnaround times (hours or days) suggest it was designed for a different use case.
2) How quickly can you implement regulatory changes?
Ideally, the answer should be 24 to 48 hours, not weeks. And those updates shouldn’t come with additional fees or require extensive configuration on your end.
It’s also worth asking how the solution handles grace periods, or transition windows, when both the outgoing and incoming codes are accepted. For example, if you’re moving from the 2018 to the 2021 IRC but accepting both during a grace period, can the solution handle that scenario?
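To make "supporting a grace period" concrete, the scenario above can be modeled as a date window during which both code editions are valid. This sketch is purely illustrative (the dates, names, and data structure are assumptions, not any vendor's actual implementation):

```python
from datetime import date

# Hypothetical transition window: both the outgoing and incoming
# editions are accepted between these dates (illustrative dates only).
GRACE_PERIODS = {
    ("IRC 2018", "IRC 2021"): (date(2024, 1, 1), date(2024, 6, 30)),
}

def accepted_editions(submitted: date) -> list[str]:
    """Return the code editions a submission may be reviewed under."""
    for (outgoing, incoming), (start, end) in GRACE_PERIODS.items():
        if submitted < start:
            return [outgoing]             # before the window: old code only
        if submitted <= end:
            return [outgoing, incoming]   # inside the window: both accepted
        return [incoming]                 # after the window: new code only
    return []
```

A solution that hard-codes a single active edition cannot express the middle case, which is exactly why it is worth asking the vendor how their system represents transition windows.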
Solutions that struggle to accommodate regulatory changes or support grace periods introduce recurring issues every time codes are updated, and codes update regularly.
Key takeaway: Updates should happen in 24-48 hours without extra fees, and the solution should handle grace periods when you’re accepting multiple code versions.
3) Can you support additions, alterations, and existing building codes?
Many AI plan review tools are designed for new construction. They work well when projects follow current codes and conditions are straightforward.
Additions, alterations, and existing building projects introduce complications, including grandfathered codes, existing conditions, and nuanced compliance scenarios that don’t fit neatly into standard checks.
When vendors claim they support these project types, ask whether they’ve successfully implemented them in another jurisdiction. There’s a difference between “our platform is technically capable of this” and “we have this running for a city like yours.”
Key takeaway: Look for vendors who have implemented these project types in another city, not just those who say they’re technically capable of it.
4) What other jurisdictions have you worked with that are similar to ours?
This question is less about checking boxes on a feature wish list and more about whether the vendor has experience with challenges like yours.
If you primarily process residential permits, ask whether the vendor has configured residential processes. If your department is small and has limited IT support, ask whether they have worked with jurisdictions facing similar constraints.
Looking at past implementations can help you get a sense of how smoothly your own might go. Vendors who have solved similar problems before can anticipate common pitfalls and know where implementation issues usually crop up.
Key takeaway: Similar past implementations can mean a smoother rollout for your jurisdiction. The vendor who knows what issues to expect also knows how to avoid them.
5) Does your solution check for compliance the way our reviewers work?
Jurisdictions show compliance in various ways. Some require measurements to be directly on plans, while others accept affidavits, notes, or additional forms.
If reviewers expect setback measurements to appear on the drawing itself, but the solution only validates that the numbers meet code (without verifying that they appear where reviewers need them), you haven’t actually saved your team time. Staff still need to manually verify where and how that information is presented.
The solution should be flexible enough to map to your existing review process. Your team shouldn’t have to change how they review plans just to accommodate the technology.
Key takeaway: The solution should fit your review process, not force your team to change how they work to accommodate the technology.
6) What do your metrics actually measure?
When vendors cite metrics like “99% faster approvals” or “80% reduction in review time,” it’s important to understand what those numbers mean.
An “80% reduction” compared to what baseline? Your current process or an industry average? And for which project types? “Faster approvals” could mean the overall permit timeline dropped from 60 days to 12, or it could mean individual code checks on the platform run faster. Both are improvements, but they address different parts of the process.
The more specific a vendor is about what they’re measuring and how, the easier it is to evaluate whether those improvements address your needs.
Key takeaway: Meaningful metrics specify what’s being measured, the baseline for comparison, and which project types they apply to. Vague claims like “80% faster” don’t tell you much without that context.
7) Can I speak with customers who’ve been live for at least six months?
Demos are great for showcasing what a system can do, but talking to customers who have been using the tool for an extended period gives you a better idea of how well it performs in real-world conditions.
Ask to speak with jurisdictions similar to yours in size, project mix, or team capacity. When you have those conversations, useful questions include:
- What surprised you after go-live?
- What didn’t work as expected?
- How responsive is the vendor when you need support?
- If you were evaluating again, what would you ask that you didn’t think to ask the first time?
Key takeaway: Talking to similar jurisdictions that have been live with the solution for 6 months or more shows you how it performs under real conditions, not just what it can do in a demo.
8) What do you need from us to implement this?
Some platforms require clean GIS data, while others depend on documented processes, completed audits, or API access to your existing permitting system before they can configure a solution.
CivCheck, for example, is natively built to work with PDFs and hand-drawn plans so that every applicant can upload any type of permit document, not just those created with the “right” tools.
Understanding these prerequisites ahead of time makes it easier to assess readiness and scope the internal effort needed to make it work. This way, you can also figure out if it’s the right time to implement or if some prep work is needed first.
Key takeaway: Know what’s required for implementation upfront to assess your readiness and determine if you need to do any prep work first.
Finding the right fit
Every jurisdiction operates under different constraints. What works for a large county with multi-review commercial projects might not be the right fit for a smaller city primarily permitting residential projects. The vendor’s answers to these questions give you a clearer idea of fit than a demo.
If you’re still not sure what you need in an AI plan check tool, an experienced vendor will walk through these questions with you as part of the evaluation process rather than forcing a one-size-fits-all solution.
