Applications, requests, reviews, inspections, and other work are piling up faster than your team can process them. You’re losing good people to burnout or retirement, and your constituents are calling, emailing, and showing up at the counter, all asking the same question: “Why does this take so long?”
And your answer? “We’re doing the best we can with what we have.”
The old solutions of hiring more staff, extending hours, and telling everyone to work harder stopped working years ago. You can’t squeeze more hours out of days that are already maxed out.
Meanwhile, most cities ran deficits last year, while pension obligations keep climbing and traditional revenue sources dry up. You’d hire more people if you could, but the talent pool isn’t there. Experienced professionals in fields like building codes, permitting, and constituent services are retiring faster than new people are entering these careers.
Your existing team is already working at its limit, and asking them to do more isn’t fair or sustainable. At the same time, residents expect a level of service comparable to what they experience in other areas of their lives.
This is why cities and counties are turning to AI for support. This guide outlines:
Now let’s figure out how you can (and should) adopt AI.
Here’s the good news: You’re not a guinea pig.
AI tools are already working in city and county governments across the country, with early adopters reporting 25-40% efficiency gains within the first 90 days. According to a report by the Boston Consulting Group, agencies can save up to 35% of budget costs over the next 10 years by using AI in areas like case processing. Those savings come not from cutting people, but from letting technology handle repetitive tasks so your staff can focus on work that requires their expertise.
The technology is proven, and your peers are already using it. But getting internal approval requires addressing some legitimate concerns.
Understandably, there are concerns about adopting AI that may be holding you and others back. Let’s address them:
You’ve probably watched technology vendors promise the moon and deliver disappointment. And when it comes to AI, the stakes feel even higher. What if it makes a mistake? What if it’s biased? Will this replace our staff?
Cities are putting guardrails in place to address these concerns. Many require risk assessments, security checks, and testing for accuracy and bias before deployment.
Many governments are starting with low-risk applications like chatbots that answer frequently asked questions or intake tools that flag incomplete applications. These aren’t high-stakes decisions where an AI error creates major problems.
As for staff replacement concerns, there’s an important distinction between artificial intelligence and augmented intelligence.
Artificial intelligence removes the human element entirely and makes decisions autonomously. Augmented intelligence helps staff do their jobs better and faster while humans still make the final calls. Think of it like the difference between a self-driving car and a navigation app. One replaces the driver while the other makes the driver more effective.
Take CivCheck’s AI plan review software as an example. Rather than performing plan checks independently, the platform’s AI guides staff through their reviews so they can make decisions faster. It augments staff (as the name suggests) rather than performing their job for them.
Many governments aren’t sure if AI can solve their specific problems or what would actually qualify for funding. Starting with the problem, not the technology, usually helps answer this. What’s your biggest bottleneck? Where are staff spending the most time on repetitive work?
Cities report that starting with “low-hanging fruit” (tasks that are repetitive, time-consuming, and don’t require high-level decisions) helps build confidence and show value. Common starting points include processing routine forms, answering frequently asked questions, and organizing large document sets.
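To make “flagging incomplete applications” concrete, here is a minimal, rules-based sketch of the kind of triage these intake tools automate. The field names and required-field rules are hypothetical placeholders, not any vendor’s actual logic; a real deployment would pull them from your permitting system and layer AI on top for documents that rules alone can’t read.

```python
# Minimal sketch: flag incomplete permit applications before a human reviews them.
# Field names and records below are hypothetical placeholders for illustration only.

REQUIRED_FIELDS = ["applicant_name", "parcel_id", "project_description", "site_plan_attached"]

def flag_incomplete(application: dict) -> list[str]:
    """Return the required fields that are missing or empty."""
    return [field for field in REQUIRED_FIELDS if not application.get(field)]

applications = [
    {"applicant_name": "Jane Doe", "parcel_id": "045-123-07",
     "project_description": "Deck addition", "site_plan_attached": True},
    {"applicant_name": "Acme LLC", "parcel_id": "",
     "project_description": "Tenant improvement", "site_plan_attached": False},
]

for app in applications:
    missing = flag_incomplete(app)
    status = "ready for review" if not missing else f"flag for resubmission: missing {', '.join(missing)}"
    print(f"{app['applicant_name']}: {status}")
```

Even a simple check like this shows why intake screening is a low-risk starting point: an error means an applicant gets asked to confirm a field, not a wrong decision on their project.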
The fear keeping decision-makers up at night is greenlighting an AI project that doesn’t work and wasting taxpayer money. The solution is to start small.
Don’t bet your entire department on an untested system. Start with one department or one specific problem. Some AI tools can go live in a week. Measure results immediately, and if it works, expand. If it doesn’t, you haven’t lost much.
Cities successfully using AI often started with limited pilots that proved value quickly. Then, they fully committed after leadership saw results.
Let’s use a permitting use case as an example. Remember that plan review backlog that’s been sitting for six months? The one you can’t seem to get through because your team is already maxed out? That’s fundable. “We want to explore AI” is not.
What gets approved comes down to specifics:
Cities and counties are adopting innovative AI tools to solve specific, high-impact problems. Here’s what’s working right now, organized by department and the technologies they’re using:
Saratoga, California rolled out Hamlet, an AI platform that summarizes City Council agendas, recordings, and supporting materials to improve constituent outreach and transparency.
Denver, Colorado is using Sunny AI to handle some of its 311 calls, projecting $2.8 million in savings. The AI handles after-hours calls and simple requests, allowing staff to focus on more complex tasks during business hours.
Hartford, Connecticut partnered with Google to provide AI-powered, real-time translation services for city council and board meetings, with nearly 80 languages for residents to choose from. The goal is to foster trust and transparency in a city with a large immigrant population.
Covington, Kentucky built an economic development chatbot to answer resident questions about opening and maintaining a business, including the required permits to operate legally and commercial properties available for sale or lease.
Honolulu, Hawaii reduced plan review time by 70% on average using CivCheck’s AI plan review software. Staff can complete reviews faster because the tool has already flagged potential issues, pulled relevant code sections, and run initial compliance checks.
Seattle, Washington is working with CivCheck on permit application screening, scanning thousands of applications for omissions and other issues requiring resubmission, and flagging common errors so they can be addressed and prevented. The city’s ultimate goal is to reduce permitting times by half.
New York City’s Department of Buildings partnered with CivCheck to pre-screen 14 residential alteration and enlargement plans against 50 regulation checks for missing information and code compliance. Plan reviewers reported a 25% time savings after using the tool. In the future, the city anticipates saving over 60 minutes per permit and speeding up approvals, giving reviewers more time to focus on expert-level work.
Washington, D.C. is using AI for visual inspection of water mains and sewage pipes. Instead of sending inspection crews into every pipe on a schedule, AI analyzes video footage to identify problems that need attention.
Cambridge, Massachusetts, and Pittsburgh, Pennsylvania are using AI analytics to address traffic gridlock. They’re analyzing traffic patterns to improve signal timing and reduce congestion.
Seattle, Washington is using AI to process housing applications faster by automatically checking them for completeness and flagging missing information before a human reviewer picks them up.
Indianapolis, Indiana invested in ethical AI training for government workers to teach them to use AI responsibly, focusing on explaining AI decisions to residents and keeping human oversight in government services.
The Bloomberg Philanthropies City Data Alliance provides technical assistance to cities implementing AI and data analytics projects. Austin, Boston, Dallas, Denver, Kansas City, and Newport News are all participating. Baltimore used the program to identify neighborhoods vulnerable to infrastructure failures. Tampa used it to identify areas most impacted by hurricanes in real time.
The pattern is the same across all of these examples: start with a clear problem, pick a tool that addresses it, measure results, and scale what works.
Before departments adopt AI, many cities are putting frameworks in place. These internal guardrails usually include clear principles, a review process, and expectations for staff and vendors to follow.
The goal is to help departments move faster, try new tools with less risk, and build internal trust that AI won’t create unmanageable fallout. These frameworks make it easier to say “yes” to AI because there’s a shared definition of what “responsible” looks like.
Cities like San Francisco, San José, and Seattle are early leaders, having published formal AI guidance and launched cross-departmental governance programs. Others, including Austin, Boise, Boston, Denver, New York, and Washington, D.C., are rolling out their own frameworks based on their local needs.
Here are a few examples:
These frameworks vary in form, but their function is the same: protect the public, support staff, and make innovation safer.
If you’re ready to start or improve an internal AI review process, you can adapt common elements already working in other cities:
You can draw from published frameworks to draft your own. Here are links to help get you started:
Getting internal approval to adopt and implement AI is about making a credible case that it solves a real problem, fits within your budget, and doesn’t introduce unnecessary risk for your organization. The case looks different depending on who you’re talking to. Here are some ideas on how to frame your pitch by role:
City managers are focused on operational performance, risk, and the public’s experience with government services. This audience doesn’t need to be convinced that AI can help. They need to understand how a specific solution solves a real, ongoing problem. Don’t pitch it as an “AI initiative.” Instead, present it as an improvement to service delivery that doesn’t require adding headcount or launching a major IT overhaul.
Be prepared to answer questions about how it will be implemented, how performance will be measured, and what happens if it doesn’t deliver.
Finance leadership cares about the responsible use of resources. The best framing here is to treat your project as a cost avoidance or efficiency strategy. Show how AI will reduce overtime or free up staff time for more expertise-driven work. Connect the use case to existing budget lines (software subscriptions, consulting, technology upgrades) rather than asking for new funds.
Expect to be challenged on ROI. You’ll need to bring numbers, even if they’re estimates from pilot programs or similar jurisdictions. The more specific, the better.
IT personnel zero in on integration, data, and vendor compliance. A good approach is to position this as something that fits cleanly within your existing infrastructure. Lead with technical details like API compatibility, security documentation, and audit controls.
Make it clear that human oversight can be built in (like it is for CivCheck’s plan review solution), and that this isn’t an automation free-for-all. If you treat IT as a strategic partner from day one rather than a last-minute reviewer, you’ll avoid unnecessary problems later.
Department leaders care about the daily pressure on their teams, including rising workloads, burnout, and slow turnaround times. Don’t try to sell them on AI in the abstract. Show how a specific tool reduces repetitive tasks and gives staff more time to do the work they were hired to do.
Your job is to explain how this starts small, requires minimal disruption, and improves work without adding more tasks to the team’s plate.
When talking to council or board members, focus on the public impact. What they hear from constituents matters more than backend problems. Talk about how AI can reduce wait times, improve service delivery, and help residents get what they need without standing in line, calling multiple departments, or submitting the same documents twice.
If you can reference another city or county that’s already using the tool and getting results, you’re much more likely to get buy-in.
Similar to the councils and boards they lead, mayors and county executives want visible wins for the public. They’re focused on delivering on campaign promises and making progress on signature priorities, such as affordable housing, small-business growth, and digital access. Connect your project to those themes, and highlight how it delivers a measurable outcome that the public will notice.
Framing it as a low-risk pilot that can scale over time helps them see it as smart leadership, rather than a potentially risky tech experiment.
When your request lands on someone’s desk, they’ll evaluate three things: risk, alignment with priorities, and how the implementation will roll out. Here’s what they’re looking for before saying yes:
Some municipalities are finding that a two-phased RFP approach works well for AI procurement because the technology is changing quickly and most jurisdictions don’t yet know what’s possible.
Louisville, Kentucky, took this approach. Instead of specifying exactly which AI tool they wanted, they issued an RFP that outlined their permitting backlog challenges, constituent service delays, and document processing bottlenecks. They asked vendors: “Can Your AI Fix a City Problem?” The goal of the first phase of the RFP is to select 5-10 pilots to run for 3-6 months, then evaluate which deliver results worth scaling in phase two.
Why this approach works well for AI:
What to include in an AI-focused RFP:
Traditional RFP approaches work too. If you know exactly what you need (like Honolulu seeking a plan review tool or Denver looking for 311 AI support), a traditional RFP with detailed specifications is the right path. But if you’re looking into AI across multiple departments and aren’t sure where to start, this problem-first approach can save you months of work.
Run through these before you start any formal request process. If you can’t answer yes to all five, you’re not ready yet.
You’ve seen what’s fundable and what other jurisdictions are doing with AI. Here are eight steps you can take to get started.
Before you pursue a project, figure out if you’re actually ready to implement an AI tool:
Don’t try to fix everything at once. Pick one specific problem that AI can help with. Go back to the What Makes an AI Project Fundable section. Your starting problem should be measurable, tied to existing work, and solvable with human oversight built in.
You need buy-in before you pursue funding. Talk to the people who will use the tool first. If your staff is not on board, find out why and address those concerns before pitching to leadership.
Then build your case up the chain, tailoring each message to the specific decision-maker. Show them the problem, what it’s costing now, how AI fixes it, and how much funding you need to implement it. Share examples of peer jurisdictions that did this successfully. Be ready to answer:
Start with your existing department budget if possible. Look at line items for software subscriptions, professional services, consulting, or operational improvements. AI tools often fit within categories you’re already budgeted for.
If existing budget won’t work, meet with your finance or grants department to see if there are other options. They can help you identify which budget categories make sense or if there are programs you can tap into.
Check if your city has AI-specific procurement guidelines. Get IT and legal involved early, and look at RFPs from other cities and counties for guidance on adopting and implementing similar tools.
If you need guidance on what to include, review the What Decision-Makers Evaluate section again. Your RFP should cover how the solution addresses your specific pain points and how the vendor handles risk assessment, security, performance metrics, and bias detection.
You don’t need to create governance from scratch. Cities like Seattle, San Francisco, Austin, and San José have published frameworks you can borrow. Look at their pilot evaluation checklists, risk management tiers, and data governance approaches.
Even if your city doesn’t require a formal framework, using elements from these examples will make your project stronger and easier to approve.
Vendor training is a start, but you’ll also want to use free and low-cost resources, run the new AI tool in parallel with your current process before going live, and be honest about what it can and can’t do. Here are a few places to find training:
Track the measurable outcomes you promised in your pitch to leadership (review time reduced from X to Y, call volume down Z%). Share results at conferences and with your city’s decision-makers. Use early wins to justify bigger projects. The best way to get funding for your next AI project is to prove that the first project was successful.
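If it helps to see the math, here is a minimal sketch of turning baseline and pilot measurements into the before/after percentages leadership will ask for. The metric names and numbers are made-up placeholders, not results from any jurisdiction.

```python
# Minimal sketch: convert pilot measurements into the before/after numbers promised in the pitch.
# Baseline and pilot figures are illustrative placeholders only.

metrics = {
    "avg_plan_review_days": {"baseline": 42.0, "pilot": 28.0},
    "daytime_311_calls_per_week": {"baseline": 1500.0, "pilot": 1140.0},
}

for name, values in metrics.items():
    baseline, pilot = values["baseline"], values["pilot"]
    pct_reduction = (baseline - pilot) / baseline * 100  # percent reduction vs. baseline
    print(f"{name}: {baseline:g} -> {pilot:g} ({pct_reduction:.0f}% reduction)")
```

Reporting the same few numbers in the same format each quarter makes the pilot’s results easy to compare against the promises in your original pitch.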
Your team is stretched thin, the work keeps coming, and there’s no relief in sight. You’d add staff if you could, but the budget isn’t there, and the talent pool is shrinking. Meanwhile, residents deserve better service, and your people deserve support.
Now, you have what you need to move forward. You have a measurable bottleneck, proof that the technology works, budget categories where this fits, and a blueprint for making it happen. Pick one problem, the one costing you the most time or creating the most frustration. Build your case using the frameworks in this guide. Start small, prove it works, and then scale from there.
Your team will thank you, your residents will notice, and you’ll have a path forward that works.
PS - If you want a copy of this ebook, you can download it below.
PPS - Want to see what AI tools can do for your organization? Reach out to see how CivCheck is helping cities reduce plan review time by up to 80% while keeping humans in charge of every decision.