Jen Nieto
December 03, 2025
16 min read

Applications, requests, reviews, inspections, and other work are piling up faster than your team can process them. You’re losing good people to burnout or retirement, and your constituents are calling, emailing, and showing up at the counter, all asking the same question: “Why does this take so long?”

And your answer? “We’re doing the best we can with what we have.”

The old solutions of hiring more staff, extending hours, and telling everyone to work harder stopped working years ago. You can’t squeeze more hours out of days that are already maxed out.

Meanwhile, most cities ran deficits last year, while pension obligations keep climbing and traditional revenue sources dry up. You’d hire more people if you could, but the talent pool isn’t there. Experienced professionals in fields like building codes, permitting, and constituent services are retiring faster than new people are entering these careers. 

Your existing team is already working at its limit, and asking them to do more isn’t fair or sustainable. At the same time, residents expect a level of service comparable to what they experience in other areas of their lives. 

This is why cities and counties are turning to AI for support. This guide covers what’s already working, what’s holding governments back, what makes an AI project fundable, and how to build internal support.

Now let’s figure out how you can (and should) adopt AI. 

AI Is Here Now (And It’s Already Working)

Here’s the good news: You’re not a guinea pig. 

AI tools are already working in city and county governments across the country, with early adopters reporting 25-40% efficiency gains within the first 90 days. According to a report by the Boston Consulting Group, agencies can save up to 35% of budget costs over the next 10 years by using AI in areas like case processing. The savings come not from cutting people, but from letting technology handle repetitive tasks so your staff can focus on work that requires their expertise. 

The technology is proven, and your peers are already using it. But getting internal approval requires addressing some legitimate concerns. 

What’s Actually Holding Governments Back

Understandably, there are concerns about adopting AI that may be holding you and others back. Let’s address them: 

“Can we trust this technology, and what about our staff?”

You may have experienced technology vendors who promise the moon and deliver disappointment. And when it comes to AI, the stakes feel even higher. What if it makes a mistake? What if it’s biased? Will this replace our staff?

Cities are putting guardrails in place to address these concerns. Many require risk assessments, security checks, and testing for accuracy and bias before deployment. 

Many governments are starting with low-risk applications like chatbots that answer frequently asked questions or intake tools that flag incomplete applications. These aren’t high-stakes decisions where an AI error creates major problems. 

As for staff replacement concerns, there’s an important distinction between artificial intelligence and augmented intelligence.

Artificial intelligence removes the human element entirely and makes decisions autonomously. Augmented intelligence helps staff do their jobs better and faster while humans still make the final calls. Think of it like the difference between a self-driving car and a navigation app. One replaces the driver while the other makes the driver more effective. 

Take CivCheck’s AI plan review software as an example. Rather than performing plan checks independently, the platform’s AI guides staff through their reviews so they can make decisions faster. It augments staff (as the name suggests) rather than performing their job for them. 

“We don’t even know what qualifies.”

Many governments aren’t sure if AI can solve their specific problems or what would actually qualify for funding. Starting with the problem, not the technology, usually helps answer this. What’s your biggest bottleneck? Where are staff spending the most time on repetitive work? 

Cities report that starting with “low-hanging fruit” (tasks that are repetitive, time-consuming, and don’t require high-level decisions) helps build confidence and show value. Common starting points include processing routine forms, answering frequently asked questions, and organizing large document sets. 

“What if we implement this and it fails?”

The fear keeping decision-makers up at night is greenlighting an AI project that doesn’t work and wasting taxpayer money. The solution is to start small. 

Don’t bet your entire department on an untested system. Start with one department or one specific problem. Some AI tools can go live in a week. Measure results immediately, and if it works, expand. If it doesn’t, you haven’t lost much.

Cities successfully using AI often started with limited pilots that proved value quickly. Then, they fully committed after leadership saw results.

What Makes an AI Project Fundable 

Let’s use a permitting use case as an example. Remember that plan review backlog that’s been sitting for six months? The one you can’t seem to get through because your team is already maxed out? That’s fundable. “We want to explore AI” is not. 

What gets approved comes down to specifics: 

→ You can measure the problem. How long does a typical process take right now? How many calls are you fielding? What percentage of applications get kicked back? Pick the thing that is costing you the most money or time and show how AI tools will help. 
→ It improves existing work. You’re already processing permits, answering questions, reviewing applications. If AI can help you handle that existing workload faster or with fewer errors, that’s fundable. Proposing to launch an entirely new service you’ve never offered before is a much harder sell. 
→ Human oversight is built in. The AI can flag potential issues, suggest relevant code sections, or draft responses, but a person reviews everything and makes the final call. Whether it’s a chatbot answering questions or a tool checking plans, someone reviews the output before it goes to the public. For plan review, CivCheck is the only solution built specifically with this requirement in mind, requiring staff to review and sign off on every AI interpretation. That human-in-the-loop requirement is often what separates a fundable project from one that gets rejected outright.
→ Outcomes are tied to delivery. Before the application even goes in, you should know exactly what metric you’re moving and by how much. Think actual numbers you can track: We’ll reduce average review time from 12 days to 7 days. We’ll cut phone calls to the permit desk by 25%. 
→ It integrates with your existing systems. Complete system overhauls are expensive, risky, and politically difficult. But if an AI tool can be used on its own, or plug into what you’re already using, that’s much more attractive to leadership. For example, CivCheck can easily integrate with any permitting or plan review system that accepts web API calls, which means you can add it without overhauling your entire setup. The easier the integration, the more fundable the project. 
→ Data governance and privacy protections are in place. Where does the data go once the AI processes it? Who has access to it? How long do you keep it? What happens to personally identifiable information? You don’t need a 50-page security document before you apply for funding, but you do need clear answers. 
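Tying outcomes to delivery can be as simple as writing down baseline numbers, promised targets, and pilot results, then checking them against each other. The sketch below shows one way to do that; all figures and metric names are hypothetical examples, not any vendor’s API or any city’s actual data:

```python
# Minimal sketch: compare baseline metrics against pilot results to check
# whether the reductions promised to leadership were actually met.
# All numbers here are hypothetical examples.

def percent_change(before: float, after: float) -> float:
    """Percentage reduction from baseline; positive means an improvement."""
    return round((before - after) / before * 100, 1)

baseline = {"avg_review_days": 12.0, "permit_desk_calls": 400}   # before the pilot
pilot    = {"avg_review_days": 7.0,  "permit_desk_calls": 300}   # after 90 days
targets  = {"avg_review_days": 40.0, "permit_desk_calls": 25.0}  # promised % reductions

for metric, target in targets.items():
    achieved = percent_change(baseline[metric], pilot[metric])
    status = "met" if achieved >= target else "missed"
    print(f"{metric}: {achieved}% reduction (target {target}%) -> {status}")
```

A spreadsheet works just as well; the point is that the targets are written down before the pilot starts, so “did it work?” has an unambiguous answer.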

How Departments are Using AI Tools 

Cities and counties are adopting innovative AI tools to solve specific, high-impact problems. Here’s what’s working right now, organized by department and the technologies they’re using:

Customer Service & Constituent Engagement

Saratoga, California rolled out Hamlet, an AI platform that summarizes City Council agendas, recordings, and supporting materials to improve constituent outreach and transparency. 

Denver, Colorado is using Sunny AI to handle some of their 311 calls, projecting $2.8 million in savings. The AI handles after-hours calls and simple requests, allowing staff to focus on more complex tasks during business hours. 

Hartford, Connecticut partnered with Google to provide AI-powered, real-time translation services for city council and board meetings, with nearly 80 languages for residents to choose from. The goal is to foster trust and transparency in a city with a large immigrant population.  

Covington, Kentucky built an economic development chatbot to answer resident questions about opening and maintaining a business, including the required permits to operate legally and commercial properties available for sale or lease. 

→ What makes these fundable? Clear service delivery problems with measurable ROI. You can track call volumes, response times, and staffing costs. 

Plan Review

Honolulu, Hawaii reduced plan review time by 70% on average using CivCheck’s AI plan review software. Staff can complete reviews faster because the tool has already flagged potential issues, pulled relevant code sections, and run initial compliance checks. 

Seattle, Washington is working with CivCheck on permit application screening, scanning thousands of applications for omissions and other issues requiring resubmission, and flagging common errors so they can be addressed and prevented. The city’s ultimate goal is to reduce permitting times by half. 

New York City’s Department of Buildings partnered with CivCheck to pre-screen 14 residential alteration and enlargement plans against 50 regulation checks for missing information and code compliance. Plan reviewers reported a 25% time savings after using the tool. In the future, the city anticipates saving over 60 minutes per permit and speeding up approvals, giving reviewers more time to focus on expert-level work. 

→ What makes these fundable? Measurable outcomes (days from application to approval, review cycles per permit, staff hours per review) with human oversight built in. 

Infrastructure & Maintenance

Washington, D.C. is using AI for visual inspection of water mains and sewage pipes. Instead of sending inspection crews into every pipe on a schedule, AI analyzes video footage to identify problems that need attention. 

→ What makes this fundable? Clear cost problem with measurable outcomes: cost per inspection, number of problems caught early, reduction in emergency repairs. 

Data Analytics & Decision Support

Cambridge, Massachusetts, and Pittsburgh, Pennsylvania are using AI analytics to address traffic gridlock. They’re analyzing traffic patterns to improve signal timing and reduce congestion.

Seattle, Washington is using AI to process housing applications faster by automatically checking them for completeness and flagging missing information before a human reviewer picks them up. 

Indianapolis, Indiana invested in ethical AI training for government workers to teach them to use AI responsibly, focusing on explaining AI decisions to residents and keeping human oversight in government services.

The Bloomberg Philanthropies City Data Alliance provides technical assistance to cities implementing AI and data analytics projects. Austin, Boston, Dallas, Denver, Kansas City, and Newport News are all participating. Baltimore used the program to identify neighborhoods vulnerable to infrastructure failures. Tampa used it to identify areas most impacted by hurricanes in real time. 

→ What makes this fundable? Specific problems with measurable ROI: processing times, error rates, staff hours saved. 

The pattern is the same across all of these examples: start with a clear problem, pick a tool that addresses it, measure results, and scale what works. 

How Cities are Building AI Guardrails

Before departments adopt AI, many cities are putting frameworks in place. These internal guardrails usually include clear principles, a review process, and expectations for staff and vendors to follow. 

The goal is to help departments move faster, try new tools with less risk, and build internal trust that AI won’t create unmanageable fallout. These frameworks make it easier to say “yes” to AI because there’s a shared definition of what “responsible” looks like. 

Cities like San Francisco, San José, and Seattle are early leaders, having published formal AI guidance and launched cross-departmental governance programs. Others, including Austin, Boise, Boston, Denver, New York, and Washington, D.C., are rolling out their own frameworks based on their local needs. 

Here are a few examples:

  • Austin, TX created generative AI standards modeled after San José, formed an AI Advisory Committee, and published staff-facing guidance for responsible AI use.
  • Boise, ID introduced citywide AI regulation focused on ethical use, content validation, and accountability. Their “AI Ambassadors” program promotes cross-departmental knowledge sharing.
  • Boston, MA published employee guidance for using generative AI, emphasizing fact-checking, source validation, and disclosure when content is AI-generated.
  • Denver, CO launched its DenAI Summit to coordinate pilot projects, released a formal request for AI vendors, and began developing an internal AI governance framework.
  • New York, NY released a 37-action AI plan that focuses on public engagement, procurement reform, equity, and rules for responsible use.
  • Washington, D.C. codified AI principles including safety, privacy, equity, and accountability. Its AI Taskforce coordinates engagement and confirms human oversight across departments. 

These frameworks vary in form, but their function is the same: protect the public, support staff, and make innovation safer.

Core Framework Components to Borrow

If you’re ready to start or improve an internal AI review process, you can adapt common elements already working in other cities:

  • Clear guiding principles: Start with values like transparency, equity, human decision-making, and community impact. These keep the focus on outcomes that matter.
  • Defined governance roles: Small cross-departmental groups can help vet tools, flag risks, and coordinate efforts.
  • Pilot evaluation checklists: Seattle’s “Proof of Value” framework is one model. It helps departments measure results quickly and decide whether to scale. 
  • Internal and public engagement: Forums, surveys, and training help staff and residents build confidence in how AI is being used and by whom.
  • Data management and security: Include privacy reviews, procurement standards, and compliance with frameworks like NIST RMF.
  • Risk management and oversight: Use tiers to separate low-risk tools (like redaction or meeting summarization) from higher-stakes applications. Add human-in-the-loop requirements where needed.
  • Workforce readiness: Offer hands-on training, documentation support, and safe test environments where staff can experiment and learn.
  • Built-in flexibility: Design frameworks to evolve. Most frameworks treat pilots as learning opportunities and update policies as they go.

You can draw from published frameworks, such as those from Seattle, San Francisco, Austin, and San José, to draft your own.

How to Sell This to Leadership 

Getting internal approval to adopt and implement AI is about making a credible case that it solves a real problem, fits within your budget, and doesn’t introduce unnecessary risk for your organization. The case looks different depending on who you’re talking to. Here are some ideas on how to frame your pitch by role: 

City Manager

City managers are focused on operational performance, risk, and the public’s experience with government services. This audience doesn’t need to be convinced that AI can help. They need to understand how a specific solution solves a real, ongoing problem. Don’t pitch it as an “AI initiative.” Instead, present it as an improvement to service delivery that doesn’t require adding headcount or launching a major IT overhaul. 

Be prepared to answer questions about how it will be implemented, how performance will be measured, and what happens if it doesn’t deliver. 

Finance Director

Finance leadership cares about the responsible use of resources. The best framing here is to treat your project as a cost avoidance or efficiency strategy. Show how AI will reduce overtime or free up staff time for more expertise-driven work. Connect the use case to existing budget lines (software subscriptions, consulting, technology upgrades) rather than asking for new funds. 

Expect to be challenged on ROI. You’ll need to bring numbers, even if they’re estimates from pilot programs or similar jurisdictions. The more specific, the better. 

IT Director

IT personnel zero in on integration, data, and vendor compliance. A good approach is to position this as something that fits cleanly within your existing infrastructure. Lead with technical details like API compatibility, security documentation, and audit controls. 

Make it clear that human oversight can be built in (like it is for CivCheck’s plan review solution), and that this isn’t an automation free-for-all. If you treat IT as a strategic partner from day one rather than a last-minute reviewer, you’ll avoid unnecessary problems later. 

Department Heads

Department leaders care about the daily pressure on their teams, including rising workloads, burnout, and slow turnaround times. Don’t try to sell them on AI in the abstract. Show how a specific tool reduces repetitive tasks and gives staff more time to do the work they were hired to do. 

Your job is to explain how this starts small, requires minimal disruption, and improves work without adding more tasks to the team’s plate. 

Council and Board Members

When talking to council or board members, focus on the public impact. What they hear from constituents matters more than backend problems. Talk about how AI can reduce wait times, improve service delivery, and help residents get what they need without standing in line, calling multiple departments, or submitting the same documents twice. 

If you can reference another city or county that’s already using the tool and getting results, you’re much more likely to get buy-in. 

Mayor and Executives

Similar to the council and boards they lead, mayors and county executive leadership want visible wins for the public. They’re focused on delivering on campaign promises and making progress on signature priorities, such as affordable housing, small-business growth, and digital access. Connect your project to those themes, and highlight how it delivers a measurable outcome that the public will notice. 

Framing it as a low-risk pilot that can scale over time helps them see it as smart leadership, rather than a potentially risky tech experiment. 

What Decision-Makers Evaluate 

When your request lands on someone’s desk, they’ll evaluate three things: risk, alignment with priorities, and implementation readiness. Here’s what they’re looking for before saying yes: 

→ Risk assessment. Show proof that you’ve thought about what could go wrong and how you’re preparing for it. What if the AI makes a mistake? You have staff review everything. What if the vendor goes out of business? You’re keeping local copies of all your data. What if there’s a security breach? You’ve verified the vendor meets your IT security standards. These are straightforward questions with basic answers, but you need to have them ready. 
→ Security reviews. Your IT department will want to know how data is handled, stored, and protected. Bring vendor documentation early and loop IT in before decisions are made, not after you’ve already found an AI tool. This review can take time, and you don’t want it holding up funding. 
→ Performance metrics and testing plans. This means not just what you’ll measure, but how you’ll test this before going live. Maybe you run it on last month’s plans first. Maybe you pilot it with one type of review before expanding. Maybe you run AI reviews in parallel with manual reviews for the first month. Something that shows you’re not betting everything on day one. 
→ Bias detection and mitigation. Could this AI tool treat different groups of people differently in ways that create unfair outcomes? For most government AI tools, especially ones that apply the same regulations to everyone, the answer is probably no. But you need to show that you’ve thought about it and have a plan to monitor it. 

Red Flags That Stall or Sink a Project

  • No ROI. If you can’t explain what you’ll save or improve in concrete terms, your request is likely to be rejected. “Better efficiency” isn’t an answer, but “We’ll process 25% more plans without hiring additional staff” is. 
  • No data plan. You haven’t thought through where the data goes or who accesses it. This tells decision-makers you’re not ready to implement a new tool that will access sensitive public information.
  • No human oversight. If the AI tool is making decisions on its own with no review, that’s an automatic rejection in most frameworks. Build review into the process from the start.
  • Unclear vendor lock-in terms. Can you get data out if you need to? Can you switch vendors without losing everything? If you don’t know, find out as soon as possible.
  • No testing or validation plan. You’re planning just to flip the switch and hope it works. Even a basic testing phase makes your case stronger. 
  • Using free tools with problematic terms. If you’re using free tools, read the fine print carefully. The State of Colorado banned free ChatGPT on state devices because the terms of service violated state law. Often, the paid or government versions are structured differently. 

 

Using Problem-First RFPs to Explore AI Solutions

Some municipalities are finding that a two-phased RFP approach works well for AI procurement because the technology is changing quickly and most jurisdictions don’t yet know what’s possible. 

Louisville, Kentucky, took this approach. Instead of specifying exactly which AI tool they wanted, they issued an RFP that outlined their permitting backlog challenges, constituent service delays, and document processing bottlenecks. They asked vendors: “Can Your AI Fix a City Problem?” The first phase of the RFP will select 5-10 pilots to run for 3-6 months, then evaluate which deliver results worth scaling in phase two.

Why this approach works well for AI:

  • You don’t need to be an AI expert to write the RFP. You just need to know your problems.
  • Vendors propose innovative approaches you might not have considered. You get to see multiple approaches to the same problem.
  • You can gauge market interest before investing time in a detailed RFP.
  • It builds internal knowledge and helps you learn what works in your environment before making larger investments.

What to include in an AI-focused RFP:

  • Specific problems you’re trying to solve (processing X permits/month with X-day average turnaround)
  • Must-have requirements (human oversight, API integration, data residency)
  • Your constraints (budget range, timeline, existing systems)
  • Questions about implementation (training requirements, support model, testing approach)

Traditional RFP approaches work too. If you know exactly what you need (like Honolulu seeking a plan review tool or Denver looking for 311 AI support), a traditional RFP with detailed specifications is the right path. But if you’re looking into AI across multiple departments and aren’t sure where to start, this problem-first approach can save you months of work.

Is Your Project Justifiable? 5 Questions to Ask Your Team

Run through these before you start any formal request process. If you can’t answer yes to all five, you’re not ready yet.

  • Can we measure the current problem? You need actual numbers. How many hours does this take? How many applications get rejected? How many calls are you answering?
  • Does this solve a real pain point? Not something that would be nice to fix, but something that’s actively causing internal issues. Your team is overworked, residents are frustrated, or you’re losing people because the work is too demanding. 
  • Can we define success in six months? You need specific outcomes you can measure. Think: we’ll review 20% more plans, or we’ll cut review time from 10 days to 6. 
  • Do the people who’ll use it want it? If your staff thinks this is a terrible idea, you have a problem that leadership approval won’t solve. Get their buy-in first. 
  • Have other agencies done this successfully? You don’t need to be the guinea pig. If you can point to another city or county that tried this and got results, your case is much stronger. 

 

Are you ready to implement AI?

You’ve seen what’s fundable and what other jurisdictions are doing with AI. Here are eight steps you can take to get started.

Step 1: Assess your readiness

Before you pursue a project, figure out if you’re actually ready to implement an AI tool:

  • Do you have usable data?
  • Do you have an executive champion?
  • Does IT support this?
  • Can you measure the problem?

Step 2: Start with one problem

Don’t try to fix everything at once. Pick one specific problem that AI can help with. Go back to the What Makes an AI Project Fundable section. Your starting problem should be measurable, tied to existing work, and solvable with human oversight built in. 

  • Good starting problems: high volume of repetitive questions, incomplete applications, manual reviews. 
  • Bad starting problems: vague goals, policy issues disguised as process issues, problems you can’t measure.

Step 3: Build your case internally first

You need buy-in before you pursue funding. Talk to the people who will use the tool first. If your staff is not on board, find out why and address those concerns before pitching to leadership.

Then build your case up the chain, focusing each message on the specific leadership decision-maker. Show them the problem, what it’s costing now, how AI fixes it, and how much funding you need to implement it. Share examples of peer jurisdictions that did this successfully. Be ready to answer: 

  • What’s the cost? 
  • How do we know it works? 
  • Who reviews the AI’s output? 
  • What happens to our data?

Step 4: Identify your funding path

Start with your existing department budget if possible. Look at line items for software subscriptions, professional services, consulting, or operational improvements. AI tools often fit within categories you’re already budgeted for. 

If existing budget won’t work, meet with your finance or grants department to see if there are other options. They can help you identify which budget categories make sense or if there are programs you can tap into.

Step 5: Address procurement (if needed)

Check if your city has AI-specific procurement guidelines. Get IT and legal involved early, and look at RFPs from other cities and counties for guidance on adopting and implementing similar tools. 

If you need guidance on what to include, review the What Decision-Makers Evaluate section again. Your RFP should cover how the solution addresses your specific pain points and how the vendor handles risk assessment, security, performance metrics, and bias detection. 

Step 6: Look at city frameworks for guidance

You don’t need to create governance from scratch. Cities like Seattle, San Francisco, Austin, and San José have published frameworks you can borrow. Look at their pilot evaluation checklists, risk management tiers, and data governance approaches.

Even if your city doesn’t require a formal framework, using elements from these examples will make your project stronger and easier to approve.

Step 7: Plan for implementation and adoption

Vendor training is a start, but you’ll also want to use free and low-cost resources, run the new AI tool in parallel with your current process before going live, and be honest about what it can and can’t do. Here are a few places to find training:

  • InnovateUS offers AI learning modules that thousands of government employees in New York, San Francisco, New Jersey, and Indianapolis are using. 
  • NACo AI Leadership Academy is a 6-week program for county leaders covering AI implementation strategies, funding approaches, and lessons learned from counties already using AI.
  • Your state CIO office may offer AI training programs. 
 
Step 8: Prove it worked (so you can get more funding)

Track measurable outcomes (review time reduced from X to Y, call volume down Z%) you promised in your pitch to leadership. Share results at conferences and with your city’s decision-makers. Use early wins to justify bigger projects. The best way to get funding for your next AI project is to prove that the first project was successful. 

What Happens Next

Your team is stretched thin, the work keeps coming, and there’s no relief in sight. You’d add staff if you could, but the budget isn’t there, and the talent pool is shrinking. Meanwhile, residents deserve better service, and your people deserve support. 

Now, you have what you need to move forward. You have a measurable bottleneck, proof that the technology works, budget categories where this fits, and a blueprint for making it happen. Pick one problem, the one costing you the most time or creating the most frustration. Build your case using the frameworks in this guide. Start small, prove it works, and then scale from there. 

Your team will thank you, your residents will notice, and you’ll have a path forward that works.

PS - If you want a copy of this ebook, you can download it below.

PPS - Want to see what AI tools can do for your organization? Reach out to see how CivCheck is helping cities reduce plan review time by up to 80% while keeping humans in charge of every decision. 
