If you’re looking to adopt AI across your team, it’s important to understand that not all AI behaves the same or can be applied to every problem.
Many technologies we use today can be considered "AI," such as search engines, spell-check tools, and even the built-in chess app on your computer. But just as these technologies look and behave differently on the surface, the methods, models, and algorithms used to build their "intelligence" vary too.
So, what are the different AI models—and how do you choose the right one?
To start, consider three factors:

- What data goes in (your inputs)
- What data comes out (your outputs)
- What type of task you need the AI to perform
Once you’ve defined your inputs, outputs, and task, you can then explore which AI technology will work best for your use case.
Let’s look at the six most common AI models and how governments are already putting them to work.
Large Language Models (LLMs)

Data Input: Text/Words
Data Output: Text/Words
Type of Task: Generation
- Summarizing and rephrasing text
- Expanding on text
- Completing fill-in-the-blank exercises
- Translating languages
- Drafting proposals, documentation, job descriptions, etc.
- Summarizing legislation when doing policy research
- Translating public-facing content for accessibility
LLMs generate responses by predicting words based on learned language patterns. They don’t “understand” meaning, which can occasionally lead to inaccurate or fabricated results.
Examples: ChatGPT, Gemini, Claude, etc.
An LLM analyzes patterns in vast amounts of text and predicts the next most likely word in a sequence. For example, given “Once upon a,” it predicts “time.” With enough training data, these models can generate fluent, human-like responses across a range of topics.
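The next-word prediction idea can be sketched with a toy bigram model. This is a drastic simplification of a real LLM (which uses neural networks trained on billions of words), and the tiny "corpus" below is invented for illustration:

```python
from collections import Counter, defaultdict

# Tiny stand-in for training data; a real LLM trains on billions of words.
corpus = "once upon a time there was a king once upon a time there was a queen".split()

# Count which word follows each word (a "bigram" model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("upon"))  # -> a
print(predict_next("a"))     # -> time
```

Given "once upon a," the model predicts "time" simply because that continuation was most frequent in its training data, which is the same statistical principle behind an LLM's fluency.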
Retrieval-Augmented Generation (RAG)

Data Input: Text/Words
Data Output: Text/Words
Type of Task: Generation
- Answering questions using a specified knowledge base (e.g. support chatbots)
- Chatbots that answer citizen questions about programs, forms, or deadlines
- Internal tools that help staff quickly find policies, benefits, or procedures
RAG models rely on the accuracy and organization of their knowledge base. They still use LLMs to generate responses, so quality depends on both the underlying data and the model’s ability to retrieve relevant information.
Examples: Perplexity (AI-powered internet search engine)
RAG combines two technologies: a search engine and an LLM. First, it retrieves the most relevant documents for a user’s question. Then, it passes those documents and the question into an LLM, which generates a concise, conversational response grounded in real information.
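The retrieve-then-generate pattern can be sketched as follows. The knowledge base, keyword-overlap scoring, and prompt format below are toy examples invented for illustration; a production RAG system would use embedding-based search and pass the prompt to a real LLM API:

```python
# Toy knowledge base: topic tags mapped to source documents.
knowledge_base = {
    "parking permits": "Residential parking permits are renewed every June at City Hall.",
    "dog licenses": "Dog licenses cost $15 and are available online.",
    "trash pickup": "Trash is collected weekly; holiday schedules shift pickup by one day.",
}

def retrieve(question, top_k=1):
    """Step 1: rank documents by simple keyword overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(
        knowledge_base.items(),
        key=lambda item: len(words & set(item[0].split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def build_prompt(question):
    """Step 2: combine retrieved context with the question for an LLM."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("When do I renew my parking permit?"))
```

Because the prompt grounds the LLM in a retrieved document, the answer quality depends on both the retrieval step and the underlying data, which is exactly the dependency noted above.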
Generative Adversarial Networks (GANs)

Data Input: Text, Image (most of the time)
Data Output: Image
Type of Task: Generation
- Generating realistic images or visual concepts
- Putting together conceptual images for proposals (e.g. helping communities envision alternative uses of a property/city land for parks, public works, art, etc.)
GANs can only generate images similar to those they were trained on, so outputs may not always reflect entirely new or unseen visuals.
Examples: DALL-E 3, Adobe Firefly, Microsoft Copilot, Midjourney
GANs consist of two models: a generator, which creates images, and a discriminator, which determines whether those images look real. Through training, both models improve—the generator learns to create increasingly realistic images, while the discriminator sharpens its ability to detect fakes.
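The adversarial training dynamic can be sketched with a toy one-dimensional example. Here the "images" are just numbers, and the update rules are invented for illustration; a real GAN trains two neural networks against each other with gradient descent:

```python
import random

random.seed(0)
REAL_MEAN = 10.0   # "real" data cluster around 10

gen_mean = 0.0     # generator starts far from the real distribution
threshold = 5.0    # discriminator: values above the threshold look "real"

for step in range(200):
    fake = random.gauss(gen_mean, 1.0)
    fooled = fake > threshold       # discriminator judges the fake sample
    if not fooled:
        gen_mean += 0.1             # generator improves whenever it is caught
    # Discriminator adapts, splitting the difference between real and fake.
    threshold = 0.9 * threshold + 0.1 * (REAL_MEAN + gen_mean) / 2

print(round(gen_mean, 1))  # generator has drifted toward the real data
```

The same feedback loop drives image GANs: each time the discriminator catches a fake, the generator adjusts, so its outputs steadily move toward the real data distribution.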
Computer Vision

Data Input: Text, Image (most of the time)
Data Output: Category Label (Text)
Type of Task: Classification
- Pattern Matching (finding parts of an image that look similar to a provided image)
- Image classification: what kind of thing is this?
- Object detection (finding the bounding boxes for parts of an image that fit a particular category)
- Digitization of paper-based government forms
- Finding relevant information in government documents (e.g. permit plan review, front-desk/counter work)
- Evaluating environmental conditions of natural resources over time (like coastline erosion)
- Facial recognition for security (e.g. TSA PreCheck/airport security)
- Evaluating the conditions of public infrastructure assets like bridges and sewage pipes using drone-captured images
Neural network–based models require large training datasets. Traditional rule-based methods may struggle with imperfect images, such as blurry scans or overlapping text.
Examples: OCR in Adobe Acrobat (the ability to search for and find text in PDFs), Facial Recognition on iPhones
Computer vision models learn to identify patterns by analyzing pixel-level similarities. Neural networks can automatically learn which patterns correspond to specific objects or features, allowing systems to recognize text, shapes, and faces with high accuracy.
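The pixel-level pattern idea can be sketched with a toy template matcher: slide a small pattern over a binary "image" and report where the pixels line up. The grid and pattern below are invented for illustration; neural networks learn such patterns from data instead of hard-coding them:

```python
# A 4x5 binary "image" with a 2x2 block of ones in it.
image = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
template = [
    [1, 1],
    [1, 1],
]

def find_matches(image, template):
    """Slide the template over the image; return (row, col) of exact matches."""
    th, tw = len(template), len(template[0])
    matches = []
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            if all(
                image[r + i][c + j] == template[i][j]
                for i in range(th)
                for j in range(tw)
            ):
                matches.append((r, c))
    return matches

print(find_matches(image, template))  # -> [(1, 1)]
```

Object detection generalizes this: instead of demanding an exact pixel match, a trained network scores how strongly each region resembles a learned feature, which is why it tolerates blur and variation far better than rule-based matching.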
Rules Engines

Data Input: Series of Numbers (typically), Text, Images
Data Output: Category Label/Decision (Text or Number)
Type of Task: Classification
- You have a list of known rules and want to evaluate known information against the rules
- You have a clear decision tree/routing structure, which you want to use to make a decision
- Scoring driver permit exams
- Automated routing of documentation through government process workflows
- Permit & license issuance
- Guided government document/form submission and completion checks
These systems require a clear, well-defined rule set. Updating them can be time-consuming, as each process change requires reprogramming.
Examples: IRS Free File, TurboTax
A rules engine applies a structured series of “if-then” statements or decision trees to evaluate inputs and reach a conclusion. This makes the decision process transparent and predictable—ideal for regulated or policy-driven workflows.
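The if-then structure can be sketched with a hypothetical permit check. The field names and thresholds below are invented for illustration, not taken from any real permitting process:

```python
def evaluate_permit(application):
    """Apply ordered if-then rules; the first matching rule decides."""
    if not application.get("form_complete"):
        return "rejected: incomplete form"
    if application.get("fee_paid", 0) < 50:
        return "rejected: fee not paid in full"
    if application.get("zone") not in {"residential", "commercial"}:
        return "routed: manual review required"
    return "approved"

print(evaluate_permit({"form_complete": True, "fee_paid": 50, "zone": "residential"}))
# -> approved
```

Because every outcome traces back to an explicit rule, staff can explain any decision to an applicant, and auditors can verify the logic line by line. The trade-off is the maintenance cost noted above: each policy change means editing the rules themselves.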
Simulation and Optimization Models

Data Input: Series of Numbers (typically), Images
Data Output: Number
Type of Task: Numeric-Value Prediction
- Simulating a known set of interactions (e.g. traffic patterns, weather patterns)
- Optimization problems (e.g. trying to minimize or maximize a value, given the different interactions at play in a situation)
- Simulating road traffic
- Simulating the environmental impact of civic projects with digital twinning
- Optimizing meeting schedules or resource allocation
- Forecasting budgets based on usage data
These models are only as reliable as their underlying assumptions. Poorly defined simulations can lead to inaccurate or incomplete predictions.
Examples: Road Traffic Simulation Software, Urban Planning Modeling Software
Optimization models test many possible scenarios to find the best outcome—such as minimizing cost or maximizing efficiency. Simulation models help visualize how changes to one part of a system affect the whole, enabling data-driven planning for complex civic systems.
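The simulate-and-optimize loop can be sketched in a few lines: try each scenario, simulate its outcome, and keep the best. The service-counter example and its cost formula below are made up for illustration; a real model would encode validated assumptions about the system being planned:

```python
def simulate_cost(counters, arrivals_per_hour=60, staff_cost=100):
    """Toy model: more counters cost more to staff but cut citizen wait time."""
    wait_cost = (arrivals_per_hour / counters) ** 2  # waits balloon as counters shrink
    return counters * staff_cost + wait_cost

# Optimization: exhaustively test each scenario and pick the cheapest.
best = min(range(1, 11), key=simulate_cost)
print(best, round(simulate_cost(best), 1))  # -> 4 625.0
```

This also illustrates the limitation above: the "best" answer is only as good as the cost formula. If the wait-time assumption is wrong, the optimizer will confidently recommend the wrong number of counters.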