The Gap Nobody Filled: AI-Powered Workflow Validation for Jira Cloud
Gabriela Perdum
10 min read · March 16, 2026
Workflow validators in Jira have a long history of being either too hard to build or too blunt to be useful.
The built-in options are simple: require a field to be non-empty, check a user’s role, verify a linked issue exists. Anything more nuanced requires a third-party app. And once you go third-party, you end up in scripting territory — Groovy, Jira Expressions, domain-specific languages. These tools are powerful, but they require a level of technical fluency that most Jira administrators don’t have and shouldn’t need to develop just to enforce “make sure the description is complete before moving to review.”
That’s the gap CogniRunner was built to fill.
Why This Exists
The problem isn’t that Jira can’t enforce quality gates. It’s that doing it properly has always required either a developer or a significant time investment in learning scripting tools that were never designed for non-technical users.
ScriptRunner for Jira, the most widely used workflow customisation tool on the Marketplace with around 18,000 installs, requires Jira Expressions on Cloud and Groovy on Data Center. Writing a validator that checks whether a description contains acceptance criteria isn’t hard if you can write code. It’s inaccessible if you can’t.
JMWE, JSU, Jira Workflow Toolbox — all follow the same pattern. They give you a library of pre-built rule types: field required, user permission check, status of linked issue. Useful. But the moment you need something outside that library, you’re writing expressions. And the moment you need to evaluate the actual content of a field — not just whether it’s empty, but whether it’s meaningful, whether it matches your team’s standards, whether it duplicates something that already exists — no amount of expression-writing gets you there. You’d need an LLM.
Getting an LLM into a Jira workflow validator is a real engineering project. Atlassian's Forge platform has a workflow validator module that can make external API calls, but it's still in preview, and using it means JavaScript/TypeScript development plus a configuration UI, API key management, latency handling, and cost management. That's weeks of work for a single validator.
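To give a sense of scale: before any configuration UI exists, even the core loop of such a validator has to handle timeouts and turn a free-text model verdict into a pass/fail result. A minimal generic sketch follows; the `ModelClient` interface and all function names are hypothetical, not the Forge API or any app's actual internals:

```typescript
// Generic sketch of the core loop a hand-rolled LLM validator needs.
// ModelClient and every name below are illustrative assumptions.

interface ModelClient {
  evaluate(prompt: string, fieldValue: string): Promise<string>;
}

interface ValidationResult {
  pass: boolean;
  errorMessage?: string;
}

// The model is instructed to answer "PASS" or "FAIL: <reason>"; turning
// that free-text verdict into a structured result is pure string work.
function parseVerdict(verdict: string): ValidationResult {
  if (verdict.trim().toUpperCase().startsWith("PASS")) {
    return { pass: true };
  }
  return { pass: false, errorMessage: verdict.replace(/^\s*FAIL:\s*/i, "").trim() };
}

// A transition cannot hang on an external API, so the call is raced
// against a deadline.
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error("model timeout")), ms);
    p.then(
      (v) => { clearTimeout(timer); resolve(v); },
      (e) => { clearTimeout(timer); reject(e); },
    );
  });
}

async function validateField(
  client: ModelClient,
  prompt: string,
  fieldValue: string,
  timeoutMs = 8000,
): Promise<ValidationResult> {
  try {
    return parseVerdict(await withTimeout(client.evaluate(prompt, fieldValue), timeoutMs));
  } catch {
    // Policy choice: fail open so an API outage never blocks the team.
    return { pass: true };
  }
}
```

And that still leaves out prompt design, key rotation, retries, logging, and cost controls, which is where most of the weeks go.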
On the AI side of the Marketplace, the situation is equally clear. Every AI app for Jira — ChatGPT integrations, GPT for Jira, AI automation tools, Atlassian’s own Rovo — operates as an assistant or content generator. None of them can block a transition. The architecture doesn’t allow it. Rovo agents, for instance, run asynchronously after a transition completes. They can leave a comment or update a field. They cannot return a pass/fail result that stops the transition from happening.
Atlassian has effectively said as much. In a community FAQ, when asked whether Rovo could enforce field validation before a transition, the response was direct: workflow validators are the right tool for that, and Rovo doesn’t contribute anything meaningful there.
The Marketplace splits cleanly into two camps: validator apps that can block transitions but have no AI, and AI apps that understand language but cannot block transitions. CogniRunner is built to sit in the space between them.
Who It’s For
CogniRunner is useful for three overlapping groups.
Jira Cloud administrators and power users
If you manage workflows and you’ve ever wanted to enforce a quality standard that couldn’t be expressed as “field is not empty,” CogniRunner replaces the need to either learn Jira Expressions or file a request with a developer. You describe what good looks like. The AI enforces it.
Atlassian partners and consultants
If you configure Jira for clients, CogniRunner changes the economics of adding AI-powered quality gates to a workflow. A validator that would previously require a scoped development engagement now takes minutes to configure. That’s a different conversation with a client.
Engineering and delivery teams
If your team’s workflow has gates that depend on human review to enforce — definition of ready, acceptance criteria presence, duplicate prevention, document compliance — CogniRunner can automate that enforcement consistently, without it depending on who happens to be reviewing the queue.
What Makes It Different
There are two things CogniRunner does that nothing else on the Marketplace currently does.
Plain-English prompts as live workflow validators
You select a Jira field, write a prompt describing your validation criteria in plain English, and attach it to a workflow transition. When a user attempts that transition, the AI reads the field content, evaluates it against your prompt, and either passes the transition or blocks it with a specific explanation of why.
This isn’t a chatbot interface bolted onto a workflow. It’s a synchronous evaluation running inside the transition. The AI’s reasoning becomes the error message the user sees.
Two rule types cover the two main use cases:
Validators block the transition and show the AI’s reasoning as an error message. The issue stays where it is until the criteria are met.
Conditions hide the transition entirely. The user doesn’t see the button until the validation passes.
Validation works against every Jira field type — standard and custom, text and rich text, selects, people, dates, numbers, labels, components, and more.
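Supporting "every field type" implies normalising very different value shapes (plain strings, numbers, option objects, arrays of components or users) into text the model can read. A sketch of what that normalisation might look like; the value shapes mirror common Jira REST representations, but this is illustrative, not CogniRunner's actual internals:

```typescript
// Normalise assorted Jira field values into plain text for an LLM prompt.
// The union below covers common Jira REST value shapes; not exhaustive.

type FieldValue =
  | string
  | number
  | null
  | { name?: string; value?: string; displayName?: string }
  | FieldValue[];

function fieldToText(value: FieldValue): string {
  if (value === null) return "";
  if (typeof value === "string") return value;
  if (typeof value === "number") return String(value);
  if (Array.isArray(value)) {
    // Labels, components, multi-selects: join the parts.
    return value.map(fieldToText).filter(Boolean).join(", ");
  }
  // Select options, users, versions carry their label in one of these keys.
  return value.displayName ?? value.name ?? value.value ?? "";
}
```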
Attachment and image analysis
When Attachment is selected as the field to validate, CogniRunner downloads and reads the actual content of attached files — PDFs, Word documents, Excel spreadsheets, PowerPoint presentations, and images via AI vision. The AI can evaluate what’s inside a document, not just whether it exists.
This unlocks validation scenarios that are technically impossible with any other tool on the Marketplace. Block a transition if an attached specification doesn’t contain required sections. Reject an issue if the screenshot shows an error state that should have been resolved. Verify a template has been filled in before it moves forward.
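A pipeline like this has to route each attachment differently: documents go through text extraction, images through a vision model. The file-type mapping below is an illustrative assumption about how such routing could be organised, not a description of the app's code:

```typescript
// Route attachments by type: extract text from documents, send images
// to a vision model. Extension sets are an illustrative assumption.

type AnalysisRoute = "text-extraction" | "vision" | "unsupported";

const documentExts = new Set(["pdf", "docx", "xlsx", "pptx"]);
const imageExts = new Set(["png", "jpg", "jpeg", "gif", "webp"]);

function routeAttachment(filename: string): AnalysisRoute {
  const ext = filename.toLowerCase().split(".").pop() ?? "";
  if (documentExts.has(ext)) return "text-extraction";
  if (imageExts.has(ext)) return "vision";
  return "unsupported";
}
```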
JQL-based duplicate detection
For prompts that mention duplicates or similarity, CogniRunner can automatically generate and execute JQL queries against your instance to find related issues in real time. The AI runs up to three rounds of queries, evaluates the results against your field content, and blocks the transition if a duplicate is identified — referencing the specific issue key in the error message.
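The loop described above (propose a query, inspect results, decide or try again, at most three rounds) can be sketched as follows. The callbacks stand in for the LLM and the Jira search API, and in a real system they would be async; everything here is a simplified illustration:

```typescript
// Iterative duplicate check: the model proposes a JQL query, results are
// fed back, and after at most `maxRounds` it must decide. The injected
// callbacks are stand-ins for the LLM and the Jira search API.

interface Candidate {
  key: string;
  summary: string;
}

interface Verdict {
  duplicateKey: string | null;
}

function findDuplicate(
  summary: string,
  proposeJql: (summary: string, round: number) => string | null,
  search: (jql: string) => Candidate[],
  judge: (summary: string, candidates: Candidate[]) => string | null,
  maxRounds = 3,
): Verdict {
  for (let round = 1; round <= maxRounds; round++) {
    const jql = proposeJql(summary, round);
    if (!jql) break; // model has no further query worth trying
    const candidates = search(jql);
    const match = judge(summary, candidates);
    if (match) return { duplicateKey: match }; // block, citing this key
  }
  return { duplicateKey: null }; // nothing found: the transition passes
}
```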
No API key required on your end. The OpenAI integration is handled by the app. You install it, configure your prompts, and it works.
How It Fits Into Your Workflow Editor
CogniRunner integrates into Jira’s native workflow editor. It appears under Marketplace Rules alongside other installed validator apps. There’s no separate admin interface to learn for basic setup.
Open the workflow editor, select a transition, open the Rules panel.
Click + next to “Validate details” (validator) or “Restrict transition” (condition), find CogniRunner Field Validator.
Select the field, write your prompt, optionally configure JQL duplicate detection. Click Update.
Rules can be enabled or disabled without removing them from the workflow. The Admin panel provides a centralised view of all configured rules across projects and workflows, with full validation logs: every pass and fail is recorded with the issue key, field, AI reasoning, and any JQL queries executed.
What’s Coming
The current version covers the AI validator and condition layer. The work in progress and planned next steps are all in the same direction: more control over how the AI operates, and a broader range of what it can do inside a Jira workflow.
AI Post Functions - Shipping
AI-generated post-function actions on workflow transitions — with static and hybrid options alongside fully AI-generated output. The hybrid approach lets you constrain what the AI writes, addressing the predictability concern that makes fully dynamic AI risky in production workflows.
Static Validators & Conditions - Next
The AI helps you build Jira Expressions rather than evaluating content directly. This gives you the speed of natural language configuration with the determinism of a static expression rule — no AI latency at transition time, full control over the logic.
REST API + Custom Integrations - Next
Connect CogniRunner to external systems via REST API — Databricks, Salesforce, and others. This includes the ability to embed instance-specific documentation so the AI understands your environment and can generate implementation-ready solutions, not generic ones.
Improved Web Search - Planned
Better real-time web search support during AI validation — enabling validators that cross-reference external knowledge sources as part of their evaluation.
BYOK + Multi-Provider - Planned
Bring your own API key, with support for Anthropic Claude and Google Gemini as additional AI providers alongside OpenAI.
Local Model Support - Planned
Connect to fully local inference servers — Ollama, LM Studio, and any OpenAI-compatible endpoint, including self-hosted Anthropic API setups. For teams with data residency requirements or those who want to run everything on-premise.
The post-functions work is in active testing and is the next thing to ship. Everything else on the list is confirmed direction for the next few months, in roughly the order shown. Exact timelines are tied to testing rather than a fixed schedule.
A note on the post-function design: one of the genuine concerns about AI-generated workflow actions is unpredictability. Fully AI-generated post-functions — where the AI decides what to write to a field, what transition to trigger, who to notify — are powerful but potentially erratic in production. The static and hybrid options are specifically designed to address that. You constrain the action type and the AI fills in the content within those bounds. Predictable structure, intelligent content.
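One plausible shape for that hybrid constraint is a fixed template with slots the model is allowed to fill; the structure is deterministic, only the slot text is generated. The types and the `{{slot}}` syntax below are illustrative, not the shipped design:

```typescript
// Hybrid post-function sketch: the admin fixes the action type and its
// template; the model only supplies text for the marked slots.

interface HybridAction {
  type: "comment" | "setField"; // static: never chosen by the model
  template: string;             // e.g. "QA verdict: {{verdict}}"
}

// Fill {{slot}} placeholders with model-generated text, leaving the
// surrounding structure untouched. Unfilled slots stay visible rather
// than silently disappearing.
function renderAction(action: HybridAction, slots: Record<string, string>): string {
  return action.template.replace(/\{\{(\w+)\}\}/g, (whole, name) =>
    name in slots ? slots[name] : whole,
  );
}
```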
The Bigger Picture
The roadmap reflects a specific point of view: that the useful role for AI in Jira workflows is not to replace the administrator’s judgment but to extend what that judgment can enforce.
Right now, quality gates are limited to what can be expressed in deterministic rules. With an AI validator, if a standard can be described in a sentence, it can be enforced. That's not a small thing — most of the quality standards that slip through Jira workflows aren't slipping because teams don't know what good looks like. They're slipping because expressing "good" in Jira Expressions is hard, and nobody gets around to writing the validator.
The extension into local models and multi-provider support follows the same logic applied to infrastructure. Some teams can’t use a cloud AI provider for data residency or security reasons. Supporting Ollama, LM Studio, and OpenAI-compatible local endpoints means the same plain-English workflow validation is available in fully air-gapped environments. The interface stays the same. The model runs wherever you need it to.
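The "OpenAI-compatible endpoint" pattern is worth spelling out: it is the same chat-completions request shape, pointed at a different base URL. The ports below are the well-known defaults for Ollama and LM Studio; adjust them for your own setup:

```typescript
// Same request shape, different base URL: how OpenAI-compatible local
// servers slot in. Ports are the documented defaults for each server.

const endpoints = {
  openai: "https://api.openai.com/v1",
  ollama: "http://localhost:11434/v1",  // Ollama's OpenAI-compatible API
  lmstudio: "http://localhost:1234/v1", // LM Studio's local server
} as const;

type Provider = keyof typeof endpoints;

function chatCompletionsUrl(provider: Provider): string {
  return `${endpoints[provider]}/chat/completions`;
}
```

Because only the base URL changes, the validation logic upstream never needs to know whether it is talking to a cloud provider or a box in your own rack.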
The REST API and documentation embedding work is about accuracy at the instance level. A fine-tuned model that knows the Jira API deeply can generate implementation-ready solutions — not just suggestions. When you embed your own instance documentation, the AI understands your specific field structure, your custom workflows, your integration points with Databricks or Salesforce. The output is specific enough to implement directly, not a starting point that still requires significant interpretation.
Try CogniRunner
CogniRunner is available now on the Atlassian Marketplace. Free trial available from the listing.
If your Jira workflows have quality standards you’ve wanted to enforce but haven’t because the tooling made it too hard — this is a direct solution to that problem. Describe what you want the AI to check. It will check it.