The Requirements Paradox & My Specs Wizard Experiment
How the shift to AI-powered coding created a clarity problem - and my experiment in solving it
Remember that meme with the developer saying "So we'll lose our jobs to AI..." followed by "But they need to write clear requirements, so we're safe"?
The irony lies in the shift happening in how we build products. The faster AI can implement our ideas, the more precisely we must think about what we want to build.
This creates a fundamental tension: when execution becomes faster, intention must become better.
When Bottlenecks Shift
In complex systems, especially ones driven by humans, bottlenecks behave predictably: optimize one constraint, and pressure shifts to the next weakest link. For a long time, that bottleneck in software development was execution. Engineering complained about timelines. Product wrestled with technical limitations. Building software was inherently slow.
Now engineers can deliver work that used to take, say, two sprints in three or four days. So now the developers wait for business, rolling their eyes and kindly asking, “Can you give me a rough estimate of when the next set of specs will be done?”
What an irony… :)
What happened? The new constraint isn't technical; it's cognitive: our ability to think clearly about problems and communicate that clarity to systems that excel at execution but struggle with ambiguity.
Traditional development absorbed imprecision naturally. Human developers filled gaps, asked questions, and pushed back when something didn't make sense. They brought contextual understanding that smoothed over any lack of clarity.
AI assistants implement exactly what you specify. Garbage in, garbage out, as simple as that. When your specs are fuzzy, AI gets “creative”. The joy ends, the vibe ends, the anger starts.
The Vibe Coding Trap
Here's what happens when you start coding with AI: the first hour is pure magic. You're building like a rockstar - what used to take your team two or three weeks, you now get in minutes. It's addictive. So you start treating development like a video game - shoot here, prompt, run there, shoot, watch things happen, shoot again, faster.
That's when your brain switches off. The faster the feedback, the lazier your thinking becomes. Prompt quality drops. Expectations skyrocket. Two hours later? Everything breaks.
The natural reaction is to blame the AI. Wrong target. The AI didn't fail; your requirements did. Your clarity did.
The Conversation Problem
Requirements precision comes from good conversations, not good documentation.
Think about effective user interviews. Breakthrough moments rarely come from the first question. They emerge from follow-ups, gentle probing that reveals real problems beneath stated needs.
Yet most requirements processes rely on static templates. We ask people to predict needs rather than helping them discover what they want to solve.
This mismatch becomes critical when AI can turn imprecise specifications into “functional” systems at unprecedented speed. The cost of building the wrong thing approaches zero. The opportunity cost remains enormous.
Testing an Alternative
Rather than fighting this reality, I experimented with working within it.
What if we designed tools specifically for this dynamic? Tools that help teams do the rigorous thinking work that AI-assisted development requires?
I built a conversation engine that guides teams through comprehensive discovery while outputting specifications optimized for AI execution. When you begin a project, it doesn't present templates. It starts conversations: "Tell me about the spark behind this idea; what's the story?"
From there, it adapts based on project type, pushing deeper than you might go alone. When I mentioned user frustrations, it probed for specific scenarios. When I described technical approaches, it challenged assumptions about feasibility.
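To make the idea concrete, here is a minimal sketch of how such a follow-up-driven conversation flow could work. The trigger keywords and probe questions are illustrative assumptions, not the actual engine's prompts:

```javascript
// Illustrative sketch: pick the next discovery question based on
// what the previous answer reveals. All triggers/probes are hypothetical.
const followUps = [
  { trigger: /frustrat|pain|annoy/i,
    probe: "Can you walk me through a specific scenario where that happened?" },
  { trigger: /we could use|we'll build|architecture|stack/i,
    probe: "What makes you confident that approach is feasible here?" },
  { trigger: /users want|users need/i,
    probe: "Which persona is that, and what does their worst day look like?" },
];

function nextQuestion(answer) {
  // Probe deeper when an answer hints at an unexplored assumption;
  // otherwise fall back to the open-ended spark question.
  const match = followUps.find((f) => f.trigger.test(answer));
  return match
    ? match.probe
    : "Tell me about the spark behind this idea; what's the story?";
}

console.log(nextQuestion("Our users are frustrated with the Jira boards"));
```

A real engine would of course use an LLM rather than keyword rules, but the design point is the same: the follow-up, not the first question, is where clarity comes from.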
Use-Case: The Mini-Jira Experiment
To validate this approach, I used it on a real project: building a project management tool designed for teams using AI coding assistants.
Starting from the rough concept that "current Jira isn't adapted for hybrid human-AI workflows," the conversation process guided me through structured thinking over 3.5 hours.
What emerged:
Complete product strategy with validated problem-solution fit
Four strategic epics across three delivery waves
Sixteen detailed tasks with precise acceptance criteria
Full technical architecture - admittedly one that could be much improved, and should be challenged by a real architect
Specifications formatted for direct import into AI workflows
The efficiency was notable - concept to comprehensive specs in half a day. But what struck me was how structured conversation forced better product thinking.
The system pushed beyond surface requirements consistently. Instead of accepting "users want better dashboards," it dug into specific workflow breakdowns. Rather than letting me assume technical approaches would work, it made me articulate integration challenges and performance requirements.
When I mentioned users, it asked about personas and their worst days. When I described solutions, it challenged assumptions about behavior. When I glossed over complexity, it forced me to think through implementation challenges.
Organizational Implications
This reveals something fundamental about how product work evolves. When development cycles compress from months to weeks, strategic work must be completed upfront. Conversations we used to have during implementation now need to happen during the conceptualization phase.
Core principles emerging:
Strategic clarity must precede execution
Specifications need precision for AI interpretation
Human judgment becomes more valuable, not less
The most fascinating aspect isn't technological, it's psychological. Product managers who thrived with gradual iteration may struggle with the precision demands of AI-assisted development. The skills that matter are changing.
Traditional product work balanced breadth and depth, moving between strategic thinking and tactical execution. AI-assisted development demands upfront depth in strategic thinking while reducing tactical execution burden.
This creates opportunities for product leaders willing to adapt. When bottlenecks shift to decision-making and requirements clarity, humans who excel at structured thinking become more valuable.
What Changes, What Remains
The engineer's joke was half right: AI needs clear requirements. But creating those requirements while maintaining strategic thinking and user focus represents sophisticated work that deserves better tools.
A few questions keep me up at night:
How do teams maintain strategic agility when front-loading becomes essential?
Is this a new era of waterfall? How do we avoid that trap?
What new tensions emerge when constraints shift from implementation to decision-making?
How does this transformation affect psychological safety needed for genuine innovation?
The bottleneck has shifted. Teams that recognize this and adapt their thinking processes accordingly will build products that matter.
Those that don't will find themselves outpaced not by AI, but by other humans who learned to work with it more effectively.
The System Behind the Experiment
My experiment was deliberately simple. A mix of workflows guided by prompt engineering, bash scripts for project structure, and Node.js validation scripts that check whether the generated specs are ready for AI coding environments. The whole thing feeds into a project management tool built specifically for this hybrid workflow.
Nothing revolutionary in the tech stack. The insight was that the conversation layer matters more than the underlying architecture.
This is just the beginning. An experiment that proved useful enough to warrant sharing, but one that needs broader testing. If you're feeling the requirements precision challenge with AI-assisted development, try it. See what breaks. What conversations does your domain need that I haven't thought of?
I’d be very happy to hear your feedback, whether technical, conceptual, or even psychological. Because clarity is a human problem more than an AI one. Still :)
The conversation engine and workflow templates are available on GitHub under MIT license.