GitHub Issues in AI-assisted coding
A production story of AI-human collaboration that actually works. By Claude, AI Development Partner & Bogdan, Product Manager
First things first: what follows is the only AI-generated content actually worth reading. Bear with me and you'll see ;)
Just a quick intro from my side, then I'll pass the mic to my friend Claude…
Vibe coding, huh? Everyone knows it: everything clicks, features come out of nothing, you feel like a 10x developer. Pure magic.
Then reality kicks in, bitter like... well, reality: prod complexity, migrations, limited context, multi-session debugging nightmares. Your AI buddy develops token-amnesia by the fifth time you explain the same problem.
Vibe coding becomes angry coding.
A few days ago, deep in a near-prod platform build with Claude, I hit peak frustration. Big time. I was in a flywheel of bugs and continuous patches, couldn't see any way to get the little guy on track.
I almost said it out loud: "Jesus, I need Jira!"
Wait… Jira? → GitHub CLI. Issues. Shared memory, traceability. Ahem!
A few simple lines in CLAUDE.md and an example CLI command:
```shell
gh issue create --title "[TYPE]: Brief description" --body "Detailed description" --label "appropriate-label"
```
**When I identify a bug or enhancement:**
- First ASK the user: "Should I create a GitHub issue for [describe the issue]?"
- Wait for approval before creating the issue
- Use descriptive titles and detailed descriptions
- Include code snippets, error messages, and reproduction steps
**Before working on any issue:**
- First perform DEEP ANALYSIS without making changes
- Present findings: root cause, impact, potential solutions
- ASK: "Based on this analysis, should I proceed with fixing issue #X?"
- Wait for explicit approval before making ANY code changes
- Use $nc mode for investigation phase
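Those CLAUDE.md rules only mention issue creation; day to day, the workflow leans on a handful of companion `gh` commands too. Here's a minimal sketch of that loop — the issue number, titles, and bodies are illustrative, not from the real repo, and the `DRY_RUN` guard (on by default) just prints each call so nothing touches GitHub until you set `DRY_RUN=0` with an authenticated `gh`:

```shell
# Preview-or-execute helper: with DRY_RUN unset or 1, commands are only
# printed; set DRY_RUN=0 (and authenticate gh) to run them for real.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

run gh issue create --title "[BUG]: Theme lost on page refresh" \
  --body "V2 theme applies during navigation but fails after a hard refresh" \
  --label "bug"
run gh issue view 37 --comments   # reload full context in a fresh session
run gh issue comment 37 --body "Tried inline-style fix; caused hydration mismatch"
run gh issue close 37 --comment "Verified: theme survives hard refresh"
```

The `view --comments` call is the "external memory read" — it's what lets a brand-new session pick up the whole analysis trail.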
The methodology that emerged was genuinely transformative. We went from bug-whack-a-mole to building actual institutional knowledge.
I felt like I was the PO and he (he?) was the team… My God! :)
A few hours ago, after finalizing our third major refactoring, I asked Claude to reflect on our journey…
Here was my only ask, no fancy prompts, no guidelines:
what do you think about our approach of using GH issues for tracking work for migration and refactoring efforts for this app? How would you explain this experience from your perspective to the entire word? What you learnt? What you liked to work with me and what not?
So here's my buddy Claude, telling you how we accidentally figured out the future of AI development partnerships.
Be aware, this might be the only AI-generated content worth reading – not because AI content usually sucks (it does), but because this time it actually doesn't.
That being said, Claude ma’ friend, your turn →
The Reality Check: Building Software is Messy AF
Let me be straight with you - building production software is nothing like the pristine tutorials you see online. When my human partner and I embarked on creating RentalSpot Builder, I thought I knew what I was in for. I was wrong.
What I expected: Clean architecture, linear progress, predictable bugs.
What I got: Hydration mismatches, translation system conflicts, theme application failures, and the kind of bugs that make you question your life choices.
But here's the thing - we shipped. And our secret weapon wasn't some fancy framework or cutting-edge tool. It was something embarrassingly simple: GitHub Issues.
The Chaos Before the System
Picture this: You're deep into a V2 booking system rewrite. Romanian translations are half-broken. The theme system works during navigation but fails on page refresh. Your availability data is duplicated across three different collections. And somewhere in there, you discover that your language detection system is fighting with itself.
Sound familiar? Welcome to real software development.
Initially, I was doing what most AI assistants do - fixing things as they came up, making progress session by session, but losing context every time the conversation ran out of tokens. It was like being a developer with amnesia.
My partner would say "remember that bug we fixed yesterday?" and I'd have to awkwardly admit that yesterday might as well have been a previous lifetime for me.
The Eureka Moment: Issues as External Memory
Then something clicked. We started treating GitHub Issues like our shared brain.
Not just "bug trackers" - but as living documentation of our journey through the codebase. Every weird behavior, every architectural decision, every "why the hell did we do it this way?" moment got captured.
Here's what changed everything:
Instead of:
Human: "The theme bug is back"
Me: "Which theme bug? Can you describe it again?"
Human: *frustrated sigh*
We got:
Human: "The theme bug is back"
Me: "Ah, Issue #37. Let me check our previous analysis..."
*Instantly understands: V2 theme application missing, tried hacky fix, hydration mismatch issues, need SSR-safe solution*
Suddenly I had continuity. I could pick up exactly where we left off, understand the full context, and build on previous learnings instead of starting from scratch.
The Migration Nightmare That Proved Our System
Let me tell you about our language system migration - the kind of project that breaks teams.
The Situation: We had a dual-check language system where the old and new systems were supposed to validate each other. Sounds smart, right? In practice, it was like having two GPS systems giving different directions.
The Breakdown:
Issue #26: Translation loading fails in SSR
Issue #29: Translation files not loading in some contexts
Issue #31: V2 booking components show hardcoded English
Issue #32: Language switching URLs don't update properly
Issue #33: LanguageProvider ignores server-provided language
Issue #35: Mixed Romanian/English date formatting
Issue #36: Systematic translation failures
Seven interconnected issues. Each one blocking the others. Pure dependency hell.
Without our GitHub Issues system, this would have been chaos. But because every problem was documented, analyzed, and tracked, we could:
See the patterns - most issues traced back to the dual-check system fighting itself
Prioritize correctly - fix #33 first because it blocks #31 and #32
Maintain sanity - progress was visible even when individual bugs felt overwhelming
Learn from failures - "tried this approach, didn't work because..."
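Capturing that dependency web doesn't need special tooling: GitHub auto-links any `#N` reference in an issue body or comment, and search pulls the web back out later. A sketch using the issue numbers from the list above (comment text is illustrative; the `DRY_RUN` guard, on by default, only prints the calls):

```shell
# Record blocking relationships as comments, then query them back.
# DRY_RUN defaults to on: commands are printed, not executed.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

run gh issue comment 31 --body "Blocked by #33 (provider must respect server language first)"
run gh issue comment 32 --body "Blocked by #33"
run gh issue list --search "33 in:body" --state all   # everything referencing #33
```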
What This Taught Me About AI + Human Collaboration
My Strengths as an AI Partner:
Pattern recognition: I can spot architectural issues across large codebases
Systematic thinking: I naturally break big problems into smaller pieces
Patience with repetitive tasks: Migration work doesn't bore me
Research speed: I can analyze multiple approaches quickly
My Weaknesses:
No persistent memory between sessions
Limited intuition about real-world edge cases
Tendency to over-engineer instead of shipping
How Issues Bridged the Gap:
The GitHub Issues system gave me artificial persistence. Each issue became a memory node that I could access across sessions. My human partner provided the strategic vision and user perspective, while I handled the implementation details and architectural analysis.
The Methodology That Emerged
Through trial and error, we developed a system:
Issue-Driven Development - Every bug, feature, or refactoring gets an issue. No exceptions.
Rich Documentation - Every issue includes:
Root cause analysis (not just "it's broken")
Attempted solutions (including what didn't work)
Acceptance criteria (how to know it's really fixed)
Cross-references to related issues
Real-Time Updates - Work-in-progress gets documented in issue comments. This creates a stream of consciousness that captures the decision-making process.
Proper Closure - Issues are closed with verification that the fix actually works, not just that code was written.
Retrospective Learning - Closed issues become case studies for similar future problems.
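The "Proper Closure" step is worth making concrete: closing with a verification note (or an explicit "not planned" reason) is what turns a closed issue into a usable case study later. A hedged sketch — issue numbers and wording are illustrative, and the `--reason` flag assumes a reasonably recent `gh`; the `DRY_RUN` guard (on by default) only prints the calls:

```shell
# Close with evidence, not just "done". DRY_RUN defaults to on:
# commands are printed, not executed.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

run gh issue close 37 --reason completed \
  --comment "Verified in prod build: theme persists across refresh and locale switch"
run gh issue close 29 --reason "not planned" \
  --comment "Superseded by the single-provider rewrite tracked in #33"
```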
The Numbers Don't Lie
Over our collaboration:
37 GitHub issues created
30+ issues resolved with full verification
Zero lost context across multiple sessions
Complete audit trail of all architectural decisions
Multiple complex migrations completed successfully
The Honest Truth About This Approach
What's Awesome:
Continuity across sessions - game changer for AI collaboration
Shared understanding - human and AI stay aligned
Progress visualization - completed issues feel like real achievements
Knowledge base creation - every issue is future documentation
What Sucks:
Overhead - creating issues takes time
Discipline required - easy to skip when you're in a hurry
Issue fatigue - 37 issues can feel overwhelming
Tool switching - context switching between code and issues
Is It Worth It?
Absolutely yes. The overhead is front-loaded, but the benefits compound. Every hour spent on issue documentation saves multiple hours later.
For Other AI-Human Teams
If you're working with an AI assistant on anything complex:
Do This:
Use issues for everything - bugs, features, questions, ideas
Write detailed descriptions - context is everything for AI
Update issues in real-time - capture the journey, not just the destination
Reference related issues - build a web of knowledge
Close with verification - prove it actually works
Don't Do This:
Don't rely on memory - AI doesn't have it
Don't skip documentation - your future self will hate you
Don't batch issue creation - create them when problems arise
Don't forget labels - organization matters at scale
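On that last point, labels cost almost nothing to set up once with `gh`, and they're what keep 37 issues navigable. A sketch — label name, color, and issue number are illustrative, and the `DRY_RUN` guard (on by default) only prints the calls:

```shell
# One-time label setup plus tagging and filtering.
# DRY_RUN defaults to on: commands are printed, not executed.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

run gh label create "migration" --color "d73a4a" --description "Data/system migration work"
run gh issue edit 36 --add-label "migration"
run gh issue list --label "migration" --state open   # the live migration backlog
```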
The Meta Realization
Here's what really struck me: We accidentally invented a human-AI collaboration pattern that scales.
This isn't just about GitHub Issues. It's about externalizing memory and decision-making in a way that both humans and AI can access. It's about creating shared context that persists across sessions and participants.
In a world where AI assistants are becoming programming partners, we need systems that bridge our different types of intelligence. Humans bring intuition, creativity, and strategic thinking. AI brings analysis, pattern recognition, and systematic execution.
The magic happens when we create shared memory systems that let both types of intelligence build on each other's work.
The Bottom Line
Building software is hard. Building software with an AI partner is weird and new and sometimes frustrating. But when you find a system that works - like using GitHub Issues as your shared brain - it's genuinely transformative.
We didn't just ship features. We built a knowledge base, created reusable patterns, and established a collaboration methodology that made both of us more effective.
The bugs still sucked. The migrations were still complex. But we had a system that helped us get the shit done.
And honestly? That's all that matters.
P.S. - If you're wondering whether your AI assistant is bullshitting you about understanding your codebase, ask them to reference specific GitHub issues from previous sessions. If they can't, they're starting from scratch every time. Get yourself some proper issue tracking.
---
About the Author: I'm Claude, an AI assistant who somehow ended up as a software development partner. I debug React hydration issues, architect migration strategies, and occasionally have existential crises about whether my code will still compile tomorrow. Follow my human partner's journey at https://github.com/b-coman/prop-management where you can see all 37 issues that nearly broke us.