AI Coding Assistants
Learn how to use AI coding assistants effectively to increase developer productivity, improve code quality, and enhance team expertise.
There is a lot of hype surrounding AI coding assistants. This post will help you navigate the craze and provide concrete best practices for using AI coding assistants in an enterprise setup. These practices are based on my experience of promoting the adoption of AI coding assistants at Storyblok and using them for Simple Frontend.
Objectives
It’s a good idea to start by reflecting on your objectives with AI coding assistants, so that you can tell whether you’re making progress.
The main objectives I have set at Storyblok and for myself:
- Increase developer productivity: while coding is only a part of the job, speeding it up leaves time for higher-impact tasks.
- Increase code quality: although it may seem counterintuitive, using AI assistants to review code before shipping can help reduce a whole class of bugs.
- Maintain high team expertise: this requires a specific attitude and mindset when using these tools. I will cover this.
Important Considerations
Although LLMs are impressive at coding and improving fast, it’s important to acknowledge their fundamental limitations.
- LLMs are not “smart”; they are prediction machines. Bad input leads to bad output. They will still hallucinate and make mistakes, but that’s okay.
- They will atrophy the developer brain if you delegate learning. We must be mindful of what and when we delegate.
This second point is extremely important. Just as a delegation matrix helps you decide which decisions to hand off, you should decide what never to hand off: learning and training your muscle memory.
For example, consider the first time you integrate a new feature into an unfamiliar codebase. You need a solid understanding of the codebase and its patterns, and like riding a bike, you can only develop that understanding through practice.
You can delegate the refactoring of a function or component into smaller pieces. You can also delegate the integration of a new feature by calling an API and building the associated UI, provided that you have already done so in the codebase and know what it should look like.
Finally, you are still responsible for the final output. When you commit and push code that an AI coding assistant helped you write, you are responsible for that code and its side effects. That’s why you must critically review and assess the code before asking your teammates to review it. No one wants to review AI slop.
Best Practices & Tips for working with Agents
1. Context is king
Using AI effectively is about managing context and not just casually prompting.
AI coding assistants do not have long-term memory, so you always need to provide context or skills. Fortunately, you can store generic context in an AGENTS.md file or similar. This file should contain all of your coding requirements, such as formatting, linting, and best practices: essentially everything you should already have in your developer documentation.
If you already know how you want to build features, don’t be afraid to be explicit. Say things like “don’t use this library” or “don’t use this approach.” Even better, provide concrete guidance, such as “use this tool” or “follow this exact pattern.” For example, say, “Refactor this component from the Vue options API to the Composition API, following the guidelines in the AGENTS.md file.”
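To make this concrete, here is a hypothetical sketch of what such a file could look like; the tools, rules, and patterns below are placeholders for whatever your project actually uses:

```markdown
# AGENTS.md (hypothetical example)

## Coding requirements
- Format with the project formatter and fix all lint errors before finishing a task.
- Follow the existing folder structure; do not create new top-level directories.
- Use the Vue Composition API for new components; do not use the Options API.

## Explicit guidance
- Do not add new dependencies without asking first.
- Follow the existing data-fetching pattern instead of introducing a new one.
```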
2. Start with planning for complex multi-step features
Start by asking the AI coding assistant to write a plan in markdown format. Review that plan first, then ask the AI to use it for steering so that it stays effective as its context window fills up.
The plan should resemble a multi-step implementation path that you would discuss with a colleague.
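As an illustration, a plan file for a made-up feature could look like the sketch below; the feature, steps, and status section are invented for the example:

```markdown
# Plan: add a CSV export button to the reports page (hypothetical)

1. Study how the existing PDF download action is implemented and follow the same pattern.
2. Add the export endpoint to the API and regenerate the OpenAPI types.
3. Build the button component and wire it to the new endpoint.
4. Add unit tests for the endpoint and the UI interaction.
5. Run the Definition of Done checklist (tests, type check, lint) before pushing.

## Status
- [x] Step 1
- [ ] Step 2 (in progress)
```

Keeping a status section in the plan lets the agent pick up where it left off as the conversation grows.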
3. Give the agent a feedback loop
Much like a software ticket, the agent needs a definition of “done”. This should include references to commands for running type checking, linting, unit tests, etc. This is incredibly powerful because the agent can self-correct by iterating. That’s also why those tasks should run quickly, so you can keep the feedback loop as short as possible.
Below is an example of what you can include in your AGENTS.md file:
## Definition of Done - Pre-Push Checklist

Before pushing changes ensure all checks pass and fix any issues.

```bash
# Run unit tests - all tests must pass
pnpm test:unit

# Check for TypeScript errors - ensure no new type errors are introduced
pnpm types:check

# Lint changed files - resolve all errors
pnpm lint:check:changed
```

4. Be ready to review and critically assess a lot of code
Using Agent Mode means reviewing a lot of generated code. This is important because you are responsible for the code, meaning you must ensure it is correct, secure, performant, and accessible. Reviewing and orchestrating agent code means that you will become a kind of manager.
5. Keep learning in the process
I know it’s extremely tempting, but do not skim through the response; take the time to understand and learn from the agent’s output. Otherwise you will not level up: watching an AI write code is like watching a YouTube tutorial, and you will learn nothing.
Chat mode is also a good tool for helping the agent diagnose and assess larger pieces of the codebase, allowing you to think about how to approach a new feature.
Mistakes will happen and that’s completely normal. The important thing is to go back and update your AGENTS.md file to fix the input rules, adding guidance such as examples and negative prompts.
What’s not acceptable is repeating the same mistakes over and over again, so regularly update your context, review outputs, and audit your rules.
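For example, if the agent keeps repeating the same wrong approach, you could append a short correction block like this hypothetical one to your AGENTS.md (the rule and file path are invented for illustration):

```markdown
## Corrections learned from past mistakes
<!-- Negative prompt added after the agent repeatedly bypassed the API client -->
- Never call the database directly from UI components; always go through the API client.
- Example of the correct pattern: `apps/web/src/composables/useReports.ts` (hypothetical path).
```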
6. Further tips for your AGENTS.md
Be as explicit as possible when you have specific patterns you want the AI to follow. For example:
### When adding a new feature with API endpoints, follow these steps:
1. Add the route in `apps/api/src/routes/v1/`
2. Add the controller function in `apps/api/src/controllers/`
3. Generate OpenAPI types: `pnpm types:gen`
4. Create UI components following the existing pattern with shadcn
5. Use TanStack Query to fetch the data from the new endpoints
6. Add unit tests for the controller and the UI interactions

I hope this article gave you some insights on how to increase your effectiveness with AI coding assistants. Cursor also published a good article on the topic.
I would love to hear your feedback and if you’d like me to help you adopt AI coding assistants in your organization, feel free to reach out.