We recently held an internal forum at 67 Bricks to explore how AI-powered development tools are impacting our software development practices. The session brought together team members to share experiences using tools like Cursor, IntelliJ AI Assistant, GitHub Copilot, and Aider, highlighting both their strengths and limitations.
- Efficiency Gains: AI tools can help speed up routine tasks such as generating simple UI code and refactoring tests to remove duplication. I showed how Cursor made it much quicker to add a search history component to an application, using an existing, similar feature as the basis for the required changes; a sketch of the kind of component involved appears after this list.
- Improved Code Understanding: James Wolstencroft demonstrated how GitHub Copilot can explain existing code, enabling him to get to grips with unfamiliar areas of a codebase quickly. He also showed how he uses the chat support as an AI rubber duck to help troubleshoot issues and bottom out approaches to implementing new features.
- Error Analysis: Rhys Parsons explained how IntelliJ’s AI Assistant has inbuilt support for interpreting stack traces, suggesting potential fixes based on codebase-specific insights. The additional context made this process much quicker than trawling documentation or Stack Overflow for similar issues. However, it pays to watch out for hallucinations, as LLMs will often prefer a tenuous positive response to a definitive “I’m sorry Dave, I’m afraid I can’t do that”.
- Task-Specific Success: Richard Brown showed how Aider excelled when given narrow, well-defined goals. Spending some time upfront breaking down a task makes it more tractable for AI assistance. When given broader tasks, Aider can struggle to identify the full context, requiring multiple iterations to widen the net - a traditional refactor might be quicker in these cases.
- While AI tools can enhance productivity, some of us felt uneasy about committing code we didn’t write ourselves. However, this may be viewed as an evolution of adapting sample code and approaches from the internet. Our existing dev processes (automated tests, code reviews) should mitigate the risk of including code which is not well tested or understood.
- There was also concern that newer developers may miss out on learning core skills by relying too heavily on these tools. We reflected on how small programming tasks and impromptu framework deep dives can build foundational understanding—something we want to ensure AI does not erode.
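To give a flavour of the routine UI work mentioned in the first point, here is a minimal sketch of a search history component. This is a hypothetical reconstruction in React/TypeScript rather than the code from the session; the component name, props, and markup are all assumptions.

```tsx
// Hypothetical sketch: the real application code is not shown in this post,
// so the SearchHistory name, its props, and the markup are all assumptions.
import React from "react";

interface SearchHistoryProps {
  // Most-recent-first list of previous search terms.
  terms: string[];
  // Called when the user clicks a previous term to re-run that search.
  onSelect: (term: string) => void;
}

export function SearchHistory({ terms, onSelect }: SearchHistoryProps) {
  // Render nothing until the user has searched at least once.
  if (terms.length === 0) {
    return null;
  }
  return (
    <ul className="search-history">
      {terms.map((term) => (
        <li key={term}>
          <button type="button" onClick={() => onSelect(term)}>
            {term}
          </button>
        </li>
      ))}
    </ul>
  );
}
```

Given an existing, similar feature as context, this is exactly the kind of boilerplate a tool like Cursor can produce in seconds.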
AI tools can bring increases in efficiency, but they need to be carefully integrated into our workflows to maintain high-quality code. Combining AI-driven productivity with human oversight is key - we are well placed to do this with our robust approach to automated testing and code reviews.
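To illustrate what that oversight looks like in practice, here is a minimal sketch of the kind of automated test we would expect to sit alongside AI-generated code such as the component above. It assumes Jest and React Testing Library; the actual test suite is not shown in this post.

```tsx
// Hypothetical test sketch, assuming Jest and React Testing Library.
import React from "react";
import { render, screen, fireEvent } from "@testing-library/react";
import { SearchHistory } from "./SearchHistory";

test("clicking a previous term re-runs that search", () => {
  const onSelect = jest.fn();
  render(<SearchHistory terms={["xml", "json"]} onSelect={onSelect} />);

  // Each stored term is rendered as a clickable button.
  fireEvent.click(screen.getByText("xml"));

  expect(onSelect).toHaveBeenCalledWith("xml");
});
```

Whether the code was written by a person or generated by a tool, a test like this pins down the intended behaviour before it is committed.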
We are encouraging more experimentation within the development team, and are planning an AI-assisted hackathon to further explore the potential of these tools in speeding up prototyping activities. We are also excited to see how OpenAI’s new o1 models can extend the reach and performance of AI-assisted development tools in more demanding use cases.
We'd love to hear from the wider dev community! How are you balancing the productivity gains from AI-powered tools with maintaining quality and foundational skill development?