Using Cursor to build an app without writing a single line of code.
A while back, I wrote an article detailing my first foray into building an app using AI. At the time, I used Claude to generate code, which I then pasted into my IDE to build a prototype of a simple dice game called Pig. To get the app to work, I had to spend some time learning Dart/Flutter in order to do my own debugging.
Many suggested I try Cursor, and they were right—it was a game changer, and I highly recommend it. With Cursor I was able to generate the entire app without making a single direct edit to the code. Peter Yang calls this "vibe coding" (he's a lot better at it than I am).
That said, the Cursor IDE integration is not as fully featured as I’d hoped, and the process was not entirely smooth. All in all, it took me about 15 hours to generate an app that I felt comfortable sharing: faster than the 20 hours my first attempt took, but still pretty far from the “in 25 minutes” claim I often see on social media. (It also took me another 5 hours to figure out how to publish it.) You can try out the app yourself here.
Take-aways
What a difference a button makes! In Cursor, the code is on the left and the AI chat panel is on the right. When the AI generates code, there’s an “Apply” button that pulls in the code, complete with “diff highlighting” to show the changes made and an “Accept” button to apply them. The flow is so much easier and faster than copy-pasting from a different window, and there’s much more clarity as to what exactly has been changed.
Cursor is not as integrated as I’d like. Several times, Cursor lost track of the latest code and asked me to provide it. And the way it handles attachments and copy-pasted code was confusing, making it hard for me to understand what code the AI actually had visibility into. As a user, my expectation is that Cursor would have access to all of the code in the project, with the ability to inspect the code inside any file as needed. I’m guessing that’s an architectural limitation, but until it gets fixed, there’s still some “passing back and forth” of code that’s rather unwieldy.
Cursor is not integrated into the command line. In order to debug issues with the app, Cursor often added logging to the code and then asked me to run the app and copy-paste the console logs back into the chat window. To me, this raises the question: shouldn’t the stream of activity from my command-line shell just be plugged into the AI as part of its context?
Cursor can’t see the app itself. The only way to share the output of the app is to take a screenshot and attach it to the chat. Even then, it's not clear that the AI's multi-modal capabilities are up to the task of interpreting the image and relating it to the request I'd made in the chat. I spent several hours trying to get it to correctly render the five-dot pattern on a die, and attempts to fix the issue mostly resulted in new issues, including dots drawn outside of the bounding box. It was awkward to try to describe the problem using the chat interface, and it would really help if the display of the app itself (running on an emulator on my machine) were piped as an additional stream of input into the AI’s context.
At times, I really felt like I was talking to a person. Sometimes I wanted to steer the AI toward a certain implementation, but since I don’t know how the React Native framework works under the hood, I was hesitant to propose anything too specific. I've had the same feeling in the past when interacting with human developers: afraid to provide misleading guidance, but also reluctant to let them proceed unchecked. Humans tend to worry about hurting my feelings, whereas the AI tends to assume I know what I'm talking about.
AI is no good at design. You still need a skilled (human) designer if you want a decent-looking app. Since that’s not me, my app looks like this; it’s plausibly a functional prototype, but not ready (by a long shot) to publish in an app store. Making minor adjustments to whitespace was like pulling teeth, so at some point I gave up trying to improve the layouts. Perhaps if I knew the right prompts or the right design library to apply, I would have had more success. Please let me know if you have tips!
Product process
As before, I started with ChatPRD to generate the PRD, which I then fed into Cursor to generate code.
Instead of feeding Cursor the entire PRD, I decided to give it information one step at a time. But as soon as I provided the first paragraph of the PRD, it set off writing code for the full prototype. Hold on!
But then I thought, you know, YOLO, let’s just go with it.
As a result of the iterative app creation process, the PRD quickly got out of sync with the implementation—even faster than it does in the real world. What I found myself wanting was a reverse ChatPRD: a tool that can generate the PRD from the code, so that there’s a human-readable description of what was implemented. Bonus points for a way to compare the PRD generated from the code to the original spec, so that the (human) PM can accept / reject the diffs.
But perhaps that's just old-school thinking. In the future, the need to keep documentation in sync with code will be much reduced, since the documentation can be auto-generated from the code at any moment. Who needs a PRD anyway?
Coding environment
Whereas I opted for Dart/Flutter when I built the app using Claude, this time Cursor suggested React Native / JavaScript, guiding me (not without some hiccups) through the process of setting up my development environment, including the installation of Node.js and the emulator.
Similar to my prior experience, in which there was a syntax error because the coding framework had changed since the LLM was trained, the setup instructions in Cursor made reference to files that don’t exist. When I asked about it, it said:
Cursor: Ah, it seems you've created a project using a newer version of Expo that uses the new file structure with the app directory (this is the new "Expo Router" structure).
Later on, I spun my wheels for about an hour trying to fix a minor issue with the app, before Cursor finally suggested we might be better off starting over from scratch. It said:
Cursor: Let's try creating a new project with a specific template that's designed to work with Expo Router. We can then move our game code into the new project.
As noted above, although Cursor is embedded into the IDE, it doesn’t have the context of the files in my project or a view into the software installed on my machine, so its instructions sometimes presumed configurations that didn't exist. To get back on track, it needed me to run commands in the terminal and copy-paste the output back into the chat.
Frustratingly, Cursor loses the chat history every time I restart it or make the mistake of choosing Open -> Folder. I found myself missing the "library" features of other AI tools, so that I could go back to previous questions and responses. My experience coding directly in Claude was that I could leave the project and come back, sometimes days later, and pick up where I'd left off. But with Cursor I found myself starting over again from time to time, sometimes needing to explain to the AI that "we already tried that and it didn't work, remember?"
(I did, however, find a plug-in that allowed me to save the chat history, as long as I remembered to do so before restarting the app.)
Debugging
While it’s true that you can now build apps using AI without actually writing any code, I think it would be difficult to do so if you’ve never written code before yourself. For example, to fix the issue with the rendering of the dot pattern on the 5, I had to propose a specific implementation:
Me: What I don't understand is: if the 4 is rendered correctly (and presumably, using relative spacing?) - then why can't the 5 be rendered just like the 4, but with an added dot in the very center?
Cursor: That's a really good point! You're absolutely right - if the 4 pattern creates a perfect square using flex layout, we should be able to use the same approach for the 5, just adding a centered dot.
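The idea in that exchange can be sketched as a small helper. This is my own illustration, not the app's actual code (the names and coordinates are hypothetical): dot positions are expressed as fractions of the die face, so the 5 is literally the 4's corner square plus one centered dot.

```javascript
// Dot positions as [x, y] fractions of the die face (0..1),
// so the same relative spacing works at any rendered size.
const CORNERS = [
  [0.25, 0.25], [0.75, 0.25],
  [0.25, 0.75], [0.75, 0.75],
];
const CENTER = [0.5, 0.5];

function dotPositions(value) {
  switch (value) {
    case 1: return [CENTER];
    case 2: return [CORNERS[0], CORNERS[3]];
    case 3: return [CORNERS[0], CENTER, CORNERS[3]];
    case 4: return [...CORNERS];
    case 5: return [...CORNERS, CENTER]; // the 4 pattern plus a centered dot
    case 6: return [
      [0.25, 0.25], [0.25, 0.5], [0.25, 0.75],
      [0.75, 0.25], [0.75, 0.5], [0.75, 0.75],
    ];
    default: throw new Error(`unsupported face: ${value}`);
  }
}
```

Because every coordinate stays within 0..1, dots can never be drawn outside the bounding box—the failure mode I kept running into.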
Of the 15 hours it took me to build the app, I probably spent at least 10 debugging: 3 trying to get rid of all the scoring and logic bugs, another 5 to work out the kinks in the rendering of the die and the roll animation, and 2 trying to get rid of the “pop” sound on button presses on Android (before eventually giving up on that one).
I also wasted at least an hour because I didn’t know that I needed to refresh the app for certain types of changes to be reflected in the emulator. I kept applying code changes and then reporting back to Cursor that they “hadn’t worked,” when in fact I was still running the old code. I’m sure there’s a name for this failure mode, and I suspect every developer has gone through it before, even without AI. I could almost feel the passive-aggressive tone in the AI when I asked it why it hadn’t told me about the need to refresh:
Cursor: Yes, I apologize! I should have mentioned reloading right from the start. This is a crucial part of React Native/Expo development that I took for granted.
Design considerations
As I found in my first attempts at building an app, AI is not very good at visual design. Since I don’t have the skills to provide Figma files as input to the AI, the end result was an app that looks more like a wireframe / prototype than a product.
I ended up spending a lot of time giving feedback to Cursor about the animations. Cursor proposed one in which the dot patterns spin around, but it looked like a loading spinner. It also proposed a 3D version, but that looked like a piece of paper flipping around a pole. We settled on an approach that renders a sequence of dot patterns in rapid succession, but even that required some debugging to get rid of the stutter that happens when the same dot pattern occurs twice in a row.
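The stutter fix we settled on boils down to one rule: when animating a roll as a rapid sequence of faces, never show the same face twice in a row, so every frame visibly changes. Here's a minimal sketch of that idea, with names and frame counts of my own invention (not the generated app's code):

```javascript
// Build the sequence of die faces to flash during a roll animation.
// No two adjacent frames repeat, and the sequence lands on the real result.
function rollAnimationFrames(finalFace, frameCount = 8) {
  const frames = [];
  let prev = null;
  for (let i = 0; i < frameCount - 1; i++) {
    let face;
    do {
      face = 1 + Math.floor(Math.random() * 6); // random face 1..6
    } while (face === prev); // reroll if it would repeat the previous frame
    frames.push(face);
    prev = face;
  }
  // The last random frame must differ from the final result too;
  // if it collides, swap in a face that differs from both neighbors.
  if (prev === finalFace) {
    const before = frames.length >= 2 ? frames[frames.length - 2] : null;
    let alt = 1;
    while (alt === finalFace || alt === before) alt++;
    frames[frames.length - 1] = alt;
  }
  frames.push(finalFace);
  return frames;
}
```

Each frame can then be rendered on a short timer (e.g. every 60–80 ms) before settling on the final face.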
Overall, small spacing and minor jank issues took a lot of back-and-forth to resolve. To appreciate what this is like in practice, here's a transcript of the interaction:
Me: I noticed that the word "Points:" moves to the left when the point total becomes double digit...
Cursor: Let's update the points display to use a fixed layout: ...
Me: There's a lot of space between the "Points:" and the number value. I'd prefer that there be the minimum amount of space needed....
Cursor: Let's adjust the points display to be more compact: ... The text should now be centered under the die with minimal spacing.
Me: The "Points: XX" does not appear to be centered under the die, it's too far to the left.
Cursor: Let's center the points display better under the die: ... Now "Points: XX" should be perfectly centered under the die. Let me know if it needs further adjustment!
Me: It's still not centered, it's still too far to the left.
Cursor: Let's try a different approach to centering: ... This should better center the points display under the die. Let me know if it needs further adjustment!
Me: It's still not centered, the word "Points" is left-aligned (almost) with the "It's your turn!" text and the outline of the die. But the right hand side of the number that's displayed is under the "t" in "turn", it's not very close to the right-hand-side of the die. NOTE that it's the combined text + numeric value, for example: "Points:? 12" that needs to be centered.
Cursor: I understand better now - the entire "Points: XX" text should be treated as one unit and centered under the die. Let's modify the styles: ... Let me know if this looks better!
Me: Ok, that looks good, but now the word "Points" shifts when the number of points transitions from being a single digit (e.g. "7") to double digit (e.g. "12"). I would like the word "Points" to not move around on the display. So I want the combined entity to be centered, but the position of the combined entity should not change according to how many digits are being displayed.
Cursor: Ah, we need to ensure a fixed width for the number portion while keeping the whole unit centered. Let's try this: ... Now the "Points:" text should stay fixed in place while the numbers change. Let me know if this works better!
Me: Nope, the word "Points" still shifts to the left when the point total goes from single digit to double digit.
Cursor: Let's try a different approach with a fixed-width container for the number: ... Now the "Points:" text should truly stay fixed in place. Let me know if this works!
Me: Yes, that looks good, but perhaps the container is a bit too wide? Either that or there's additional padding somewhere, because the combined entity appears to be left of center.
Cursor: Let's adjust the width and check for any extra padding: .... This should center the combined "Points: XX" better under the die. Let me know if it needs further adjustment!
Me: I noticed that, when the digits transition from single digit to double digit, the second digit appears to be "appended" to the right. To make the single digit score (the most common score) appear more centered, perhaps the numeric value display could be made "centered" in the space allocated to it, rather than left aligned?
Cursor: Yes, let's center the number within its container: ... Now single-digit numbers should appear centered in their space, making the display look more balanced. Let me know if this looks better!
Me: Yes, it looks better, thanks. Now, I'd like to refine the transition that happens after I roll a one...
I'm sure this would have been much faster to resolve by editing the code directly, but that would have required understanding JavaScript and React.
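For the curious, here's a guess at the kind of styling the transcript eventually converges on. This is my own reconstruction, not the app's actual code: treat "Points: XX" as a single row centered under the die, and give the number a fixed width with centered text so the label doesn't shift between one and two digits.

```javascript
// Hypothetical React Native style objects (plain objects work in RN;
// a real app would typically wrap these in StyleSheet.create).
const styles = {
  pointsRow: {
    flexDirection: 'row',     // label and number sit side by side
    justifyContent: 'center', // center the combined unit under the die
    alignItems: 'baseline',
  },
  pointsLabel: {
    fontSize: 18,
  },
  pointsValue: {
    fontSize: 18,
    width: 32,                // fixed width: room for two digits
    textAlign: 'center',      // single digits sit centered in that space
  },
};

// Used in the component roughly like:
// <View style={styles.pointsRow}>
//   <Text style={styles.pointsLabel}>Points:</Text>
//   <Text style={styles.pointsValue}>{points}</Text>
// </View>
```

The fixed-width value is what stops the label from jumping; the row-level centering is what keeps the combined unit under the die.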
All in all, I had about 20 of these little mini conversations about different fit and finish issues in the app, and that’s why it takes 15 hours, not 25 minutes, to build even a simple prototype app using AI.
Conclusion
Overall, my take-away from using Cursor is that it’s a big improvement over trying to generate code from a stand-alone chat interface, but there’s still a lot of friction in the workflow due to limitations on how the AI gets its context: it doesn't have visibility into all the project files at once, and there's still a “human in the loop” needed to interpret visual output and to copy-paste information from other sources, including the terminal window.
When people describe “AI coding agents” they often make the analogy to an intern: inexperienced and prone to making mistakes, but enthusiastic about doing whatever task is assigned, no matter how tedious.
But a real programming intern, tasked with fixing the five-dot pattern on a die, would never come back with a pattern that’s clearly worse and ask “is this ok now?”
At times, I felt like I was stuck inside Searle’s Chinese room, having to pass notes under the door between me and my intern. Because the conversation is such low bandwidth, the intern is missing too much context, which makes the whole workflow rather inefficient.
However, given the state of AI research, these all seem like engineering issues that will get sorted out soon enough. I imagine the next iteration is a coding companion that sees what I see, has access to the entire codebase, and integrates seamlessly with the local machine for command-line access when needed. It feels like the industry is really close to taking that next step, and I can't wait to give it a try when the time comes.