How to build an AI side project using AI in 2025
Starting from zero and getting nearly 1k waitlist users in less than 2 weeks
Throughout the holiday break, I’ve been relaxing and playing around with AI. I’m no AI expert, but I built a mostly functional, crazy-impressive side project with ~1k waitlist users in just 2 weeks. Shoutout to Sidwyn for partnering with me on it.
We’re building WriteEdge: the #1 place for engineers to improve their writing, starting with technical design docs.
Both Sidwyn and I went into this with minimal knowledge about AI, just what we’d been casually following on the side. Still, within a couple of weeks, we built full prompt-chaining workflows that generate 6-page tech specs and 3-5 diagrams, the kind we’d barely have time to produce ourselves as Senior+ engineers in big tech 😅
In this post and future ones, I’ll share progress updates and lessons learned with you.
⭐ What you’ll learn
How to get a side project off the ground
Some of the most helpful AI tools when coding and how to use them
🚀 Getting the project off the ground
1. Build a waitlist (before you have a product)
The first thing Sidwyn and I did was set up a waitlist to start gathering potential users—before writing a single line of code. It’s a process I learned from running my Maven Mid-level to Senior Cohort, where I built up a waitlist of 500+ people before finishing the course.
It’s much better to allow users to start flowing in while you’re building rather than spending months building something out only to realize there was never any interest in the first place.
So, if you’ve been pondering a side project idea, start a waitlist now and announce that you’re working on it. You might save yourself months of wasted effort before you even start building.
As for how we did it, we set up a basic landing page using one of Next.js’s starter templates, then linked off to a Tally form. Tally stores all the submissions, so all you need is a button on your website that links to the form.
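In practice, the whole thing is just a static page with an external link. Here’s a minimal sketch of what that looks like in a Next.js App Router project; the Tally URL, copy, and styling are placeholders, not our actual landing page:

```tsx
// app/page.tsx: a bare-bones landing page whose only job is to send visitors to the waitlist.
// The Tally URL below is a placeholder; Tally stores every submission for you.
const TALLY_FORM_URL = "https://tally.so/r/your-form-id";

export default function Home() {
  return (
    <main style={{ display: "flex", flexDirection: "column", alignItems: "center", gap: 24, padding: 80 }}>
      <h1>WriteEdge</h1>
      <p>The #1 place for engineers to improve their technical design docs.</p>
      {/* No backend needed: the button just links out to the hosted Tally form */}
      <a href={TALLY_FORM_URL} target="_blank" rel="noopener noreferrer">
        Join the waitlist
      </a>
    </main>
  );
}
```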
2. Ask for feedback correctly
I emailed ~15 people who joined the waitlist, asking questions about their experience. These questions are easy to get wrong.
I did not just ask things like, “Would you like this feature?” Instead, I asked about their experiences and problems they had.
Here’s what I asked:
In the most recent design doc you wrote…
What were the most frustrating or challenging parts?
How did you spend your time? Did anything take longer than you expected?
What parts did you feel ready to write immediately, and what parts did you need to think more about?
Notice that there’s nothing about WriteEdge as a product. My goal is to understand their problems and, subsequently, our opportunities.
If I asked, “Would you like <x> feature?” most people would just say “yes,” either because it sounds good in the moment or because they want to be nice. You might recall being in this same position: telling someone, “Yeah, that sounds awesome!” and then never using the feature once it’s released. I know I’ve done that 😅
Another problem with that type of question is you don’t know why you’re building that feature. You just know that someone told you “yes.” This goes back to knowing the “so that” that Software Architect Mike Thornton talked about in his guest post.
“I want to be able to generate a 70% there tech spec so that I can understand the different edge cases that might pop up.”
“I want to get feedback from an AI so that my tech specs can get approved on the first or second round of feedback.”
From asking the questions I did, I found out there are many different “so that” statements. That gives us plenty of opportunities for feature additions we never would have thought of, plus proof that our core features will serve the most foundational use cases.
3. Build in public
Since starting on WriteEdge, I’ve publicly shared updates via LinkedIn posts and the newsletter. By sharing what we’re building, we’re building awareness and getting real-time feedback to see if we’re headed in the right direction.
This got us over 800 people on the waitlist in only 1-2 weeks of sharing.
P.S. I’m most excited about our Mermaid diagram generation feature. It creates sequence diagrams, flow charts for data flows, and Gantt charts for milestones.
Here’s a quick and fun sequence diagram of a pet toy exchange app it made:
🤖 Building using AI
1. Cursor - The best editor ever
As much as I’m a JetBrains fan turned forced-VSCode user, I’m now married to Cursor. It is by far the best editor experience ever and blows GitHub Copilot out of the water.
If you’re worried about migrating, don’t be. Cursor is a fork of VSCode, so you can just install Cursor, log into your synced GitHub account, and all your settings, keybindings, etc. will transfer over.
The two main Cursor features I use are:
The “tab” autocomplete you’re used to from GitHub Copilot. The main difference is that it makes suggestions throughout the file rather than only on the current line, and in my experience the suggestions are almost always correct. It’s hard to cover all the nuanced ways Cursor’s autocomplete is better; you really have to see it for yourself.
One trivial example: let’s say you’re editing a string prompt to GPT. You have a numbered list of 10 items and you want to insert a new item in the 3rd slot. Instead of having to update all the other numbers to account for the new one, you can just hit “Tab” and Cursor fixes every number for you.
The “Composer.” Open this from the VSCode Command Palette by searching “Composer.” This is the bread and butter of Cursor, and how I practically don’t need to code anymore. It creates files for you, loads up context across the codebase, and makes multi-file edits for refactors.
Here’s one example of the Composer in action while working on WriteEdge:
I wanted to split up an `audiences` field into two separate fields, `primaryAudiences` and `secondaryAudiences`. I knew what code I needed to adjust, but it would have taken me at least 15 minutes to type it all out. Instead, I wrote a simple prompt to the Composer. Within a few seconds, the form fields were added, the schema updated, and all I needed to do was hit “Accept all.”
I could also have added the corresponding backend file as context and asked it to update any relevant fields there.
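For context, the shape of the change looked roughly like this. It’s a simplified sketch using Zod for the form schema; the real schema has many more fields, and the validation rules here are only illustrative:

```ts
import { z } from "zod";

// Before: a single field holding every audience (simplified sketch, not the real schema)
const specFormSchemaBefore = z.object({
  title: z.string().min(1),
  audiences: z.array(z.string()),
});

// After: the Composer split it into primary and secondary audiences
// and updated the form fields that read from the schema.
const specFormSchema = z.object({
  title: z.string().min(1),
  primaryAudiences: z.array(z.string()).min(1),
  secondaryAudiences: z.array(z.string()).default([]),
});

type SpecForm = z.infer<typeof specFormSchema>;
```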
2. ChatGPT “Workflows”
“Agents” is an overloaded term, and when most people talk about them, they often actually mean workflows. A workflow is a setup where prompts feed into each other or loop back, like a state machine.
Here’s a “prompt chaining” workflow showing how one LLM call feeds into the next, with regular, non-AI logic acting as a gate between steps. A simple example: an LLM call determines whether a piece of customer feedback is positive or negative, and if it’s negative, further LLM calls do a deeper analysis.
![Diagram from Anthropic’s article showing prompt chaining with a logic gate](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae749cfa-20ec-4c3c-ad60-7a548d00c13b_2401x1000.webp)
Each one is essentially a design pattern. Anthropic covers many of the different styles in its article on agents and workflows.
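To make the customer-feedback example above concrete, here’s a rough sketch of prompt chaining with a plain logic gate between calls. It uses the OpenAI Node SDK; the model name, prompts, and function names are mine for illustration, not WriteEdge’s actual code:

```ts
// Prompt chaining with a non-AI logic gate between LLM calls (illustrative sketch).
import OpenAI from "openai";

const client = new OpenAI();

async function ask(prompt: string): Promise<string> {
  const res = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model name
    messages: [{ role: "user", content: prompt }],
  });
  return res.choices[0].message.content ?? "";
}

export async function triageFeedback(feedback: string) {
  // Call 1: classify sentiment
  const sentiment = await ask(
    `Classify this customer feedback as POSITIVE or NEGATIVE. Reply with one word.\n\n${feedback}`
  );

  // Logic gate: plain code, no LLM; only negative feedback gets deeper analysis
  if (!sentiment.toUpperCase().includes("NEGATIVE")) {
    return { sentiment: "positive" as const };
  }

  // Call 2: the gate's result decides whether this prompt runs at all
  const analysis = await ask(
    `Summarize the root cause of this complaint and suggest one fix:\n\n${feedback}`
  );
  return { sentiment: "negative" as const, analysis };
}
```

The gate itself is just an if statement; the “AI” lives only in the prompts, which is what keeps these workflows easier to reason about than full-blown agents.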
You can also combine them, which is what I did in my ai-testing-library project: a combination of “prompt chaining” + “evaluator-optimizer.” The first prompt tries to create a passing test for the block of code provided. If it passes, we move on to generating other tests. If any of those fail, we loop back and fix the failing ones, repeating until every test passes or we give up and keep only the ones that do.
Note: The ai-testing-library project isn’t finished, but I set up the AI workflow. Feel free to check the code here.
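To give a feel for the shape of that loop, here’s a heavily simplified sketch. The helper functions (generateTests, runTests, fixTest) are stand-ins for illustration, not the actual ai-testing-library code:

```ts
type TestCase = { name: string; code: string };
type TestResult = { test: TestCase; passed: boolean; error?: string };

// Stand-in declarations: real versions would call the LLM and a test runner.
declare function generateTests(source: string): Promise<TestCase[]>;
declare function runTests(tests: TestCase[]): Promise<TestResult[]>;
declare function fixTest(test: TestCase, error: string, source: string): Promise<TestCase>;

// Evaluator-optimizer loop: generate tests, run them, and ask the LLM to repair failures.
async function generateAndFixTests(sourceCode: string, maxAttempts = 3): Promise<TestCase[]> {
  // Generator: prompt the LLM to write tests for the provided block of code
  let tests = await generateTests(sourceCode);

  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    // Evaluator: plain code, no LLM; just run the tests
    const results = await runTests(tests);
    const failing = results.filter((r) => !r.passed);
    if (failing.length === 0) return tests; // everything passes, we're done

    // Optimizer: loop back and have the LLM fix only the failing tests
    const fixed = await Promise.all(
      failing.map((r) => fixTest(r.test, r.error ?? "", sourceCode))
    );
    tests = tests.map((t) => fixed.find((f) => f.name === t.name) ?? t);
  }

  // Give up: keep whichever tests still pass after the last attempt
  const finalResults = await runTests(tests);
  return finalResults.filter((r) => r.passed).map((r) => r.test);
}
```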
3. Have fun
One of the nice things about building with AI is that you don’t need to ruthlessly prioritize and cut scope like you used to. Features that used to take hours can take minutes. So even more than usual, have fun with it 😄
Right now, our tech spec generation for WriteEdge can take about 30 seconds. The performance isn’t the end of the world, especially if it gives a good result, but I still wanted a loading state that added some joy. Before, I might have deprioritized that in favor of just launching; with Cursor and the Composer, I just needed an idea and a prompt.
So I prompted Cursor to write up code that switches out a loading gif every few seconds while we’re generating the design doc. I just needed to provide the gifs. Annnd… tada 🪄
There are plenty more loading gifs than these two but you’ll have to wait until our beta opens to see the others 😄
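For the curious, the component Cursor produced is roughly this shape. The gif paths and the 3-second interval are placeholders here, not our actual assets:

```tsx
"use client";
import { useEffect, useState } from "react";

// Placeholder gif paths; the real loader has more (you'll see them in the beta)
const GIFS = ["/loading/first.gif", "/loading/second.gif"];

export function GeneratingLoader() {
  const [index, setIndex] = useState(0);

  useEffect(() => {
    // Swap to the next gif every few seconds while the design doc is generating
    const id = setInterval(() => setIndex((i) => (i + 1) % GIFS.length), 3000);
    return () => clearInterval(id);
  }, []);

  return (
    <div style={{ display: "flex", flexDirection: "column", alignItems: "center", gap: 16 }}>
      <img src={GIFS[index]} alt="Generating your design doc…" width={240} />
      <p>Drafting your spec, diagrams and all…</p>
    </div>
  );
}
```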
👏 Shout-outs of the week
How I’m advancing my career without neglecting my life — a great way to think about your priorities and goals in 2025. Thanks for the article, Fran!
5 skills to develop to grow from Senior to Staff — mindset shifts and personal experience in growing to Staff Engineer.
Scripts for difficult conversations — psychology behind difficult conversations and highly tactical advice for approaching them.
Thank you for being a continued supporter and reader, and for your help in growing this newsletter to 80k+ subscribers 🙏
You can also hit the like ❤️ button at the bottom of this email to help support me or share this with a friend to get referral rewards. It helps me a ton!