
30 Problems Solved with AI in 30 Days

What Was the Challenge?

In early 2025, I set myself a personal challenge: build one AI tool per day for 30 consecutive days, each one solving a real, documented business problem. Not demos. Not tutorials. Real tools that could be handed to a business owner and used the next morning.

The rules were simple. Each tool had to solve a specific problem, be built within a single day, and be documented honestly: what I built, what worked, and what didn't. No exceptions, no extensions.

Why 30 Days?

Speed forces clarity. When you have eight hours to go from problem to working prototype, you can't afford to over-engineer. You cut straight to the core loop (the one thing the user needs to do) and you build that first.

I also wanted to test my own stack. After years of reading about AI, building proofs-of-concept, and attending workshops, I needed to answer one honest question: how fast can I actually ship?

The answer, after 30 days: faster than I expected, but the quality gap between day 3 and day 18 was significant. And that gap was the most interesting part.

Five Builds That Actually Worked

AI Resume Builder. The clearest win. A user uploads their CV, pastes a job description, and gets an ATS-ready resume in seconds. What made it work: the problem was sharply defined, the feedback loop was instant, and the output had a clear quality bar (ATS score). It's now live at resume.aibizmy.com.

Voice Agent for Business. Built on Retell AI, this handled inbound business calls with natural conversation flow. Booking appointments, answering FAQs, escalating to humans. A Malaysian SMB owner told me it handled 60% of their inbound calls within two weeks of going live.

AI Proposal Generator. Paste your client brief, get a structured, professional proposal out the other end. What surprised me: the formatting mattered more than the content quality. Businesses wanted something that looked right immediately.

Logistics Analytics Dashboard. Built for a supply chain context, this pulled delivery data and surfaced delay patterns using AI-generated summaries. The insight layer was where AI added real value, not the charts themselves.

Content Carousel Generator. Generates social media carousel slides from a single topic input. The tool taught me that AI content tools live or die by how tightly you constrain the output format.
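To illustrate what "tightly constraining the output format" can look like in practice, here is a minimal sketch. It is not the actual implementation behind the carousel generator; the schema, limits, and function names are my own assumptions. The idea is simply that the model is asked for JSON matching a fixed slide shape, and anything that doesn't validate is rejected before it ever reaches the user.

```python
# Hypothetical sketch: enforcing a fixed output format on a carousel
# generator. The model call itself is omitted; we only validate its
# raw text response against a rigid slide schema.
import json

SLIDE_KEYS = {"title", "body"}   # every slide must have exactly these keys
MAX_SLIDES = 8
MAX_TITLE_CHARS = 60             # titles longer than this overflow a slide

def validate_carousel(raw: str) -> list[dict]:
    """Parse model output and enforce the slide format, or raise ValueError."""
    slides = json.loads(raw)
    if not isinstance(slides, list) or not 1 <= len(slides) <= MAX_SLIDES:
        raise ValueError("expected a list of 1-8 slides")
    for slide in slides:
        if not isinstance(slide, dict) or set(slide) != SLIDE_KEYS:
            raise ValueError("slide must have exactly: title, body")
        if len(slide["title"]) > MAX_TITLE_CHARS:
            raise ValueError("title too long for a slide")
    return slides

good = '[{"title": "Hook", "body": "Why constraints matter."}]'
print(len(validate_carousel(good)))  # → 1
```

A reject-and-retry loop around a validator like this is one straightforward way to keep generated content inside a format the user recognizes.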

What Broke (The Honest Part)

Day 9 was a lesson in scope creep. I tried to build a multilingual voice agent that could switch between Bahasa Malaysia and English mid-conversation. The STT (speech-to-text) layer handled it poorly. I shipped a half-built tool, documented the failure, and moved on.

Days 11 through 14 were rough. I hit what I now call the "AI for everything" trap: reaching for an LLM to solve problems that a simple rule or a lookup table would have solved better and faster. The tools from that stretch were technically fine but practically weak.
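As a concrete (and entirely hypothetical) example of the trap: routing inbound emails to a department is a closed problem, and a keyword lookup table handles it faster, cheaper, and more predictably than an LLM call. The routes and function below are illustrative, not from any of the 30 tools.

```python
# Hypothetical sketch: the "AI for everything" trap. For a closed
# classification problem, a plain lookup table beats an LLM call on
# speed, cost, and predictability.
ROUTES = {
    "invoice": "billing",
    "refund": "billing",
    "password": "support",
    "login": "support",
    "quote": "sales",
}

def route_email(subject: str) -> str:
    """Return the department for a subject line, defaulting to 'general'."""
    for word in subject.lower().split():
        if word in ROUTES:
            return ROUTES[word]
    # Only this ambiguous remainder might justify an LLM fallback.
    return "general"

print(route_email("Refund request for order #1123"))  # → billing
```

The lesson from days 11 through 14 was to reach for the table first and reserve the model for the cases the table can't decide.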

Mobile OCR was another failure. Building a halal scanner that reads product labels from a phone camera sounds straightforward. Variability in image quality across devices made it genuinely hard. I shipped a version that worked 70% of the time, which is not enough for food safety.

What I Learned

Constraints produce better products. The best tools came from the narrowest problem definitions. "Build an AI tool for HR" produced nothing useful. "Build a tool that writes a job description from a bullet list" produced something a business could use on day one.

Document everything in real time. The notes I took while building day 7 were the foundation of a workflow I used on day 19. Your future self will thank you for writing it down while it's fresh.

The output format is the product. Users don't care how clever your prompt is. They care about what lands in their inbox or on their screen. A mediocre AI output in a great format beats a great AI output in a confusing format, every time.

Ship first, harden later. Every tool I tried to make "production-ready" on day one ended up delayed or not shipped at all. The tools I shipped rough, documented the limitations, and improved later consistently performed better in practice.

The Verdict

30 tools in 30 days is exhausting. There were nights when I was committing code at 11pm, writing documentation at midnight, and questioning my life choices by 1am. I'd do it again.

The value wasn't in the 30 tools. It was in the compounding effect of having shipped that many times. By day 25, I had a personal library of patterns, failed approaches, and tested solutions that made every subsequent build faster and better.

If you're learning AI development, I'd recommend a version of this challenge, even at 7 days. The constraint is the point. Speed reveals your actual skill level. And the gap between what you thought you could build and what you actually build is where the real learning happens.


Thinking about an AI project for your business?

I've spent 30+ days building AI tools for real business problems. If you have something in mind, I'd like to hear about it.

Let's Talk