A reflective narrative of the full capstone experience: what I built, what broke, what I learned, and what I'd do differently.
Cloud Cost Autopilot started from a simple observation: cloud bills are confusing, and most small businesses don't have a dedicated engineer to figure out why they're paying so much. I wanted to build something that actually solved a real problem, not just a tutorial project I'd throw away after the quarter. When I started researching the space, I found that tools like AWS Cost Explorer are powerful but overwhelming for someone who just wants to know what they're wasting money on. That gap became the foundation for this project.
Choosing a cloud cost optimization tool also felt personally relevant to where I want to go professionally. I've been interested in cloud engineering and AWS, and I knew that building something real with the AWS SDK, IAM roles, and CloudWatch would teach me more than any course could. The capstone felt like an opportunity to prove to myself and to future employers that I could build something meaningful from scratch.
The project also pushed me to think like a product engineer, not just a programmer. It's one thing to write code that works. It's another to build something that answers a question a real person would actually ask: "How much money am I wasting right now?"
The most technically challenging part of this project was getting the AWS integration working with real credentials. In theory, the flow is straightforward: assume an IAM role, get temporary credentials, use those to call EC2 and CloudWatch APIs. In practice, it took a week of debugging permission errors before it worked.
The breakthrough came from understanding how IAM trust relationships work. My initial attempts kept returning "Access Denied" errors not because my code was wrong, but because the trust policy on my role wasn't configured to allow my user to assume it. Once I understood that the role needed to explicitly trust my IAM user through its trust relationship JSON, everything clicked.
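In concrete terms, the fix was a trust policy shaped roughly like this (the account ID and user name below are placeholders, not the actual values from my setup):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/capstone-dev"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The key distinction I had missed is that a role has two separate policies: a permissions policy that says what the role can do, and a trust policy that says who is allowed to assume it. My permissions were fine; the trust side was empty.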
Getting CloudWatch CPU metrics working brought a similar challenge. Pulling raw datapoints and averaging them over seven days sounds simple, but handling missing data, empty responses, and instances that had just launched required careful error handling. I learned to write defensive code that could process partial data gracefully rather than crashing the whole scan.
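The defensive pattern I landed on looks roughly like this (a simplified sketch: the function name is illustrative, and it assumes the CloudWatch response has already been reduced to a plain array of datapoints):

```javascript
// Average CPU over a set of CloudWatch datapoints, tolerating the gaps
// that show up in practice: missing responses, empty arrays, and
// datapoints without an Average field (e.g. freshly launched instances).
function averageCpu(datapoints) {
  if (!Array.isArray(datapoints) || datapoints.length === 0) {
    return null; // no data yet: the caller skips this instance instead of crashing
  }
  const values = datapoints
    .map((dp) => dp && dp.Average)
    .filter((v) => typeof v === "number" && !Number.isNaN(v));
  if (values.length === 0) return null;
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}
```

Returning `null` instead of `0` matters here: a brand-new instance with no datapoints should be skipped entirely, not flagged as 0% CPU and reported as waste.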
The core intellectual challenge of this project was translating CPU utilization percentages into something a small business owner actually cares about: dollars. That translation is what the recommendation engine does.
The logic is straightforward: pull each instance's seven-day average CPU, flag anything under 5% as idle, look up the instance type's hourly rate, multiply by 730 hours per month, and return estimated savings. But arriving at that design took iteration. Early on, I was just flagging idle instances without any dollar amount attached.
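Distilled to its essentials, that calculation is only a few lines (a sketch: the 5% threshold and 730-hour month match the description above, but the function name and the inline pricing table are simplified stand-ins for the real service code, which looks rates up elsewhere):

```javascript
const IDLE_CPU_THRESHOLD = 5; // percent, seven-day average
const HOURS_PER_MONTH = 730;  // average hours in a month

// Illustrative on-demand hourly rates; the real service reads these
// from a pricing lookup rather than a hardcoded map.
const HOURLY_RATES = { "t3.micro": 0.0104, "t3.small": 0.0208 };

// Estimated monthly savings in dollars if the instance is idle,
// or null if it is not flagged.
function estimateSavings(instanceType, avgCpuPercent) {
  const rate = HOURLY_RATES[instanceType];
  if (rate === undefined || avgCpuPercent >= IDLE_CPU_THRESHOLD) {
    return null;
  }
  return Math.round(rate * HOURS_PER_MONTH * 100) / 100;
}
```

With these assumed rates, an idle t3.micro comes out to 0.0104 × 730 ≈ $7.59/month, which is exactly the kind of concrete number the dashboard surfaces.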
When I met with my stakeholder, who runs a small startup on AWS, his feedback was immediate: "Show me dollar amounts, not percentages." That single piece of feedback completely reframed how I thought about the project's output.
Going into this project I had backend experience but little frontend experience. Building the React dashboard, login screen, stats cards, recommendations table, and action buttons forced me to think about the full picture of how a user would interact with the system.
The most instructive debugging session came from an account ID mismatch that caused recommendations to return zero results even when the scan was working perfectly. The scan was saving instances under one account ID while the recommendation generator was querying a different one. Finding the bug required building temporary debug endpoints, querying the database directly, and tracing data flow from the login form all the way to the database. It was tedious, but it taught me how to systematically debug a full-stack application rather than guessing.
I also made deliberate design decisions about the UI. My first version was visually complex with animations and a dark theme. I pulled it back to something simpler because I wanted the dollar savings number to be the first thing someone sees — not the interface.
If I started over, I would set up a test AWS account with running EC2 instances from day one. Most of my early development was done against empty responses because my test account had no running instances. That meant I couldn't catch bugs in the recommendation engine until very late, leading to a stressful debugging session right before the showcase.
I would also design the database schema more carefully upfront. Several late-stage bugs came from column name mismatches between what the recommendation service wrote and what the dashboard read. A more careful design pass at the start would have caught those inconsistencies early.
More broadly, I underestimated how much time real AWS integration takes. The SDK is well-documented, but connecting it to a real account with real permissions and real data surfaces a whole category of problems that don't exist in tutorials.
Cloud Cost Autopilot ended the quarter as a working full-stack application: a React frontend, a Node.js and Express backend, a PostgreSQL database, and a live AWS integration that scans real EC2 instances and returns real dollar savings estimates.
What I'm most proud of is that the system works with actual AWS data. When I demo it and the dashboard shows "$8.47/month wasted" on a real instance ID from my AWS account, it's the CloudWatch API, IAM role assumption, and a recommendation engine all working together.
In the second quarter I plan to add automatic 24-hour scheduled scans, email or Slack alerts when new waste is detected, and expand the engine to catch more waste types like oversized instances and unattached EBS volumes. More than anything, this project gave me the confidence that I can build something real with AWS, and that's exactly the kind of proof I wanted coming out of this capstone.