It started simple
This did not begin as a portfolio project, a brand exercise, or an attempt to design a polished narrative. I just wanted to build a calculator.
The idea was deliberately small: pick something simple enough to understand quickly, then use it to test how far I could push AI without hiding behind a complicated product.
At first, the goal looked straightforward. Ask AI to generate the calculator, review the result, and see what happens.
But instead of writing the code directly through one general-purpose agent, I tried a different premise: what if AI did not build the product itself, but built the team that would build it?
That small shift changed the whole experiment.
From one agent to a system
Rather than relying on a single agent to do everything, I split the work into explicit roles: a product owner, a designer, an engineer, and a set of validators checking quality at every stage.
At the center of it all is an orchestrator. Its job is not to produce the deliverable itself. It reads the initial prompt, interprets the intent, decides which role needs to act next, and routes the work through the right sequence.
That makes the process feel very different from ordinary prompting. It feels less like asking a model for an answer and more like kicking off a system.
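If I had to sketch the orchestrator's job in code, it would look something like this. Everything here is illustrative, not the actual implementation: the stage names, `run_role`, and `validate` are stand-ins for LLM calls scoped to each role and its validators.

```python
# Minimal sketch of the orchestrator's routing loop.
# All names and logic are hypothetical stand-ins.
PIPELINE = ["product", "design", "engineering"]

def run_role(role: str, context: str) -> str:
    # Stand-in for an LLM call scoped to one role's skills.
    return f"{role} output for: {context}"

def validate(role: str, output: str) -> bool:
    # Stand-in for the role's validator agents.
    return "output" in output

def orchestrate(prompt: str) -> dict:
    artifacts = {}
    context = prompt
    for role in PIPELINE:
        output = run_role(role, context)
        while not validate(role, output):  # loop back until it passes
            output = run_role(role, context)
        artifacts[role] = output
        context = output  # the next role builds on validated work
    return artifacts
```

The point of the sketch is the shape, not the details: the orchestrator never produces the deliverable itself; it only decides who acts next and whether their output is allowed to move forward.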
[IMAGE PLACEHOLDER — system overview] A minimal diagram showing the orchestrator at the top and three layers below: Product, Design, and Engineering, each with its own validators.
From requirements to UI
The product owner does more than generate requirements. It looks for missing information, rewrites ambiguous parts, and keeps refining the definition until the brief becomes usable.
Then a validator checks whether that output is actually actionable for design. If it is still vague, incomplete, or too abstract, it goes back through the loop until it becomes clear enough to move forward.
By the time the design phase begins, most of the ambiguity has already been removed.
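That refine-until-actionable loop is simple to express, even in toy form. In this sketch, `critique` and `refine` are invented placeholders; in the real system they are the product owner and its validator, and "ambiguity" is judged by a model rather than a string check.

```python
# Hypothetical sketch of the product-definition loop: the brief is
# rewritten until a validator judges it actionable for design.
def critique(brief: str) -> list[str]:
    # Placeholder validator: returns open issues, empty when actionable.
    return ["ambiguous requirement"] if "?" in brief else []

def refine(brief: str, issues: list[str]) -> str:
    # Placeholder for the product owner rewriting the ambiguous parts.
    return brief.replace("?", "")

def define_product(raw_prompt: str, max_rounds: int = 5) -> str:
    brief = raw_prompt
    for _ in range(max_rounds):
        issues = critique(brief)
        if not issues:
            return brief  # clear enough to hand to design
        brief = refine(brief, issues)
    raise RuntimeError("brief never became actionable")
```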
The designer then works inside real constraints: user flows, layout logic, copy, state handling, and the design system. It is not drawing isolated screens. It is building interface structure while thinking through hierarchy and states such as loading, error, and success.
That work is challenged as well. A design-system validator makes sure the output uses real components. A copy validator checks clarity. A UX validator looks for friction, inconsistencies, and weak decisions. The design does not move on until it passes those reviews.
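The design gate can be pictured as a set of independent checks that must all pass. The validator names below mirror the ones I described, but the checks themselves are toy examples, not the real criteria.

```python
# Hedged sketch of the design gate: every validator must pass
# before the design moves on. Checks are illustrative only.
DESIGN_VALIDATORS = {
    "design_system": lambda d: d.get("components") == "material",  # real components
    "copy": lambda d: len(d.get("copy", "")) > 0,                  # clarity stand-in
    "ux": lambda d: not d.get("friction", False),                  # friction stand-in
}

def design_gate(design: dict) -> list[str]:
    """Return the names of validators the design fails; empty means it passes."""
    return [name for name, check in DESIGN_VALIDATORS.items() if not check(design)]
```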
[IMAGE PLACEHOLDER — calculator UI] Clean, minimal calculator interface built with Material components.

From UI to working product
Once the design is validated, engineering takes over. The engineer agent translates the approved decisions into working code, assembles components, handles logic, and makes sure the interface behaves correctly across breakpoints.
But nothing ships immediately. Another validator checks whether the product actually works under more realistic conditions: whether the layout holds up responsively, whether the components behave correctly, and whether anything breaks when the interface is used like a real product instead of an ideal mockup.
If something fails, it does not get patched informally. It goes back into the loop.
What surprised me was not only that the system worked, but how it worked. I gave it a single prompt. From there, it created multiple agents with narrow responsibilities and specific skills, then kept them collaborating and iterating for more than an hour without intervention.
It generated outputs, validated them, corrected itself, and kept moving forward. At no point did the process feel linear.
[IMAGE PLACEHOLDER — agent flow] A simple flow showing the sequence: prompt → product → validation → design → validation → engineering → validation.
Why it works
A big reason this works is constraint.
Each agent has access only to the skills it actually needs. The product owner defines the problem. The designer does not think like an engineer. The engineer does not rewrite the whole strategy. Validators do not create; they critique.
That separation matters because it reduces context overload. Instead of asking one oversized agent to reason about everything at once, the system sharpens responsibility. Each role has a narrower job, a clearer standard, and a better chance of producing something strong.
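In code, that constraint amounts to an allowlist of skills per role. The role and skill names here are invented for illustration; the real system's skill set is richer, but the principle is the same.

```python
# Illustrative sketch of constrained roles: each agent sees only
# the skills it needs, and validators critique but never create.
ROLE_SKILLS = {
    "product_owner": {"write_requirements", "ask_clarifying_question"},
    "designer": {"compose_layout", "write_copy", "pick_component"},
    "engineer": {"write_code", "run_tests"},
    "validator": {"review", "flag_issue"},  # no creation skills at all
}

def allowed(role: str, skill: str) -> bool:
    return skill in ROLE_SKILLS.get(role, set())
```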
Connecting the workflow to Material Web reinforces that even more. The system is not inventing interface patterns from scratch every time. It is composing real components, which makes the output faster to produce, easier to validate, and more consistent by default.
Iteration and what comes next
For visual iteration, I temporarily used Figma as a bridge.
The UI is generated, sent into Figma, adjusted there, and then fed back into the system. On the next pass, the orchestrator can skip the product-definition stage and go directly into design and engineering, refining what already exists instead of rebuilding from zero.
That already creates a much tighter loop than a traditional handoff.
[IMAGE PLACEHOLDER — iteration loop] A simple loop: code → Figma → edits → back to system.
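The tighter loop comes from a small routing decision: when an artifact already exists, the orchestrator re-enters at design instead of starting from the prompt. A toy version, with hypothetical stage names:

```python
# Sketch of re-entry: first pass runs every stage; later passes skip
# product definition and refine what already exists. Names are illustrative.
STAGES = ["product", "design", "engineering"]

def stages_to_run(has_existing_artifact: bool) -> list[str]:
    return STAGES[1:] if has_existing_artifact else STAGES
```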
The next step is removing Figma entirely.
I am now building a lightweight browser-based editor that lets me tweak typography, spacing, and components directly on the interface itself. Every change can be captured, sent back into the system, and revalidated automatically. No files, no handoff, and no translation layer between the design surface and the product.
Just continuous iteration on top of the thing that already exists.
What began as a simple calculator became something else entirely. Not just a tool, but a way of working: a system where the product is not built step by step by one agent, but emerges from coordinated roles, structured constraints, and continuous validation.
I did not build a calculator.
I built a workflow that can build calculators.