The Agent Ran My Manual Tests

Manual tests aren’t about who executes them. Using Playwright MCP with Claude Code, I let my coding agent run verification scripts and catch its own regressions during code review.

December 8, 2025 · 6 min · Rida Al Barazi

Setting Agents Up to Succeed

Some of my AI coding sessions produce great work. Others produce garbage. The difference isn’t the model. It’s not the tool. It’s what I give the agent before it starts. You might have heard this called context engineering. It’s an umbrella term with many interpretations. Here’s what it means to me. How my prompts evolved: When I started using coding agents with Cursor, I’d prompt like this: “Implement the OpenAI Responses API.” ...

December 4, 2025 · 5 min · Rida Al Barazi

YOLO Mode Only Works When YOLO Can’t Hurt You

I’ve been calling it the bingo machine. You sit there watching the AI coding agent work. It stops. Asks permission. You hit allow. It stops again. Allow. Allow. Allow. You’re just pulling the lever waiting for the dopamine hit: the moment it finally says “done” and you can ship. Then you actually test it. And it’s not done. Not even close. Twelve months in: I spent the last year fully immersed in AI coding tools. Not casually. Daily. I rotate through all of them, keeping only one monthly subscription at a time so I can switch to whatever’s state of the art. Cursor. Claude Code. Codex. Back to Claude. ...

December 1, 2025 · 9 min · Rida Al Barazi