//TIM.CHAO
Primarily translated by AI
Part of AI-DRIVEN DEVELOPMENT: BUILDING A PERSONAL BRAND SITE FROM SCRATCH · PART 5

Testing & Quality: An Automated Tester in the Browser

April 9, 2026 · 5 MIN READ · AI

Just because a feature is finished doesn’t mean it’s done right. In software development, there’s a big difference between “working” and “bug-free.” This post explains how I use automation tools to verify website functionality, identify issues, and ensure code quality.

Playwright MCP: Automated Operations in the Browser

Playwright MCP is one of the most frequently used testing tools in this project. It allows you to programmatically control the browser—navigating pages, clicking buttons, filling out forms, taking screenshots, and inspecting the DOM structure—all automatically.

Unlike traditional E2E testing frameworks, Playwright MCP operates as an MCP server, seamlessly integrated into Claude Code’s workflow. You don’t need to write test scripts; instead, you describe what you want to test using natural language, and the tool executes the actions in the browser.

Here are a few of the commands I use most often:

  • browser_navigate: Opens a specified URL and verifies that the page loads correctly

  • browser_snapshot: Captures an accessibility snapshot of the page—more structured than a screenshot, showing the role, ref, and text content of all elements

  • browser_click + browser_fill_form: Simulates user interactions to test interactive features

  • browser_evaluate: Executes JavaScript on the page to handle tasks that standard operations cannot perform (such as interacting with the TipTap editor in a `contenteditable` field)

Here’s a practical example: testing the article publishing workflow in an admin dashboard. Playwright MCP can automatically open the new article page, fill in the title and body, select categories and series, click the translate button, wait for the translation to complete, and finally save the post. The entire process mirrors manual operations exactly, without missing a single step.
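The workflow above can be written down as a plain sequence of MCP tool calls. This is only a sketch: the tool names come from the list above, but the URL, field labels, and argument shapes are illustrative guesses, not the MCP server's exact schema.

```typescript
// One step = one Playwright MCP tool call issued by the agent.
type McpStep = { tool: string; args: Record<string, unknown> };

// Sketch of the "publish an article" workflow as MCP tool calls.
// URL, field names, and button labels are hypothetical.
const publishWorkflow: McpStep[] = [
  { tool: "browser_navigate", args: { url: "http://localhost:3000/admin/posts/new" } },
  {
    tool: "browser_fill_form",
    args: { fields: [{ name: "Title", type: "textbox", value: "My new post" }] },
  },
  {
    // TipTap renders a contenteditable element, so a plain form fill
    // won't work — the agent drops down to JavaScript instead.
    tool: "browser_evaluate",
    args: { function: "() => { /* set body content via the TipTap editor API */ }" },
  },
  { tool: "browser_click", args: { element: "Translate button" } },
  { tool: "browser_click", args: { element: "Save button" } },
];

// In practice Claude Code issues these one at a time, taking a
// browser_snapshot between steps to confirm the page state.
for (const step of publishWorkflow) {
  console.log(`${step.tool}(${JSON.stringify(step.args)})`);
}
```

The point of the declarative shape is that each step is also a checkpoint: a failed click or a missing element is caught at the exact step where reality diverges from the plan.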

QA Skill: A Systematic Bug-Finding Process

The QA skill provides a systematic quality inspection process. Instead of clicking around at random, it works through each functional module in order.

Here’s how it works: First, it lists all the functional points that need to be checked, then executes them one by one. For each functional point, it will:

  1. Describe the expected behavior

  2. Execute the actual operation (typically using Playwright MCP)

  3. Compare the actual results with the expected results

  4. Record any discrepancy as a bug, attaching a screenshot or snapshot
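The four steps above can be sketched as a small check-runner. The types and check functions here are stand-ins invented for illustration; in the real flow, each `run` would be a Playwright MCP operation, and a recorded bug would carry a screenshot or snapshot.

```typescript
// A functional point: expected behavior plus an automated check.
type Check = {
  feature: string;
  expected: string;
  run: () => Promise<string>; // returns the observed behavior
};

type Bug = { feature: string; expected: string; actual: string };

// Walk the checklist: execute each check, compare actual vs expected,
// and record any mismatch as a bug.
async function runChecklist(checks: Check[]): Promise<Bug[]> {
  const bugs: Bug[] = [];
  for (const check of checks) {
    const actual = await check.run(); // step 2: execute the operation
    if (actual !== check.expected) {  // step 3: compare with expectations
      // step 4: record the discrepancy (screenshot attached in the real flow)
      bugs.push({ feature: check.feature, expected: check.expected, actual });
    }
  }
  return bugs;
}

// Fake checks standing in for real browser operations:
const checks: Check[] = [
  { feature: "EN series page", expected: "3 articles", run: async () => "0 articles" },
  { feature: "zh-TW series page", expected: "3 articles", run: async () => "3 articles" },
];

runChecklist(checks).then((bugs) => {
  console.log(bugs); // the EN series page mismatch is recorded as a bug
});
```

Because the checklist is data rather than ad-hoc clicking, coverage is explicit: an edge case is either in the list or visibly missing from it.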

The biggest advantage of this process is coverage. Manual testing can easily overlook edge cases (such as empty states, language switching, or text overflow), but a systematic process ensures these are all included in the checklist.

In this project, the QA skill helped me find several hard-to-detect bugs:

  • The EN/JA series page displayed 0 articles (due to incorrect cross-language association logic for seriesId)

  • The series breadcrumb on the article detail page displayed the zh-TW series name instead of the current language

  • Layout misalignment on certain pages at mobile screen widths

Review and Code Review: Ensuring Code Quality

After identifying and fixing bugs, the fixes themselves also need verification. For this, I use two review tools that operate at different levels.

The `review` skill performs a functional review—checking whether the fix fully resolves the issue, introduces new problems, or accounts for all edge cases.

The `code-review` skill performs a code quality review—checking for consistent coding style, potential performance issues, thorough error handling, and adherence to project conventions.

The difference between the two is that "review" focuses on "Is it correct?", while "code-review" focuses on "Is it well-written?". In this project, every time a feature is completed or a bug is fixed, it undergoes both levels of review.

A concrete example: When fixing a bug related to cross-language article queries in the "series" feature, the review confirmed that the EN/JA pages now correctly display the article list and series names; the code review, on the other hand, checked whether the newly added translationGroupId query logic was efficient and whether there was appropriate fallback handling.
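As a rough sketch of the kind of fallback logic the code review was checking — note that the data shapes and field names here are assumptions based on the bug description, not the project's actual schema:

```typescript
type Locale = "zh-TW" | "en" | "ja";

// Assumed shape: translations of one series share a translationGroupId.
interface Series {
  id: string;
  translationGroupId: string;
  locale: Locale;
  name: string;
}

// Resolve the series to display for a given locale: prefer the
// locale-specific entry, and fall back to the zh-TW original so an
// untranslated series never renders as empty.
function resolveSeries(
  all: Series[],
  groupId: string,
  locale: Locale,
): Series | undefined {
  const group = all.filter((s) => s.translationGroupId === groupId);
  return group.find((s) => s.locale === locale)
      ?? group.find((s) => s.locale === "zh-TW");
}
```

The original bug matches what happens when a query keys on a locale-specific seriesId alone: the EN/JA pages find no rows and show 0 articles.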

The Rhythm of Validation

Looking back at the entire validation process, it forms a clear rhythm:

  1. Use Playwright MCP for automated testing to quickly validate basic functionality

  2. Systematically check all functional points using the QA skill to identify issues

  3. After fixing issues, use review to confirm functionality is correct

  4. Finally, use code reviews to ensure code quality

This process isn’t a one-time event; it’s run after every major modification. It may sound tedious, but since most of it is automated, it actually takes much less time than you might expect.

The next post will be the final one in this series, covering deployment and operations—how to push completed work from local to production, as well as maintenance strategies after going live.