AI-Powered Site Quality Audit for Drupal 11

How we used Claude Code to deliver fault injection testing, performance optimisation, SEO fixes, and AI crawler support on a production site

A corporate Drupal 11 site with Commerce integration needed a thorough quality review. Not a superficial scan — the client needed to know whether their Playwright tests actually caught real bugs, whether their custom code had hidden performance issues, and whether their SEO was properly configured for both traditional search engines and AI-powered search.

We delivered the full audit in a single focused engagement using Claude Code as an AI development partner, with every finding and fix documented automatically on a Jira ticket.

Phase 1: Do the Tests Actually Work?

Rather than simply reviewing the test code, we used fault injection testing — a technique borrowed from reliability engineering. We systematically inserted 10 different types of programming mistakes into the custom modules and theme, then ran the Playwright test suite against each one.

The fault types were deliberately varied: PHP exceptions in block plugins, JavaScript syntax errors, broken CSS selectors, missing asset files, renamed HTML template elements, removed form classes.
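To make the technique concrete, one of the injected PHP faults can be sketched as below. The class and annotation are illustrative stand-ins assuming a standard Drupal block plugin, not the site's actual code; only the module name (hero_banner) comes from the audit.

```php
<?php

namespace Drupal\hero_banner\Plugin\Block;

use Drupal\Core\Block\BlockBase;

/**
 * @Block(
 *   id = "hero_banner_block",
 *   admin_label = @Translation("Hero banner")
 * )
 */
class HeroBannerBlock extends BlockBase {

  public function build() {
    // FAULT INJECTION: simulate a fatal runtime error. A healthy test
    // suite should surface this as an HTTP 500 on every page that
    // renders the block.
    throw new \RuntimeException('Injected fault: hero banner build failed.');
  }

}
```

The point of each injection is not the fault itself but the question it answers: if this exact bug shipped tomorrow, would any test fail?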

Key Finding: 5 of 10 injected faults went completely undetected. The test suite was blind to all JavaScript errors — a broken global.js that killed every Drupal behaviour on every page produced zero test failures.

Fault Injection Results

Fault Type                      | Module               | Detected?
HTML element renamed (header)   | Theme template       | Yes — 16 tests failed
JS syntax error                 | evo_generic          | No
JS logic error (wrong selector) | evo_generic          | No
PHP exception in block          | hero_banner          | Yes — HTTP 500
JS fatal error (throw)          | evo_base_theme       | No
Missing JS asset file           | evo_commerce         | No
CSS class renamed               | Theme template       | Yes — 17 tests failed
Footer element removed          | Theme template       | Yes — 19 tests failed
AJAX command broken             | evo_ajax_add_to_cart | No
Form class removed (PHP)        | Theme .theme file    | Yes — 1 test failed

Closing the Gaps

We wrote four new tests and a browser error collection helper to close the detectable gaps.

Detection rate went from 50% to 70%. The remaining three gaps require authenticated user flows that are outside the current test scope.

Phase 2: Performance Optimisation

A Lighthouse audit scored the site at 60/100 (mobile) and 87/100 (desktop). Server response time was excellent at 20ms, but JavaScript execution and render-blocking resources were dragging down the mobile score.

The deeper story was in the custom PHP code. A line-by-line review of every custom module and theme file revealed issues that had been accumulating for years:

Cache-Killing Code

Two preprocess hooks were setting max-age = 0 on blocks present on every page — the social links footer block and the cart block. This silently disabled Drupal’s Internal Page Cache site-wide. We replaced these with proper cache contexts and tags, restoring full page caching.
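The before/after pattern can be sketched as follows, assuming a standard hook_preprocess_block() implementation. The theme name, plugin ID, and the specific cache context and tag are illustrative; the right ones depend on what the block actually varies on.

```php
<?php

/**
 * Implements hook_preprocess_block().
 */
function mytheme_preprocess_block(array &$variables) {
  if ($variables['plugin_id'] === 'cart_block') {
    // BEFORE: a zero max-age here made the block uncacheable, and
    // because it appeared on every page it disabled the Internal Page
    // Cache site-wide.
    // $variables['#cache']['max-age'] = 0;

    // AFTER: vary the block per session and invalidate it when cart
    // data changes, leaving the rest of the page fully cacheable.
    $variables['#cache']['contexts'][] = 'session';
    $variables['#cache']['tags'][] = 'commerce_order_list';
  }
}
```

The design choice is the usual Cache API trade-off: contexts say when a cached copy may be reused, tags say when it must be rebuilt, and max-age 0 should be the last resort rather than the default.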

N+1 Query Patterns

A sponsor block was calling referencedEntities() inside a loop for every partnership taxonomy term — 10 partnership levels meant the same query ran 10 times. Award winner pages loaded partner nodes one-by-one. Slider pages were making 20+ individual entity loads for 5 images.

All replaced with batch operations: Node::loadMultiple(), Paragraph::loadMultiple(), and shared lookups moved outside loops.
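The batch-loading fix follows this shape, assuming node IDs collected from an entity reference field. Variable names are illustrative.

```php
<?php

use Drupal\node\Entity\Node;

// BEFORE: one entity load, and therefore one set of queries, per
// iteration of the loop.
// foreach ($items as $item) {
//   $partner = Node::load($item->target_id);
//   // ... render $partner ...
// }

// AFTER: collect all target IDs first, then load them in one batch.
$nids = array_map(fn ($item) => $item->target_id, iterator_to_array($items));
$partners = Node::loadMultiple($nids);
foreach ($partners as $partner) {
  // Render each partner from the already-loaded set.
}
```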

Uncached Configuration

16 ConfigPages::config() calls per page with no static caching — each one hitting the database. We wrapped these in a drupal_static() helper function.
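A minimal sketch of that wrapper, assuming the contributed config_pages module; the helper name is ours, not part of any API.

```php
<?php

use Drupal\config_pages\Entity\ConfigPages;

/**
 * Returns a config page entity, statically cached for the request.
 *
 * Repeated calls for the same type hit the database at most once.
 */
function mymodule_config_page(string $type) {
  $cache = &drupal_static(__FUNCTION__, []);
  if (!array_key_exists($type, $cache)) {
    $cache[$type] = ConfigPages::config($type);
  }
  return $cache[$type];
}
```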

Results

Metric                   | Mobile | Desktop
First Contentful Paint   | 3.3s   | 0.8s
Largest Contentful Paint | 3.5s   | 1.9s
Total Blocking Time      | 720ms  | 0ms
Cumulative Layout Shift  | 0.007  | 0.006
Server Response (TTFB)   | 20ms   | —

Phase 3: SEO & AI Readiness

The SEO audit uncovered issues that had been silently undermining search performance: no Open Graph or Twitter Card tags, no structured data, an empty homepage meta description, and missing preconnect hints for CDN resources.

All fixed: Open Graph and Twitter Card defaults configured for all content types, JSON-LD Organization schema added, preconnect hints added for CDN resources, and the empty homepage meta description populated.
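For reference, JSON-LD Organization markup takes roughly this shape, emitted in a script tag of type application/ld+json in the page head. All names and URLs below are placeholders, not the client's real values.

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Corp",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example-corp"
  ]
}
```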

AI Crawler Support

As AI-powered search becomes a significant traffic source, we proactively added an llms.txt file and AI-specific robots directives.
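A minimal llms.txt, following the emerging llmstxt.org convention of a Markdown title, a one-line summary, and link sections, looks like this. Section names and URLs are placeholders.

```markdown
# Example Corp

> Corporate site for Example Corp: company information, news, and shop.

## Main pages

- [Products](https://www.example.com/products): product catalogue
- [Contact](https://www.example.com/contact): contact details
```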

The Jira Integration

Every phase followed the same pattern: Claude Code posted the prompt and plan as a formatted Jira comment, executed the work, then posted the results — complete with colour-coded tables showing pass/fail status, metric values, and file references.

The Jira API calls went through the site’s existing custom Drupal module. Claude discovered the module by reading the codebase, learned its API, and used it throughout. No external tools, no manual copy-paste.

The ticket now serves as a complete audit trail: what was asked, what was planned, what was found, and what was fixed. That’s documentation with lasting value.

Summary

Area                      | Before                  | After
Test fault detection      | 5/10 (50%)              | 7/10 (70%)
Playwright tests          | 48                      | 52
Page cache                | Disabled by cache kills | Restored with proper contexts
DB queries per page       | ~30–40                  | ~10–15
Homepage meta description | Empty                   | Configured
Open Graph tags           | None                    | Full coverage
Structured data           | None                    | JSON-LD Organization
AI crawler support        | None                    | llms.txt + robots directives

What’s Next

The site is now being scanned with Siteimprove for accessibility compliance and pentest-tools for security vulnerabilities. Findings from those scans will be addressed as part of the same tracked workflow, maintaining the complete audit trail on the Jira ticket.

© 2026 Brian Willows / Graith Internet. All rights reserved.

This case study is protected by copyright. Reproduction or redistribution without written permission is prohibited.