Intro
Every few weeks we get the same conversation. Someone hired a freelancer for a website months ago, and now they don't know what to do with it. The site works, but barely. Google won't index it, load times hit six seconds, the contact form silently drops emails. The question lands the same way every time: fix it, or rewrite it.
The honest answer is "it depends," but that doesn't help much. A better answer is the eight signals we look at in every audit. If you know what to check for, most of these conversations resolve in half an hour.
Below I break down each signal: how to check it, what threshold turns it red, what it means in money. At the end, a simple framework: count the red flags and you have your decision.
A quick caveat. I'm writing this from the stack we use most often: Next.js, TypeScript, Postgres. For Symfony, Rails or Laravel the logic is the same, the numbers shift a bit. If you're running something very specific (legacy COBOL, Magento, Salesforce Commerce), this isn't the article for you and reading further won't help.
1. TypeScript coverage and `any` density
Threshold:
| State | Signal |
|---|---|
| < 30% of files have types | Red |
| 30-70%, or `strict: false` | Amber (typically "TS added halfway through and abandoned") |
| > 70% with `strict: true`, `any` density < 1 per 100 lines | Green |
How to check in 5 minutes:
```bash
# Ratio of TS to JS
find src -name "*.ts" -o -name "*.tsx" | wc -l
find src -name "*.js" -o -name "*.jsx" | wc -l

# any density
rg ': any\b|as any\b' --type ts | wc -l

# Is strict mode on?
cat tsconfig.json | grep -A2 '"strict"'

# How many @ts-ignore / @ts-expect-error
rg '@ts-(ignore|expect-error)' --type ts | wc -l
```
What it costs you: renaming an API field without types means a day of searching across 12 files, each of which compiles fine and breaks at runtime. With types: the compiler shows you all 12 places in 30 seconds. A senior hour runs $30-50 depending on the market. On a six-month project, missing types means an extra 80-150 hours of maintenance work that someone is still going to pay for.
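To make the failure mode concrete, here's a minimal sketch; `ApiUser` and the field names are invented for illustration, not taken from any audited codebase:

```typescript
// Hypothetical API response type; the backend just renamed `name` → `fullName`.
interface ApiUser {
  fullName: string;
}

// Untyped: compiles without complaint (there is nothing to check),
// then throws a TypeError at runtime because `user.name` is undefined.
function greetUntyped(user: any): string {
  return `Hello, ${user.name.toUpperCase()}`;
}

// Typed: writing `user.name` here is a compile error, and the compiler
// lists every call site that still uses the old field.
function greetTyped(user: ApiUser): string {
  return `Hello, ${user.fullName.toUpperCase()}`;
}
```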
From our diagnostics: we sometimes see a Next.js 14 project with tsconfig set to `strict: false`, several hundred occurrences of `: any`, dozens of `@ts-ignore`. At first glance "it has TypeScript." In practice: the types protect no one and every change is roulette.
Why this matters more today than a few years ago: AI assistants (Cursor, Copilot) happily write `function processData(data: any): any` if `strict` is off. The linter won't catch it. A year of working that way and you have a codebase that "has TypeScript" but might as well not.
2. Framework age vs the current stable version
Threshold:
| State | Signal |
|---|---|
| 0-1 major version behind | Green |
| 2 majors behind (typically 6-12 months of lag) | Amber |
| 3+ majors behind | Red, every overdue major means accumulated CVEs |
How to check:
```bash
npx npm-check-updates
# or directly
cat package.json | jq '.dependencies'
# and compare with: npm view <pkg> version
```
Specific red lines per framework (as of 2026):
- Next.js: < 13 (no stable App Router, no React 18 features)
- React: < 18 (no Suspense, no concurrent features)
- WordPress: < 6.0 (no FSE, incompatible with PHP 8.x)
- Vue: 2 (Vue 3 is 5+ years old, Vue 2 EOL since end of 2023)
- Symfony LTS: < 6.4
What it costs you: every major version migration is typically 2-3 weeks of senior work plus testing. Three overdue majors means a month and a half to two months of upgrade work alone, before you add anything new. Plus security: no support means every CVE stays open until you upgrade.
Real example: Fundacja Znajdki, an animal shelter foundation, came to us with Next.js 12, React 17, and a package.json last touched in 2023. Three Next majors behind, one React major. The upgrade alone took two weeks. Refactoring for App Router took another two. A month total before we added a single new feature.
3. Plugin count (WordPress) and external dependencies (npm)
Threshold:
| State | Signal |
|---|---|
| WordPress > 25 active plugins | Red |
| npm > 150 top-level deps | Red, > 300 = critical |
| Initial JS bundle > 500 KB | Red |
How to check:
```bash
# WordPress
wp plugin list --status=active --format=count

# npm
cat package.json | jq '.dependencies | keys | length'

# How many are actually used
npx depcheck

# Bundle size
npx next build  # check "First Load JS" in the output
```
Patterns to spot in 5 minutes:
- 6+ SEO plugins (Yoast + RankMath + AIOSEO at the same time, each wanting to be the one in charge)
- 3+ caching plugins (W3 Total Cache + WP Rocket + LiteSpeed)
- Elementor + 4-5 add-ons (Elementor Pro + Essential Addons + Ultimate Addons)
- In an npm repo: `lodash` + `ramda` + `underscore` simultaneously (three utility libraries, 80 KB each, each used in 2-3 places); a quick script for this check follows below
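If you'd rather script that last check than eyeball package.json, here's a rough sketch in Node/TypeScript (run with `npx tsx`; the list of libraries to flag is just a starting point):

```typescript
import { readFileSync } from "node:fs";

// Read top-level dependencies from package.json.
const pkg = JSON.parse(readFileSync("package.json", "utf8"));
const deps = Object.keys({ ...pkg.dependencies, ...pkg.devDependencies });

// Utility libraries that duplicate each other's functionality.
const utilityLibs = ["lodash", "ramda", "underscore"];
const found = utilityLibs.filter((lib) => deps.includes(lib));

if (found.length > 1) {
  console.warn(`Duplicate utility libraries: ${found.join(", ")}`);
}
```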
What it costs you: every plugin is new attack surface (across the top 50 WordPress plugins, 1-2 critical CVEs surface per month on average), every plugin is a performance penalty (10-50 KB of CSS+JS minimum), and every plugin is a potential conflict with the others on upgrades. A site with 35 active WP plugins can't be maintained in less than half of one person's full-time role.
Real example: an e-commerce client had 47 active WordPress plugins. WooCommerce plus 12 of its extensions, 4 SEO, 3 caching, 8 marketing. PageSpeed mobile score 32, LCP 8.7s. First diagnostic: deactivate them all and check the bare site. Result: LCP 2.1s. So 6.6 seconds of load time was plugins alone. That's the moment refactoring is cheaper than maintenance.
4. Lighthouse and Core Web Vitals
Threshold (mobile, real-world):
| Metric | Red |
|---|---|
| LCP | > 4s |
| INP | > 500ms |
| CLS | > 0.25 |
| Lighthouse score (mobile) | < 50, < 30 = critical |
How to check:
```bash
# Local lab data
npx lighthouse https://example.com --form-factor=mobile

# Field data (real users)
# pagespeed.web.dev → CrUX (Chrome User Experience Report)
# Search Console → Core Web Vitals
```
The thing many people miss: look at field data (real users), not just lab data (Lighthouse local). Lab shows what's theoretically possible, field shows what actually happens for users. If lab is 90 and field is 35, your CDN, hosting or user geography is breaking the experience in a way you won't see locally.
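If you want that field data programmatically rather than through pagespeed.web.dev, the CrUX API exposes the same dataset. A minimal sketch, assuming you have a Google API key in `CRUX_API_KEY` and swapping in your own URL:

```typescript
// Query the Chrome UX Report API for real-user p75 LCP on mobile.
const res = await fetch(
  `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${process.env.CRUX_API_KEY}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ url: "https://example.com", formFactor: "PHONE" }),
  }
);
const data = await res.json();

// p75 LCP in milliseconds; compare against the 4s red line above.
console.log(data.record.metrics.largest_contentful_paint.percentiles.p75);
```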
What it costs you: Core Web Vitals have been part of Google's ranking criteria since the 2021 Page Experience update, with INP replacing FID as a metric in 2024. A site with LCP 6s has no chance against a competitor running 1.8s on the same keyword. Plus conversion: according to Google's 2017 SOASTA study, 53% of mobile visits are abandoned if loading takes longer than 3 seconds. Every extra second hurts.
Real example: a healthcare client had Lighthouse mobile 28, LCP 7.3s. After three weeks of refactoring (removing 14 plugins, image optimization, inlining critical CSS): Lighthouse 87, LCP 1.9s. Google traffic doubled in 60 days, on the same keywords, with no content changes. Just Core Web Vitals.
5. Coverage tab — how much JavaScript is dead
Threshold:
| State | Signal |
|---|---|
| > 50% unused JS at initial paint | Amber |
| > 70% | Red |
| > 85% (typical WordPress + Elementor) | Critical |
How to check in 2 minutes:
- Open Chrome DevTools (Cmd+Opt+I)
- Cmd+Shift+P, type "Show Coverage"
- Click record (red dot)
- Reload the page
- Stop recording
- Sort by "Unused Bytes"
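The same check can run headlessly (for example in CI) through Puppeteer's Coverage API. A sketch, assuming `puppeteer` is installed and `https://example.com` is your target:

```typescript
import puppeteer from "puppeteer";

const browser = await puppeteer.launch();
const page = await browser.newPage();

// Record which byte ranges of each script actually execute during load.
await page.coverage.startJSCoverage();
await page.goto("https://example.com", { waitUntil: "networkidle0" });
const coverage = await page.coverage.stopJSCoverage();

// Sum used vs total bytes across all scripts loaded at initial paint.
let total = 0;
let used = 0;
for (const entry of coverage) {
  total += entry.text.length;
  for (const range of entry.ranges) used += range.end - range.start;
}
console.log(`Unused JS at initial paint: ${((1 - used / total) * 100).toFixed(1)}%`);

await browser.close();
```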
What you'll see in a typical WordPress + Elementor:
- jQuery (90 KB) loading on a page that uses no jQuery anywhere
- 4 copies of React (one per Elementor add-on, each bundling its own version)
- The entire Bootstrap CSS, when one class is actually used
- `lodash` at 70 KB, when you only use `_.debounce`
What it costs you: every 100 KB of unused JS is roughly 200ms of parse time on a midrange Android, plus bandwidth on the user's bill. In mobile-heavy traffic that's a meaningful conversion drop, and on sites that make many requests (SPAs, dashboards) the effect compounds with every navigation.
Real example: a client on Next.js 13 with an initial bundle of 840 KB. Coverage showed 78% unused. The cause: `import * as _ from 'lodash'` in three places, each pulling in the full library. Plus moment.js with all locales, of which a single function was actually used. After switching to per-function imports and replacing moment with date-fns: bundle 280 KB, LCP down 2.1s. One day of work.
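The change itself is mechanical. Roughly like this (the imports are illustrative; the point is per-function imports and a tree-shakeable date library):

```typescript
// Before: each import pulls the entire library into the initial bundle.
import * as _ from "lodash";
import moment from "moment"; // ships every locale unless the bundler strips them

// After: import only what you use.
import debounce from "lodash/debounce";
import { format } from "date-fns";
```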
6. Database architecture
Threshold:
| State | Signal |
|---|---|
| Schema in version control (`prisma/`, `migrations/`) | Green |
| Schema exists, but ALTER TABLE done by hand in prod | Amber |
| "You'd have to ask Tom, he knows" | Red |
How to check:
- Does a `prisma/`, `migrations/`, `db/`, or `supabase/migrations/` folder exist?
- Does the README have a "Database setup" section?
- Run `\d+ <table>` in psql: are there foreign keys, indexes, constraints?
Antipatterns to spot:
- Tables without `created_at`/`updated_at`
- No foreign keys (relations exist in code, but aren't enforced at the DB level)
- Columns named `data1`, `data2`, `extra_field`, `comment_2`, `tag1`, `tag2`...
- JSON stored as a string in `VARCHAR(255)` instead of `JSONB`
- Polish-named tables next to English ones, mixed conventions
- A user password column named differently in three tables of the same project: `pass`, `password_hash`, `hashed_password`
What it costs you: without migrations in version control, every refactor starts with archaeology. What's in prod, what's in staging, what's in dev, why are they different. Usually a week of diagnosis alone before you touch any application code. Plus you can't recreate the environment, so a new developer gets a SQL dump from prod, which becomes a GDPR problem when personal data is involved.
Real example: a client project from 2019, MySQL, no migrations in git. 47 tables, 23 of them stored passwords in some form. Three tables had columns tag1, tag2, ..., tag10. Diagnosis took us 6 days of mapping alone, before we touched any application logic.
7. Auth written from scratch
Threshold:
| State | Signal |
|---|---|
| Standard library (Auth.js, Supabase Auth, Clerk, Firebase) | Green |
| Custom with `bcrypt` + JWT, with all security flags | Amber (worth an audit, usually fine) |
| Custom with `md5`/`sha1` passwords, JWT without expiry, no rate limit | Red |
How to check:
```bash
# Is there a standard library?
cat package.json | grep -E "next-auth|@auth|@supabase/auth|@clerk|firebase-auth"

# Custom?
rg "bcrypt|argon2|scrypt" --type ts --type js

# Obsolete hashing
rg "createHash\('md5'|createHash\('sha1'" --type ts --type js
```
Specific bugs we find regularly:
- No rate limit on `/login` (dictionary attack in 30 seconds)
- Password reset link without expiry (token lives forever)
- Password reset over HTTP instead of HTTPS (when someone added a reverse proxy without thinking it through)
- JWT in `localStorage` (one XSS in any form = account takeover, permanently)
- Cookies without `Secure`, `HttpOnly`, `SameSite=Lax`
- Plain-text email confirmation with a clickable link (a phishing magnet)
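For reference, the cookie flags from that list in one place. A sketch of a session-cookie setter in Express; the framework and names are illustrative, not a recommendation over a standard auth library:

```typescript
import type { Response } from "express";

// Sets a session token with the flags the checklist above refers to.
function setSessionCookie(res: Response, token: string): void {
  res.cookie("session", token, {
    httpOnly: true,         // not readable from JS, so XSS can't steal it
    secure: true,           // sent only over HTTPS
    sameSite: "lax",        // basic CSRF protection
    maxAge: 60 * 60 * 1000, // 1 hour; pair with a short `exp` claim in the JWT
  });
}
```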
What it costs you (and beyond money): custom auth means CVE risk, regulatory risk under GDPR Art. 32 (technical and organisational measures), and incident risk that for a B2B SaaS can mean a real chance of reputational collapse. A standard library means someone else maintains it, someone else audits it, and it's easier to meet GDPR and NIS2 obligations.
Real example: we saw a B2B SaaS with custom JWT without expiry, stored in localStorage. An XSS in any form would mean the attacker has account access forever, with no way to invalidate it. Patching this without rewriting every endpoint isn't possible. That's a real moment when only a rewrite makes sense, regardless of the other signals.
Watch out when working with an AI assistant: Cursor, asked to "add login," will propose bcrypt + JWT if you don't steer it. It doesn't know you already have Supabase with Auth ready to use. If a developer copies the suggestion without verifying, you end up with custom auth when you didn't want it.
8. Tests and CI/CD
Threshold:
| State | Signal |
|---|---|
| 0 tests | Red, every change is roulette |
| < 30% coverage on nontrivial code | Amber |
| > 60% with integration and E2E tests on critical paths | Green |
How to check:
```bash
# Are there tests?
find . -name "*.test.*" -o -name "*.spec.*" | wc -l

# Is there CI?
ls -la .github/workflows/ .gitlab-ci.yml 2>/dev/null

# Coverage report (if configured)
npm test -- --coverage
```
What to verify beyond existence:
- Whether the tests actually run (often there's a `__tests__` folder with five files that throw an error on `npm test`)
- Whether CI actually blocks merges on failure (a required check in the repo settings)
- Whether there are E2E tests (Playwright/Cypress) for critical paths (login, purchase, contact form, payment, password reset); see the sketch below for scale
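One critical-path E2E in Playwright really is this small. A sketch with placeholder URL and selectors:

```typescript
import { test, expect } from "@playwright/test";

test("user can log in", async ({ page }) => {
  await page.goto("https://example.com/login");
  await page.fill('input[name="email"]', "user@example.com");
  await page.fill('input[name="password"]', "correct-horse-battery");
  await page.click('button[type="submit"]');
  // Assert the real outcome, not expect(true).toBe(true).
  await expect(page).toHaveURL(/dashboard/);
});
```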
What it costs you: without tests, refactoring without regression is impossible. Every change becomes manual click-through testing. On a six-month project, no CI/CD means an extra 40-60 hours of manual QA that nobody priced in but someone has to do.
Real example: an e-commerce client, zero tests, zero CI. The first post-migration fix broke something in the admin panel, noticed three days later through customer phone calls. In month two after migration we added Playwright on 8 critical paths (login, add to cart, checkout, payment, invoice, cancel, password reset, search). Manual QA time per release went from 4 hours to zero.
AI-slop signal in tests: Cursor writes tests that pass without testing anything, because they mock everything, including the function they're supposedly testing. Easy to spot: the test asserts `expect(true).toBe(true)`, or the entire domain logic is mocked. The test "passes" and catches no regressions.
Decision framework
Count the red flags from the 8 signals. One point each.
| Red flag count | Decision | Time to fix | Time to rewrite |
|---|---|---|---|
| 0-2 | Fix, you save 60-80% of the cost | 2-6 weeks | 4-6 months |
| 3-5 | Run the numbers; the decision depends on deadline and budget | 8-16 weeks | 4-6 months |
| 6-8 | Rewrite only, but in chunks < 3 months (Strangler Pattern), not big bang | n/a | 4-8 months in chunks |
A big bang rewrite (the entire system from scratch in one go) only makes sense when the site has fewer than 5 pages, low traffic, and no external integrations. In every other case the Strangler Pattern wins: you stand up the new stack alongside the old, migrate one module at a time, and the old system fades out naturally. It needs solid routing and coordination, but it gives you the option to stop at any point if something breaks.
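What "solid routing" means on the stack from this article: the new Next.js app owns the migrated routes and proxies everything else to the old system. A sketch of the fallback rewrite in `next.config.ts`, with `legacy.example.com` standing in for the old host:

```typescript
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  async rewrites() {
    return {
      beforeFiles: [],
      afterFiles: [],
      // Any request the new app doesn't handle falls through to the legacy system.
      fallback: [
        { source: "/:path*", destination: "https://legacy.example.com/:path*" },
      ],
    };
  },
};

export default nextConfig;
```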
What to do next
Three paths, depending on where you are.
If you're reading out of curiosity and want to go deeper: the full report is here (22 min, two case studies step by step, market context, when WordPress still makes sense).
If you have a specific site and want a subjective second opinion: send the URL to audyt@epko.tech with the subject "8 signals". In 48 hours I'll send back how it scores on each of the eight points. No commitment, no salesperson, one A4 page.
If the score is bad and you want a full technical audit: Lighthouse + coverage analysis + DB review + auth review + remediation plan in 5 working days. Pricing scales with the site, we can estimate it after a 15-minute call. Book it.
One thing to close on. These 8 signals give you 70% of the answer in half an hour. The remaining 30% is a conversation where we ask about things a scanner can't check: your product roadmap, planned integrations, team capabilities, deadlines. The cheapest refactor is the one we don't do, if you're going to walk away from the site in six months anyway.

