
Fix or rewrite a website: 8 signals that decide

Patryk Korzeniowski · 12 min read

Eight concrete signals from our audits: TypeScript, framework version, plugins, Core Web Vitals, dead JavaScript, database, auth, tests. How to decide in 30 minutes whether to fix or rewrite.

Intro

Every few weeks we get the same conversation. Someone hired a freelancer for a website months ago, and now they don't know what to do with it. The site works, but barely. Google won't index it, load times hit six seconds, the contact form silently drops emails. The question lands the same way every time: fix it, or rewrite it.

The honest answer is "it depends," but that doesn't help much. A better answer is the eight signals we look at in every audit. If you know what to check for, most of these conversations resolve in half an hour.

Below I break down each signal: how to check it, what threshold turns it red, what it means in money. At the end, a simple framework: count the red flags and you have your decision.

A quick caveat. I'm writing this from the stack we use most often: Next.js, TypeScript, Postgres. For Symfony, Rails or Laravel the logic is the same, the numbers shift a bit. If you're running something very specific (legacy COBOL, Magento, Salesforce Commerce), this isn't the article for you and reading further won't help.

1. TypeScript coverage and any density

Threshold:

| State | Signal |
| --- | --- |
| < 30% of files have types | Red |
| 30-70%, or `strict: false` | Amber (typically "TS added halfway through and abandoned") |
| > 70% with `strict: true`, `any` density < 1 per 100 lines | Green |

How to check in 5 minutes:

```bash
# Ratio of TS to JS
find src -name "*.ts" -o -name "*.tsx" | wc -l
find src -name "*.js" -o -name "*.jsx" | wc -l

# any density
rg ': any\b|as any\b' --type ts | wc -l

# Is strict mode on
cat tsconfig.json | grep -A2 '"strict"'

# How many @ts-ignore / @ts-expect-error
rg '@ts-(ignore|expect-error)' --type ts | wc -l
```

What it costs you: renaming an API field without types means a day of searching across 12 files, each of which can compile fine and break at runtime. With types: the compiler shows you 12 places in 30 seconds. A senior hour costs $30-50 depending on the rate. On a six-month project, missing types means an extra 80-150 hours of maintenance work that someone is still going to pay for.

From our diagnostics: we sometimes see a Next.js 14 project with tsconfig set to `strict: false`, several hundred occurrences of `: any`, and dozens of `@ts-ignore`. At first glance "it has TypeScript." In practice, the types protect no one and every change is a gamble.

Why this matters more today than a few years ago: AI assistants (Cursor, Copilot) happily write `function processData(data: any): any` if strict is off. The linter won't catch it. A year of working that way and you have a codebase that "has TypeScript" but might as well not.
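To make the failure mode concrete, here's a minimal sketch (the `User` type and the greet functions are invented for illustration): under `any`, a renamed field compiles cleanly and breaks silently at runtime, while a real type turns every stale call site into a compile error.

```typescript
// Sketch: how `any` hides a renamed API field. The `User` type and the
// greet functions are invented for illustration.
interface User {
  fullName: string; // field was renamed from `name` at some point
}

// Compiles happily under strict: false — and fails silently at runtime.
function greetLoose(user: any): string {
  return `Hello, ${user.name}`; // stale field name survives the rename
}

// With a real type, `user.name` here would be a compile-time error,
// and the compiler would list every call site that needs updating.
function greetStrict(user: User): string {
  return `Hello, ${user.fullName}`;
}

const u: User = { fullName: "Ada Lovelace" };
console.log(greetLoose(u));  // "Hello, undefined" — no error, wrong output
console.log(greetStrict(u)); // "Hello, Ada Lovelace"
```

The untyped version is exactly the "compiles fine, breaks at runtime" day of searching described above; the typed version is the 30-second compiler run.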

2. Framework age vs the current stable version

Threshold:

| State | Signal |
| --- | --- |
| 0-1 major version behind | Green |
| 2 majors behind (typically 6-12 months of lag) | Amber |
| 3+ majors behind | Red; every overdue major means accumulated CVEs |

How to check:

```bash
npx npm-check-updates
# or directly
cat package.json | jq '.dependencies'
# and compare with: npm view <pkg> version
```

Specific red lines per framework (as of 2026):

  • Next.js: < 13 (no stable App Router, no React 18 features)
  • React: < 18 (no Suspense, no concurrent features)
  • WordPress: < 6.0 (no FSE, incompatible with PHP 8.x)
  • Vue: 2 (Vue 3 is 5+ years old, Vue 2 EOL since end of 2023)
  • Symfony LTS: < 6.4

What it costs you: every major version migration is typically 2-3 weeks of senior work plus testing. Three overdue majors means a month and a half to two months of upgrade work alone, before you add anything new. Plus security: no support means every CVE stays open until you upgrade.

Real example: Fundacja Znajdki, an animal shelter foundation, came to us with Next.js 12, React 17, last commit in package.json from 2023. Three Next majors, one React major. The upgrade alone took two weeks. Refactoring for App Router took another two. A month total before we added a single new feature.
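The lag-to-signal mapping above can be sketched as a pure function. The semver handling here is deliberately naive (real ranges like `>=12 <14` need a proper parser); it's a transcription of the table, not a replacement for `npm-check-updates`.

```typescript
// Sketch: mapping major-version lag to the table's signal. Deliberately
// naive semver handling — real ranges like ">=12 <14" need a real parser.
function majorsBehind(installed: string, latest: string): number {
  const major = (v: string) =>
    parseInt(v.replace(/^[^0-9]*/, "").split(".")[0], 10);
  return Math.max(0, major(latest) - major(installed));
}

function frameworkSignal(installed: string, latest: string): "green" | "amber" | "red" {
  const lag = majorsBehind(installed, latest);
  if (lag <= 1) return "green"; // 0-1 majors behind
  if (lag === 2) return "amber"; // typically 6-12 months of lag
  return "red"; // 3+ majors: upgrade debt plus open CVEs
}

console.log(frameworkSignal("^12.3.4", "15.1.0")); // "red" — three majors behind
```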

3. Plugin count (WordPress) and external dependencies (npm)

Threshold:

| State | Signal |
| --- | --- |
| WordPress: > 25 active plugins | Red |
| npm: > 150 top-level deps | Red; > 300 = critical |
| Initial JS bundle > 500 KB | Red |

How to check:

```bash
# WordPress
wp plugin list --status=active --format=count

# npm
cat package.json | jq '.dependencies | keys | length'

# How many are actually used
npx depcheck

# Bundle size
npx next build # check "First Load JS" in the output
```

Patterns to spot in 5 minutes:

  • 6+ SEO plugins (Yoast + RankMath + AIOSEO at the same time, each wanting to be the one in charge)
  • 3+ caching plugins (W3 Total Cache + WP Rocket + LiteSpeed)
  • Elementor + 4-5 add-ons (Elementor Pro + Essential Addons + Ultimate Addons)
  • In an npm repo: lodash + ramda + underscore simultaneously (three utility libraries, 80 KB each, each used in 2-3 places)

What it costs you: every plugin is fresh attack surface for CVEs (1-2 critical CVEs per month on average across the top 50 plugins), every plugin is a performance penalty (10-50 KB of CSS+JS minimum), and every plugin is a potential conflict with the others on upgrades. A site with 35 active WP plugins takes at least half of a full-time role to maintain.

Real example: an e-commerce client had 47 active WordPress plugins. WooCommerce plus 12 of its extensions, 4 SEO, 3 caching, 8 marketing. PageSpeed mobile score 32, LCP 8.7s. First diagnostic: deactivate them all and check the bare site. Result: LCP 2.1s. So 6.6 seconds of load time was plugins alone. That's the moment refactoring is cheaper than maintenance.

4. Lighthouse and Core Web Vitals

Threshold (mobile, real-world):

| Metric | Red threshold |
| --- | --- |
| LCP | > 4 s |
| INP | > 500 ms |
| CLS | > 0.25 |
| Lighthouse score (mobile) | < 50 (< 30 = critical) |

How to check:

```bash
# Local lab data (mobile emulation is the Lighthouse default)
npx lighthouse https://example.com --form-factor=mobile

# Field data (real users):
# pagespeed.web.dev → CrUX (Chrome User Experience Report)
# Search Console → Core Web Vitals
```

The thing many people miss: look at field data (real users), not just lab data (Lighthouse local). Lab shows what's theoretically possible, field shows what actually happens for users. If lab is 90 and field is 35, your CDN, hosting or user geography is breaking the experience in a way you won't see locally.

What it costs you: since 2024, Core Web Vitals have been part of Google's ranking criteria (Page Experience). A site with LCP 6s has no chance against a competitor running 1.8s on the same keyword. Plus conversion: according to Google's 2017 SOASTA study, 53% of mobile visits are abandoned if loading takes longer than 3 seconds. Every extra second hurts.

Real example: a healthcare client had Lighthouse mobile 28, LCP 7.3s. After three weeks of refactoring (removing 14 plugins, image optimization, inlining critical CSS): Lighthouse 87, LCP 1.9s. Google traffic doubled in 60 days, on the same keywords, with no content changes. Just Core Web Vitals.

5. Coverage tab — how much JavaScript is dead

Threshold:

| State | Signal |
| --- | --- |
| > 50% unused JS at initial paint | Amber |
| > 70% | Red |
| > 85% (typical WordPress + Elementor) | Critical |

How to check in 2 minutes:

  1. Open Chrome DevTools (Cmd+Opt+I)
  2. Cmd+Shift+P, type "Show Coverage"
  3. Click record (red dot)
  4. Reload the page
  5. Stop recording
  6. Sort by "Unused Bytes"
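The arithmetic behind that "Unused Bytes" column is simple to reproduce. The sketch below assumes coverage entries shaped like what Puppeteer's `page.coverage.stopJSCoverage()` returns (`{ text, ranges }`); the sample data is invented, and overlapping ranges are ignored for simplicity.

```typescript
// Sketch: the unused-bytes arithmetic behind the Coverage tab. The entry
// shape mirrors Puppeteer's page.coverage.stopJSCoverage() output; the
// sample data below is invented. Assumes executed ranges don't overlap.
interface CoverageEntry {
  text: string;                             // full script source
  ranges: { start: number; end: number }[]; // byte ranges that executed
}

function unusedPercent(entries: CoverageEntry[]): number {
  let total = 0;
  let used = 0;
  for (const entry of entries) {
    total += entry.text.length;
    for (const r of entry.ranges) used += r.end - r.start;
  }
  return total === 0 ? 0 : Math.round(((total - used) / total) * 100);
}

// A 1000-byte script of which only 200 bytes ever ran:
const sample: CoverageEntry[] = [
  { text: "x".repeat(1000), ranges: [{ start: 0, end: 200 }] },
];
console.log(unusedPercent(sample)); // 80 — Red per the table above
```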

What you'll see in a typical WordPress + Elementor:

  • jQuery (90 KB) loading on a page that uses no jQuery anywhere
  • 4 versions of React (one per Elementor add-on, each with its own version)
  • The entire Bootstrap CSS, when one class is actually used
  • lodash 70 KB, when you only use _.debounce

What it costs you: every 100 KB of unused JS is roughly 200ms of parse time on a midrange Android, plus bandwidth on the user's bill. In mobile-heavy traffic that's a meaningful conversion drop, and on sites that make many requests (SPAs, dashboards) the effect compounds with every navigation.

Real example: a client on Next.js 13 with an initial bundle of 840 KB. Coverage showed 78% unused. The cause: `import * as _ from 'lodash'` in three places, each pulling in the full library, plus moment.js with all locales when a single function was actually used. After switching to per-function imports and replacing moment with date-fns: bundle 280 KB, LCP down 2.1 s. One day of work.
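The fix pattern in miniature. The hand-rolled `debounce` below is a simplified stand-in for `lodash/debounce`, not its full API (no `leading`/`trailing` options, no cancel):

```typescript
// Sketch: the import patterns that decide bundle size, in comment form:
//
//   import * as _ from "lodash";            // whole library for one function
//   import debounce from "lodash/debounce"; // only what you use
//
// Often you need no library at all — a minimal debounce is a few lines
// (simplified stand-in; lodash's version has leading/trailing/cancel):
function debounce<T extends unknown[]>(
  fn: (...args: T) => void,
  ms: number,
): (...args: T) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

let calls = 0;
const onResize = debounce(() => { calls += 1; }, 50);
onResize(); onResize(); onResize(); // collapses to one call, 50 ms after the last
```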

6. Database architecture

Threshold:

| State | Signal |
| --- | --- |
| Schema in version control (`prisma/`, `migrations/`) | Green |
| Schema exists, but `ALTER TABLE` run by hand in prod | Amber |
| "You'd have to ask Tom, he knows" | Red |

How to check:

  • Does a prisma/, migrations/, db/, supabase/migrations/ folder exist?
  • Does the README have a "Database setup" section?
  • Run `\d+ <table>` in psql: are there foreign keys, indexes, constraints?

Antipatterns to spot:

  • Tables without created_at / updated_at
  • No foreign keys (relations exist in code, but aren't enforced at the DB level)
  • Columns named data1, data2, extra_field, comment_2, tag1, tag2...
  • JSON stored as a string in VARCHAR(255) instead of JSONB
  • Polish-named tables next to English ones, mixed conventions
  • A user password column named differently in three tables of the same project: pass, password_hash, hashed_password

What it costs you: without migrations in version control, every refactor starts with archaeology. What's in prod, what's in staging, what's in dev, why are they different. Usually a week of diagnosis alone before you touch any application code. Plus you can't recreate the environment, so a new developer gets a SQL dump from prod, which becomes a GDPR problem when personal data is involved.

Real example: a client project from 2019, MySQL, no migrations in git. 47 tables, 23 of them stored passwords in some form. Three tables had columns tag1, tag2, ..., tag10. Diagnosis took us 6 days of mapping alone, before we touched any application logic.

7. Auth written from scratch

Threshold:

| State | Signal |
| --- | --- |
| Standard library (Auth.js, Supabase Auth, Clerk, Firebase) | Green |
| Custom with `bcrypt` + JWT, with all security flags | Amber (worth an audit, usually fine) |
| Custom with `md5`/`sha1` passwords, JWT without expiry, no rate limit | Red |

How to check:

```bash
# Is there a standard library
cat package.json | grep -E "next-auth|@auth|@supabase/auth|@clerk|firebase-auth"

# Custom?
rg "bcrypt|argon2|scrypt" --type ts --type js

# Obsolete hashing
rg "createHash\('(md5|sha1)'" --type ts --type js
```

Specific bugs we find regularly:

  • No rate limit on /login (dictionary attack in 30 seconds)
  • Password reset link without expiry (token lives forever)
  • Password reset over HTTP instead of HTTPS (when someone added a reverse proxy without thinking it through)
  • JWT in localStorage (one XSS in any form = account takeover, permanently)
  • Cookies without Secure, HttpOnly, SameSite=Lax
  • Plain-text email confirmation with a clickable link (a phishing magnet)
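The cookie-flag item is the cheapest one to get right. A minimal sketch of a correct Set-Cookie value (`sessionCookie` is an invented helper, not a real library API; in practice your framework's cookie helper sets these flags for you):

```typescript
// Sketch: assembling a Set-Cookie value with the flags from the list above.
// `sessionCookie` is an invented helper, not a real library API.
function sessionCookie(name: string, value: string, maxAgeSeconds: number): string {
  return [
    `${name}=${encodeURIComponent(value)}`,
    `Max-Age=${maxAgeSeconds}`,
    "Path=/",
    "HttpOnly",     // invisible to document.cookie — XSS can't read it
    "Secure",       // sent over HTTPS only
    "SameSite=Lax", // withheld on cross-site POSTs — basic CSRF hygiene
  ].join("; ");
}

console.log(sessionCookie("session", "abc123", 3600));
// session=abc123; Max-Age=3600; Path=/; HttpOnly; Secure; SameSite=Lax
```

An `HttpOnly` cookie is also the answer to the JWT-in-localStorage item: a token the browser won't hand to JavaScript can't be exfiltrated by an XSS payload.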

What it costs you (and beyond money): custom auth is CVE risk, regulatory complaint risk under GDPR Art. 32 (technical measures), incident risk that for a B2B SaaS can mean a real chance of reputational collapse. A standard library means someone else maintains it, someone else audits it, and it's easier to meet GDPR and NIS2 obligations.

Real example: we saw a B2B SaaS with custom JWT without expiry, stored in localStorage. An XSS in any form would mean the attacker has account access forever, with no way to invalidate it. Patching this without rewriting every endpoint isn't possible. That's a real moment when only a rewrite makes sense, regardless of the other signals.

Watch out when working with an AI assistant: Cursor, asked to "add login," will propose bcrypt + JWT if you don't steer it. It doesn't know you already have Supabase with Auth ready to use. If a developer copies the suggestion without verifying, you end up with custom auth when you didn't want it.

8. Tests and CI/CD

Threshold:

| State | Signal |
| --- | --- |
| 0 tests | Red; every change is a gamble |
| < 30% coverage on nontrivial code | Amber |
| > 60% with integration and E2E tests on critical paths | Green |

How to check:

```bash
# Are there tests
find . -name "*.test.*" -o -name "*.spec.*" | wc -l

# Is there CI
ls -la .github/workflows/ .gitlab-ci.yml 2>/dev/null

# Coverage report (if configured)
npm test -- --coverage
```

What to verify beyond existence:

  • Whether the tests actually run (often there's a __tests__ folder with five files that throw an error on npm test)
  • Whether CI actually blocks merges on failure (required check in settings)
  • Whether there are E2E tests (Playwright/Cypress) for critical paths (login, purchase, contact form, payment, password reset)

What it costs you: without tests, refactoring without regression is impossible. Every change becomes manual click-through testing. On a six-month project, no CI/CD means an extra 40-60 hours of manual QA that nobody priced in but someone has to do.

Real example: an e-commerce client, zero tests, zero CI. The first post-migration fix broke something in the admin panel, noticed three days later through customer phone calls. In month two after migration we added Playwright on 8 critical paths (login, add to cart, checkout, payment, invoice, cancel, password reset, search). Manual QA time per release went from 4 hours to zero.

AI-slop signal in tests: Cursor writes tests that don't actually run, because they mock everything, including the function they're supposedly testing. Easy to spot: the test has expect(true).toBe(true) as the assertion, or the entire domain logic is mocked. The test "passes," catches no regressions.
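The difference is easy to show in miniature. Here `calculateTotal` is an invented domain function, and plain throws stand in for a test runner:

```typescript
// Sketch: a tautological "test" next to a real assertion. `calculateTotal`
// is an invented domain function; plain throws stand in for a test runner.
function calculateTotal(prices: number[], taxRate: number): number {
  return prices.reduce((sum, p) => sum + p, 0) * (1 + taxRate);
}

// AI-slop pattern: logic mocked away, assertion always true —
//   expect(true).toBe(true)
// It stays green through every regression.

// A real assertion exercises the actual logic and fails when it breaks:
const total = calculateTotal([100, 50], 0.23);
if (Math.abs(total - 184.5) > 1e-9) {
  throw new Error(`expected 184.5, got ${total}`);
}
console.log("total ok:", total);
```

The quick review heuristic: if deleting the function under test wouldn't make the test fail, the test checks nothing.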

Decision framework

Count the red flags from the 8 signals. One point each.

| Red flag count | Decision | Time to fix | Time to rewrite |
| --- | --- | --- | --- |
| 0-2 | Fix; you save 60-80% of the cost | 2-6 weeks | 4-6 months |
| 3-5 | Run the numbers; decision depends on deadline and budget | 8-16 weeks | 4-6 months |
| 6-8 | Rewrite only, in chunks of < 3 months each (Strangler Pattern), never big bang | n/a | 4-8 months in chunks |

A big bang rewrite (the entire system from scratch in one go) only makes sense when the site has fewer than 5 pages, low traffic, and no external integrations. In every other case Strangler Pattern wins: you stand up the new stack alongside the old, migrate one module at a time, and the old system fades out naturally. It needs solid routing and coordination, but it gives you the option to stop at any point if something breaks.
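The counting rule fits in a few lines. This is a direct transcription of the table, nothing beyond it (the decision labels are invented names):

```typescript
// Sketch: the decision table as a function — a direct transcription of the
// table above; the string labels are invented names.
type Decision = "fix" | "run-the-numbers" | "rewrite-strangler";

function decide(redFlags: number): Decision {
  if (redFlags < 0 || redFlags > 8) {
    throw new RangeError("expected 0-8 red flags");
  }
  if (redFlags <= 2) return "fix";             // save 60-80% of the cost
  if (redFlags <= 5) return "run-the-numbers"; // deadline and budget decide
  return "rewrite-strangler";                  // in chunks, never big bang
}

console.log(decide(1)); // "fix"
console.log(decide(7)); // "rewrite-strangler"
```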

What to do next

Three paths, depending on where you are.

If you're reading out of curiosity and want to go deeper: the full report is here (22 min, two case studies step by step, market context, when WordPress still makes sense).

If you have a specific site and want a subjective second opinion: send the URL to audyt@epko.tech with the subject "8 signals". In 48 hours I'll send back how it scores on each of the eight points. No commitment, no salesperson, one A4 page.

If the score is bad and you want a full technical audit: Lighthouse + coverage analysis + DB review + auth review + remediation plan in 5 working days. Pricing scales with the site, we can estimate it after a 15-minute call. Book it.

One thing to close on. These 8 signals give you 70% of the answer in half an hour. The remaining 30% is a conversation where we ask about things a scanner can't check: your product roadmap, planned integrations, team capabilities, deadlines. The cheapest refactor is the one we don't do, if you're going to walk away from the site in six months anyway.

Frequently asked questions

How long does your 8-signal audit take?
A subjective 8-signal assessment, one A4 page, comes back to you within 48 hours of sending the URL. A full technical audit (Lighthouse + coverage + DB review + auth review + remediation plan) takes 5 working days and ends with a prioritized report.
Is WordPress always a bad choice?
No. For a blog with two posts a month or a simple corporate site, WordPress is a sensible choice and cheaper to maintain than a custom Next.js build. The problem starts when scale, dynamics, load speed or AI-driven visibility come into play. Then every additional plugin becomes technical debt someone will eventually pay for.
What does "rewrite" mean if my real question is "should I just change the framework"?
Rewrite means the application substance (business logic, database schema, content) stays, but the code layer is written from scratch, usually on a different framework or a newer version of the same one. Big bang rewrite is replacing everything in one go. Strangler Pattern is replacing module by module: the old system runs in parallel, you switch traffic per module.
I have 3-5 red flags. What do I actually do?
This is the calculation zone. The decision depends on two variables: budget and deadline. With a deadline under 3 months and a tight budget, fixing still makes sense. With a 6+ month horizon and long-term product investment, a rewrite (Strangler) starts paying off. A useful question: in six months, will the same team and the same stack still be the right choice?
Do AI assistants (Cursor, Copilot) help or hurt during a refactor?
They help if you steer them deliberately: verify every suggestion, require `strict: true` in TypeScript, use standard libraries (Auth.js, Supabase Auth), don't copy code without reading. They hurt if you accept the first suggestion, mock tests to make them pass, or add `: any` for convenience. Cursor doesn't know your stack, so every suggestion needs review.
