December 28, 2024 · 14 min read

Web Application Performance: 10 Metrics That Matter

The specific metrics that correlate with user experience and business outcomes, with practical measurement approaches and target thresholds.

Blake Dahlin, Principal Engineer

"Our app is slow" is not actionable. Performance improvement requires specific metrics, target thresholds, and systematic measurement. Here are the 10 metrics that actually correlate with user experience, and how to use them.

The Metrics Hierarchy

Performance metrics exist at different levels:

User-centric (what users experience):
- Core Web Vitals
- Time to Interactive
- Perceived performance

System-level (what engineers measure):
- Response times
- Throughput
- Resource utilization

Business outcomes (what matters):
- Conversion rates
- Bounce rates
- Session duration

This guide focuses on user-centric metrics that connect to business outcomes.


Metric 1: Largest Contentful Paint (LCP)

What it measures: How long until the largest visible content element renders.

Why it matters: LCP correlates most strongly with perceived load speed. Users judge a page as "loaded" when they see meaningful content.

Target thresholds:
- Good: < 2.5 seconds
- Needs improvement: 2.5-4.0 seconds
- Poor: > 4.0 seconds

What affects LCP:
- Server response time
- Resource load time (images, fonts, CSS)
- Client-side rendering time
- Resource prioritization

How to measure:

```javascript
// Web Vitals library (v3+; earlier versions exported getLCP instead of onLCP)
import { onLCP } from 'web-vitals';

onLCP(metric => {
  console.log('LCP:', metric.value);
  // Send to analytics
});
```

Common fixes:
- Optimize server TTFB (< 600ms target)
- Preload the LCP image: `<link rel="preload" as="image">`
- Avoid lazy-loading above-the-fold content
- Use a CDN for static assets


Metric 2: Cumulative Layout Shift (CLS)

What it measures: Visual stability: how much page content unexpectedly shifts during load.

Why it matters: Layout shifts frustrate users. A button that moves as you click it causes accidental clicks; text that jumps mid-read makes users lose their place.

Target thresholds:
- Good: < 0.1
- Needs improvement: 0.1-0.25
- Poor: > 0.25

What causes layout shift:
- Images without dimensions
- Ads/embeds that load late
- Web fonts causing FOUT
- Dynamic content insertion

How to measure:

```javascript
// web-vitals v3+ (earlier versions exported getCLS instead of onCLS)
import { onCLS } from 'web-vitals';

onCLS(metric => {
  console.log('CLS:', metric.value);
});
```
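Under the hood, CLS is not a raw sum of every shift: shifts are grouped into session windows (shifts less than 1s apart, each window capped at 5s), and the worst window is reported. A minimal sketch of that grouping, using objects shaped like the browser's layout-shift entries:

```javascript
// Sketch of CLS session-window grouping: group shifts that occur within
// 1s of each other, cap a window at 5s, and report the largest window.
function computeCLS(entries) {
  let worst = 0;
  let windowSum = 0;
  let windowStart = 0;
  let prevTime = 0;
  for (const { startTime, value, hadRecentInput } of entries) {
    if (hadRecentInput) continue; // shifts right after user input are excluded
    const startsNewWindow =
      windowSum === 0 ||
      startTime - prevTime > 1000 ||
      startTime - windowStart > 5000;
    if (startsNewWindow) {
      windowSum = value;
      windowStart = startTime;
    } else {
      windowSum += value;
    }
    prevTime = startTime;
    worst = Math.max(worst, windowSum);
  }
  return worst;
}
```

This is why one late-loading ad that shifts the page once can matter less than a burst of small shifts during load: the burst lands in a single window and accumulates.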

Common fixes:
- Always include width/height on images and videos
- Reserve space for dynamic content
- Use `font-display: optional` or preload fonts
- Avoid inserting content above existing content


Metric 3: Interaction to Next Paint (INP)

What it measures: Responsiveness: how quickly the page responds to user interactions (clicks, taps, key presses).

Why it matters: Sluggish interactions make apps feel broken. Even 200ms delays are perceptible.

Target thresholds:
- Good: < 200ms
- Needs improvement: 200-500ms
- Poor: > 500ms

What affects INP:
- JavaScript execution time
- Main thread blocking
- Event handler complexity
- Forced reflows

How to measure:

```javascript
// INP reporting requires web-vitals v3+
import { onINP } from 'web-vitals';

onINP(metric => {
  console.log('INP:', metric.value);
});
```

Common fixes:
- Break up long tasks (> 50ms)
- Defer non-critical JavaScript
- Use web workers for heavy computation
- Debounce rapid interactions
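The "break up long tasks" advice can be sketched as a chunked loop that yields back to the event loop between batches so pending input handlers get a chance to run. (Newer browsers offer `scheduler.yield()` for this; `setTimeout` is the portable fallback used here.)

```javascript
// Yield to the event loop so pending input events can be handled.
function yieldToMain() {
  return new Promise(resolve => setTimeout(resolve, 0));
}

// Process a large array in small batches instead of one long task.
async function processInChunks(items, processItem, chunkSize = 100) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(processItem(item));
    }
    if (i + chunkSize < items.length) await yieldToMain();
  }
  return results;
}
```

Total work is unchanged; what improves is INP, because no single task monopolizes the main thread long enough to delay input handling.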


Metric 4: Time to First Byte (TTFB)

What it measures: Server responsiveness: the time from request to the first byte of the response.

Why it matters: TTFB is the floor for all other metrics. You can't render content before it arrives.

Target thresholds:
- Good: < 600ms
- Needs improvement: 600-1800ms
- Poor: > 1800ms

What affects TTFB:
- Server processing time
- Database queries
- Network latency
- DNS resolution

How to measure:

```javascript
const [entry] = performance.getEntriesByType('navigation');
// responseStart is measured from navigation start, so this includes
// DNS resolution, connection setup, and server processing time.
const ttfb = entry.responseStart;
console.log('TTFB:', ttfb);
```

Common fixes:
- Enable server-side caching
- Optimize database queries
- Use a CDN for geographic distribution
- Implement connection prewarming
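As an illustration of the server-side caching fix, here is a minimal TTL cache wrapper (a hypothetical helper, not tied to any framework) that short-circuits repeated expensive lookups:

```javascript
// Wrap an async function with a simple time-based in-memory cache.
// Real deployments usually add eviction and shared storage (e.g. Redis);
// this sketch only shows the shape of the optimization.
function withTTLCache(fn, ttlMs) {
  const cache = new Map();
  return async key => {
    const hit = cache.get(key);
    if (hit && Date.now() < hit.expires) return hit.value; // serve from cache
    const value = await fn(key);
    cache.set(key, { value, expires: Date.now() + ttlMs });
    return value;
  };
}
```

Wrapping a slow database lookup this way means only the first request per key within the TTL pays the full cost; every subsequent request sees near-zero TTFB contribution from that lookup.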


Metric 5: First Contentful Paint (FCP)

What it measures: Time until first content (text, image) renders.

Why it matters: FCP tells users "something is happening." A blank screen feels broken even if LCP is fast.

Target thresholds:
- Good: < 1.8 seconds
- Needs improvement: 1.8-3.0 seconds
- Poor: > 3.0 seconds

The FCP-LCP gap: If FCP is fast but LCP is slow, you might have:
- A loading spinner that shows quickly
- A slow-loading hero image
- JavaScript-heavy content rendering

Common fixes:
- Inline critical CSS
- Eliminate render-blocking resources
- Use server-side rendering
- Preconnect to required origins


Metric 6: Total Blocking Time (TBT)

What it measures: Total time the main thread was blocked and unable to respond to input.

Why it matters: TBT predicts interactivity problems. High TBT = frozen page during load.

Target thresholds:
- Good: < 200ms
- Needs improvement: 200-600ms
- Poor: > 600ms

What causes main thread blocking:
- Large JavaScript bundles
- Third-party scripts (analytics, chat, ads)
- Heavy DOM manipulation
- Synchronous layout calculations

How to measure (Lab only): TBT is a lab metric. Lighthouse and WebPageTest report it. Real User Monitoring uses INP instead.

Common fixes:
- Code splitting: load only what's needed
- Tree shaking: remove unused code
- Defer third-party scripts
- Use `requestIdleCallback` for non-urgent work
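The `requestIdleCallback` fix looks like this in practice: queue low-priority work and drain it only while the browser reports idle time. The `setTimeout` fallback (with a fixed time budget) is an assumption for environments without the API, such as Safari or Node.

```javascript
// Run low-priority tasks when the main thread is idle.
const scheduleIdle =
  typeof requestIdleCallback === 'function'
    ? requestIdleCallback
    : cb => setTimeout(() => cb({ timeRemaining: () => 50 }), 1);

function drainWhenIdle(tasks) {
  scheduleIdle(deadline => {
    // Work only while the browser says there is idle time left.
    while (tasks.length > 0 && deadline.timeRemaining() > 0) {
      tasks.shift()();
    }
    if (tasks.length > 0) drainWhenIdle(tasks); // resume in the next idle period
  });
}
```

Analytics flushes, prefetching, and non-visible rendering are typical candidates for this queue; anything the user is waiting on should stay out of it.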


Metric 7: Speed Index

What it measures: How quickly visible content populates the viewport over time.

Why it matters: Speed Index captures the user's perception of speed better than any single timing metric.

Target thresholds:
- Good: < 3.4 seconds
- Needs improvement: 3.4-5.8 seconds
- Poor: > 5.8 seconds

How it works: Speed Index is calculated by recording a video of page load and measuring how much of the viewport is visually complete at each point.
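That "visually complete over time" idea reduces to an integral: Speed Index is the area above the visual-completeness curve, ∫(1 − completeness(t))dt. A sketch approximating it from sampled frames (the sample format here is illustrative, not a real tool's output):

```javascript
// Approximate Speed Index from sampled visual-completeness frames:
// the area above the completeness curve, in milliseconds.
// samples: [{ time, completeness }], sorted by time, completeness in [0, 1].
function speedIndex(samples) {
  let si = 0;
  for (let i = 1; i < samples.length; i++) {
    const dt = samples[i].time - samples[i - 1].time;
    si += dt * (1 - samples[i - 1].completeness); // left Riemann sum
  }
  return si;
}
```

This is why progressive rendering helps: a page that jumps from blank to complete at 2s scores 2000, while one showing half its content at 1s and finishing at 2s scores 1500, even though both finish at the same moment.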

Low Speed Index strategies:
- Progressive rendering (show content as it loads)
- Prioritize above-the-fold content
- Avoid content that appears all at once
- Use skeleton screens thoughtfully


Metric 8: Time to Interactive (TTI)

What it measures: When the page becomes fully interactive (responds reliably to input).

Why it matters: A page that looks loaded but doesn't respond is worse than one that is visibly still loading.

Target thresholds:
- Good: < 3.8 seconds
- Needs improvement: 3.8-7.3 seconds
- Poor: > 7.3 seconds

TTI calculation: TTI is the point after FCP where:
- The page has displayed useful content
- Event handlers are registered for visible elements
- The page responds to interactions within 50ms

Common problems:
- JavaScript hydration in SPAs
- Third-party scripts blocking the main thread
- Heavy framework initialization
- Synchronous API calls during load
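The quiet-window idea behind TTI can be sketched as a pure function over long-task timings. This is a simplification: the full lab definition also bounds the number of in-flight network requests during the quiet window.

```javascript
// Simplified TTI: starting from FCP, advance past each long task until a
// quiet window (default 5s) with no long tasks follows.
function computeTTI(fcp, longTasks, quietWindow = 5000) {
  // longTasks: [{ start, end }] in ms, sorted by start time.
  let tti = fcp;
  for (const task of longTasks) {
    if (task.end <= fcp) continue; // tasks before FCP don't count
    if (task.start - tti >= quietWindow) break; // quiet window found
    tti = task.end;
  }
  return tti;
}
```

The takeaway for SPAs: a burst of hydration tasks spaced under 5s apart keeps pushing TTI out, which is why deferring third-party scripts and splitting framework initialization move this metric.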


Metric 9: Error Rate

What it measures: Percentage of requests that fail (4xx, 5xx responses, JavaScript errors).

Why it matters: Errors directly impact user experience. A fast page that doesn't work isn't valuable.

Target thresholds:
- Excellent: < 0.1%
- Acceptable: 0.1-1%
- Concerning: 1-5%
- Critical: > 5%

What to track:
- HTTP error responses (by endpoint)
- JavaScript exceptions (by component)
- Failed API calls
- Resource load failures

How to measure:

```javascript
// Track JavaScript errors
window.onerror = function (message, source, lineno, colno, error) {
  // Send to error tracking service
};

// Track unhandled promise rejections
window.onunhandledrejection = function (event) {
  // Send to error tracking service
};
```

Common patterns:
- Error spike after deployment = rollback trigger
- Errors clustered by browser/device = compatibility issues
- Errors clustered by geography = CDN or network problems
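The "error spike after deployment = rollback trigger" pattern is easy to encode as an explicit guard. The factor and floor below are illustrative defaults, not industry standards; tune them to your traffic.

```javascript
// Flag a deployment for rollback when the post-deploy error rate both
// exceeds an absolute floor (to ignore noise at tiny rates) and has
// grown by a given factor relative to the pre-deploy baseline.
function shouldRollback(beforeRate, afterRate, { factor = 3, floor = 0.01 } = {}) {
  return afterRate >= floor && afterRate >= beforeRate * factor;
}
```

The absolute floor matters: going from 0.01% to 0.04% is a 4x increase but almost certainly noise, while 0.5% to 3% is a real incident.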


Metric 10: API Response Time (P95)

What it measures: Server response time for the 95th percentile of requests.

Why P95, not the average? Averages hide outliers. If most requests take 50ms but 5% take 5 seconds, the average (roughly 300ms) looks tolerable while one request in twenty crawls.
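Computing a percentile from raw samples is short but worth getting right. The nearest-rank method is shown here; production monitoring systems typically use histogram approximations rather than sorting raw samples.

```javascript
// Nearest-rank percentile over a sorted copy of the samples.
function percentile(values, p) {
  if (values.length === 0) return NaN;
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}
```

On 100 latency samples, `percentile(samples, 95)` returns the 95th value in sorted order: the threshold that 1 in 20 requests exceeds.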

Target thresholds (depends on operation type):
- Read operations: < 200ms P95
- Write operations: < 500ms P95
- Complex queries: < 1000ms P95

What to measure:
- Response time by endpoint
- Response time by user segment
- Response time over time (to detect degradation)

How to measure:

```javascript
// Server-side timing
const start = process.hrtime();
// ... handle request
const [seconds, nanoseconds] = process.hrtime(start);
const duration = seconds * 1000 + nanoseconds / 1e6;

// Track the distribution
histogram.observe({ endpoint: '/api/users' }, duration);
```

Common fixes:
- Add database indexes
- Implement query caching
- Optimize N+1 queries
- Add pagination to list endpoints


Putting It Together: The Performance Dashboard

Minimum viable monitoring:

| Metric | Source | Alert Threshold |
|--------|--------|-----------------|
| LCP | RUM | P75 > 3.5s |
| CLS | RUM | P75 > 0.15 |
| INP | RUM | P75 > 300ms |
| TTFB | RUM | P95 > 1.2s |
| Error Rate | APM | > 1% |
| API P95 | APM | > 500ms |

Tools that provide these:
- RUM (Real User Monitoring): Vercel Analytics, SpeedCurve, New Relic Browser
- Lab testing: Lighthouse CI, WebPageTest
- APM: Datadog, New Relic, Grafana


The Improvement Process

1. Baseline: Measure current performance across all metrics
2. Prioritize: Fix the worst metric first (biggest user impact)
3. Hypothesis: Identify the specific cause of poor performance
4. Fix: Implement a targeted improvement
5. Verify: Confirm the metric improved without regressions
6. Repeat: Move to the next priority metric

Avoid premature optimization: Don't optimize metrics that are already "Good." Focus on moving "Poor" to "Needs Improvement" to "Good."

The business conversation: "Our LCP is 4.2 seconds (Poor). Industry data shows this correlates with 7% lower conversion. Improving to 2.5 seconds (Good) could increase revenue by $X."


Need help establishing performance baselines or identifying optimization opportunities? [Let's audit your application](/contact).
