Core Web Vitals have been part of Google's ranking signals since 2021, but the specific metrics and their thresholds have evolved. If you have not revisited your performance strategy recently, you may be optimizing for metrics that no longer exist or ignoring ones that now carry more weight. Here is where things stand and what to do about it.
The Current Metric Set
As of 2026, the three Core Web Vitals are Largest Contentful Paint (LCP), Interaction to Next Paint (INP), and Cumulative Layout Shift (CLS).
LCP measures loading performance. Specifically, it measures the time it takes for the largest content element visible in the viewport to render. This is typically a hero image, a heading, or a large block of text. The threshold for a good score is under 2.5 seconds. Anything above 4 seconds is considered poor.
INP measures responsiveness. It replaced First Input Delay (FID) in March 2024 because FID only measured the delay of the first interaction, which did not capture how a page felt to use over time. INP measures the latency of all interactions throughout the page's lifecycle and reports the longest one, ignoring a small number of outliers on pages with very many interactions. A good INP score is under 200 milliseconds. Above 500 milliseconds is poor.
CLS measures visual stability. It quantifies how much the page layout shifts unexpectedly as content loads. If a button moves just as a visitor is about to click it, that is a layout shift. A good CLS score is under 0.1. Above 0.25 is poor.
These three metrics cover the full experience: does the page load quickly, does it respond to interaction, and does it remain stable while being used.
What Actually Moved the Needle
Knowing the metrics is one thing. Knowing what to do about them is another. Here are the optimizations that produce the largest improvements for each metric.
For LCP: Image Optimization
The LCP element is an image on the majority of web pages. Optimizing that single image is often the highest-impact change you can make. Use modern formats like WebP or AVIF, which provide significantly better compression than JPEG or PNG. Serve appropriately sized images using the srcset attribute so mobile devices do not download desktop-sized files. Preload the LCP image in the document head so the browser starts fetching it as early as possible.
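A minimal sketch of these techniques in markup; the file names, sizes, and breakpoints are placeholders:

```html
<!-- In the head: preload the LCP image so the fetch starts early -->
<link rel="preload" as="image" href="hero-1200.avif"
      imagesrcset="hero-600.avif 600w, hero-1200.avif 1200w"
      imagesizes="100vw">

<!-- In the body: a modern format with appropriately sized candidates -->
<img src="hero-1200.avif"
     srcset="hero-600.avif 600w, hero-1200.avif 1200w"
     sizes="100vw"
     width="1200" height="600"
     fetchpriority="high"
     alt="Hero illustration">
```

The explicit width and height also reserve space, which helps CLS as a side effect.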
Beyond images, reduce render-blocking resources. CSS that is not needed for above-the-fold content should be loaded asynchronously. JavaScript that is not critical for the initial render should be deferred. Every resource that blocks rendering pushes your LCP later.
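One common pattern for both: the preload-as-style trick for non-critical CSS and the defer attribute for scripts. The file names here are placeholders:

```html
<!-- Full stylesheet loads without blocking render; critical CSS is inlined -->
<link rel="preload" href="styles.css" as="style"
      onload="this.onload=null;this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="styles.css"></noscript>

<!-- Non-critical JavaScript runs only after HTML parsing completes -->
<script src="app.js" defer></script>
```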
For INP: JavaScript Reduction
INP problems are almost always caused by too much JavaScript executing on the main thread. When a visitor clicks a button and the browser is busy running a script, the response is delayed. The visitor perceives the page as sluggish or broken.
Audit your JavaScript bundle. Remove libraries you are not actively using. Break long tasks into smaller chunks using techniques like yielding to the main thread. Defer non-essential scripts. Third-party analytics, chat widgets, and tracking pixels are common culprits that block interaction responsiveness.
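Yielding to the main thread can be sketched like this. The `yieldToMain` helper and chunk size are illustrative; in browsers that support it, `scheduler.yield()` is a more direct way to yield:

```javascript
// Resolve on the next task, giving the browser a chance to handle
// pending input events between chunks of work.
const yieldToMain = () => new Promise((resolve) => setTimeout(resolve, 0));

// Process a large array in small chunks instead of one long task.
async function processInChunks(items, processItem, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(processItem(item));
    }
    await yieldToMain(); // let queued interactions run before continuing
  }
  return results;
}
```

Each chunk stays short, so a click that arrives mid-processing is handled within one chunk's duration rather than waiting for the entire job.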
Event handlers should be lean. A click handler that triggers a complex calculation or DOM manipulation will delay the visual response. Move heavy processing off the main thread using Web Workers where possible, or defer it until after the visual update.
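The lean-handler pattern can be sketched as follows. `setTimeout(0)` is a portable stand-in here; in a browser, `requestAnimationFrame` or `scheduler.postTask` with a background priority would be closer fits for "after the visual update":

```javascript
// Apply the cheap visual change synchronously, then defer heavy
// processing so the browser can paint the response first.
function handleClickLean(applyVisualUpdate, heavyProcessing) {
  applyVisualUpdate();            // e.g. toggle a "loading" class right away
  setTimeout(heavyProcessing, 0); // heavy work runs after the handler returns
}
```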
For CLS: Layout Reservation
Most layout shifts are caused by content that loads after the initial render and pushes other elements around. The fix is reserving space for dynamic content before it loads.
Set explicit width and height attributes on all images and videos so the browser can allocate space before the media downloads. Use CSS aspect-ratio for responsive containers. Avoid inserting content above existing content unless it is triggered by a user action. Web fonts loaded with font-display: swap avoid invisible text but can still shift the layout when the custom font replaces the fallback; pair swap with a fallback font whose metrics closely match the web font (the CSS size-adjust descriptor helps here).
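In markup, the reservation techniques look like this; the file names and the 16:9 ratio are illustrative:

```html
<style>
  /* Responsive container keeps its box before the embed arrives */
  .embed-container { aspect-ratio: 16 / 9; width: 100%; }
</style>

<!-- Explicit dimensions let the browser reserve space before download -->
<img src="team-photo.jpg" width="800" height="450" alt="Team photo">

<div class="embed-container"><!-- ad slot or video embed loads here later --></div>
```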
The Font Loading Problem
Custom fonts are a significant and often overlooked performance bottleneck. A typical website loads 200 to 400 kilobytes of font files, sometimes more if multiple weights and styles are included. That is a substantial amount of data that directly impacts LCP and can cause layout shifts.
Font subsetting is one of the most effective optimizations. If your site is in English, you do not need to load glyphs for Cyrillic, Greek, or CJK characters. Subsetting your font files to include only the character ranges you actually use can reduce file sizes by 60 to 80 percent.
Use font-display: swap in your @font-face declarations. This tells the browser to show text immediately using a system font and swap in the custom font once it loads. This prevents invisible text during loading, which hurts both LCP and user experience.
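A hedged sketch of a subset font declaration, assuming a Latin-only subset produced with a tool such as fonttools' pyftsubset; the family name, path, and unicode-range are placeholders:

```css
@font-face {
  font-family: "BodyFont";
  src: url("/fonts/bodyfont-latin.woff2") format("woff2");
  font-display: swap;         /* show fallback text immediately */
  unicode-range: U+0000-00FF; /* only downloaded if these glyphs appear */
}
```

The unicode-range descriptor complements file-level subsetting: a subset file the page never needs is never fetched at all.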
Self-hosting your fonts instead of loading them from Google Fonts or other external services eliminates the additional DNS lookup and connection setup time. It also gives you full control over caching headers and subsetting. The performance difference is measurable, especially on slower connections.
Third-Party Scripts Are Still the Biggest Threat
Every analytics platform, chat widget, marketing pixel, and embedded tool you add to your site runs JavaScript on your visitor's browser. In aggregate, these scripts are the single largest source of performance problems on most websites.
Audit your third-party scripts quarterly. For each one, ask: is this actively being used? Is the data it collects being acted on? Would we notice if it disappeared? If the answer to any of those questions is no, remove it. Every script you remove improves INP, reduces page weight, and speeds up load time.
For scripts you need to keep, load them asynchronously whenever possible. Use the async or defer attribute on script tags. Consider loading heavy third-party tools only after the page has become interactive, using requestIdleCallback or intersection observers to delay execution until the browser has spare capacity.
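The idle-loading approach can be sketched like this. requestIdleCallback is a browser API without universal support (Safari lacks it), so the sketch falls back to a timeout; the delay value is arbitrary:

```javascript
// Run a callback when the browser is idle, or after a short delay
// where requestIdleCallback is unavailable.
const whenIdle =
  typeof requestIdleCallback === "function"
    ? requestIdleCallback
    : (callback) => setTimeout(callback, 200);

// Inject a non-essential third-party script only once the browser has
// spare capacity, keeping it out of the critical loading path.
function loadScriptWhenIdle(src) {
  whenIdle(() => {
    const script = document.createElement("script");
    script.src = src;
    script.async = true;
    document.head.appendChild(script);
  });
}
```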
Privacy-focused analytics alternatives like Plausible and Fathom are worth evaluating. They are significantly lighter than Google Analytics, do not require cookie consent banners in most jurisdictions, and provide the metrics most companies actually use. The performance benefit of switching from a 45-kilobyte analytics library to a 1-kilobyte script is substantial.
Where to Focus Next
Lab tools like Lighthouse give you a controlled environment for testing, but Google uses field data from real users to determine rankings. That means Real User Monitoring (RUM) is essential. Tools like Google Search Console's Core Web Vitals report show how your pages perform for actual visitors on real devices and connections. This is the data that matters for SEO.
Performance budgets are an underused tool for maintaining speed over time. Set a budget for total page weight, JavaScript size, and the number of requests. Integrate these budgets into your CI pipeline so performance regressions are caught before they reach production. Without a budget, performance will gradually degrade as new features and content are added.
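As one way to wire this into CI, Lighthouse accepts a budgets file (commonly budget.json) that tooling such as Lighthouse CI can enforce; the limits below are illustrative, in kilobytes for sizes:

```json
[
  {
    "path": "/*",
    "resourceSizes": [
      { "resourceType": "script", "budget": 150 },
      { "resourceType": "image", "budget": 300 },
      { "resourceType": "total", "budget": 800 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 10 }
    ]
  }
]
```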
Check performance before and after every deployment. A single unoptimized image, a new third-party script, or an inefficient component can undo months of optimization work. Continuous monitoring is not optional. It is the only way to maintain the performance you have built.
Monthly aggregate reviews of your Core Web Vitals data should be part of your regular reporting. Look for trends rather than individual data points. A gradual decline over three months is more informative than a single bad day, and it gives you time to identify and fix the root cause before it impacts rankings.
Frequently Asked Questions
Do Core Web Vitals actually affect search rankings?
Yes, but they function as a tiebreaker rather than a dominant ranking factor. Content relevance, backlinks, and topical authority still carry more weight. However, when two pages are roughly equal on those factors, the one with better Core Web Vitals will rank higher. More importantly, good performance directly improves user engagement metrics like bounce rate and time on site, which have their own indirect effect on rankings.
What is the difference between lab data and field data?
Lab data is collected in a controlled environment with standardized hardware and network conditions. It is useful for debugging specific issues and testing changes before deployment. Field data is collected from real users visiting your site on their own devices and connections. Google uses field data from the Chrome User Experience Report (CrUX) to assess Core Web Vitals for ranking purposes. A page can score well in lab testing but poorly in the field if many of your visitors use slow devices or connections. Always prioritize field data for decision-making.
How often should I check my Core Web Vitals?
Continuously, with monthly aggregate reviews. Set up Real User Monitoring so you have ongoing data collection. Review the aggregated data at least once per month to identify trends. Additionally, check performance before and after every significant deployment to catch regressions early. Waiting for Google Search Console to flag a problem means the issue has already been affecting real users and potentially your rankings for weeks.
Can a single-page application achieve good Core Web Vitals?
Yes, but it requires deliberate architecture decisions. The most effective approach is using server-side rendering (SSR) or static site generation (SSG) for the initial page load to achieve good LCP, combined with client-side routing for subsequent navigations to maintain a smooth user experience. Frameworks like Next.js, Nuxt, and SvelteKit are designed for this hybrid approach. A purely client-rendered SPA will almost always struggle with LCP because the browser has to download, parse, and execute JavaScript before any content appears.