Modern teams love Astro for its hybrid rendering story and pre-rendered-by-default mindset, but shipping a snappy production site still requires deliberate choices. Over the past few launches I have leaned on a simple loop: measure, slice, and stabilise.
Measure without assumptions
Anecdotes about performance rarely translate into user-perceived gains. Start with a reproducible script: run npm run build, then point Lighthouse CI or Calibre at the output. Capture not only the performance score but also render-blocking waterfalls and CPU timelines. The goal is to understand where hydration creeps in and which assets balloon.
```sh
npm run build
npx astro preview --host
npx @lhci/cli collect --url=http://localhost:4321
```
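Since astro preview blocks the terminal, it is often simpler to let Lighthouse CI start the server itself and assert budgets in one pass. A minimal lighthouserc.cjs sketch, assuming illustrative thresholds (the 0.9 score and 2.5 s LCP numbers are placeholders, not recommendations):

```js
// lighthouserc.cjs: a sketch, not a drop-in config; tune the budgets to your baseline.
module.exports = {
  ci: {
    collect: {
      // Build first, then let LHCI boot the preview server instead of a second terminal.
      startServerCommand: 'npx astro preview --host',
      startServerReadyPattern: 'localhost', // match the URL astro preview prints
      url: ['http://localhost:4321/'],
      numberOfRuns: 3, // median out noisy CPU timelines
    },
    assert: {
      assertions: {
        // Fail the run when the category score or LCP drifts past the budget.
        'categories:performance': ['error', { minScore: 0.9 }],
        'largest-contentful-paint': ['error', { maxNumericValue: 2500 }],
      },
    },
  },
};
```

After npm run build, npx @lhci/cli autorun then collects and enforces the assertions in a single command.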
With a baseline, you can guard against regressions as features land.
Slice the critical path
Astro pages render fast when the browser downloads less. Audit your components: drop client directives unless interactivity is non-negotiable; prefer content collections to colocate data with views; replace bespoke images with the built-in <Image /> optimisations. One recurring win is isolating marketing experiments into server islands instead of hydrating entire hero sections.
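To make those trade-offs concrete, here is a hypothetical hero component; the asset path and child components are invented for illustration, but the directives are standard Astro:

```astro
---
// Hero.astro: everything here is static by default; only what we mark hydrates.
import { Image } from 'astro:assets';
import heroShot from '../assets/hero-shot.png';             // hypothetical local asset
import SignupForm from '../components/SignupForm.tsx';      // hypothetical interactive island
import PromoBanner from '../components/PromoBanner.astro';  // hypothetical marketing experiment
---
<section>
  <!-- No client directive: the image ships as optimised HTML with zero JavaScript. -->
  <Image src={heroShot} alt="Product screenshot" loading="eager" />

  <!-- Only the form is interactive, so hydrate it lazily instead of the whole hero. -->
  <SignupForm client:visible />

  <!-- Server island (stable in Astro 5): the experiment renders deferred on the
       server and adds no JavaScript to the client bundle. -->
  <PromoBanner server:defer>
    <p slot="fallback">Loading offer…</p>
  </PromoBanner>
</section>
```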
Stabilise the platform
Performance work often slips unless the entire delivery pipeline reinforces it. Automate bundle-size checks in CI (see the sketch below), document budgets directly in pull requests, and align product stakeholders on objective metrics such as Largest Contentful Paint. Astro enables exceptional baselines; our job is to protect them.
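As one concrete guardrail, a small script can fail the build when client JavaScript outgrows its budget. A sketch, assuming Astro's default dist output and a hypothetical 250 KB budget:

```ts
// scripts/check-bundle-size.ts: a sketch of a CI guardrail, not a finished tool.
import { readdirSync, statSync } from 'node:fs';
import { extname, join } from 'node:path';

const BUDGET_BYTES = 250 * 1024; // hypothetical budget; align it with your baseline

// Recursively sum the size of every .js file in the build output.
function totalJsBytes(dir: string): number {
  return readdirSync(dir, { withFileTypes: true }).reduce((sum, entry) => {
    const path = join(dir, entry.name);
    if (entry.isDirectory()) return sum + totalJsBytes(path);
    return extname(entry.name) === '.js' ? sum + statSync(path).size : sum;
  }, 0);
}

const total = totalJsBytes('dist');
console.log(`Client JS: ${(total / 1024).toFixed(1)} KB (budget ${BUDGET_BYTES / 1024} KB)`);
if (total > BUDGET_BYTES) {
  console.error('Bundle budget exceeded; failing the build.');
  process.exit(1);
}
```

Wire it into CI right after npm run build (for example with npx tsx scripts/check-bundle-size.ts) so every pull request reports its cost.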
Performance is not an end state. It is a culture of respecting every user’s device headroom.
By combining thoughtful measurement, ruthless scoping, and automated guardrails, you can ship Astro sites that feel instant even on constrained networks.