Client-Side Image Compression: What's Actually Happening in the Browser
How browsers compress images without touching a server — the real pipeline behind Canvas, OffscreenCanvas, and WebCodecs, and when it's not enough.
Most people who use a browser-based image compressor think of it as a convenience thing — no install, quick drag-and-drop. What they don’t realize is that the tool often never talks to a server at all. The image stays on their machine. For anything private — medical scans, legal docs with attached photos, product shots before launch — that distinction matters a lot.
This article walks through what the browser is actually doing when it compresses an image, where the limits are, and when you genuinely need a server in the loop.
The basic pipeline: decode, draw, re-encode
Whether a tool uses the straightforward Canvas API or the newer WebCodecs, the fundamental pipeline is the same:
- Read the file — the File API gives JavaScript a byte stream without sending anything to a server.
- Decode — the browser parses the compressed image into raw pixel data (RGBA, 4 bytes per pixel).
- Optionally resize — draw to a canvas at a smaller resolution.
- Re-encode — write the pixels back out as JPEG, WebP, or AVIF at a target quality setting.
- Hand back a Blob — the result is a new `Blob` the page can offer for download or show in a preview.
Step 2 is where the raw memory cost hits you. A 20-megapixel DSLR photo decoded into RGBA takes roughly 20,000,000 × 4 = 80 MB of RAM before any processing has happened. Most modern phones and laptops handle this fine, but it is why compressing large batches in parallel can cause a tab to crash on low-memory devices.
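That cost is easy to estimate up front. A tiny helper for the arithmetic (the function names here are illustrative, not part of any browser API):

```javascript
// Estimated RAM for a decoded image: width × height × 4 bytes (RGBA).
function decodedBytes(width, height) {
  return width * height * 4;
}

function toMegabytes(bytes) {
  return bytes / (1000 * 1000);
}

// A 20-megapixel frame, e.g. 5472 × 3648:
// decodedBytes(5472, 3648) → 79847424 bytes ≈ 79.8 MB
```

A tool that knows the pixel dimensions before decoding at full size can warn the user, or ask `createImageBitmap` to decode at reduced dimensions instead.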
Canvas: the workhorse
The canvas approach has been around since the early 2010s and works in every browser released in the last decade:
```javascript
async function compressJpeg(file, quality = 0.8) {
  // Decode the file into raw pixels
  const bitmap = await createImageBitmap(file);
  // Draw at full size; pass smaller dimensions here to resize
  const canvas = new OffscreenCanvas(bitmap.width, bitmap.height);
  const ctx = canvas.getContext('2d');
  ctx.drawImage(bitmap, 0, 0);
  bitmap.close(); // free the decoded pixels promptly
  // Re-encode at the target quality
  return canvas.convertToBlob({ type: 'image/jpeg', quality });
}
```
The quality parameter runs from 0 to 1. At 0.8 you typically land around 60–70% smaller than the original JPEG while keeping visible quality acceptable for web use. At 0.6 you can hit 80% reduction, but you’ll see blocking artifacts on smooth gradients.
One thing worth knowing: quality does not work the same way across all output types. For JPEG and WebP it controls the quantization tables — it’s a genuine quality dial. For PNG, the parameter is ignored entirely.
Why PNG compression is different
PNG is lossless. There is no quality knob because no data is thrown away. The file size is determined by the deflate compression level (how hard the encoder tries) and, more importantly, the image content itself. Solid areas of color compress beautifully; photographic noise compresses badly.
If you want to reduce a PNG, your real options are:
- Reduce color depth — if the image only uses 256 colors, encode it as an 8-bit palette PNG instead of 24-bit. Tools like pngquant do this well, but they require either a native binary or a WebAssembly build, which brings additional complexity.
- Convert to WebP — for photos saved as PNG, switching to WebP with `quality: 0.85` will produce dramatically smaller files at visually similar quality. Try our Image Format Converter if this is your situation.
- Resize — reducing dimensions is the bluntest but most effective tool against large PNGs.
OffscreenCanvas and workers
The OffscreenCanvas API lets you move all the canvas work off the main thread, which means the UI doesn’t freeze during compression of large files. Support is solid across Chrome, Edge, and Firefox. Safari added it in version 17 (late 2023), so as of 2026 it’s broadly safe to use. The pattern is to spin up a Web Worker, transfer the canvas to it, and post the resulting Blob back.
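A minimal sketch of that pattern, assuming a separate worker file (the file name and message shape are illustrative, not a fixed API). Note that for pure compression you don't need to transfer a canvas from the page at all — the worker can create its own `OffscreenCanvas`, and a `File` is structured-cloneable, so a plain `postMessage` suffices:

```javascript
// main.js — hand the file to the worker, get a Blob back
const worker = new Worker('compress-worker.js');

function compressInWorker(file, quality = 0.8) {
  return new Promise((resolve, reject) => {
    worker.onmessage = (e) => resolve(e.data); // the compressed Blob
    worker.onerror = reject;
    worker.postMessage({ file, quality });
  });
}

// compress-worker.js — decode, draw, re-encode off the main thread
self.onmessage = async (e) => {
  const { file, quality } = e.data;
  const bitmap = await createImageBitmap(file);
  const canvas = new OffscreenCanvas(bitmap.width, bitmap.height);
  canvas.getContext('2d').drawImage(bitmap, 0, 0);
  bitmap.close(); // release the decoded pixels
  const blob = await canvas.convertToBlob({ type: 'image/webp', quality });
  self.postMessage(blob);
};
```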
WebCodecs: more control, more complexity
The WebCodecs API (available in Chrome 94+, Edge 94+, Firefox 130+, but not yet in Safari as of early 2026) gives you frame-level access to codecs. For compression purposes this matters mainly for AVIF: the canvas convertToBlob path for AVIF is slower and less configurable than driving a `VideoEncoder` configured for AV1 directly via WebCodecs.
If you need to compress a lot of images to AVIF in the browser with reasonable speed, WebCodecs is where you end up. The code is considerably more complex — you’re managing encoder configuration, chunk callbacks, and stream assembly yourself.
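To make the complexity concrete, here is a heavily simplified sketch of one AV1 encode through WebCodecs. The helper name is ours, the codec string is one valid AV1 profile among several, and the genuinely hard part — assembling the returned chunks into an actual `.avif` container — is omitted entirely:

```javascript
// Sketch: encode one still frame to AV1 via WebCodecs (Chromium).
// Container assembly (wrapping the chunk into an .avif file) is
// not shown; you handle that yourself or via a muxing library.
async function encodeAv1Frame(bitmap) {
  const chunks = [];
  const encoder = new VideoEncoder({
    output: (chunk) => chunks.push(chunk), // EncodedVideoChunk callback
    error: (e) => console.error(e),
  });
  encoder.configure({
    codec: 'av01.0.04M.08', // AV1, profile 0, level 4.0, 8-bit
    width: bitmap.width,
    height: bitmap.height,
  });
  const frame = new VideoFrame(bitmap, { timestamp: 0 });
  encoder.encode(frame, { keyFrame: true });
  frame.close();
  await encoder.flush(); // wait for the output callback to fire
  encoder.close();
  return chunks; // raw AV1 chunks, not yet an .avif file
}
```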
JPEG vs. WebP vs. AVIF: the practical tradeoffs
| Format | Compression | Browser support | Encode speed in-browser | Lossy/lossless |
|---|---|---|---|---|
| JPEG | Good | Universal | Fast (native codec) | Lossy only |
| WebP | 25–35% better than JPEG | All modern browsers | Fast (native codec) | Both |
| AVIF | 50% better than JPEG | All modern (not Safari on older iOS) | Slow via canvas, faster via WebCodecs | Both |
The canvas API routes JPEG and WebP encoding through the browser’s native codec, which is hardware-accelerated on most devices. AVIF through convertToBlob is often software-only and can take several seconds for a large image on a mid-range phone.
If you’re building a compressor and want broad compatibility with good results, WebP at quality: 0.85 is the pragmatic default in 2026.
The quality vs. file size relationship isn’t linear
Going from quality: 1.0 to quality: 0.9 often cuts file size by 30–40% with barely perceptible change. Going from 0.5 to 0.4 might only save another 5% while introducing obvious artifacts. The useful range for most web content is 0.7–0.85.
The reason is that JPEG/WebP quantization discards high-frequency detail first — fine texture, noise, sharp edges. Below a certain threshold you start losing mid-frequency information too, which is where blocking artifacts appear in smooth areas like sky or skin.
A practical strategy: target a file size (say, under 200 KB for a hero image), then binary-search the quality parameter until you land there. Some tools do this automatically. Our Image Compressor lets you adjust quality manually and see the before/after size in real time.
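The search itself can be sketched independently of any encoder. `findQualityForSize` is a hypothetical helper; `encodeAt` is an injected async function mapping a quality value to the encoded size in bytes — in the browser it would wrap `convertToBlob` and return `blob.size`:

```javascript
// Binary-search the quality parameter until the encoded size lands
// under a byte budget. Returns the best fitting quality found, or
// null if nothing in the search range fit.
async function findQualityForSize(encodeAt, maxBytes,
                                  { lo = 0.3, hi = 0.95, steps = 7 } = {}) {
  let best = null;
  for (let i = 0; i < steps; i++) {
    const q = (lo + hi) / 2;
    const size = await encodeAt(q);
    if (size <= maxBytes) {
      best = q; // fits — try a higher quality
      lo = q;
    } else {
      hi = q;   // too big — lower the quality
    }
  }
  return best;
}
```

Seven encode passes sounds expensive, but each pass at reduced quality is fast, and the search converges to within about 0.005 of the true threshold.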
Memory costs at scale
For a single image, even a 50 MP camera raw, you’re usually fine. The problem is batch compression. If a user drops 30 photos from a recent trip — each one 10 MB on disk, 60+ MB decoded in RAM — you’re looking at several gigabytes of working memory if you decode them all at once.
The right pattern is to compress one at a time (or two at a time if you want to saturate a multi-core CPU) and release each bitmap before moving to the next. bitmap.close() on an ImageBitmap will return that memory immediately. Skipping this is the most common source of out-of-memory crashes in browser-side image tools.
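A sketch of that sequential pattern, with the scheduling separated from the compression itself. `compressOne` is an injected async function (in the browser it would be the canvas routine shown earlier, including the `bitmap.close()` call); the function names are ours:

```javascript
// Sequential batch compression: decode → compress → release, one
// image at a time, so peak memory is one decoded bitmap, not thirty.
async function compressBatch(files, compressOne) {
  const results = [];
  for (const file of files) {
    // Awaiting inside the loop is deliberate: it serializes the work
    // and keeps only one decoded image in memory at a time.
    results.push(await compressOne(file));
  }
  return results;
}
```

The tempting alternative, `Promise.all(files.map(compressOne))`, decodes everything at once — exactly the failure mode described above.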
When server-side compression still wins
Client-side compression covers a lot of ground, but there are real cases where it falls short:
Batch processing at production scale. Running a marketing team’s asset pipeline for hundreds of images through a browser tab works but is fragile. A proper ImageMagick or libvips pipeline on a server is faster, more configurable, and doesn’t depend on someone keeping a tab open.
AVIF on older or low-end phones. The encoder is slow in software. If your audience includes older Android devices, generating AVIF in the browser will time out or produce an unusable UX. Server-side AVIF encoding via libavif is orders of magnitude faster.
Advanced optimizations. Things like chroma subsampling tuning, mozjpeg-specific quantization tables, or pngquant palette reduction require either a compiled binary or a WebAssembly port. Some tools (notably Squoosh — Google’s browser-based compressor, which was a significant early demonstration of this technique) ship WASM builds of codecs precisely for this reason. It works, but the WASM bundles are large (1–5 MB) and the load time is noticeable.
ICC profile handling. Some cameras embed wide-gamut ICC profiles. The canvas API strips these on re-encode, which can cause color shifts. If color accuracy matters — print, medical imaging — server-side tools that preserve ICC data are safer.
What this means for privacy
When a tool compresses your image in-browser, the raw pixel data never leaves your device. The only thing that crosses a network is the compressed output file when you click download — and that goes directly to your local filesystem, not to any server.
This is relevant any time the image contains something sensitive: a photo of a document, a screenshot of a private conversation, product mockups under NDA, medical imagery. The question to ask any image tool is: does the image data go to a server? With canvas-based client-side tools, the answer is structurally no. With upload-based tools, you’re trusting their privacy policy, their security posture, and whoever has access to their storage bucket.
You can verify client-side processing yourself: open DevTools, go to the Network tab, and compress an image. If you see no outgoing requests carrying image data, the processing is local.
Quick reference: which API to reach for
- Simple resize + JPEG/WebP output: `createImageBitmap` + `OffscreenCanvas.convertToBlob` in a Worker. Works everywhere, fast.
- PNG palette reduction: need a WASM port of pngquant or similar — `canvas` alone won't do it.
- AVIF with acceptable speed: WebCodecs `VideoEncoder` on Chromium; fall back to canvas on Safari.
- Preserving metadata (EXIF, ICC): read with a library like `piexifjs`, re-inject after re-encode. Canvas strips it all.
If you want to experiment with the actual output quality tradeoffs without writing code, try our Image Compressor or resize first with the Image Resizer — both run entirely in your browser.
The browser has become a capable image processing runtime. It’s not ImageMagick, but for the common case — shrink this photo for the web, without sending it anywhere — it’s more than enough.
Tools mentioned in this article
- Image Compressor — Compress images by adjusting quality to reduce file size without losing visual clarity.
- Image Format Converter — Convert images between PNG, JPEG, and WebP formats in one click.
- Image Resizer — Resize images by pixels or percentage with aspect ratio lock.