The Canvas API Approach
The core idea behind browser-based image compression is straightforward: load an image into an HTMLImageElement, draw it onto an invisible Canvas, then re-encode it using canvas.toBlob() with a lower quality setting. The browser's built-in JPEG and WebP encoders handle the actual compression — you just control the quality parameter.

In my implementation, the compressImage function creates a fresh canvas for every image. It loads the file via URL.createObjectURL(), draws it with ctx.drawImage(), and calls canvas.toBlob() with a MIME type and a quality value between 0 and 1. The quality parameter maps directly to the encoder's compression level: 1.0 means maximum quality (largest file), and 0.0 means maximum compression (smallest file, worst quality).

I default to 80% quality (0.8) because that's the sweet spot I found after testing dozens of photos. At 80%, JPEG files typically shrink by 60-80% with no perceptible quality loss. Drop below 60% and you start seeing blocky artifacts and color banding, especially in gradients and skin tones. Above 90%, the file size savings become negligible — you're paying a lot of bytes for quality improvements nobody can see.

One thing I added that proved essential: a max-width resize option. Many users upload photos straight from their phone camera at 4000px+ resolution. Before compression even starts, I check whether the image exceeds the selected max width (1920px, 1280px, or 800px) and scale it down proportionally. This often reduces file size more dramatically than quality reduction alone.
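The proportional scaling step boils down to a small pure function. Here is a sketch of that calculation, not the tool's actual source; fitToMaxWidth is a hypothetical name:

```typescript
// Hypothetical helper: fit an image's dimensions under a max width while
// preserving aspect ratio. Images already within the limit are untouched.
function fitToMaxWidth(
  width: number,
  height: number,
  maxWidth: number
): { width: number; height: number } {
  if (width <= maxWidth) return { width, height };
  const scale = maxWidth / width;
  return { width: maxWidth, height: Math.round(height * scale) };
}

// A 4000x3000 phone photo capped at 1920px becomes 1920x1440.
```

Canvas dimensions are integers, so the scaled height is rounded rather than left fractional.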
```typescript
const canvas = document.createElement("canvas");
const ctx = canvas.getContext("2d");

// Draw the image onto an invisible canvas
canvas.width = width;
canvas.height = height;
ctx.drawImage(img, 0, 0, width, height);

// Re-encode with quality parameter (0-1)
// 0.8 = 80% quality — the sweet spot
canvas.toBlob(
  (blob) => {
    // blob is the compressed image
    const url = URL.createObjectURL(blob);
  },
  "image/jpeg",
  0.8 // quality: 0 (worst) to 1 (best)
);
```

Why 80% Quality?
JPEG compression between 75-85% discards visual information the human eye is least sensitive to — fine color gradients and high-frequency detail. Below 60%, blocky artifacts and color banding become noticeable. I default my tool to 80% because it consistently delivers 60-80% file size reduction with no visible quality loss.
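Because toBlob() is callback-based, it helps to wrap it in a Promise so compression steps can be awaited. A minimal sketch, with canvasToBlob as a hypothetical name rather than the tool's actual code:

```typescript
// Hypothetical wrapper: promisify canvas.toBlob so compression steps can
// be awaited. toBlob passes null to the callback if encoding fails.
function canvasToBlob(
  canvas: HTMLCanvasElement,
  mimeType: string,
  quality?: number
): Promise<Blob> {
  return new Promise((resolve, reject) => {
    canvas.toBlob(
      (blob) => (blob ? resolve(blob) : reject(new Error("Encoding failed"))),
      mimeType,
      quality
    );
  });
}
```

With this in place, a compression pipeline reads as straight-line async code instead of nested callbacks.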
The PNG Surprise
This was my biggest 'aha moment' — and honestly, it felt more like an 'oh no' moment. I spent a frustrating afternoon wondering why my PNG compression wasn't reducing file sizes at all. The answer: the Canvas API's toBlob() quality parameter is silently ignored for PNG.

The spec is clear about this — the quality argument only applies to lossy formats like JPEG and WebP. PNG is a lossless format, which means the browser won't discard any pixel data regardless of what quality value you pass. You can pass 0.1 or 0.99 — the output PNG will be identical.

In my code, I explicitly handle this by passing undefined as the quality parameter when the output format is PNG. This makes the behavior explicit rather than relying on the browser to silently ignore the value. I also added a warning message in the UI that appears when users select PNG format, explaining that the quality slider has no effect and that size reduction comes only from resizing.

The workaround for users who need smaller files from PNG sources is format conversion. If the image doesn't need transparency, converting to JPEG at 80% quality dramatically reduces file size. If transparency is needed, WebP supports both lossy compression and alpha channels — the best of both worlds. This is exactly why I built format switching into the compressor: users can upload a PNG and convert it to WebP or JPEG in one step.
```typescript
// PNG is lossless — the quality parameter is IGNORED
const q = outputFormat === "png" ? undefined : quality / 100;
canvas.toBlob(
  (blob) => { /* ... */ },
  mimeType,
  q // undefined for PNG, so the browser ignores it
);
```

The PNG Trap
I spent hours debugging why my PNG compression "wasn't working" before realizing the Canvas API simply cannot do lossy PNG compression. The toBlob quality parameter is silently ignored for image/png. Your only option for reducing PNG size via Canvas is resizing the dimensions — which is why I added max-width presets (1920px, 1280px, 800px) as a workaround.
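The format-aware quality handling can be isolated in a pure helper. This is a sketch assuming a 0-100 UI slider; resolveQuality is a hypothetical name, not from the tool's source:

```typescript
type OutputFormat = "jpeg" | "webp" | "png";

// Hypothetical helper: map a 0-100 slider value to toBlob's quality
// argument. PNG is lossless, so return undefined instead of a value the
// browser would silently ignore.
function resolveQuality(
  format: OutputFormat,
  slider: number
): number | undefined {
  if (format === "png") return undefined;
  // Clamp to the 0-1 range toBlob expects.
  return Math.min(1, Math.max(0, slider / 100));
}
```

Keeping this logic in one place also makes it trivial to show the "quality slider has no effect" warning whenever the helper returns undefined.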
Canvas API Quality Parameter by Format
| Format | Quality Parameter | Compression Type | Size Reduction |
|---|---|---|---|
| JPEG | Respected (0-1) | Lossy | 60-80% at quality 0.8 |
| WebP | Respected (0-1) | Lossy | 70-85% at quality 0.8 |
| PNG | Ignored | Lossless only | Only via resize |
WebP — The Best Format Nobody Talks About
If there's one takeaway from building this tool, it's this: WebP is criminally underused. Developed by Google, WebP produces files 25-35% smaller than equivalent JPEG images at the same visual quality. It supports both lossy and lossless compression, and it handles transparency (alpha channel) — something JPEG can't do at all.

In 2026, browser support for WebP is above 96%. Chrome, Firefox, Safari, Edge — they all support it. The holdouts are ancient browsers that nobody should be targeting anyway.

When I added WebP as a conversion option in my Image Compressor, the results were immediately impressive: a 2.4MB JPEG photo compressed to 1.1MB as JPEG at 80%, but only 780KB as WebP at the same quality. That's a 67% reduction from the original.

The implementation is trivially simple — canvas.toBlob() natively supports 'image/webp' as a MIME type. The browser's WebP encoder is highly optimized, and the asynchronous toBlob() call hands the encoded result straight to your callback. No external libraries, no WebAssembly modules, no server-side processing.

What about AVIF? It's the next frontier — typically 50-60% smaller than JPEG — but browser support in 2026 is around 87%, and crucially, canvas.toBlob() does NOT support 'image/avif' in most browsers yet. When it does, I'll add it as an option. For now, WebP is the practical choice: universally supported, significantly smaller than JPEG, and zero extra dependencies.
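Per the HTML spec, toBlob() falls back to PNG when it can't encode the requested MIME type, and that fallback doubles as a feature check. A sketch of the idea, with supportsEncoding as a hypothetical name:

```typescript
// Hypothetical check: ask the browser to encode a 1x1 canvas in the target
// format. If the format is unsupported, toBlob falls back to image/png,
// so the resulting blob's type won't match the request.
function supportsEncoding(mimeType: string): Promise<boolean> {
  return new Promise((resolve) => {
    const canvas = document.createElement("canvas");
    canvas.width = canvas.height = 1;
    canvas.toBlob(
      (blob) => resolve(blob !== null && blob.type === mimeType),
      mimeType
    );
  });
}

// Usage idea: await supportsEncoding("image/avif") to decide whether to
// show AVIF in the format dropdown once browsers start supporting it.
```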
Image Format Comparison (Same Visual Quality)
| Format | Typical Size | Browser Support | Best For |
|---|---|---|---|
| JPEG | 100% (baseline) | 100% | Photos, complex images |
| WebP | 65-75% of JPEG | 96%+ (2026) | Web images, all-rounder |
| AVIF | 50-60% of JPEG | ~87% (2026) | Next-gen, growing support |
| PNG | 3-5x larger than JPEG | 100% | Transparency, screenshots |
Info
WebP was developed by Google and supports both lossy and lossless compression. In my tool, converting a JPEG to WebP at the same quality setting typically saves an additional 25-35% — the easiest performance win available. AVIF is even smaller but browser support is still catching up.
Handling Large Files and Memory
Processing images entirely in the browser means you're working within the browser's memory constraints. A 20MB DSLR photo at 6000x4000 pixels requires roughly 96MB of uncompressed pixel data in memory (6000 x 4000 x 4 bytes per pixel). Multiply that by the number of images in a batch upload, and you can easily hit memory limits.

In my implementation, I manage memory carefully with URL.createObjectURL() and URL.revokeObjectURL(). Every time an image is loaded for preview or after compression, I create an object URL. When the user removes a file or clears the list, I revoke those URLs immediately to free memory. Failing to do this is a classic memory leak in browser-based file tools — each unreleased object URL holds a reference to the entire blob in memory.

I also process batch uploads with Promise.all(), which compresses all images in parallel. This is fast but memory-intensive. For very large batches, a sequential or chunked approach would be safer, and it's something I plan to add.

The drag-and-drop UX was surprisingly important. I filter incoming files with f.type.startsWith('image/') to silently reject non-image files. When files are added, I immediately show processing placeholders in the UI, then replace them with the compressed results as each one finishes. This gives users instant feedback even when compression takes a moment.

One thing the Canvas API does that surprised me: it strips ALL EXIF metadata. When you draw an image onto a canvas and re-export it, the orientation, camera settings, GPS coordinates, copyright information — everything in the EXIF headers — is gone. For a privacy-focused tool, this is actually a feature: users uploading personal photos don't accidentally share their GPS location. But if you need to preserve metadata, you'll need a library like piexifjs to read it from the original file and inject it back into the compressed output. I chose not to, keeping the tool simple and privacy-first.
```typescript
// Create object URLs for previews
const originalUrl = URL.createObjectURL(file);

// IMPORTANT: Revoke URLs when removing files
// to prevent memory leaks
const handleRemove = (id: string) => {
  const file = files.find((f) => f.id === id);
  if (file) {
    URL.revokeObjectURL(file.originalUrl);
    if (file.compressedUrl)
      URL.revokeObjectURL(file.compressedUrl);
  }
};
```

Canvas Strips EXIF Data
When you draw an image onto a Canvas element and re-export it, ALL EXIF metadata is lost — orientation, camera settings, GPS coordinates, copyright info, everything. This is actually a privacy feature for web uploads, but if you need to preserve metadata, the Canvas API is not the right tool. You would need a library like piexifjs to read and re-inject EXIF data after compression.
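The chunked batch processing mentioned earlier could look something like this. It is a sketch of one possible approach, not the tool's current code; processInChunks is a hypothetical name:

```typescript
// Hypothetical helper: process items in fixed-size chunks instead of one
// big Promise.all, bounding how many decoded images are in memory at once.
async function processInChunks<T, R>(
  items: T[],
  worker: (item: T) => Promise<R>,
  chunkSize = 3
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    const chunk = items.slice(i, i + chunkSize);
    // Parallelize within a chunk, but wait before starting the next one.
    results.push(...(await Promise.all(chunk.map(worker))));
  }
  return results;
}
```

Each chunk still compresses in parallel, so small batches keep the speed of Promise.all while large batches avoid holding every decoded bitmap in memory at once.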
Try the Image Compressor
Compress your images entirely in the browser — no uploads, no server processing, complete privacy. Switch between JPEG, WebP, and PNG to see the difference for yourself.
Open Image Compressor