Building Filagram: 28 File Tools That Never Touch a Server

How I built a browser-based file toolkit where everything runs client-side. No uploads, no server processing, no data leaves your browser.

6 min read
Gagan Deep Singh

Founder | GLINR Studios


The problem with online file tools

Every time you use an online converter, compressor, or PDF editor, you are uploading your files to someone else's server. That server might be logging what you upload. It might store your files longer than the UI implies. The company running it might be monetizing your data in ways you cannot see.

Most of the time this is probably fine. But for anything sensitive -- medical documents, financial statements, client contracts, personal photos -- it is a genuinely bad deal. You are trusting a random website with files that matter.

I wanted to build something different. A full file toolkit where nothing ever leaves your browser.

What Filagram is

Filagram is a collection of 28 file tools that run entirely client-side. Image compression, format conversion, PDF merging and splitting, background removal, QR code generation, metadata stripping -- all of it happens in your browser tab. No server receives your files. No uploads happen. The processing is local.

The goal was to match the feature depth of the big online tools while being meaningfully safer to use. You should not have to choose between capability and privacy.

The tool list

The 28 tools span a few categories. Image tools cover compression, resizing, format conversion (PNG, JPEG, WebP, AVIF), cropping, rotation, and color adjustment. PDF tools handle merging multiple files, splitting by page range, rotating pages, extracting images, and adding password protection. Utility tools include QR code generation, file metadata inspection, hash calculation, and base64 encoding.

The one that took the most work was background removal. It sits in a different category of complexity from the others.

Running ML in the browser with ONNX and WASM

Background removal is a machine learning task. The model needs to identify the subject of an image and mask out everything else. Doing this well requires a neural network, which traditionally means a server with a GPU.

The @imgly/background-removal library changed the calculus here. It ships ONNX models that run via WebAssembly directly in the browser. No server, no API key, no upload. The model loads from a CDN on first use, gets cached by the browser, and then runs locally for every subsequent image.

import { removeBackground } from "@imgly/background-removal";

// The library reports model download and inference progress through a
// single callback; wire it straight into component state.
const blob = await removeBackground(imageFile, {
  progress: (key, current, total) => {
    setProgress(Math.round((current / total) * 100));
  },
});

The tradeoff is the initial load. The ONNX model is around 40MB. First use on a slow connection is noticeable. I added a progress indicator for model loading and cached the result aggressively so subsequent uses are instant. It is one of those cases where the right UX framing matters as much as the performance itself.

The stack

Filagram is built on Next.js 16 with React 19. The app is fully static -- all pages are generated at build time and there is no server component doing anything interesting. The "server" is just Vercel serving static assets.
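A fully static Next.js build is typically configured with static export mode; the config likely looks something like this (a sketch based on what the post implies, not Filagram's actual file):

```typescript
// next.config.ts -- assumes static export, which matches "all pages are
// generated at build time" above.
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  output: "export", // emit plain static HTML/JS; no Node server at runtime
};

export default nextConfig;
```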

PDF work uses two libraries. pdf-lib handles creation and modification: merging files, splitting pages, rotating, adding metadata. pdfjs-dist handles rendering and reading: extracting page previews, parsing existing PDFs so users can see what they are working with before committing to an operation.

import { PDFDocument } from "pdf-lib";

// Merge by copying every page of each source document into a new one.
async function mergePDFs(files: File[]): Promise<Uint8Array> {
  const merged = await PDFDocument.create();
  for (const file of files) {
    const bytes = await file.arrayBuffer();
    const doc = await PDFDocument.load(bytes);
    // copyPages clones the pages into the destination document's context.
    const pages = await merged.copyPages(doc, doc.getPageIndices());
    pages.forEach((page) => merged.addPage(page));
  }
  return merged.save();
}

Image processing uses the Canvas API for most operations. It is surprisingly capable for compression, resizing, and format conversion. The browser has had this for years and it is fast enough for typical file sizes.
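A resize-and-convert operation with the Canvas API fits in a few lines. This is a sketch under my own assumptions; the helper names fitWithin and convertImage are illustrative, not Filagram's code:

```typescript
// Scale dimensions down to fit a bounding box, preserving aspect ratio.
function fitWithin(width: number, height: number, maxW: number, maxH: number): { width: number; height: number } {
  const scale = Math.min(1, maxW / width, maxH / height);
  return { width: Math.round(width * scale), height: Math.round(height * scale) };
}

// Decode, draw to a canvas, and re-encode in the target format.
async function convertImage(file: File, type: "image/jpeg" | "image/webp", quality = 0.8, maxDim = 2048): Promise<Blob> {
  const bitmap = await createImageBitmap(file);
  const { width, height } = fitWithin(bitmap.width, bitmap.height, maxDim, maxDim);
  const canvas = document.createElement("canvas");
  canvas.width = width;
  canvas.height = height;
  canvas.getContext("2d")!.drawImage(bitmap, 0, 0, width, height);
  return new Promise((resolve, reject) =>
    canvas.toBlob((blob) => (blob ? resolve(blob) : reject(new Error("encode failed"))), type, quality),
  );
}
```

The quality parameter is only honored for lossy formats like JPEG and WebP; for PNG the browser ignores it.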

Making heavy processing feel instant

The UX challenge with client-side processing is that the browser is doing real work. A large PDF merge or a background removal operation can take several seconds. If you just freeze the UI while that happens, users assume something broke.

I spent a lot of time on progress reporting. Every tool that takes more than a moment has a progress indicator tied to actual processing state, not a fake loading bar. For background removal, the progress callback from the library is wired directly to a percentage display. For PDF operations, I break the work into chunks and yield between them so the UI stays responsive.
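The chunk-and-yield pattern described above can be sketched generically (this is not the actual Filagram code):

```typescript
// Yield control back to the event loop so pending paints and input run.
const yieldToUI = (): Promise<void> => new Promise((resolve) => setTimeout(resolve, 0));

// Process items in small chunks, reporting real progress between chunks.
async function processInChunks<T>(
  items: T[],
  chunkSize: number,
  work: (item: T) => void,
  onProgress?: (done: number, total: number) => void,
): Promise<void> {
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) work(item);
    onProgress?.(Math.min(i + chunkSize, items.length), items.length);
    await yieldToUI();
  }
}
```

In a PDF merge, each chunk might copy a handful of pages, with the progress callback driving the same kind of percentage display the background-removal tool uses.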

The other thing that helped was processing files in Web Workers where possible. Moving computation off the main thread keeps the interface interactive while heavy work runs in the background. Not every operation benefits from this, but for the ones that do, it is a clear win.
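Wrapping the worker round-trip in a Promise is what makes this pattern ergonomic to use from a tool component. A minimal sketch (runInWorker is my name; Filagram's internals may differ):

```typescript
// Post a payload to a Web Worker and resolve with its first reply.
// Transferables (e.g. an ArrayBuffer) are moved without copying.
function runInWorker<TIn, TOut>(
  worker: Worker,
  payload: TIn,
  transfer: Transferable[] = [],
): Promise<TOut> {
  return new Promise((resolve, reject) => {
    const onMessage = (e: MessageEvent<TOut>) => {
      cleanup();
      resolve(e.data);
    };
    const onError = (e: ErrorEvent) => {
      cleanup();
      reject(e.error ?? new Error(e.message));
    };
    const cleanup = () => {
      worker.removeEventListener("message", onMessage);
      worker.removeEventListener("error", onError);
    };
    worker.addEventListener("message", onMessage);
    worker.addEventListener("error", onError);
    worker.postMessage(payload, transfer);
  });
}
```

Removing the listeners after the first reply matters if the worker is reused across operations; otherwise stale handlers pile up.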

TypeScript everywhere

Every tool is typed end to end. File inputs produce File objects, processing functions accept typed parameters and return typed results, and output is always a Blob or Uint8Array before being handed to the download handler. The any type appears nowhere in the codebase.

This matters more than it might seem. When you are wiring together multiple file processing libraries, type safety catches a lot of subtle mismatches. It also makes refactoring much safer as the tool list grows.
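Concretely, the end-to-end typing can look something like this sketch (all names here are illustrative, not Filagram's actual types):

```typescript
// Every tool shares one typed shape: File in, typed result out.
interface ToolResult {
  data: Blob | Uint8Array;
  filename: string;
  mimeType: string;
}

type Tool<Options> = (input: File, options: Options) => Promise<ToolResult>;

// Small typed helper: swap a filename's extension for the output format.
function withExtension(name: string, ext: string): string {
  const base = name.replace(/\.[^.]+$/, "");
  return `${base}.${ext}`;
}

interface CompressOptions {
  quality: number; // 0..1, passed through to the encoder
}

// A concrete tool is just an instance of the generic signature.
declare const compressImage: Tool<CompressOptions>;
```

With this shape, the compiler rejects a tool that forgets to set a MIME type or tries to hand a raw string to the download handler.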

What is next

Two things are at the top of the list.

Batch processing is the obvious one. Right now most tools handle one file at a time. Compressing 50 images one by one is tedious. A queue-based batch mode where you drop a folder and let it run is the natural next step.

Tool chains are more interesting. The idea is that you could define a preset -- "compress this image, strip its metadata, then convert it to WebP" -- and apply it to a set of files in one operation. Instead of visiting three tools in sequence, you build a chain once and reuse it. This is something the server-based tools cannot easily offer: when processing is local and costs nothing per operation, chaining steps together is essentially free.
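The chaining idea falls out naturally from the client-side model: if every step maps a Blob to a Blob, a preset is just function composition. A sketch of what that could look like (hypothetical, since the feature is not built yet):

```typescript
// A step transforms one Blob into another; a preset composes steps.
type Step = (input: Blob) => Promise<Blob>;

function chain(...steps: Step[]): Step {
  return async (input) => {
    let current = input;
    for (const step of steps) {
      current = await step(current);
    }
    return current;
  };
}

// A preset like "compress, strip metadata, convert to WebP" would become:
// const preset = chain(compress, stripMetadata, toWebP);
// const results = await Promise.all(files.map((f) => preset(f)));
```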

The privacy angle is what makes Filagram worth building. The web has plenty of file tools. It does not have many that you can actually trust with sensitive documents. That is the space I am trying to fill.

