
Leveraging Cloudflare
How We Used Cloudflare to Optimize Our Media-Heavy App Without Losing Our Minds (or Money)
When most people hear Cloudflare, they think of those "Verify you are human" checks with those weird little puzzles of crosswalks, buses, motorcycles, and blurry traffic lights. But Cloudflare is far more than just an anti-bot bouncer. It’s quietly become one of the most powerful infrastructure platforms on the web, offering tools for edge networking, storage, security, serverless computing and content delivery optimization. Also, part of their encryption randomness comes from a wall of lava lamps in their San Francisco office. Seriously.
The TL;DR on the lava lamps is that computers have a hard time with true randomness. Computers do what they are told to do; that is what programming is. They generally take some input, run it through some logic, and output the same results every time. A lot of common tools, such as JavaScript's famous Math.random() function, fake randomness with an algorithm: Math.random() is actually a pseudo-random number generator (PRNG). So, to avoid predictability, Cloudflare points cameras at a wall of lava lamps and uses the data from the floating lava bubbles as a source of entropy instead of faking it with algorithms.
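To make the distinction concrete, here is a quick sketch of the difference between pseudo-randomness and the cryptographically secure Web Crypto API, which draws on OS-level entropy (the software cousin of the lava lamp wall). It works in modern browsers and recent Node versions:

// Pseudo-random: an algorithm that only looks random. Given the same
// internal seed, a PRNG reproduces the same sequence of values.
const pseudo = Math.random();

// Cryptographically secure: the runtime pulls from real entropy sources.
const buffer = new Uint32Array(1);
crypto.getRandomValues(buffer);
const secure = buffer[0] / (0xffffffff + 1); // normalize to [0, 1)

console.log({ pseudo, secure });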
A project I am currently working on is a media-heavy, streaming-focused web app that we've been building primarily with Next.js and Supabase. I have been a big fan of Supabase for several years now. I wrote about the platform before, exploring BaaS platforms, and have loved it for personal projects. Supabase is now out of beta and ready for the big leagues. Supabase is awesome, but it does have some limitations, especially for a media-heavy streaming platform. Supabase does have media storage buckets with all of the features you would expect from a modern infrastructure tool, but when building to scale, you run the risk of bloating or bottlenecking your Supabase instance and incurring unpredictably high database egress charges. Data egress refers to data leaving the database (as opposed to ingress, which is incoming data). Media streaming means high egress. To solve this problem, we leaned into a tool we were already using for other services anyway: Cloudflare.
Here’s how Cloudflare helped us cut costs, boost performance, and future-proof key parts of our stack.
Offloading Media to Cloudflare R2
In the early stages of the MVP build-out, Supabase handled almost everything: authentication, the actual PostgreSQL database, and media file storage, which included various buckets of images and our media files. The all-in-one simplicity was great for the developer experience and let us move super quickly at first, but we always knew it would never scale well for streaming media. Every image fetch and media play meant more storage reads, more egress bandwidth, and more cost.
To fix that, we moved our static assets to Cloudflare R2. R2 is an object storage service with buckets, much like the famous AWS S3. In fact, R2 buckets are AWS S3 compatible, so migrating between them should be easy. S3 compatibility also means you can use the AWS S3 SDK client in your Next.js (or Node.js or React Native) app, which seems pretty popular with over 10,000,000 weekly downloads.
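As a rough sketch, pointing the AWS SDK's S3 client at R2 is mostly a matter of swapping the endpoint. The account ID, bucket name, key, and environment variable names below are placeholders, not our actual config:

import { GetObjectCommand, S3Client } from "@aws-sdk/client-s3";

// R2 speaks the S3 API, so the standard client works once the endpoint
// points at your Cloudflare account.
const r2 = new S3Client({
  region: "auto",
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

// Fetching an object looks exactly like it would against S3.
const object = await r2.send(
  new GetObjectCommand({ Bucket: "media", Key: "covers/album-01.jpg" })
);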
Aside from not being Amazon, we chose R2 because Cloudflare doesn't charge for R2 data egress when it is paired with Cloudflare's CDN. So moving our media files didn't just reduce our egress costs; it eliminated them.
The way you pair R2 with the CDN is through Cloudflare Workers, which are serverless functions running at the edge. Think AWS Lambda. Workers act as middlemen between the client and our storage layer, helping us proxy and cache requests globally. The result is fast, cheap, reliable media delivery without hammering our database.
CDN Integration and Workers
Once the R2 buckets were initialized and the media was moved over, we connected them to our app through a custom API route in our Next.js application. This let us dynamically serve files, append headers when needed, and hook into any request logic.
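In skeleton form, that route looks something like the sketch below. The CDN hostname, route path, and cache lifetime are placeholders, and the handler signature assumes the Next.js App Router as of v13/14 (in newer versions params arrives as a Promise):

// app/api/media/[...key]/route.ts
// Proxies media requests to the Worker-backed CDN hostname so the app
// keeps a single, stable URL scheme. The hostname below is a placeholder.
const CDN_ORIGIN = "https://media.example.com";

export async function GET(
  request: Request,
  { params }: { params: { key: string[] } }
) {
  const upstream = await fetch(`${CDN_ORIGIN}/${params.key.join("/")}`, {
    headers: { accept: request.headers.get("accept") ?? "*/*" },
  });

  // Pass the file through, adding or overriding headers as needed.
  const headers = new Headers(upstream.headers);
  headers.set("cache-control", "public, max-age=86400");
  return new Response(upstream.body, { status: upstream.status, headers });
}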
As mentioned, the Workers sit between the R2 storage and the global CDN network. They are serverless functions that run at the edge, meaning they live in data centers around the world, close to end users. When someone requests a media file, the Worker intercepts that request, fetches the file from R2 if it's not already cached, and serves it through Cloudflare's CDN.
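On the Cloudflare side, the Worker is conceptually just a cache check followed by an R2 read. Here is a minimal sketch, assuming an R2 bucket binding we'll call MEDIA_BUCKET and the standard @cloudflare/workers-types definitions; cache lifetimes and error handling are simplified:

interface Env {
  MEDIA_BUCKET: R2Bucket; // binding name is a placeholder for our setup
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    // Serve straight from the edge cache when possible.
    const cache = caches.default;
    const cached = await cache.match(request);
    if (cached) return cached;

    // Otherwise read the object from R2.
    const key = decodeURIComponent(new URL(request.url).pathname.slice(1));
    const object = await env.MEDIA_BUCKET.get(key);
    if (!object) return new Response("Not found", { status: 404 });

    const headers = new Headers();
    object.writeHttpMetadata(headers); // content-type etc. from R2 metadata
    headers.set("etag", object.httpEtag);
    headers.set("cache-control", "public, max-age=31536000, immutable");

    // Cache the response at this edge location for the next request.
    const response = new Response(object.body, { headers });
    ctx.waitUntil(cache.put(request, response.clone()));
    return response;
  },
};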
The beauty of this setup is in how seamlessly it all connects. The Next.js app makes a request to what looks like a normal API endpoint, but behind the scenes, a Worker is handling the heavy lifting. The Worker can add custom headers, handle authentication, resize images on the fly, or even serve different file formats based on the user's browser capabilities. All of this happens at the edge, meaning the processing occurs at the data center closest to the end user, not on some server halfway across the world.
From there, Cloudflare's global CDN takes over. Every time someone streams a media file or loads an image, that request is routed to the nearest edge node, cached if possible, and served quickly.
The performance benefits are immediate:
- Improved latency: Files are served from edge locations close to users, typically reducing load times by 40-60%.
- Intelligent caching: The CDN automatically caches popular content at edge nodes, so frequently accessed files load instantly.
- Bandwidth optimization: Cloudflare automatically compresses files and serves them in the most efficient format for each user's connection.
- Global reach: With data centers in 320+ cities worldwide, users get fast performance regardless of their location.
Organization and scalability wins:
- Centralized media management: All our assets live in R2 buckets with clean, organized folder structures.
- Version control: Easy to update, replace, or rollback media files without touching application code.
- Automatic scaling: No need to provision servers or worry about traffic spikes—the CDN handles it all.
- Security: Workers can add authentication layers, rate limiting, and access controls at the edge (sketched below).
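On that last point, the gate can be as simple as a helper the Worker calls before touching storage. This is a deliberately simplified sketch; our real check verifies a signed, expiring token, and MEDIA_TOKEN is a placeholder secret (set with wrangler secret put):

// Drop-in gate for the Worker's fetch handler shown earlier.
function isAuthorized(request: Request, env: { MEDIA_TOKEN: string }): boolean {
  const url = new URL(request.url);
  const provided =
    request.headers.get("x-media-token") ?? url.searchParams.get("token");
  return provided === env.MEDIA_TOKEN;
}

// Usage inside fetch():
//   if (!isAuthorized(request, env)) {
//     return new Response("Forbidden", { status: 403 });
//   }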
The setup also gave us some unexpected benefits. We could now A/B test different media formats, implement smart compression based on user connection speeds, and even serve personalized content without additional infrastructure. The Worker layer became a Swiss Army knife for media delivery.
Privacy-Conscious Geo Data Collection
While we were reworking our infrastructure, we also wanted a better way to collect geographic usage data. This can be a tricky area to tread: collecting geo data can be expensive, sometimes unethical, and sometimes even illegal. We needed just enough to help us make decisions about caching, content targeting, and basic analytics.
There are a ton of third-party geographic IP lookup APIs. IP API, ipapi (super creative names), IP Info, and DB IP are some of the bigger ones. The way these work is that your IP address is exposed via your browser to every website you visit. IP addresses are intimate and personally identifying, which is why it can be sketchy to send your users' IP addresses to third-party APIs where you do not control the data.
In addition to giving up control of sensitive data, these APIs are expensive because they are so convenient. Your browser exposes your IP address, but on its own that is just a string of numbers; you need to decode it to get useful information out of it. These third-party APIs handle that part for you. The cost of that convenience varies, but one of the APIs our team was considering had a price tag of $0.02 per request. That number seems low until you compare it to Cloudflare's R2 pricing of $0.36 per million requests (for class B operations). At a million lookups, that is $20,000 versus roughly 36 cents.
Rather than paying for a third-party geo-IP API, we self-hosted a binary geolocation database, dropped it into our R2 bucket, and built an internal API route to handle the lookup. When a user performs a certain action, such as streaming a specific media file, we grab their IP, pull the database from R2 (with smart in-memory caching), and parse the geo info. That geo info looks a little something like this:
{
  "city": { "names": { "en": "Brooklyn" } },
  "country": { "iso_code": "US", "names": { "en": "United States" } },
  "subdivisions": [{ "iso_code": "NY", "names": { "en": "New York" } }],
  "location": {
    "latitude": 40.6782,
    "longitude": -73.9442,
    "time_zone": "America/New_York"
  },
  "postal": { "code": "11201" }
}
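That structure looks a lot like MaxMind's GeoLite2 City format, so the lookup route is conceptually something like the sketch below. The mmdb-lib reader, database URL, and in-memory caching shown here are illustrative stand-ins rather than our exact code:

// Illustrative Next.js route handler (e.g. app/api/geo/route.ts): fetch
// the binary .mmdb once, keep the parsed reader in memory, and resolve
// the caller's IP to geo data.
import { Reader, type CityResponse } from "mmdb-lib";

let reader: Reader<CityResponse> | null = null;

async function getReader(): Promise<Reader<CityResponse>> {
  if (reader) return reader;
  // The database lives in R2; here it is pulled over a Worker-backed URL.
  const res = await fetch("https://media.example.com/geo/GeoLite2-City.mmdb");
  reader = new Reader<CityResponse>(Buffer.from(await res.arrayBuffer()));
  return reader;
}

export async function GET(request: Request) {
  // Cloudflare sets cf-connecting-ip; x-forwarded-for is the fallback.
  const ip =
    request.headers.get("cf-connecting-ip") ??
    request.headers.get("x-forwarded-for")?.split(",")[0]?.trim() ??
    "";
  if (!ip) return Response.json({ error: "no client ip" }, { status: 400 });

  const geo = (await getReader()).get(ip);
  return Response.json({
    city: geo?.city?.names?.en,
    country: geo?.country?.iso_code,
    region: geo?.subdivisions?.[0]?.iso_code,
  });
}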
We then save the relevant data to our Supabase Postgres instance. The end result is minimal overhead, no recurring geo-IP fees, and tighter control over user data.
We are still using Supabase for authentication, our Postgres db, structured data, and real-time features. But for static media and supporting infrastructure, Cloudflare has earned its place in our core stack. It gave us the speed and scale we needed without the overhead.