I Built My Own Google Drive


Why Bother?

Whenever I want to download a folder from Google Drive, it starts zipping it for what feels like minutes, and then it downloads the whole zip. There’s no incremental download, and I can’t add it as a network drive for obvious reasons. These limitations are frustrating, but they also got me thinking: what if I could have a solution that’s more flexible, more customizable, and truly mine?

The natural instinct for a systems engineer is: “Fine, I’ll build my own NAS”.

But then you remember that a NAS at home means a mini data center: constant uptime, power backup, static IPs, UPS, maybe even an inverter if you live in India. Suddenly, the “simple” idea of owning your storage becomes a logistics project.

I already had a cloud VM sitting idle, something I’d spun up in 2022 for experiments. So I thought: what if I just turn that into my “own Google Drive”?

Attach a 100–200 GB volume, mount it, run Nextcloud, and I’d have a private, scalable “Google Drive” that exists entirely under my control.

The only catch: I didn’t want to just make it work.

I wanted to learn something new in the process, specifically about Cloudflare’s networking stack. So instead of exposing ports and managing SSL myself, I decided to route everything through Cloudflare Tunnel. That meant I could have a nice domain like lorbic.com sitting in front of my VM, with Cloudflare handling SSL, DDoS protection, and tunneling; no public IP exposure at all.

There are obviously easier paths. Hetzner’s Storage Box. Dropbox. Even a hosted Nextcloud provider. But that’s not ownership; that’s delegation. I wanted to see my own storage service boot up from scratch, because there’s a quiet kind of satisfaction in watching the pieces come alive and realizing, this is mine.

Setup

Everything revolved around Docker.

I wrote a clean little docker-compose.yml with four containers: MariaDB, Redis, Nextcloud, and a Cron worker. No external dependencies, no magic scripts. Just containers and mounted volumes.

Here’s the core of it:

services:
  db:
    image: mariadb:12.0.2
    restart: always
    command: >
      --transaction-isolation=READ-COMMITTED
      --binlog-format=ROW
      --innodb-buffer-pool-size=1024M
    volumes:
      - /srv/nextcloud/db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=...
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=...

  redis:
    image: redis:7-alpine
    restart: always
    command: ["redis-server", "--maxmemory", "256mb", "--maxmemory-policy", "allkeys-lru"]
    volumes:
      - /srv/nextcloud/redis:/data

  app:
    image: nextcloud:32.0.0-apache
    restart: always
    depends_on: [db, redis]
    ports:
      - 8080:80
    environment:
      - MYSQL_HOST=db
      - REDIS_HOST=redis
      - PHP_MEMORY_LIMIT=1024M
    volumes:
      - /srv/nextcloud/app:/var/www/html
      - /srv/nextcloud/data:/var/www/html/data

  cron:
    image: nextcloud:32.0.0-apache
    depends_on: [db, redis]
    entrypoint: /cron.sh
    volumes:
      - /srv/nextcloud/app:/var/www/html
      - /srv/nextcloud/data:/var/www/html/data
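Bringing the stack up is then just a couple of commands. A minimal sketch, assuming the compose file lives next to the /srv/nextcloud host directories it mounts (create them first):

mkdir -p /srv/nextcloud/{db,redis,app,data}
docker compose up -d
docker compose logs -f app    # watch Nextcloud initialize on first boot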

Simple, clean and repeatable.

Except it didn’t work.

The Nextcloud container couldn’t reach the internet. Not even a curl google.com from inside the container worked. I went down every rabbit hole: DNS settings, IPTables, VCN routes, even Cloudflare Tunnel’s network binding. Nothing changed. Three hours gone.

Then I noticed something: Docker was installed via Snap.

This server was old, probably an artifact of some experiment from two years ago. Snap had quietly sandboxed Docker in a way that broke outbound connectivity for containers. Once I ripped out Snap and reinstalled Docker from apt, everything started working like magic. One of those moments where you simultaneously feel relieved and slightly betrayed by your past self.
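For the record, the fix itself was tiny. Roughly this, though package names vary by distro (this box was Ubuntu):

sudo snap remove docker
sudo apt update && sudo apt install -y docker.io docker-compose-v2
# sanity check: containers should be able to reach the internet again
sudo docker run --rm alpine ping -c 3 google.com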

The Cloudflare Tunnel

Next step: make it accessible to the outside world.

Instead of opening port 8080 (or reverse proxying it) and managing SSL certificates (although it can be done with certbot), I ran a Cloudflare Tunnel using cloudflared. My config.yml looked like this:

tunnel: <tunnel-id>
ingress:
  - hostname: lorbic.com
    service: http://localhost:8080
  - service: http_status:404

Newer versions of cloudflared don’t rely on the credentials-file-based config; they expect you to use the run command directly. So be mindful of the version you’re using when setting up the tunnel.
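For reference, the locally-managed (config-file) flow boils down to something like this; the tunnel name nextcloud is just a placeholder:

cloudflared tunnel login                            # authorize cloudflared against your Cloudflare account
cloudflared tunnel create nextcloud                 # creates the tunnel ID and credentials file
cloudflared tunnel route dns nextcloud lorbic.com   # point the DNS record at the tunnel
cloudflared tunnel run nextcloud                    # serve the ingress rules from config.yml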

That’s it.

Cloudflare handles DNS, SSL, and traffic routing. My VM stays hidden behind the tunnel.

No open ports. No certbot. No firewall headache.

From my perspective, it’s almost unfair how simple it is. The tunnel connected within seconds, and I could access https://lorbic.com from anywhere in the world.

The Nextcloud setup wizard appeared. I connected it to the MariaDB container, configured Redis, and within minutes had my own private cloud running securely behind Cloudflare.

Some Issues

Everything looked good until I tried to log in from the Android and Mac Nextcloud apps.

It threw a cryptic error:

“The polling URL does not start with HTTPS despite the login URL starting with HTTPS”.

Basically, the app was paranoid, and rightfully so. If the polling endpoint (used during login) isn’t HTTPS, it refuses to proceed for security reasons.

After some investigation, I realized I had misconfigured the hostname in the Nextcloud setup: I had entered the HTTP version of my domain. So while my login started with HTTPS (https://lorbic.com), the polling went to the tunnel URL, which was plain HTTP.

Fixing the Nextcloud config to use the HTTPS version of the domain and adding some proxy-header-related settings solved it. The changes live in nextcloud/app/config/config.php:

'trusted_domains' =>
  array (
    0 => 'lorbic.com',
    1 => 'localhost',
    2 => '<tunnel-id>.cfargotunnel.com',
  ),
'overwrite.cli.url' => 'https://lorbic.com',
'overwriteprotocol' => 'https',
'trusted_proxies' => ['127.0.0.1', '::1'],
'forwarded_for_headers' => ['HTTP_X_FORWARDED_FOR'],
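If you’d rather not edit config.php by hand, the same values can also be set with Nextcloud’s occ tool; a sketch assuming the compose service is named app as above:

docker compose exec -u www-data app php occ config:system:set overwriteprotocol --value=https
docker compose exec -u www-data app php occ config:system:set overwrite.cli.url --value=https://lorbic.com
docker compose exec -u www-data app php occ config:system:set trusted_domains 2 --value=<tunnel-id>.cfargotunnel.com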

The Warp Problem

Then came Cloudflare Warp, the VPN.

I like Warp because it’s secure and integrates beautifully with the Cloudflare network. But here’s the irony: when I connected Warp on my phone, https://lorbic.com stopped working.

Warp, being part of the same Cloudflare ecosystem, was actually routing my requests differently, through Cloudflare’s internal network rather than over the open internet. And since I had configured the tunnel for external access only, Warp essentially short-circuited the route.

The workaround was simple: add a local override or exclude that domain from Warp’s routing. That way, Warp would stop being too clever for its own good. There are proper ways to set up Cloudflare Tunnel to work with Warp, but that’s a topic for another post.

What I Learned

Beyond the technical triumph, this little project taught me how powerful modern abstractions have become.

Cloudflare Tunnel abstracts away networking in the same way Docker abstracts away system configuration. Together, they turn the once-daunting process of self-hosting into something playful, something you can do in an evening with coffee and a terminal.

I also ended up writing a simple cron job to sync some important directories from the cloud volume to an S3 bucket. It’s not elegant, but it’s effective: a redundancy layer for my “private Google Drive”.
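The whole thing is essentially one crontab entry plus the AWS CLI; a rough sketch, with the bucket name and paths as placeholders:

# crontab -e
# nightly sync of the Nextcloud data directory to S3 at 03:00
0 3 * * * aws s3 sync /srv/nextcloud/data s3://my-backup-bucket/nextcloud --delete --only-show-errors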

More importantly, I now have a personal cloud that I control.

No ads. No quota nags. No “Your storage is almost full”. Just a small, efficient, self-contained service that does exactly what I want, and nothing I didn’t agree to.

There’s something oddly satisfying about this kind of engineering.

Not because it saves money, but because it gives back a sense of ownership, that feeling of knowing your system, line by line, container by container.

When I log into lorbic.com, it doesn’t feel like another SaaS dashboard.

It feels like home.

PS: This post is a work in progress. I’ll add more details as I go. And there are some simpler and much cheaper ways to get cloud storage than self-hosting Nextcloud. For example, you can use Hetzner Storage Box, which is a great option for a small, private cloud storage solution.
Note: The domain lorbic.com is just a placeholder.



