It Worked on My Machine — But Not on EC2: A Docker Confession
Docker Isn’t the Problem. The Assumption Is.

Diving into Cloud and DevOps like a kid who learned cycling yesterday and signed up for a race today. I break things, read error logs like bedtime stories, and slowly figure out how not to repeat the same mistakes. If you enjoy honest tech journeys with a bit of humor, you’ll fit right in.
A few weeks ago I felt like I was finally getting somewhere.
My tiny Flask app — just a “hello” page that showed my name and the current time — ran perfectly on my laptop. I wrote a Dockerfile, ran `docker build -t myapp .` and `docker run -p 5000:5000 myapp`, then opened `http://localhost:5000` and there it was: clean, fast, mine. Confident that I’d cracked “build once, run anywhere,” I spun up a free-tier EC2 instance, installed Docker per the docs, pulled my image from Docker Hub, and ran the exact same command. The container started with no errors, but the page never loaded.
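For context, the app was nothing fancy. A minimal version looks something like this — the route and message are my reconstruction, not the exact original:

```python
from datetime import datetime
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # The real page showed a name and the current time; this is a stand-in.
    return f"Hello! It is {datetime.now():%H:%M:%S}"

# Started inside the container with: app.run(host="0.0.0.0", port=5000)
# Binding to 0.0.0.0 (instead of the 127.0.0.1 default) is what lets traffic
# from outside the container reach the server through Docker's port mapping.
```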
I sat watching the browser’s loading spinner and felt that familiar sinking realization — you thought you understood this, but you clearly don’t.
I really believed Docker would handle everything
At the start I was so confident. Docker had made my app portable, right? “Build once, run anywhere.” I’d read that sentence at least ten times. My laptop and the EC2 instance were both Linux under the hood (well, sort of — mine was Ubuntu). Same image, same command. What could possibly go wrong?
I genuinely thought the only difference was location. Local = laptop, cloud = faster and public. That was it.
That’s when the confusion kicked in
I SSH’d into the instance and checked the usual things:
- `docker ps` showed my container happily running
- `docker logs` looked identical to what I saw at home
- No crash, no permission errors, nothing
I even ran `curl localhost:5000` from inside the EC2 instance and got the hello page instantly. So the app was alive.
But from my laptop, using the public IP and port 5000? Dead silence.
I spent the next forty minutes googling variations of “docker works locally but not on aws.” I restarted the container. I rebuilt the image on the server. I tried running it with sudo. I checked the Dockerfile for the hundredth time. Nothing changed.
This is where I got stuck. My brain kept looping: Docker is broken on EC2. Or maybe AWS hates me.
This is where I had to slow down
I forced myself to stop typing commands and actually think.
I asked one simple question: If the container is running and responding on the instance itself, where is the traffic getting blocked?
That single question shifted everything. I stopped looking inside Docker and started looking at the machine around Docker.
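That layer-by-layer thinking can even be scripted. Here’s a tiny TCP reachability probe — my own helper, not something I actually used that night — that answers “can I open a connection to this host and port?”:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, etc.
        return False
```

Run it against `localhost:5000` from inside the instance and against the public IP from your laptop. If the first succeeds and the second fails, the traffic is being dropped before it ever reaches the host — exactly the security-group situation.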
I remembered something I’d skimmed in the AWS console when I launched the instance — a box called “Security groups.” I had left it on default because… well, the launch wizard said “SSH is already allowed, you’re good.” I never touched it again.
Here’s what the system was actually doing
The security group is basically a cloud firewall. By default it says: “Only let traffic in on port 22 (SSH). Everything else? Denied.”
My laptop has no such firewall for localhost — I can talk to my own machine freely. But an EC2 instance lives in Amazon’s network, and Amazon is paranoid (for good reason). Every packet from the outside world has to be explicitly invited in.
Docker had done its job perfectly: it mapped port 5000 on the container to port 5000 on the host. The host was listening. But the cloud was standing at the door saying “nope.”
I had assumed the environment was the same. It wasn’t. The difference wasn’t Docker. It was the invisible layer I’d never thought about.
The moment it clicked
I went back to the AWS console, clicked on the security group, and added one rule:
- Type: Custom TCP
- Port: 5000
- Source: Anywhere (0.0.0.0/0), just for learning; I know it’s not secure to leave it that way
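For the record, the same rule can be added without clicking through the console. This is a sketch using boto3 (the AWS SDK for Python); the group ID is a placeholder and the call needs AWS credentials:

```python
def ingress_rule(port: int, cidr: str = "0.0.0.0/0") -> dict:
    """Build the IpPermissions entry that opens one TCP port."""
    return {
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        "IpRanges": [{"CidrIp": cidr, "Description": "learning only - tighten later"}],
    }

def open_port(group_id: str, port: int) -> None:
    # boto3 is imported here so the pure rule builder above works without the SDK.
    import boto3
    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId=group_id,  # e.g. "sg-0123456789abcdef0" (placeholder)
        IpPermissions=[ingress_rule(port)],
    )
```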
I saved it, waited ten seconds, refreshed my browser… and there was my little hello page, glowing on the public internet.
I actually laughed out loud in my room. Not because it was clever, but because it was so simple. The fix took thirty seconds once I stopped blaming the tool.
These are the things that stayed with me
In the end I learned the hard but useful lesson behind the punchline: “It worked on my machine” is a confession, not a guarantee. Docker made my app portable, but it didn’t magically eliminate differences in networking, firewall rules, or how my app bound to interfaces. The container was running — the problem was that nothing outside the EC2 instance could reach the Flask server. Once I stopped assuming “same image = same environment” and methodically checked the layers (app bind address, Docker port mapping, EC2 security group, and host firewall), the fix was obvious and quick.
If you take anything away, let it be these practical points:
- Ensure your app listens on `0.0.0.0` (not `127.0.0.1`) when running in a container.
- Confirm Docker port mappings (`docker run -p host:container`) and verify with `docker ps`.
- Open the EC2 security group for the port you’re using (or put a load balancer/proxy on standard ports).
- Check the instance firewall (`ufw`/`iptables`) and test connectivity from inside and outside the machine (`curl`, `nc`).
- Automate environment parity (Docker Compose, CI builds, infrastructure as code) and add health checks so you catch these issues earlier.
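A health check doesn’t have to be fancy, either. Here’s a minimal HTTP probe — stdlib only, and purely illustrative — that you could run from anywhere you expect the app to be reachable:

```python
import urllib.request
import urllib.error

def healthy(url: str, timeout: float = 3.0) -> bool:
    """True if the URL answers with a 2xx status within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        # Covers HTTP errors, refused connections, and timeouts alike.
        return False
```

Had I run something like `healthy("http://<public-ip>:5000/")` from my laptop right after deploying, the security-group problem would have announced itself in seconds instead of forty minutes.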
Above all, stay humble and curious: debugging deployment problems teaches you more about the stack than an easy first run ever could.



