<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Aditya Raj Singh]]></title><description><![CDATA[Aditya Raj Singh]]></description><link>https://blog.adityarajsingh.in</link><generator>RSS for Node</generator><lastBuildDate>Fri, 10 Apr 2026 06:49:11 GMT</lastBuildDate><atom:link href="https://blog.adityarajsingh.in/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[It Worked on My Machine — But Not on EC2: A Docker Confession]]></title><description><![CDATA[A few weeks ago I felt like I was finally getting somewhere.
My tiny Flask app — just a “hello” page that showed my name and the current time — ran perfectly on my laptop. I wrote a Dockerfile, ran do]]></description><link>https://blog.adityarajsingh.in/it-worked-on-my-machine-but-not-on-ec2-a-docker-confession</link><guid isPermaLink="true">https://blog.adityarajsingh.in/it-worked-on-my-machine-but-not-on-ec2-a-docker-confession</guid><category><![CDATA[Docker]]></category><category><![CDATA[ec2]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Devops]]></category><category><![CDATA[#learning-in-public]]></category><dc:creator><![CDATA[Aditya Raj Singh]]></dc:creator><pubDate>Fri, 27 Feb 2026 12:03:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/694ec4d1d13bb1b59dd0f107/d459105d-c484-4504-aa83-db69fade8984.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A few weeks ago I felt like I was finally getting somewhere.</p>
<p>My tiny Flask app — just a “hello” page that showed my name and the current time — ran perfectly on my laptop. I wrote a Dockerfile, ran <code>docker build -t myapp .</code> and <code>docker run -p 5000:5000 myapp</code>, then opened <a href="http://localhost:5000"><strong>http://localhost:5000</strong></a> and there it was: clean, fast, mine. Confident that I’d cracked “build once, run anywhere,” I spun up a free-tier EC2 instance, installed Docker per the docs, pulled my image from Docker Hub, and ran the exact same command. The container started with no errors, but the page never loaded.</p>
<p>I sat watching the browser’s loading spinner and felt that familiar sinking realization — you thought you understood this, but you clearly don’t.</p>
<p><strong>I really believed Docker would handle everything</strong></p>
<p>At the start I was so confident. Docker had made my app portable, right? “Build once, run anywhere.” I’d read that sentence at least ten times. My laptop and the EC2 instance were both Linux under the hood (well, sort of — mine was Ubuntu). Same image, same command. What could possibly go wrong?</p>
<p>I genuinely thought the only difference was location. Local = laptop, cloud = faster and public. That was it.</p>
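<p>For context, the app really was tiny. The original used Flask, but a rough stdlib-only stand-in (the name, port, and handler here are my placeholders, not the real code) shows the whole idea:</p>
<pre><code class="language-python">from datetime import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer

NAME = "Aditya"  # placeholder

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = f"Hello, {NAME}! It is {datetime.now():%H:%M:%S}".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging

def run(host="0.0.0.0", port=5000):
    # Binding to 0.0.0.0 rather than 127.0.0.1 matters inside a container:
    # Docker's port mapping forwards traffic to the container's network
    # interface, not just its loopback address.
    HTTPServer((host, port), HelloHandler).serve_forever()
</code></pre>
<p>Everything about an app like this is environment-independent. That is the whole trap: the only thing that differs between laptop and cloud is the network around it.</p>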
<p><strong>That’s when the confusion kicked in</strong></p>
<p>I SSH’d into the instance and checked the usual things:</p>
<ul>
<li><p><code>docker ps</code> showed my container happily running</p>
</li>
<li><p><code>docker logs</code> looked identical to what I saw at home</p>
</li>
<li><p>No crash, no permission errors, nothing</p>
</li>
</ul>
<p>I even curled <code>localhost:5000</code> <em>from inside the EC2 instance</em> and got the hello page instantly. So the app was alive.</p>
<p>But from my laptop, using the public IP and port 5000? Dead silence.</p>
<p>I spent the next forty minutes googling variations of “docker works locally but not on aws.” I restarted the container. I rebuilt the image on the server. I tried running it with sudo. I checked the Dockerfile for the hundredth time. Nothing changed.</p>
<p>This is where I got stuck. My brain kept looping: <em>Docker is broken on EC2. Or maybe AWS hates me.</em></p>
<p><strong>This is where I had to slow down</strong></p>
<p>I forced myself to stop typing commands and actually think.</p>
<p>I asked one simple question: <em>If the container is running and responding on the instance itself, where is the traffic getting blocked?</em></p>
<p>That single question shifted everything. I stopped looking inside Docker and started looking at the machine <em>around</em> Docker.</p>
<p>I remembered something I’d skimmed in the AWS console when I launched the instance — a box called “Security groups.” I had left it on default because… well, the launch wizard said “SSH is already allowed, you’re good.” I never touched it again.</p>
<p><strong>Here’s what the system was actually doing</strong></p>
<p>A security group is basically a cloud firewall that denies all inbound traffic unless a rule explicitly allows it. The launch wizard had added exactly one rule for me: port 22 (SSH). Everything else? Denied.</p>
<p>My laptop has no such firewall for localhost — I can talk to my own machine freely. But an EC2 instance lives in Amazon’s network, and Amazon is paranoid (for good reason). Every packet from the outside world has to be explicitly invited in.</p>
<p>Docker had done its job perfectly: it mapped port 5000 on the container to port 5000 on the host. The host was listening. But the <em>cloud</em> was standing at the door saying “nope.”</p>
<p>I had assumed the environment was the same. It wasn’t. The difference wasn’t Docker. It was the invisible layer I’d never thought about.</p>
<p><strong>The moment it clicked</strong></p>
<p>I went back to the AWS console, clicked on the security group, and added one rule:</p>
<ul>
<li><p>Type: Custom TCP</p>
</li>
<li><p>Port: 5000</p>
</li>
<li><p>Source: Anywhere (<code>0.0.0.0/0</code>) — just for learning, I know it’s not secure forever</p>
</li>
</ul>
<p>I saved it, waited ten seconds, refreshed my browser… and there was my little hello page, glowing on the public internet.</p>
<p>I actually laughed out loud in my room. Not because it was clever, but because it was so <em>simple</em>. The fix took thirty seconds once I stopped blaming the tool.</p>
<p><strong>These are the things that stayed with me</strong></p>
<p>In the end I learned the hard but useful lesson behind the punchline: “It worked on my machine” is a confession, not a guarantee. Docker made my app portable, but it didn’t magically eliminate differences in networking, firewall rules, or how my app bound to interfaces. The container was running — the problem was that nothing outside the EC2 instance could reach the Flask server. Once I stopped assuming “same image = same environment” and methodically checked the layers (app bind address, Docker port mapping, EC2 security group, and host firewall), the fix was obvious and quick.</p>
<p>If you take anything away, let it be these practical points:</p>
<ul>
<li><p>Ensure your app listens on <code>0.0.0.0</code> (not <code>127.0.0.1</code>) when running in a container.</p>
</li>
<li><p>Confirm Docker port mappings (<code>docker run -p host:container</code>) and verify with <code>docker ps</code>.</p>
</li>
<li><p>Open the EC2 security group for the port you’re using (or use a load balancer/proxy on standard ports).</p>
</li>
<li><p>Check the instance firewall (ufw/iptables) and test connectivity from inside and outside the machine (curl, nc).</p>
</li>
<li><p>Automate environment parity (Docker Compose, CI builds, infrastructure as code) and add health checks so you catch these issues earlier.</p>
</li>
</ul>
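<p>A habit that would have saved me those forty minutes: probe the port from both sides with the same tool. Here is a minimal sketch in Python (the host and port are examples; <code>curl</code> or <code>nc</code> do the same job):</p>
<pre><code class="language-python">import socket

def port_open(host, port, timeout=3.0):
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run it twice: inside the instance against 127.0.0.1, and from your laptop
# against the public IP. True inside but False outside points at the
# firewall or security-group layer, not at Docker or the app.
</code></pre>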
<p>Above all, stay humble and curious: debugging deployment problems teaches you more about the stack than an easy first run ever could.</p>
]]></content:encoded></item><item><title><![CDATA[Day 10 of #30DaysOfTerraform: Writing Smarter Terraform with Expressions]]></title><description><![CDATA[#30daysofawsterraform
By the time I reached Day 10, Terraform had already stopped feeling like a tool for “just creating resources.” Days 8 and 9 changed that mindset completely. Meta-arguments taught]]></description><link>https://blog.adityarajsingh.in/day-10-of-30daysofterraform-writing-smarter-terraform-with-expressions</link><guid isPermaLink="true">https://blog.adityarajsingh.in/day-10-of-30daysofterraform-writing-smarter-terraform-with-expressions</guid><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[#30DaysOfAWSTerraform]]></category><category><![CDATA[expressions]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Aditya Raj Singh]]></dc:creator><pubDate>Wed, 25 Feb 2026 09:07:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124196825/ea30e8f6-16b6-453c-977a-7746634a0fd4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>#30daysofawsterraform</p>
<p>By the time I reached Day 10, Terraform had already stopped feeling like a tool for “just creating resources.”<br /> Days 8 and 9 changed that mindset completely. Meta-arguments taught me how Terraform scales resource creation, and lifecycle rules taught me to respect change.</p>
<p>Day 10 took things one step further.<br /> This day wasn’t about adding more resources at all — it was about <strong>writing better Terraform</strong>.</p>
<p>Cleaner. Smarter. More intentional.</p>
<p>The full code and daily notes live in <a href="https://github.com/ars0a/30-Days-Of-Terraform.git"><strong>ars0a/30-Days-Of-Terraform</strong></a> on GitHub, a repo where I learn Terraform from scratch and share my progress every single day.</p>
<h3>From Static Configs to Decision-Making Infrastructure</h3>
<p>Up to now, most of my Terraform configurations were static. Variables helped, but the structure was still fairly rigid. Day 10 introduced the idea that Terraform configs can <em>make decisions</em>.</p>
<p>Conditional expressions were the first thing that clicked.</p>
<p>The syntax itself is simple:</p>
<pre><code class="language-hcl">condition ? true_value : false_value
</code></pre>
<p>But the impact is big. Suddenly, the same Terraform code can behave differently based on environment, region, or intent — without copying files or branching logic everywhere.</p>
<p>I started thinking less in terms of “dev code” and “prod code” and more in terms of <strong>one codebase that adapts</strong>.</p>
<h3>Conditional Expressions — Small Logic, Big Clarity</h3>
<p>What stood out to me wasn’t just that conditionals exist, but <strong>where they make sense</strong>.</p>
<p>Using them for things like instance sizes, monitoring flags, or resource counts felt natural. Instead of maintaining multiple versions of the same resource, Terraform now expresses intent clearly: <em>if this condition is true, do this — otherwise, do that.</em></p>
<p>At the same time, Day 10 made something else clear:<br /> Not everything should be conditional.</p>
<p>When logic starts getting complex, it stops being readable. That’s when locals or higher-level structure is a better choice. This was an important balance to learn — just because Terraform allows logic doesn’t mean you should overuse it.</p>
<h3>Dynamic Blocks — The First Time Duplication Really Disappeared</h3>
<p>Dynamic blocks were probably the most satisfying part of Day 10.</p>
<p>Before this, repeating nested blocks felt unavoidable. Security group rules, IAM policies, route definitions — they all required copy-paste with minor changes. It worked, but it felt clumsy.</p>
<p>Dynamic blocks changed that.</p>
<p>Instead of repeating blocks, Terraform now <em>generates</em> them from data. A list or map defines the structure, and Terraform handles the repetition. Adding or removing rules becomes a data change, not a structural rewrite.</p>
<p>What I liked most here was how naturally this fits with variables. Infrastructure starts to look data-driven instead of hardcoded.</p>
<p>At the same time, I learned to be careful. Dynamic blocks are powerful, but they can hurt readability if overused. Day 10 reinforced an idea that keeps coming up: <strong>clarity matters more than cleverness</strong>.</p>
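<p>The mental model that finally stuck for me is plain programming: keep the rules as data, generate the structure in one place. As a Python analogy (an analogy only; <code>dynamic</code> blocks are HCL, and these rule values are invented), the shift looks like this:</p>
<pre><code class="language-python"># Rules as data: adding or removing a rule is a one-line data change...
ingress_rules = [
    {"port": 22, "cidr": "203.0.113.0/24", "description": "ssh"},
    {"port": 80, "cidr": "0.0.0.0/0", "description": "http"},
    {"port": 443, "cidr": "0.0.0.0/0", "description": "https"},
]

# ...while the repeated "block" structure is generated once, in one place.
def render_rules(rules):
    return [
        {
            "from_port": r["port"],
            "to_port": r["port"],
            "protocol": "tcp",
            "cidr_blocks": [r["cidr"]],
            "description": r["description"],
        }
        for r in rules
    ]
</code></pre>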
<h3>Splat Expressions — Seeing Terraform Think in Lists</h3>
<p>Splat expressions were quieter, but they completed the picture.</p>
<pre><code class="language-hcl">resource_list[*].attribute</code></pre>
<p>This single line replaces loops, indexing, and manual extraction. Once I understood it, I started noticing how often Terraform deals with lists of things — instances, subnets, IDs, ARNs.</p>
<blockquote>
<p>Splat expressions feel like Terraform saying:<br /> “I already know this is a collection — let me handle it.”</p>
</blockquote>
<p>This made outputs cleaner, resource linking simpler, and overall configurations easier to read. It’s one of those features that doesn’t look impressive at first, but once you use it, you don’t want to go back.</p>
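<p>For anyone coming from Python, the closest analogy (again, an analogy rather than Terraform syntax, and the instance data below is invented) is a list comprehension over a collection:</p>
<pre><code class="language-python"># Toy stand-in for a resource collection such as aws_instance.web
instances = [
    {"id": "i-0aa1", "private_ip": "10.0.2.11"},
    {"id": "i-0bb2", "private_ip": "10.0.3.12"},
]

# Terraform: aws_instance.web[*].id
# Python equivalent of that one-line splat:
ids = [inst["id"] for inst in instances]  # ['i-0aa1', 'i-0bb2']
</code></pre>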
<h3>Where I Got Stuck (and What I Learned)</h3>
<p>The main friction on Day 10 wasn’t syntax — it was <strong>judgment</strong>.</p>
<p>When should I use a conditional vs a variable file?<br /> When does a dynamic block improve clarity, and when does it hide intent?<br /> Is this expression making the code better, or just more complex?</p>
<p>I found myself rewriting sections just to make them more readable. That process taught me that Terraform isn’t just about making things work — it’s about making them understandable to future you.</p>
<h3>How Day 10 Connects to the Bigger Picture</h3>
<p>Looking back, Day 10 ties together everything learned so far:</p>
<ul>
<li><p>Variables (Day 5) made configs flexible</p>
</li>
<li><p>Modules (Day 6) made them structured</p>
</li>
<li><p>Workspaces (Day 7) made them environment-aware</p>
</li>
<li><p>Meta-arguments (Day 8) made them scalable</p>
</li>
<li><p>Lifecycle rules (Day 9) made them safe</p>
</li>
<li><p>Expressions (Day 10) made them intelligent</p>
</li>
</ul>
<p>This was the day Terraform started to feel less like configuration and more like <strong>design</strong>.</p>
<h3>What I Really Took Away from Day 10</h3>
<p>Day 10 wasn’t about memorizing syntax. It was about learning how to express intent clearly in Terraform.</p>
<p>Conditional expressions taught me how to handle differences without duplication.<br /> Dynamic blocks showed me how to let data drive structure.<br /> Splat expressions helped me think in collections instead of individual resources.</p>
<p>Most importantly, I learned that good Terraform isn’t just correct — it’s readable, intentional, and maintainable.</p>
<h3>Takeaway</h3>
<p>Day 10 reinforced a simple idea:<br /> Terraform is powerful, but power needs discipline.</p>
<p>Expressions make Terraform flexible and expressive, but only when used thoughtfully. The goal isn’t to write clever configurations — it’s to write infrastructure that makes sense six months later.</p>
<p>This day felt less like learning a new feature and more like learning how to <strong>write Terraform properly</strong>.</p>
]]></content:encoded></item><item><title><![CDATA[Terraform on AWS: Deploy a Highly Available Django App with Auto Scaling and Load Balancing]]></title><description><![CDATA[In the world of cloud computing, terms like “highly available” and “scalable architecture” often float around in whitepapers, certification courses, and online tutorials. They’re buzzwords that sound ]]></description><link>https://blog.adityarajsingh.in/Deployed-a-Highly-Available-Django-App-With-AWS-Terraform</link><guid isPermaLink="true">https://blog.adityarajsingh.in/Deployed-a-Highly-Available-Django-App-With-AWS-Terraform</guid><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[autoscaling]]></category><category><![CDATA[fault tolerance]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Django]]></category><category><![CDATA[Infrastructure as code]]></category><dc:creator><![CDATA[Aditya Raj Singh]]></dc:creator><pubDate>Mon, 23 Feb 2026 08:58:23 GMT</pubDate><enclosure url="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/694ec4d1d13bb1b59dd0f107/429e7587-5c50-44b7-b9bd-8ab22d0d9f3c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the world of cloud computing, terms like “highly available” and “scalable architecture” often float around in whitepapers, certification courses, and online tutorials. They’re buzzwords that sound impressive but can feel abstract until you’ve rolled up your sleeves and built one yourself. That’s exactly where I was before this project — a solid grasp of the individual pieces, but no hands-on experience seeing them orchestrate under real-world conditions.</p>
<p>I knew a load balancer distributes traffic to prevent any single server from becoming a bottleneck. I understood that Auto Scaling Groups (ASGs) dynamically adjust the number of instances based on demand. I’d read about private subnets shielding resources from the public internet and NAT gateways enabling outbound connections without exposing those resources. But reading is one thing; watching these elements collaborate to handle failures and spikes in load is entirely another.</p>
<p>This project was my opportunity to bridge that gap. I set out to deploy a simple web application on AWS using Terraform, focusing exclusively on networking, compute, scaling, and fault tolerance. No databases involved — just a stateless Django app running in Docker containers. The goal wasn’t to create a production-ready monolith but to observe how these components behave when pushed. I wanted to move from theoretical knowledge to empirical understanding, seeing the system self-heal and adapt in real time.</p>
<h3>Designing the Foundation: Networking First</h3>
<p>Every robust cloud architecture starts with a strong networking base, and in AWS, that means the Virtual Private Cloud (VPC). I began by provisioning a VPC with Terraform, ensuring DNS support and DNS hostnames were enabled. Why? Without these, even basic resolutions — like accessing the load balancer’s endpoint — could fail, leading to frustrating connectivity issues that derail the entire setup.</p>
<p>To build in redundancy, I divided the VPC into four subnets: one public and one private in each of two Availability Zones (AZs). This wasn’t arbitrary; AZs are physically isolated data centers within a region, so spreading resources across them ensures that a failure in one — like a power outage or network disruption — doesn’t take down the whole application. If AZ1 goes dark, AZ2 picks up the slack seamlessly.</p>
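<p>The address arithmetic behind a layout like this is easy to sanity-check before writing any HCL. A quick sketch with Python’s <code>ipaddress</code> module (the CIDR ranges and AZ names are illustrative, not the project’s actual values):</p>
<pre><code class="language-python">import ipaddress

# A /16 VPC carved into four /24 subnets:
# one public and one private per Availability Zone.
vpc = ipaddress.ip_network("10.0.0.0/16")
nets = [str(n) for n in list(vpc.subnets(new_prefix=24))[:4]]

azs = ["us-east-1a", "us-east-1b"]
layout = {
    "public": dict(zip(azs, nets[:2])),    # ALB and NAT gateways
    "private": dict(zip(azs, nets[2:4])),  # app instances, no public IPs
}
# layout["private"]["us-east-1b"] is "10.0.3.0/24"
</code></pre>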
<p>The public subnets housed internet-facing resources: the Application Load Balancer (ALB) for handling incoming traffic and NAT Gateways for outbound connectivity from private resources. Meanwhile, the private subnets were dedicated to the EC2 instances running my web app. These instances were configured without public IP addresses, adhering to the principle of least privilege — exposing only what’s necessary to the outside world.</p>
<p>An Internet Gateway (IGW) was attached to the VPC to facilitate inbound and outbound traffic for public subnets. But the private EC2 instances still needed to reach the internet — for pulling Docker images from public repositories or fetching software updates. Enter NAT Gateways: I deployed one in each public subnet (one per AZ) to provide high-availability outbound routing. This setup routes traffic from private instances through the NAT, masking their origins and keeping them secure.</p>
<p>At this point, the networking layer felt tangible. Unlike my earlier toy projects with flat, single-subnet VPCs, this had deliberate separation of concerns, built-in redundancy, and a clear security posture. It was a foundation that screamed “enterprise-ready,” even for a modest app.</p>
<h3>Putting the Load Balancer in Front</h3>
<p>With the network skeleton in place, it was time to add the traffic director: the Application Load Balancer. Using Terraform, I configured the ALB to listen on port 80 (HTTP) and forward requests to a target group comprising my EC2 instances. The ALB was placed in the public subnets, making it the sole public entry point — users would never directly hit the backend servers.</p>
<p>Health checks were a critical detail here. I set them up on the root path (“/”) with a 200–299 success code threshold. This means the ALB periodically pings each instance; if it doesn’t get a healthy response, it stops sending traffic there. It’s a simple mechanism, but as I’d soon discover, it’s the linchpin for fault tolerance.</p>
<p>This configuration enhanced security by hiding the instances behind the ALB and improved resilience by intelligently routing around problems. Traffic flow: Internet → IGW → ALB → Healthy EC2 instances in private subnets. No direct exposure, no single point of failure — elegant in its simplicity.</p>
<h3>Launch Templates and Auto Scaling</h3>
<p>Behind the ALB, the real magic happens with the compute layer. I created a Launch Template to define the blueprint for EC2 instances: an Amazon Linux 2 AMI, t3.micro instance type (cost-effective for testing), appropriate security groups (allowing inbound from the ALB on port 80), and a user data script.</p>
<p>The user data script was the automation glue. On boot, it installed Docker, pulled my Django app image from Docker Hub, and ran the container with port mapping — publishing the app’s port 8000 on the host’s port 80. This ensured the ALB could communicate with the containerized app without fuss.</p>
<p>Next, the Auto Scaling Group. I configured it to use the Launch Template, spanning both private subnets (and thus both AZs). Minimum capacity: 1 instance; desired: 2; maximum: 4. Scaling policies were tied to CloudWatch metrics, specifically average CPU utilization. Scale out if CPU &gt; 70% for 2 minutes; scale in if &lt; 30% for 5 minutes. Warm-up and cooldown periods prevented rapid oscillations.</p>
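<p>The decision logic itself is small enough to sketch. This toy model is my own simplification (real ASG policies act on CloudWatch alarms that must stay in breach for the whole evaluation window), but it captures the step-and-clamp behaviour:</p>
<pre><code class="language-python">def desired_capacity(current, avg_cpu, out_at=70.0, in_at=30.0,
                     minimum=1, maximum=4):
    """Step the instance count out or in by one, clamped to the ASG bounds."""
    if avg_cpu &gt; out_at:
        return min(current + 1, maximum)  # scale out under load
    if avg_cpu &lt; in_at:
        return max(current - 1, minimum)  # scale in when idle
    return current                        # between thresholds: hold steady
</code></pre>
<p>Note that between the two thresholds the function changes nothing: scaling is metric-driven, not “busy-ness”-driven.</p>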
<p>On paper — or in Terraform code — this looked flawless. But infrastructure as code is only as good as its runtime behavior. Would it actually scale under load? Handle failures gracefully? That was the litmus test.</p>
<h3>Watching Health Checks in Action</h3>
<p>After a <code>terraform apply</code>, the stack came to life. The ALB’s DNS name resolved in my browser, serving the Django app. Success! But digging into the AWS Console revealed a hiccup: one instance in the target group was marked “unhealthy.”</p>
<p>Panic set in briefly — was my config broken? Then, the ASG kicked in: it terminated the unhealthy instance and spun up a replacement. The new one initialized, passed health checks after a few cycles, and joined the pool.</p>
<p>The culprit? Bootstrapping time. The user data script took ~2–3 minutes to install Docker, pull the image (a few hundred MB), and start the container. During this window, health checks failed, triggering the ASG’s replacement logic. It wasn’t a bug; it was the system self-healing.</p>
<p>This moment was revelatory. High availability isn’t about perfection — it’s about detection and recovery. The infrastructure wasn’t brittle; it was resilient, proactively addressing issues before they impacted users.</p>
<h3>Applying Load with Apache JMeter</h3>
<p>Theory validated, now for scalability. I fired up Apache JMeter on my local machine to simulate traffic. Starting with 10 concurrent users ramping to 100, I hammered the ALB endpoint with GET requests.</p>
<p>At first… crickets. No new instances launched. CloudWatch showed CPU peaking at 40% — below my 70% threshold. Lesson learned: Scaling isn’t triggered by “busy-ness” alone; it’s metric-driven. The app was handling the load efficiently, so why add capacity?</p>
<p>To force the issue, I temporarily lowered the threshold to 50% and re-ran the test. CPU spiked, alarms fired, and the ASG bumped desired capacity to 3. I watched in the console as the new instance launched, bootstrapped, registered with the target group, passed health checks, and started receiving traffic. The ALB distributed requests evenly, CPU stabilized, and the system hummed.</p>
<p>It was exhilarating — seeing abstract concepts like “elasticity” manifest in logs and metrics.</p>
<h3>Observing Scale-In and Connection Draining</h3>
<p>Post-test, I halted JMeter. CPU plummeted, triggering the scale-in policy. But termination wasn’t abrupt: the ASG initiated connection draining, giving active sessions up to 300 seconds to complete before deregistering the instance from the ALB.</p>
<p>No dropped connections, no user disruption — just a graceful contraction. This underscored controlled scaling: Not just growing, but shrinking efficiently to optimize costs without chaos.</p>
<h3>Understanding What “Highly Available” Really Means</h3>
<p>Pre-project, I equated high availability with “multiple servers.” Now, I see it as a symphony of interdependent components:</p>
<ul>
<li><p>The ALB distributes load and enforces health, acting as the vigilant gatekeeper.</p>
</li>
<li><p>The ASG monitors and adjusts capacity, replacing failures automatically.</p>
</li>
<li><p>CloudWatch metrics and alarms provide the intelligence for proactive decisions.</p>
</li>
<li><p>NAT Gateways ensure backend connectivity without compromising security.</p>
</li>
<li><p>Private subnets enforce isolation, minimizing attack surfaces.</p>
</li>
</ul>
<p>It’s about clear roles and failover paths. AZ failure? Traffic reroutes. Instance crash? Auto-replacement. Load surge? Scale out. Downtime? Minimal, if any.</p>
<p>The beauty is in the predictability — no heroics required; the system adapts quietly.</p>
<h3>Closing the Loop</h3>
<p>With testing complete, a <code>terraform destroy</code> wiped the slate clean, reclaiming resources and closing the experiment. This project transformed my perspective: cloud architecture isn’t rote memorization of services — it’s designing for behavior under stress.</p>
<p>For aspiring DevOps engineers or cloud architects, I recommend this: Build it, break it, observe it. Documentation pales against the clarity of real-time metrics and auto-recovery in action. High availability isn’t a checkbox; it’s a lived experience that builds intuition for crafting truly resilient systems.</p>
]]></content:encoded></item><item><title><![CDATA[Debugging Is Mostly Thinking, Not Commands]]></title><description><![CDATA[I remember staring at my screen late one night, trying to get a simple Docker container running on my local machine. It was part of this beginner DevOps project I’d set up—a basic web app that should ]]></description><link>https://blog.adityarajsingh.in/debugging-is-mostly-thinking-not-commands</link><guid isPermaLink="true">https://blog.adityarajsingh.in/debugging-is-mostly-thinking-not-commands</guid><category><![CDATA[Devops]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[debugging]]></category><category><![CDATA[Commands]]></category><dc:creator><![CDATA[Aditya Raj Singh]]></dc:creator><pubDate>Sun, 22 Feb 2026 09:33:45 GMT</pubDate><enclosure url="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/694ec4d1d13bb1b59dd0f107/998f8135-6a48-4579-9e98-3aa0b6b1302f.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I remember staring at my screen late one night, trying to get a simple Docker container running on my local machine. It was part of this beginner DevOps project I’d set up—a basic web app that should have deployed without fuss. But it wouldn’t start. The logs spat out errors about ports and permissions, and I kept typing commands like docker logs and docker ps, hoping something would click.</p>
<p>At first, it felt like a puzzle I could solve with enough trial and error. I’d Google the error messages, copy-paste fixes from Stack Overflow, and rerun everything. But nothing stuck. The container would crash again, and I’d feel that familiar frustration building. It wasn’t just the code; it was me, wondering if I was cut out for this Cloud stuff.</p>
<p>This happens a lot when you’re starting out in DevOps. You think the tools are the barrier—learning Kubernetes or Terraform seems daunting—but really, it’s the moments when things break that test you. And that’s where debugging comes in, or at least where I thought it did.</p>
<h3>My Initial Belief About Debugging</h3>
<p>I used to see debugging as a toolkit. Grab the right command, point it at the problem, and watch it resolve. In my mind, it was like being a mechanic: diagnose with <code>kubectl describe</code> or <code>aws ec2 describe-instances</code>, then fix with a tweak here or there. It made sense because that’s how tutorials present it. They say, “Run this to check logs,” or “Use that flag to inspect resources.”</p>
<p>I assumed that with enough practice, I’d memorize these commands and become efficient. Why wouldn’t I? In programming bootcamps, debugging is often reduced to breakpoints and print statements. Extending that to Cloud and DevOps felt logical—scale it up to infrastructure, but the process stays the same.</p>
<p>It felt reasonable because early successes reinforced it. When a script failed due to a missing environment variable, <code>echo $VAR</code> revealed the issue quickly. I’d pat myself on the back, thinking, <em>See? Commands are key.</em></p>
<h3>Where It Started Breaking</h3>
<p>But then came this one deployment hiccup that wouldn’t budge. I was experimenting with a CI/CD pipeline on GitHub Actions, pushing code to an AWS ECR repository, then pulling it into an EC2 instance. Simple in theory. The build passed, the push succeeded, but the container on EC2 kept failing with a vague “permission denied” error.</p>
<p>I hammered away with commands: <code>aws ecr describe-repositories</code>, <code>docker pull</code>, <code>sudo chmod</code> on files. Nothing. The error persisted, mocking me. I even restarted the instance, thinking maybe it was a transient glitch. Hours slipped by, and I got confused—why weren’t the tools working?</p>
<p>The confusion deepened when I realized I was looping. I’d try a fix, fail, search for similar issues, apply another command, and repeat. It felt like chasing shadows. That’s when doubt crept in: <em>Am I missing some advanced flag? Or is this just how DevOps is—endless firefighting?</em></p>
<h3>Slowing Down and Thinking</h3>
<p>Eventually, I stopped typing. I closed my terminal and just sat there, staring at the error message: “Error response from daemon: unauthorized: authentication required.” It was an auth issue, but I’d already checked my AWS credentials with <code>aws configure list</code>. They looked fine.</p>
<p>This is where I got stuck, but also where I started to shift. Instead of more commands, I asked myself basic questions. What assumptions was I making? I assumed the credentials were propagating correctly from my local setup to the EC2 instance. But why? Because I’d set them up once and they worked before.</p>
<p>I jotted down what I knew:</p>
<ul>
<li><p>Local pull worked fine.</p>
</li>
<li><p>ECR policy allowed public pulls, but wait, was it public? No, I’d set it private.</p>
</li>
<li><p>Instance role—did it have the right IAM permissions?</p>
</li>
</ul>
<p>Step by step, I reasoned through the flow. The pipeline pushes to ECR using my personal access keys, but the EC2 pulls using its instance profile. Were they synced? I hadn’t explicitly attached the ECR read policy to the instance role.</p>
<p>It wasn’t a command that revealed this; it was tracing the mental model of how AWS auth works. I recalled from a video that instance roles are separate from user credentials. So, I visualized the chain: code commit → GitHub Actions (with secrets) → ECR push → EC2 pull (via role).</p>
<h3>What the System Was Actually Doing</h3>
<p>The system wasn’t broken; it was following rules I’d overlooked. ECR repositories default to private, requiring specific IAM policies for access. My instance role had basic EC2 permissions but nothing for ECR. So, when Docker tried to pull, it hit an auth wall—not because of bad commands, but because the identity wasn’t authorized.</p>
<p>In plain terms, it’s like trying to enter a locked building with the wrong key card. You can jiggle the handle (run commands) all day, but without questioning if you have access at all, you’re stuck outside. The error message was a symptom, not the cause. The real behavior was AWS enforcing least-privilege security, which is great in theory but invisible until you think about identities separately from actions.</p>
<p>I avoided diving into jargon here, but terms like “IAM role” popped up naturally in my head because they’re how cloud systems think about trust. It’s not magic; it’s layered checks.</p>
<h3>The Turning Point</h3>
<p>What finally worked was simple, but it came from that pause. I logged into the AWS console—not to run commands, but to visually inspect the instance role. Sure enough, no ECR policy attached. I added <code>AmazonEC2ContainerRegistryReadOnly</code>, saved, and tried the pull again. It worked.</p>
<p>Why did it work? Because I addressed the assumption, not the surface error. The commands were secondary; I could have used the CLI to attach the policy (<code>aws iam attach-role-policy</code>), but the insight was in realizing the mismatch between push and pull identities. That click happened when I drew a quick diagram on paper—arrows from GitHub to ECR to EC2, with question marks on the auth steps.</p>
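<p>For the record, the CLI version of that fix looks roughly like this — the role name is a placeholder for whatever role sits behind your instance profile:</p>
<pre><code class="lang-bash"># Attach the AWS-managed read-only ECR policy to the EC2 instance role.
# "my-ec2-role" is illustrative — substitute your actual instance role name.
aws iam attach-role-policy \
  --role-name my-ec2-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
</code></pre>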
<p>It wasn’t triumphant; it was relieving. And partial—I still wondered about best practices, like using temporary credentials. But it moved me forward.</p>
<h3>What This Taught Me</h3>
<ul>
<li><p><em>Assumptions hide in the setup</em>: I learned that early configurations, like roles and policies, often cause downstream failures. It’s not about the debug command; it’s questioning if the foundation is solid.</p>
</li>
<li><p><em>Thinking builds a map</em>: By slowing down, I started seeing DevOps as interconnected systems, not isolated tools. Each piece affects others, and debugging is redrawing that map when it doesn’t match reality.</p>
</li>
<li><p><em>Confusion is a signal</em>: Getting stuck isn’t failure; it’s a prompt to step back. Rushing with commands often deepens the hole, while reflection uncovers the why.</p>
</li>
<li><p><em>Simplicity over complexity</em>: The fix was basic, but it required peeling away my rushed mindset. This makes me approach new setups with more caution now.</p>
</li>
</ul>
<p>In the end, debugging in Cloud and DevOps isn’t about mastering a list of commands. It’s about cultivating patience to think through the invisible layers. I’m still a beginner, making plenty of mistakes, but these moments remind me that understanding grows from reflection, not speed. Next time something breaks, I’ll try to remember: pause first, assume less. What hidden assumption might be tripping you up right now?</p>
]]></content:encoded></item><item><title><![CDATA[Leaving Resources Running Was My First Cloud Mistake]]></title><description><![CDATA[I Didn’t Understand Cloud Costs Until Resources Kept Running
A few months ago, I signed up for an AWS account. The promise of the cloud felt almost magical — servers I could spin up in minutes, no har]]></description><link>https://blog.adityarajsingh.in/leaving-resources-running-was-my-first-cloud-mistake</link><guid isPermaLink="true">https://blog.adityarajsingh.in/leaving-resources-running-was-my-first-cloud-mistake</guid><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Infrastructure as code]]></category><dc:creator><![CDATA[Aditya Raj Singh]]></dc:creator><pubDate>Sun, 22 Feb 2026 09:18:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124189383/f1dd23ba-ccd0-42b7-b00b-0f2dce1766f2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I Didn’t Understand Cloud Costs Until Resources Kept Running</p>
<p>A few months ago, I signed up for an AWS account. The promise of the cloud felt almost magical — servers I could spin up in minutes, no hardware to buy, just pay for what I use.</p>
<p>I launched my first EC2 instance, a tiny t2.micro, to practice deploying a simple Node.js app. I spent a Saturday afternoon on it: installed Nginx, cloned a repo, got everything running. When I was satisfied, I closed the SSH window and the AWS console tab. Done, I thought. Back to regular life.</p>
<p>Two weeks later, I opened an email from AWS. Subject line: “Your bill is ready.” It wasn’t a huge amount — maybe $12 — but it was money I hadn’t planned to spend. I stared at the breakdown, confused. The charges were almost entirely for “EC2: Running Hours.”</p>
<p>I hadn’t touched that instance in days. Why was it still costing me?</p>
<h3>What I Believed About the Cloud</h3>
<p>I had pictured “pay as you go” like electricity in an empty house. If no one’s home, the meter barely moves. I assumed that when I wasn’t actively using the instance — no SSH, no traffic, no processes I was running — the cost would drop to nearly zero.</p>
<p>The free tier reinforced that feeling. 750 hours a month sounded like more than enough for learning. I figured a single small instance left idle wouldn’t matter.</p>
<p>I also thought closing the browser or the terminal was enough to “pause” things. It felt intuitive — like logging out of a website.</p>
<h3>The First Crack in That Picture</h3>
<p>Logging back into the EC2 console, I saw the instance status: <strong>running</strong>. Green checks everywhere.</p>
<p>I clicked around. No obvious “power off” button that I remembered using. I started second-guessing myself. Had I forgotten to shut it down? Was there a setting I missed?</p>
<p>The billing dashboard showed a steady line of hourly charges, day after day, even though I hadn’t connected once.</p>
<p>That’s when the unease settled in. The system wasn’t broken. I had misunderstood something basic.</p>
<h3>Trying to Understand What Was Actually Happening</h3>
<p>Instead of panic-deleting everything, I slowed down.</p>
<p>I read the EC2 documentation carefully, line by line. I looked up blog posts from other beginners. I opened Cost Explorer and filtered to just that instance.</p>
<p>What I found was simple but jarring:</p>
<ul>
<li><p>An EC2 instance in “running” state is a virtual machine that is fully powered on, 24 hours a day, until I explicitly stop or terminate it.</p>
</li>
<li><p>Being idle doesn’t reduce the compute cost. The clock keeps ticking because the provider is reserving CPU, memory, and network for me.</p>
</li>
<li><p>The free tier gives 750 hours per month, but those hours are consumed whether I’m using the instance or not. A 31-day month has only 744 hours, so a single always-on instance consumes nearly the entire allowance.</p>
</li>
</ul>
<p>In other words, I had rented a server and left the lights on.</p>
<p>I ran a small experiment. I selected the instance and chose <strong>Stop</strong>. Within minutes, the compute charges stopped appearing. Storage (the EBS volume) still had a tiny cost, but the big hourly line went flat.</p>
<p>Then, feeling brave, I <strong>Terminated</strong> it. Everything disappeared. The cost line went to zero.</p>
<p>I launched a fresh instance, did a quick task, and terminated it immediately after. No surprise charges.</p>
<h3>The Moment It Clicked</h3>
<p>The shift wasn’t in finding a clever trick. It was realizing the cloud isn’t a magic utility that senses when I’m done. It’s infrastructure I control.</p>
<blockquote>
<p>The responsibility for turning things off sits entirely with me.</p>
</blockquote>
<p>No timeout. No gentle reminder. Just whatever state I last left it in.</p>
<p>That small realization changed how I see every resource now — instances, databases, storage buckets. They all have their own metering rules, and silence doesn’t equal free.</p>
<h3>What This Experience Is Teaching Me</h3>
<ul>
<li><p><strong>Inactivity isn’t the same as off.</strong> Resources keep costing until I explicitly stop or delete them.</p>
</li>
<li><p><strong>Assumptions are expensive when untested.</strong> I filled a knowledge gap with intuition, and intuition was wrong.</p>
</li>
<li><p><strong>Early visibility matters.</strong> I now check the billing dashboard weekly, even when I think I’m being careful.</p>
</li>
<li><p><strong>Small habits prevent big surprises.</strong> I started adding calendar reminders or simple scripts to terminate test instances after a few hours.</p>
</li>
</ul>
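<p>The “simple scripts” are nothing fancy — something along these lines, with the instance ID as a placeholder:</p>
<pre><code class="lang-bash"># Stop a test instance: compute billing stops, though the attached
# EBS volume still incurs a small storage cost.
aws ec2 stop-instances --instance-ids i-0abcd1234example

# Or terminate it entirely when the experiment is done.
# This deletes the instance for good.
aws ec2 terminate-instances --instance-ids i-0abcd1234example
</code></pre>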
<p>I still don’t know everything about cloud costs — far from it. But this mistake forced me to confront a quiet assumption I didn’t even know I had.</p>
<p>Now, every time I launch something, I pause and ask: <em>How long do I actually need this to stay on?</em></p>
<p>That question feels small. But it’s already saving me money — and, more importantly, it’s making me think more clearly about the systems I’m building on.</p>
]]></content:encoded></item><item><title><![CDATA[Day 9 of #30DaysOfTerraform — Understanding Terraform Lifecycle and Safe Changes]]></title><description><![CDATA[#30daysofawsterraform
By the time I reached Day 9, Terraform had already started changing the way I think about infrastructure. Day 8 was a big moment — meta-arguments like count and for_each showed me how Terraform can generate infrastructure dynami...]]></description><link>https://blog.adityarajsingh.in/day-9-of-30daysofterraform-understanding-terraform-lifecycle-and-safe-changes-caf698733969</link><guid isPermaLink="true">https://blog.adityarajsingh.in/day-9-of-30daysofterraform-understanding-terraform-lifecycle-and-safe-changes-caf698733969</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[#IaC]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Aditya Raj Singh]]></dc:creator><pubDate>Tue, 17 Feb 2026 15:29:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124207185/5baff64b-558c-4fe5-a8f2-01b0340d4d9d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>#30daysofawsterraform</p>
<p>By the time I reached Day 9, Terraform had already started changing the way I think about infrastructure. Day 8 was a big moment — meta-arguments like <code>count</code> and <code>for_each</code> showed me how Terraform can generate infrastructure dynamically instead of forcing me to copy and paste blocks over and over again. It felt powerful, almost like programming infrastructure instead of just describing it.</p>
<p>But Day 9 slowed everything down.</p>
<p>It wasn’t about creating more resources or scaling faster. It was about understanding what <em>really happens</em> when infrastructure changes — and why Terraform sometimes behaves in ways that feel surprising if you’re not paying attention.</p>
<h3 id="heading-when-a-small-change-is-not-a-small-change">When a Small Change Is Not a Small Change</h3>
<p>Up until this point, I had been making changes fairly casually. Modify a variable, tweak a resource, run <code>terraform plan</code>, apply, and move on. Most changes felt safe because Terraform handled them smoothly.</p>
<p>Then I hit a moment where a minor-looking update caused Terraform to plan a <strong>destroy and recreate</strong> action. Nothing was technically wrong, but it forced me to stop and think. If this were a production environment, that single change could have meant downtime or data loss.</p>
<p>That’s when it really clicked for me: Terraform doesn’t think in terms of “small” or “big” changes. It thinks in terms of <strong>state transitions</strong>. If a resource cannot be updated in place, Terraform replaces it. And it will do that confidently, unless you tell it otherwise.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124205170/41f2e76b-4363-42a6-98f2-b1a726efcedf.png" alt /></p>
<p>How Terraform lifecycle rules protect resources, control replacement order, and manage change safely over time.</p>
<h3 id="heading-discovering-the-lifecycle-block">Discovering the <code>lifecycle</code> Block</h3>
<p>Day 9 introduced the <code>lifecycle</code> block, and at first it looked deceptively simple. A few extra lines inside a resource definition didn’t seem like a big deal. But the more I experimented, the more I realized how much control this block gives you over Terraform’s behavior.</p>
<p>The first rule I explored was <code>prevent_destroy</code>. Adding it felt almost like putting a lock on a resource. Terraform didn’t stop managing it, but it refused to delete it without explicit intervention. The moment Terraform failed a plan because destruction was blocked, I understood its value.</p>
<p>A basic example:</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_s3_bucket"</span> <span class="hljs-string">"example"</span> {
  bucket = <span class="hljs-string">"my-app-bucket"</span>

  lifecycle {
    prevent_destroy = <span class="hljs-literal">true</span>
  }
}
</code></pre>
<p>Adding this immediately changed Terraform’s behavior. When I tried an operation that would delete the bucket, Terraform failed the plan instead of proceeding.</p>
<p>That failure wasn’t an error. It was Terraform protecting me from myself.</p>
<h3 id="heading-thinking-about-what-should-never-be-deleted">Thinking About What Should Never Be Deleted</h3>
<p>Once I saw <code>prevent_destroy</code> in action, my mindset shifted. I started thinking less about how to build resources and more about which ones <strong>should never disappear automatically</strong>.</p>
<p>S3 buckets with data. Core networking components. Anything that holds state or traffic. These aren’t resources you want disappearing because of a refactor or an accidental variable change.</p>
<p>Terraform gives you power, but Day 9 made it clear that responsibility comes with it. Guardrails aren’t optional — they’re part of good infrastructure design.</p>
<h3 id="heading-learning-how-terraform-replaces-resources">Learning How Terraform Replaces Resources</h3>
<p>Another important realization came from <code>create_before_destroy</code>. Before this, I hadn’t paid much attention to the order in which Terraform replaced resources. I assumed updates were mostly in-place.</p>
<p>Seeing Terraform destroy first and then create made me uncomfortable — and that discomfort was useful. It forced me to think about availability and continuity.</p>
<p>The <code>create_before_destroy</code> rule flips that order:</p>
<pre><code class="lang-bash">lifecycle {
  create_before_destroy = <span class="hljs-literal">true</span>
}
</code></pre>
<p>With <code>create_before_destroy</code>, Terraform ensures the new resource exists before removing the old one. The replacement proceeds in this order:</p>
<ol>
<li><p>Creates the new resource</p>
</li>
<li><p>Switches dependencies</p>
</li>
<li><p>Destroys the old one</p>
</li>
</ol>
<p>This pattern is essential for high-availability setups and rolling replacements.</p>
<p>This wasn’t just a configuration tweak; it was a way of designing for reliability instead of hoping for it.</p>
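<p>In a full resource, the rule usually sits alongside a naming strategy that lets the old and new copies coexist briefly — this sketch uses a security group purely as an illustration:</p>
<pre><code class="lang-bash">resource "aws_security_group" "web" {
  # name_prefix instead of a fixed name, so the replacement can exist
  # alongside the original without a naming conflict during the swap
  name_prefix = "web-"
  vpc_id      = aws_vpc.main.id

  lifecycle {
    create_before_destroy = true
  }
}
</code></pre>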
<h3 id="heading-accepting-that-not-all-drift-is-a-problem">Accepting That Not All Drift Is a Problem</h3>
<p>One of the quieter lessons of Day 9 came from <code>ignore_changes</code>. I noticed Terraform repeatedly trying to “fix” things that weren’t actually broken — tags modified manually, attributes adjusted by AWS itself.</p>
<p>At first, I thought Terraform was being strict. Later, I realized it was being honest. Terraform doesn’t know which changes matter unless you tell it.</p>
<p>That’s where <code>ignore_changes</code> helps:</p>
<pre><code class="lang-bash">lifecycle {
  ignore_changes = [tags]
}
</code></pre>
<p>Using <code>ignore_changes</code> felt like acknowledging reality: not all infrastructure is perfectly controlled, and not all differences require correction. This was an important step toward using Terraform pragmatically instead of dogmatically.</p>
<h3 id="heading-how-day-9-connected-everything-before-it">How Day 9 Connected Everything Before It</h3>
<p>Looking back, Day 9 made sense only because of the days before it.</p>
<p>Day 6 taught me to structure infrastructure using modules. Day 7 showed how environments should be isolated using workspaces. Day 8 explained how Terraform creates and orders resources dynamically. Day 9 tied all of that together by answering a harder question: <strong>what happens when those resources change over time?</strong></p>
<p>Terraform wasn’t just building infrastructure anymore. It was managing its lifecycle.</p>
<h3 id="heading-what-i-actually-learned">What I Actually Learned</h3>
<p>Day 9 didn’t teach me new commands as much as it taught me caution. It taught me to slow down and read <code>terraform plan</code> carefully. It taught me that Terraform will do exactly what you ask — even if that means deleting something important.</p>
<p>Lifecycle rules aren’t shortcuts or hacks. They are deliberate tools to make change safer, clearer, and more intentional.</p>
<p>This was the day Terraform stopped feeling like a convenience tool and started feeling like something that deserves respect.</p>
<h3 id="heading-takeaway">Takeaway</h3>
<p>Day 9 was about learning to control change instead of reacting to it. Terraform gives you the power to create, destroy, and replace infrastructure, but lifecycle rules help you decide <strong>when and how</strong> those actions should happen.</p>
<p>At this point in the journey, Terraform feels less like a script that builds infrastructure and more like a system that helps manage it responsibly over time.</p>
]]></content:encoded></item><item><title><![CDATA[Day 8 of #30DaysOfTerraform — Terraform Meta-Arguments: Writing Dynamic Infrastructure]]></title><description><![CDATA[#30daysofawsterraform
Day 8 was one of those days where you suddenly see Terraform not just as a way to describe infrastructure, but as a tool that can generate it — based on patterns and inputs instead of copy-paste blocks.
Up until now, each AWS re...]]></description><link>https://blog.adityarajsingh.in/day-8-of-30daysofterraform-terraform-meta-arguments-writing-dynamic-infrastructure-fd716da8dda8</link><guid isPermaLink="true">https://blog.adityarajsingh.in/day-8-of-30daysofterraform-terraform-meta-arguments-writing-dynamic-infrastructure-fd716da8dda8</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[#30DaysOfAWSTerraform]]></category><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[#IaC]]></category><category><![CDATA[Learning Journey]]></category><dc:creator><![CDATA[Aditya Raj Singh]]></dc:creator><pubDate>Fri, 13 Feb 2026 13:59:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124212608/59b6518f-665a-4a09-b009-01c2126b04df.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>#30daysofawsterraform</p>
<p>Day 8 was one of those days where you suddenly see Terraform not just as a way to describe infrastructure, but as a tool that can <em>generate</em> it — based on patterns and inputs instead of copy-paste blocks.</p>
<p>Up until now, each AWS resource we created lived inside its own block. Want two S3 buckets? You wrote two blocks. Need three EC2 instances? That meant three resource definitions. That works at first, but as soon as you start thinking about <em>scale</em>, you see its limitations. On Day 8, I learned how Terraform’s <strong>meta-arguments</strong> — especially <code>count</code>, <code>for_each</code>, and <code>depends_on</code> — change that game entirely.</p>
<h3 id="heading-what-meta-arguments-really-do">What Meta-Arguments Really Do</h3>
<p>Meta-arguments are special arguments built into Terraform itself — not part of a provider like AWS. They let you control <em>how many</em> resources are created and <em>in what order</em>, without repeating block definitions.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124210672/c7e14b08-cdfd-43c8-a406-34cb9a220d34.png" alt /></p>
<p>How Terraform uses meta-arguments to dynamically create and order infrastructure.</p>
<p>Before Day 8, I might write:</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_s3_bucket"</span> <span class="hljs-string">"bucket1"</span> {
  bucket = <span class="hljs-string">"my-app-bucket-1"</span>
  tags   = var.tags
}

resource <span class="hljs-string">"aws_s3_bucket"</span> <span class="hljs-string">"bucket2"</span> {
  bucket = <span class="hljs-string">"my-app-bucket-2"</span>
  tags   = var.tags
}
</code></pre>
<p>That’s simple and clear — until you need ten buckets. That’s when repetition gets ugly, error-prone, and hard to maintain.</p>
<p>Meta-arguments give you a better pattern.</p>
<h3 id="heading-leveraging-count-for-repetition">Leveraging <code>count</code> for Repetition</h3>
<p>The <code>count</code> meta-argument lets you turn <em>one</em> resource block into <em>many</em>:</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_s3_bucket"</span> <span class="hljs-string">"buckets"</span> {
  count  = length(var.bucket_names)
  bucket = var.bucket_names[count.index]
  tags   = var.tags
}
</code></pre>
<p>Here’s what’s nice about this:</p>
<ul>
<li><p>One block generates multiple resources</p>
</li>
<li><p><code>count.index</code> gives each instance its place</p>
</li>
<li><p>You can control names through variables</p>
</li>
</ul>
<p>This makes the infrastructure <strong>elastic without duplication</strong>.</p>
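<p>The variable behind this is just a list of strings — for example:</p>
<pre><code class="lang-bash"># Illustrative defaults; in practice these come from tfvars or CI inputs.
variable "bucket_names" {
  type    = list(string)
  default = ["my-app-bucket-1", "my-app-bucket-2", "my-app-bucket-3"]
}
</code></pre>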
<h3 id="heading-when-foreach-makes-more-sense">When <code>for_each</code> Makes More Sense</h3>
<p><code>for_each</code> is similar to <code>count</code>, but more stable when your list of values doesn’t map cleanly to numerical indexes.</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_s3_bucket"</span> <span class="hljs-string">"buckets"</span> {
  for_each = toset(var.bucket_names)
  bucket   = each.value
  tags     = var.tags
}
</code></pre>
<p>This pattern is especially useful when:</p>
<ul>
<li><p>You need stable resource identity over time</p>
</li>
<li><p>The order may change but you don’t want Terraform to recreate everything</p>
</li>
<li><p>You derive the list from maps or sets</p>
</li>
</ul>
<p>That predictability matters in production.</p>
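<p><code>for_each</code> also accepts a map, which is handy when each instance needs its own setting — the keys and names here are purely illustrative:</p>
<pre><code class="lang-bash">resource "aws_s3_bucket" "buckets" {
  for_each = {
    logs   = "my-app-logs"
    assets = "my-app-assets"
  }
  # each.key ("logs"/"assets") becomes the stable address in state;
  # each.value supplies the per-instance bucket name
  bucket = each.value
  tags   = var.tags
}
</code></pre>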
<h3 id="heading-controlling-dependencies-with-dependson">Controlling Dependencies with <code>depends_on</code></h3>
<p>Terraform usually figures out dependencies automatically — based on references in the code. But there are real scenarios where the order matters <em>even when there’s no direct reference</em>.</p>
<p>For example:</p>
<ul>
<li><p>An S3 bucket must exist before a policy can be attached to it</p>
</li>
<li><p>A VPC must be fully created before subnets</p>
</li>
<li><p>Custom IAM roles need to be in place before resources depend on them</p>
</li>
</ul>
<p>In these cases, <code>depends_on</code> gives you explicit control:</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_s3_bucket"</span> <span class="hljs-string">"example"</span> {
  bucket = <span class="hljs-string">"my-tf-bucket"</span>
  tags   = var.tags

  depends_on = [
    aws_vpc.main
  ]
}
</code></pre>
<p>This forces Terraform to create the VPC before the bucket — even though nothing in the bucket’s configuration references the VPC directly.</p>
<h3 id="heading-what-this-changes-in-your-terraform-workflow">What This Changes in Your Terraform Workflow</h3>
<p>By the end of Day 8’s exercise, a few clear shifts happened in my thinking:</p>
<ul>
<li><p>I no longer write the same resource block 5–10 times accidentally</p>
</li>
<li><p>I treat Terraform like a <strong>mini programming language for infrastructure</strong></p>
</li>
<li><p>Dependencies become safer to manage, not accidents waiting to happen</p>
</li>
<li><p>I see patterns in infrastructure that can be abstracted and repeated consistently</p>
</li>
</ul>
<p>This is where Terraform starts feeling <em>smart</em>, not just convenient.</p>
<h3 id="heading-a-real-world-example-beyond-s3">A Real-World Example (Beyond S3)</h3>
<p>Imagine you’re provisioning multiple security groups, or spinning up clusters with a variable number of instances. With <code>count</code> or <code>for_each</code>, you can scale your HCL just like you scale resources:</p>
<ul>
<li><p>Create n subnets dynamically</p>
</li>
<li><p>Generate multiple IAM users from a list</p>
</li>
<li><p>Provision network ACLs for each AZ</p>
</li>
<li><p>Build tags or naming conventions that adapt by environment or workspace</p>
</li>
</ul>
<p>These patterns are the difference between a one-off script and infrastructure that grows with requirement changes.</p>
<h3 id="heading-takeaway">Takeaway</h3>
<p>Day 8 wasn’t just a lesson on Terraform syntax — it was about <strong>thinking differently</strong>:</p>
<ol>
<li><p><code>count</code> and <code>for_each</code> replace repetition with pattern-based infrastructure</p>
</li>
<li><p><code>depends_on</code> gives you safety when implicit linking isn’t obvious</p>
</li>
<li><p>Terraform starts to feel like a <strong>declarative automation language</strong>, not just a config tool</p>
</li>
</ol>
<p>Once you stop copying code and start writing patterns with meta-arguments, your infrastructure becomes cleaner, safer, and closer to real engineering.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/ars0a/30-days-of-aws-terraform.git">https://github.com/ars0a/30-days-of-aws-terraform.git</a></div>
]]></content:encoded></item><item><title><![CDATA[Day 7 of #30DaysOfTerraform: Managing Multiple Environments with Terraform Workspaces]]></title><description><![CDATA[#30daysofawsterraform
By Day 7, Terraform usage starts to resemble real production workflows. You now have structured code, reusable modules, variables, and clean repositories. The next problem becomes obvious: how do you manage multiple environments...]]></description><link>https://blog.adityarajsingh.in/day-7-of-30daysofterraform-managing-multiple-environments-with-terraform-workspaces-d43efd6bfb3c</link><guid isPermaLink="true">https://blog.adityarajsingh.in/day-7-of-30daysofterraform-managing-multiple-environments-with-terraform-workspaces-d43efd6bfb3c</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[development]]></category><category><![CDATA[production]]></category><category><![CDATA[#30DaysOfAWSTerraform]]></category><dc:creator><![CDATA[Aditya Raj Singh]]></dc:creator><pubDate>Thu, 12 Feb 2026 13:49:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124217773/2ad6718f-ed9c-4cd1-a45d-d48b5b55a5dd.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>#30daysofawsterraform</p>
<p>By Day 7, Terraform usage starts to resemble real production workflows. You now have structured code, reusable modules, variables, and clean repositories. The next problem becomes obvious: <strong>how do you manage multiple environments without duplicating everything?</strong></p>
<p>Development, staging, and production environments almost always need similar infrastructure with slightly different values. Day 7 focuses on <strong>Terraform workspaces</strong>, a feature designed to solve this exact problem by isolating state while reusing the same configuration.</p>
<h3 id="heading-the-environment-problem-in-infrastructure-as-code">The Environment Problem in Infrastructure as Code</h3>
<p>Without a clear strategy, teams often handle environments by copying directories:</p>
<pre><code class="lang-bash">dev/
staging/
prod/
</code></pre>
<p>Each folder contains nearly identical Terraform code with small changes. This approach works initially, but it introduces serious problems:</p>
<ul>
<li><p>Code duplication</p>
</li>
<li><p>Drift between environments</p>
</li>
<li><p>Risky manual edits</p>
</li>
<li><p>Difficult automation</p>
</li>
</ul>
<p>Terraform workspaces offer a cleaner alternative by separating <strong>state</strong>, not code.</p>
<h3 id="heading-what-terraform-workspaces-actually-are">What Terraform Workspaces Actually Are</h3>
<p>A Terraform workspace is <strong>a separate state associated with the same configuration</strong>.<br />The code stays the same, but Terraform tracks infrastructure independently for each workspace.</p>
<p>Conceptually:</p>
<ul>
<li><p>Same Terraform files</p>
</li>
<li><p>Different state files</p>
</li>
<li><p>Isolated infrastructure per environment</p>
</li>
</ul>
<p>You can list workspaces, create a new one, and switch between them:</p>
<pre><code class="lang-bash">terraform workspace list
terraform workspace new dev
terraform workspace select prod
</code></pre>
<p>Each workspace maintains its own state, which means changes in one environment never affect another.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124215610/b46acdac-7c4e-4247-885e-a043acaa8478.png" alt /></p>
<p>Terraform Workspaces and Environment Isolation</p>
<h3 id="heading-how-terraform-uses-workspaces-internally">How Terraform Uses Workspaces Internally</h3>
<p>Terraform automatically exposes the active workspace name via:</p>
<p><code>terraform.workspace</code></p>
<p>This value becomes incredibly useful when combined with variables, locals, and naming logic.</p>
<p>For example:</p>
<pre><code class="lang-bash">locals {
  environment = terraform.workspace
}
</code></pre>
<p>Now your infrastructure becomes environment-aware without extra configuration.</p>
<h3 id="heading-using-workspaces-for-resource-customization">Using Workspaces for Resource Customization</h3>
<p>Once the workspace name is available, it can influence resource behavior.</p>
<p>Example:</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_s3_bucket"</span> <span class="hljs-string">"example"</span> {
  bucket = <span class="hljs-string">"my-app-<span class="hljs-variable">${terraform.workspace}</span>-bucket"</span>

  tags = {
    Environment = terraform.workspace
  }
}
</code></pre>
<p>With this setup:</p>
<ul>
<li><p><code>dev</code> workspace creates <code>my-app-dev-bucket</code></p>
</li>
<li><p><code>prod</code> workspace creates <code>my-app-prod-bucket</code></p>
</li>
</ul>
<p>Same code. Different infrastructure. Clean separation.</p>
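<p>The same idea extends beyond naming — a map keyed by workspace can vary sizing too. The values here are just an example:</p>
<pre><code class="lang-bash">locals {
  instance_types = {
    dev  = "t2.micro"
    prod = "t3.large"
  }

  # lookup() with a default covers any workspace not listed above
  instance_type = lookup(local.instance_types, terraform.workspace, "t2.micro")
}
</code></pre>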
<h3 id="heading-workspaces-and-remote-state">Workspaces and Remote State</h3>
<p>When combined with a remote backend like S3, workspaces become even more powerful. Terraform automatically namespaces state files by workspace:</p>
<pre><code class="lang-bash">s3://terraform-state-bucket/env:/dev/project-name/terraform.tfstate
s3://terraform-state-bucket/env:/prod/project-name/terraform.tfstate
</code></pre>
<p>This ensures:</p>
<ul>
<li><p>No state collisions</p>
</li>
<li><p>Safe parallel usage</p>
</li>
<li><p>Clean environment isolation</p>
</li>
</ul>
<p>This pattern is commonly used in real-world Terraform deployments, especially when paired with CI/CD pipelines.</p>
<h3 id="heading-when-workspaces-make-sense-and-when-they-dont">When Workspaces Make Sense (and When They Don’t)</h3>
<p>Workspaces are excellent for:</p>
<ul>
<li><p>Environment separation (dev, staging, prod)</p>
</li>
<li><p>Small-to-medium projects</p>
</li>
<li><p>Teams sharing the same codebase</p>
</li>
</ul>
<p>They are not ideal for:</p>
<ul>
<li><p>Completely different architectures per environment</p>
</li>
<li><p>Very large organizations with strict account separation</p>
</li>
<li><p>Scenarios requiring separate Terraform repos</p>
</li>
</ul>
<p>Understanding this trade-off is part of using Terraform responsibly.</p>
<h3 id="heading-the-real-lesson-of-day-7">The Real Lesson of Day 7</h3>
<p>Day 7 isn’t about memorizing workspace commands. It’s about <strong>thinking in environments</strong>.</p>
<p>Instead of writing different code for different environments, you design infrastructure that:</p>
<ul>
<li><p>Adapts based on context</p>
</li>
<li><p>Isolates risk through state separation</p>
</li>
<li><p>Encourages consistency</p>
</li>
</ul>
<p>That mindset is what allows Terraform to scale beyond personal labs into real systems.</p>
<h3 id="heading-takeaway">Takeaway</h3>
<p>Day 7 introduced a critical Terraform capability:</p>
<ol>
<li><p><strong>Workspaces separate state, not code</strong></p>
</li>
<li><p><strong>The same configuration can safely manage multiple environments</strong></p>
</li>
<li><p><strong>Combining workspaces with variables and remote state enables production-grade workflows</strong></p>
</li>
</ol>
<p>With workspaces, Terraform moves from “infrastructure provisioning” to <strong>environment management</strong> — a key milestone in any DevOps journey.</p>
<p><a target="_blank" href="https://github.com/ars0a/30-Days-Of-Terraform.git">https://github.com/ars0a/30-Days-Of-Terraform.git</a></p>
]]></content:encoded></item><item><title><![CDATA[Day 6 of #30DaysOfTerraform: Thinking in Modules, Not Files]]></title><description><![CDATA[#30daysofawsterraform
By Day 6, Terraform starts to demand a different mindset.Up until now, writing everything in a single directory worked fine. Variables improved flexibility, remote state improved reliability, but the structure was still flat. Th...]]></description><link>https://blog.adityarajsingh.in/day-6-of-30daysofterraform-thinking-in-modules-not-files-d926a0e5a4bc</link><guid isPermaLink="true">https://blog.adityarajsingh.in/day-6-of-30daysofterraform-thinking-in-modules-not-files-d926a0e5a4bc</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[automation]]></category><category><![CDATA[#30DaysOfAWSTerraform]]></category><category><![CDATA[Infrastructure as code]]></category><dc:creator><![CDATA[Aditya Raj Singh]]></dc:creator><pubDate>Wed, 11 Feb 2026 13:30:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124222225/37181f40-ee9e-4a59-8d04-c6e0f031d5d1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>#30daysofawsterraform</p>
<p>By Day 6, Terraform starts to demand a different mindset.<br />Up until now, writing everything in a single directory worked fine. Variables improved flexibility, remote state improved reliability, but the structure was still flat. That approach breaks down quickly as infrastructure grows.</p>
<p>Day 6 was about <strong>Terraform modules</strong> — not as a feature, but as a design philosophy. This is the day where Terraform stops feeling like configuration files and starts behaving like a real infrastructure framework.</p>
<h3 id="heading-why-modules-exist-at-all">Why Modules Exist at All</h3>
<p>Without modules, Terraform encourages repetition. You define the same VPC logic, security groups, or tagging patterns again and again across environments. That repetition creates drift, inconsistency, and eventually fear of change.</p>
<p>Modules solve this by introducing <strong>encapsulation</strong>. They allow you to define infrastructure once and reuse it everywhere with different inputs. Instead of copying code, you compose infrastructure.</p>
<p>This mirrors good software design:</p>
<ul>
<li><p>Functions instead of duplicated logic</p>
</li>
<li><p>Interfaces instead of hardcoded values</p>
</li>
<li><p>Inputs and outputs instead of hidden assumptions</p>
</li>
</ul>
<h3 id="heading-what-a-terraform-module-really-is">What a Terraform Module Really Is</h3>
<p>At its core, a module is just a directory with Terraform files. There’s no special syntax, no magic wrapper. Terraform treats any directory with <code>.tf</code> files as a module.</p>
<p>A typical module structure looks like this:</p>
<pre><code class="lang-bash">Day-6/
├── main.tf
├── variables.tf
├── locals.tf
├── outputs.tf
├── providers.tf
└── modules/
    └── vpc/
        ├── main.tf
        ├── variables.tf
        └── outputs.tf
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124220963/41f04a3e-fafd-49ac-91f2-19cab5e2f775.png" alt /></p>
<p>Inside the module, resources are defined normally. What changes is <strong>how values enter and leave</strong> the module.</p>
<h3 id="heading-defining-inputs-and-outputs">Defining Inputs and Outputs</h3>
<p>Modules communicate through variables and outputs. This explicit interface is what makes them safe and reusable.</p>
<p>Inside a VPC module, you might see:</p>
<pre><code class="lang-bash">variable <span class="hljs-string">"cidr_block"</span> {
  <span class="hljs-built_in">type</span> = string
}

resource <span class="hljs-string">"aws_vpc"</span> <span class="hljs-string">"this"</span> {
  cidr_block = var.cidr_block
}
</code></pre>
<p>And an output:</p>
<pre><code class="lang-bash">output <span class="hljs-string">"vpc_id"</span> {
  value = aws_vpc.this.id
}
</code></pre>
<p>The module itself doesn’t care <em>where</em> it’s used. It only knows what inputs it receives and what outputs it exposes. That separation is intentional and powerful.</p>
<h3 id="heading-using-a-module-from-the-root-configuration">Using a Module from the Root Configuration</h3>
<p>At the root level, modules are called explicitly:</p>
<pre><code class="lang-bash">module <span class="hljs-string">"vpc"</span> {
  <span class="hljs-built_in">source</span>     = <span class="hljs-string">"./modules/vpc"</span>
  cidr_block = <span class="hljs-string">"10.0.0.0/16"</span>
}
</code></pre>
<p>This is where composition happens. The root module orchestrates infrastructure by wiring together smaller, focused modules. Each module solves one problem well.</p>
<p>If you need multiple VPCs or environments, you don’t rewrite the module. You reuse it with different inputs.</p>
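<p>As a hypothetical sketch (module instance names invented for illustration), reuse is just calling the same module twice with different inputs:</p>
<pre><code class="lang-bash">module "vpc_dev" {
  source     = "./modules/vpc"
  cidr_block = "10.0.0.0/16"
}

module "vpc_prod" {
  source     = "./modules/vpc"
  cidr_block = "10.1.0.0/16"
}
</code></pre>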
<h3 id="heading-why-this-changes-everything">Why This Changes Everything</h3>
<p>Modules force discipline. They push you to:</p>
<ul>
<li><p>Separate concerns</p>
</li>
<li><p>Define clear boundaries</p>
</li>
<li><p>Avoid hidden dependencies</p>
</li>
<li><p>Treat infrastructure like reusable building blocks</p>
</li>
</ul>
<p>They also make Terraform safer. When logic lives in one place, fixes and improvements propagate automatically. When something breaks, you know where to look.</p>
<p>From a team perspective, modules enable collaboration. One person can maintain networking modules while another consumes them without needing to understand every internal detail.</p>
<h3 id="heading-local-vs-remote-modules">Local vs Remote Modules</h3>
<p>Day 6 typically starts with <strong>local modules</strong>, sourced from directories in the same repo. That’s intentional. It builds understanding before introducing complexity.</p>
<p>Later, these same modules can live in:</p>
<ul>
<li><p>Git repositories</p>
</li>
<li><p>Versioned module registries</p>
</li>
<li><p>Shared internal platforms</p>
</li>
</ul>
<p>The interface stays the same. Only the source changes.</p>
<p>That consistency is one of Terraform’s strongest design decisions.</p>
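<p>Only the <code>source</code> argument changes when a module moves out of the repo. A sketch of both styles (the Git URL is a placeholder; the registry example is the community AWS VPC module):</p>
<pre><code class="lang-bash"># Public registry module, pinned to a version range
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"
}

# Module sourced from a Git repository at a tagged ref
module "network" {
  source = "git::https://github.com/example-org/terraform-modules.git//vpc?ref=v1.2.0"
}
</code></pre>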
<h3 id="heading-the-mental-shift-of-day-6">The Mental Shift of Day 6</h3>
<p>The biggest takeaway from Day 6 wasn’t syntax. It was architectural thinking.</p>
<p>Instead of asking:</p>
<blockquote>
<p><em>“How do I create this resource?”</em></p>
</blockquote>
<p>You start asking:</p>
<blockquote>
<p><em>“How do I design this so it can be reused safely?”</em></p>
</blockquote>
<p>That’s the difference between infrastructure that works today and infrastructure that scales tomorrow.</p>
<h3 id="heading-takeaway">Takeaway</h3>
<p>Day 6 introduced the idea that <strong>Terraform is not about writing files — it’s about composing systems</strong>.</p>
<ul>
<li><p>Modules encapsulate infrastructure logic</p>
</li>
<li><p>Variables and outputs define clean interfaces</p>
</li>
<li><p>Reuse replaces duplication</p>
</li>
<li><p>Structure replaces chaos</p>
</li>
</ul>
<p>Once you start thinking in modules, every Terraform project becomes easier to reason about, safer to change, and far more professional.</p>
<p>This is where Infrastructure as Code begins to feel like engineering, not configuration.</p>
<p><a target="_blank" href="https://github.com/ars0a/30-Days-Of-Terraform.git"><strong>GitHub - ars0a/30-Days-Of-Terraform</strong></a>: This repo contains my journey where I learn Terraform from scratch and share my progress every single day.</p>
]]></content:encoded></item><item><title><![CDATA[The Hidden Cost of the Free Tier in Cloud Platforms]]></title><description><![CDATA[When I started learning cloud and DevOps, the free tiers felt like a gift. AWS, Google Cloud, Azure — all of them offered a year of generous limits or a certain amount of credits for new accounts. I remember reading the marketing pages and thinking, ...]]></description><link>https://blog.adityarajsingh.in/the-hidden-cost-of-the-free-tier-in-cloud-platforms-c42e31187498</link><guid isPermaLink="true">https://blog.adityarajsingh.in/the-hidden-cost-of-the-free-tier-in-cloud-platforms-c42e31187498</guid><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Cloud Platforms]]></category><category><![CDATA[AWS]]></category><category><![CDATA[GCP]]></category><category><![CDATA[Azure]]></category><dc:creator><![CDATA[Aditya Raj Singh]]></dc:creator><pubDate>Tue, 10 Feb 2026 12:35:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124192932/a708193a-b281-4045-9b31-21b5a660f0ce.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When I started learning cloud and DevOps, the free tiers felt like a gift. AWS, Google Cloud, Azure — all of them offered a year of generous limits or a certain amount of credits for new accounts. I remember reading the marketing pages and thinking, “This is perfect. I can experiment without worrying about money.” Most beginners I talked to felt the same. The common belief was simple: as long as you stay within the free tier boundaries, nothing costs anything.</p>
<h3 id="heading-the-first-bill-that-changed-everything">The First Bill That Changed Everything</h3>
<p>It was small, about $8, but it shook me. I had been careful, or so I thought. I’d only spun up a few virtual machines, stored some files, and run a couple of experiments. Nothing extravagant.</p>
<p>Yet there it was: charges for data transfer, storage, and something called a NAT gateway that I barely remembered creating.</p>
<p>I spent an evening digging through the billing dashboard, trying to match line items to my actions. That was the moment the “free” part started to feel conditional.</p>
<h3 id="heading-patterns-that-appear-only-with-practice">Patterns That Appear Only With Practice</h3>
<p>Over the next year, as I kept practicing — building small projects, tearing them down, making mistakes, rebuilding — I began to see patterns.</p>
<p>The surprises weren’t random; they followed predictable rules that weren’t obvious from the promotional pages. The free tier wasn’t a trap, but it also wasn’t a blanket safety net. It had gaps, and those gaps taught me more about cloud economics than any tutorial ever could.</p>
<h3 id="heading-idle-resources-the-silent-meter">Idle Resources: The Silent Meter</h3>
<p>One of the earliest patterns I noticed was around idle resources.</p>
<p>I’d launch a virtual machine to test something, get distracted, and forget about it. A few days later I’d see it still running. The free tier in AWS, for example, gives you 750 hours per month of t2.micro or t3.micro usage. That sounds like a lot — roughly one instance running continuously.</p>
<p>But if you launch two instances, even small ones, the hours add up across all of them. The meter runs on total usage, not per instance. I learned this the hard way when I left a second instance running for a weekend “just in case” I needed it.</p>
<h3 id="heading-storage-isnt-just-about-space">Storage Isn’t Just About Space</h3>
<p>Storage works similarly.</p>
<p>You get a certain amount of free object storage (like S3 or Cloud Storage), but every object you upload counts until you delete it. I once created multiple buckets while experimenting with static websites, forgot to empty them, and slowly accumulated gigabytes.</p>
<p>The storage itself stayed free for a while, but retrieval requests and eventual overflow pushed me over the limit.</p>
<h3 id="heading-region-choices-have-real-costs">Region Choices Have Real Costs</h3>
<p>Region mistakes were another recurring surprise.</p>
<p>I initially assumed pricing and free tier eligibility were consistent across the world. They’re not. Some services have different limits or costs depending on the region.</p>
<p>More importantly, data transfer between regions — or out to the internet — often costs money, even when the compute or storage is free. I once deployed a small database in a region close to me for lower latency, then copied data from a tutorial bucket in a US region. The egress charges appeared quietly in the bill.</p>
<p>The cloud providers optimize their infrastructure regionally, so moving data across those boundaries has a real cost for them, which they pass on.</p>
<h3 id="heading-resources-that-charge-just-for-existing">Resources That Charge Just for Existing</h3>
<p>Another subtle one: certain resources have minimum charges regardless of usage.</p>
<p>NAT gateways and load balancers, for example, bill by the hour they exist, not just when traffic flows through them. I created a NAT gateway while following a guide on private subnets, then deleted the tutorial environment but forgot that one piece.</p>
<p>It sat there, idle, costing a few cents per hour — enough to add up over a month.</p>
<h3 id="heading-what-free-tier-actually-means">What “Free Tier” Actually Means</h3>
<p>Through all these small incidents, my understanding evolved from “free means zero cost” to “free means heavily discounted with sharp edges.”</p>
<p>The system is logical once you see it: providers want to attract learners and startups, so they subsidize common learning workloads. But they also need to protect themselves from abuse and cover real infrastructure costs.</p>
<p>The free tier boundaries are drawn where typical experimentation ends and production-like usage begins. Idle high-bandwidth resources, cross-region data movement, and long-lived networking components all resemble production patterns more than learning ones, so they fall outside the subsidy.</p>
<h3 id="heading-the-trade-offs-nobody-talks-about">The Trade-Offs Nobody Talks About</h3>
<p>There are trade-offs, of course.</p>
<p>The free tier encourages hands-on practice, which is invaluable. Without it, far fewer people would learn cloud skills. But it also trains a kind of vigilance early on.</p>
<p>You learn to check regions deliberately, to tag resources, to set reminders or scripts to clean up. The alternative — truly unlimited free usage — would either bankrupt the providers or force them to impose heavy restrictions that would hurt learning more than occasional small bills do.</p>
<h3 id="heading-how-this-changed-my-approach-to-cloud-work">How This Changed My Approach to Cloud Work</h3>
<p>This experience quietly changed how I approach cloud work.</p>
<p>I no longer assume an action is free just because I’m on a new account. I check the pricing page for the specific service and region before I click “create.” I make a habit of reviewing running resources every few days.</p>
<p>I even started using the billing alarms that most providers offer — simple thresholds that email you if spending crosses a limit. None of this feels like extra work anymore; it feels like part of understanding the platform.</p>
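<p>That habit can even be codified. As a hedged sketch (names and the $10 threshold are arbitrary), a CloudWatch billing alarm in Terraform might look like this; note that the <code>EstimatedCharges</code> metric is only published in <code>us-east-1</code>, and billing alerts must first be enabled in the account:</p>
<pre><code class="lang-bash">resource "aws_sns_topic" "billing" {
  name = "billing-alerts"
}

resource "aws_cloudwatch_metric_alarm" "billing" {
  alarm_name          = "monthly-spend-over-10-usd"
  namespace           = "AWS/Billing"
  metric_name         = "EstimatedCharges"
  dimensions          = { Currency = "USD" }
  statistic           = "Maximum"
  period              = 21600  # evaluate every 6 hours
  evaluation_periods  = 1
  comparison_operator = "GreaterThanThreshold"
  threshold           = 10
  alarm_actions       = [aws_sns_topic.billing.arn]
}
</code></pre>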
<h3 id="heading-the-real-value-of-those-small-bills">The Real Value of Those Small Bills</h3>
<p>Looking back, those small unexpected bills were some of the most effective teachers I had. They carried real-world consequences — not large enough to hurt, but enough to focus attention. The fear of another surprise charge made me read documentation more carefully, ask better questions, and notice details I’d previously skimmed over.</p>
<p>In a way, the hidden costs of the free tier aren’t hidden at all once you’ve paid them a few times. They’re just the tuition for learning how cloud pricing actually works.</p>
]]></content:encoded></item><item><title><![CDATA[Day 5 of #30DaysOfTerraform — Mastering Variables for Flexible Infrastructure]]></title><description><![CDATA[#30daysofawsterraform
By Day 5 of this journey, you’ve already stood up infrastructure, handled remote state, and built a foundation of reusable Terraform configurations. But real Infrastructure-as-Code isn’t just about what you build — it’s about ho...]]></description><link>https://blog.adityarajsingh.in/day-5-of-30daysofterraform-mastering-variables-for-flexible-infrastructure-2043b00876ae</link><guid isPermaLink="true">https://blog.adityarajsingh.in/day-5-of-30daysofterraform-mastering-variables-for-flexible-infrastructure-2043b00876ae</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[#IaC]]></category><category><![CDATA[Devops]]></category><category><![CDATA[#30DaysOfAWSTerraform]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Aditya Raj Singh]]></dc:creator><pubDate>Mon, 09 Feb 2026 13:35:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124229323/285e8541-0c64-4c33-8250-8081c22d9819.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>#30daysofawsterraform</p>
<p>By Day 5 of this journey, you’ve already stood up infrastructure, handled remote state, and built a foundation of reusable Terraform configurations. But real Infrastructure-as-Code isn’t just about <em>what</em> you build — it’s about <em>how flexibly</em> you build it.</p>
<p>Day 5 was all about <strong>variables — input, output, and locals — and how they give your configurations real muscle</strong>. Instead of hardcoding values everywhere, you learn to write Terraform that adapts, scales, and responds to different environments or use cases without changing the code itself. This shift — from static definitions to dynamic infrastructure — changes how you think about IaC forever.</p>
<h3 id="heading-why-variables-matter">Why Variables Matter</h3>
<p>In your early days with Terraform, you probably hardcoded simple values like CIDR blocks, bucket names, or instance sizes. That’s fine for a lab. But once you start building real environments — <em>dev</em>, <em>staging</em>, <em>prod</em> — hardcoding becomes a liability: it requires copy-paste, invites errors, and makes maintenance painful.</p>
<p>Variables let you write one configuration that can behave differently depending on context. They turn Terraform from a single-purpose script into a reusable definition of intent.</p>
<p>The official curriculum for Day 5 lists the key areas covered: input variables, output variables, locals, variable precedence, and variable files (<code>*.tfvars</code>). All of these pieces come together to make your code reusable and safe at scale.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124225742/545f0341-6069-4ecd-b3bb-2d193e0b38ea.png" alt /></p>
<h3 id="heading-input-variables-your-terraform-knobs">Input Variables — Your Terraform Knobs</h3>
<p>The simplest form of variability in Terraform comes from <strong>input variables</strong>. These let you parameterize your configuration without rewriting code. A basic variable definition looks like this:</p>
<pre><code class="lang-bash">variable <span class="hljs-string">"region"</span> {
  description = <span class="hljs-string">"The AWS region to deploy into"</span>
  <span class="hljs-built_in">type</span>        = string
  default     = <span class="hljs-string">"us-east-1"</span>
}
</code></pre>
<p>This declares a named variable with a type and a default. When you run Terraform, you can override the default using a <code>.tfvars</code> file or command-line flags.</p>
<p>For instance, a <code>terraform.tfvars</code> file might contain:</p>
<pre><code class="lang-bash">region = <span class="hljs-string">"us-west-2"</span>
</code></pre>
<p>This pattern decouples values from logic, making your configs far more flexible. It also lets teammates or automation pipelines inject values without touching code.</p>
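<p>Overrides can also come straight from the command line. A sketch of typical usage (file names hypothetical):</p>
<pre><code class="lang-bash"># Pick up values from a specific variable file
terraform apply -var-file="prod.tfvars"

# Or override a single variable inline
terraform plan -var="region=eu-west-1"
</code></pre>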
<h3 id="heading-output-variables-sharing-useful-data">Output Variables — Sharing Useful Data</h3>
<p>Once resources are created, you often want to expose information from them — maybe an IP, a DNS name, or a resource ARN. Terraform’s <strong>output variables</strong> are designed precisely for that.</p>
<p>Here’s a typical output:</p>
<pre><code class="lang-bash">output <span class="hljs-string">"vpc_id"</span> {
    description = <span class="hljs-string">"The ID of the created VPC"</span>
    value = aws_vpc.example.id
}
</code></pre>
<p>By exposing outputs, you turn Terraform into not just a builder of infrastructure, but a <strong>data provider</strong>. Outputs can feed other automation, be shown to users, or be passed into remote systems in CI/CD pipelines.</p>
<h3 id="heading-locals-refactoring-and-reuse">Locals — Refactoring and Reuse</h3>
<p>When you find yourself repeating expressions or values derived from multiple variables, Terraform’s <strong>locals</strong> help you collapse that repetition into a single source of truth.</p>
<p>For example:</p>
<pre><code class="lang-bash">locals {
  common_tags = {
    Environment = var.environment
  }
}
</code></pre>
<p>You can reference <code>local.common_tags</code> anywhere. This isn’t just a convenience — it reduces duplication, minimizes risk of drift, and makes refactoring safer.</p>
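<p>A common pattern (sketched here with an invented resource) is combining the shared tags with per-resource ones via the built-in <code>merge()</code> function:</p>
<pre><code class="lang-bash">resource "aws_s3_bucket" "logs" {
  bucket = "example-logs-bucket"

  # merge() overlays resource-specific tags on the shared set
  tags = merge(local.common_tags, {
    Name = "logs"
  })
}
</code></pre>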
<h3 id="heading-variable-precedence-and-files">Variable Precedence and Files</h3>
<p>Terraform evaluates variables in a well-defined order, from lowest to highest precedence: defaults, environment variables (<code>TF_VAR_*</code>), <code>.tfvars</code> files (like <code>terraform.tfvars</code>), and command-line flags (<code>-var</code>, <code>-var-file</code>). Understanding this precedence lets you design configurations that behave predictably in different contexts — local development, automated tests, or production pipelines.</p>
<p>Variable files (<code>*.tfvars</code>) let you maintain separate configurations for <em>dev</em> vs <em>prod</em> without changing any HCL. They’re especially useful in automation, where different environments need different inputs but share the same underlying module logic.</p>
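<p>In practice that might look like two small files (contents invented for illustration), selected per environment with <code>-var-file</code>:</p>
<pre><code class="lang-bash"># dev.tfvars
environment   = "dev"
instance_type = "t3.micro"

# prod.tfvars
environment   = "prod"
instance_type = "t3.large"
</code></pre>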
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124227278/45574236-6eb1-432b-8d9d-329ca907abf5.png" alt /></p>
<h3 id="heading-how-this-changes-your-mental-model">How This Changes Your Mental Model</h3>
<p>What made Day 5 a real shift wasn’t just learning syntax, but the <em>intent behind variables</em>:</p>
<ul>
<li><p>They detach environment-specific settings from code</p>
</li>
<li><p>They encourage reuse and modular design</p>
</li>
<li><p>They make automation robust, predictable, and safe</p>
</li>
</ul>
<p>With variables, your Terraform configurations stop being brittle scripts and start looking like <strong>declarative interfaces</strong> to infrastructure.</p>
<h3 id="heading-takeaway">Takeaway</h3>
<p>Day 5 was where Terraform truly began to feel like real Infrastructure-as-Code:</p>
<ol>
<li><p><strong>Input variables</strong> let configurations adapt without rewriting code.</p>
</li>
<li><p><strong>Outputs and locals</strong> make data reusable and configurations reusable.</p>
</li>
<li><p><strong>Understanding variable precedence and tfvars</strong> makes environments manageable and pipelines reliable.</p>
</li>
</ol>
<p>Once you think in terms of inputs and outputs instead of hardcoded values, Terraform stops being a set of commands and becomes a <em>framework for predictable, reusable, and automated infrastructure</em>.</p>
<p>#Terraform #DevOps #AWS #IaC #30DaysOfTerraform #InfrastructureAsCode</p>
]]></content:encoded></item><item><title><![CDATA[Day 4 of #30DaysOfTerraform: Why Terraform State Matters (and How Remote Backends Change the Game)]]></title><description><![CDATA[#30DaysOfAWSTerraform
By Day 4, I already know how to define AWS resources in Terraform and apply them. But real infrastructure work depends on something deeper: state. Terraform’s internal snapshot of your infrastructure determines what exists, what...]]></description><link>https://blog.adityarajsingh.in/day-4-of-30daysofterraform-why-terraform-state-matters-and-how-remote-backends-change-the-game-6349883ec70f</link><guid isPermaLink="true">https://blog.adityarajsingh.in/day-4-of-30daysofterraform-why-terraform-state-matters-and-how-remote-backends-change-the-game-6349883ec70f</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[terraform-cloud]]></category><category><![CDATA[#30DaysOfAWSTerraform]]></category><category><![CDATA[Devops]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[automation]]></category><category><![CDATA[Cloud]]></category><dc:creator><![CDATA[Aditya Raj Singh]]></dc:creator><pubDate>Sun, 08 Feb 2026 11:32:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124233776/66793040-08be-4ead-8535-5b1b15bfb993.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>#30DaysOfAWSTerraform</p>
<p>By Day 4, I already know how to define AWS resources in Terraform and apply them. But real infrastructure work depends on something deeper: <strong>state</strong>. Terraform’s internal snapshot of your infrastructure determines what exists, what needs to change, and what should be destroyed. Understanding how state works, why it matters, and how to manage it beyond local files becomes a pivot point in any serious Terraform workflow.</p>
<p>If Day 3 was about building resources, Day 4 was about trusting them.</p>
<h3 id="heading-why-terraform-state-matters">Why Terraform State Matters</h3>
<p>At first glance, Terraform seems straightforward: declare resources and let Terraform create them. Under the hood, Terraform needs a way to <strong>track</strong> infrastructure across runs. It does that through the <strong>state file</strong>, a structured map of resource metadata that tells Terraform what’s already deployed and how future changes should be applied.</p>
<p>Local state works while you’re experimenting alone. The moment you introduce collaboration, CI/CD pipelines, or long-lived environments, storing state on your laptop stops being viable. On Day 4, I moved that state into AWS using an S3 backend. It’s a small change in configuration, but a major change in how Terraform workflows behave in a team environment.</p>
<h3 id="heading-what-terraform-state-really-is">What Terraform State Really Is</h3>
<p>Every <code>terraform apply</code> updates a file called <code>terraform.tfstate</code>. This file maps your HCL configuration to real cloud resources. Think of it as Terraform’s memory. Without it, Terraform would have to guess what exists, and that’s unacceptable in any production environment.</p>
<blockquote>
<p>The file contains:</p>
</blockquote>
<ul>
<li><p>Resource identifiers (like VPC IDs, bucket names, etc.)</p>
</li>
<li><p>Resource relationships and dependencies</p>
</li>
<li><p>Computed values and outputs</p>
</li>
<li><p>Provider and metadata information</p>
</li>
</ul>
<p>This allows Terraform to compute accurate diffs during <code>plan</code> and execute safe updates during <code>apply</code> or <code>destroy</code>. Commands like <code>terraform plan</code> and <code>terraform destroy</code> rely entirely on this file.</p>
<p>By default, it lives locally as:</p>
<pre><code class="lang-bash">terraform.tfstate
terraform.tfstate.backup
</code></pre>
<p>Fine for a solo lab. Not fine for a multi-developer environment.</p>
<h3 id="heading-architecture-at-a-glance">Architecture at a Glance</h3>
<blockquote>
<p>Here’s the architecture I used on Day 4:</p>
</blockquote>
<ul>
<li><p><strong>Terraform CLI</strong>: local execution and configuration</p>
</li>
<li><p><strong>AWS S3</strong>: remote backend for the <code>terraform.tfstate</code></p>
</li>
<li><p><strong>AWS resources</strong>: VPCs, S3 buckets, etc.</p>
</li>
</ul>
<blockquote>
<p>How it fits together:</p>
</blockquote>
<ol>
<li><p>Backend configuration tells Terraform where to store state.</p>
</li>
<li><p><code>terraform init</code> sets up the backend and migrates existing state.</p>
</li>
<li><p><code>plan</code> and <code>apply</code> use the remote state for consistent diffing and changes.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124232027/6cfbe5c5-05ff-47da-aa34-67134fa0317b.png" alt /></p>
<p>Terraform operates locally as the control plane, but its memory — the state — lives in S3 where it’s shared, durable, and safely accessible.</p>
<h3 id="heading-setting-up-a-remote-backend-in-code">Setting Up a Remote Backend in Code</h3>
<p>Here’s the backend block I used:</p>
<pre><code class="lang-bash">terraform {
  backend <span class="hljs-string">"s3"</span> {
    bucket       = <span class="hljs-string">"aditya-tf-day3-3d879655"</span>
    key          = <span class="hljs-string">"dev/terraform.tfstate"</span>
    region       = <span class="hljs-string">"us-east-1"</span>
    encrypt      = <span class="hljs-literal">true</span>
    use_lockfile = <span class="hljs-literal">true</span>
  }
}
</code></pre>
<p>This tells Terraform to push its state into the S3 bucket under the <code>dev/terraform.tfstate</code> key. <code>encrypt = true</code> ensures encryption at rest. <code>use_lockfile = true</code> introduces native locking so multiple operations don’t collide.</p>
<p>After running:</p>
<pre><code class="lang-bash">tf init
</code></pre>
<p>Terraform prompted me to migrate my existing local state, then initialized successfully. From this point onward, <code>tf plan</code> and <code>tf apply</code> behave the same, but now with shared state stored in AWS, which is ideal for any real DevOps pipeline.</p>
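<p>Once the backend is initialized, the remote state can be inspected with the standard state commands (the exact output depends on your resources):</p>
<pre><code class="lang-bash">terraform state list                  # list every resource tracked in state
terraform state show aws_vpc.example  # show the recorded attributes of one resource
</code></pre>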
<h3 id="heading-resources-and-the-code-that-matters">Resources and the Code That Matters</h3>
<p>After configuring the backend, I created real AWS resources:</p>
<pre><code class="lang-bash">provider <span class="hljs-string">"aws"</span> {
  region = <span class="hljs-string">"us-east-1"</span>
}

resource <span class="hljs-string">"aws_vpc"</span> <span class="hljs-string">"example"</span> {
  cidr_block = <span class="hljs-string">"10.0.0.0/16"</span>
}
</code></pre>
<p>I also created a bucket with a generated suffix for uniqueness:</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"random_id"</span> <span class="hljs-string">"suffix"</span> {
  byte_length = 4
}

resource <span class="hljs-string">"aws_s3_bucket"</span> <span class="hljs-string">"example"</span> {
  bucket        = <span class="hljs-string">"aditya-tf-day3-<span class="hljs-variable">${random_id.suffix.hex}</span>"</span>
  force_destroy = <span class="hljs-literal">true</span>
}
</code></pre>
</code></pre>
<blockquote>
<p>Two practical lessons here:</p>
</blockquote>
<ul>
<li><p>Terraform doesn’t need hard-coded names; generated IDs avoid naming conflicts.</p>
</li>
<li><p><code>force_destroy = true</code> lets Terraform delete buckets even when objects exist.</p>
</li>
</ul>
<p>This style of writing infrastructure feels natural once you start thinking declaratively: define intent, let Terraform translate it into API calls.</p>
<h3 id="heading-small-insights-that-matter">Small Insights That Matter</h3>
<p>Terraform’s state file isn’t just a bookkeeping tool; it’s the <strong>source of truth</strong>. With a remote backend, Terraform doesn’t recreate resources unnecessarily because it checks state first. That’s more than convenient; it’s the safety layer that keeps teams from stepping on each other’s work.</p>
<p>Another insight from Day 4: S3 as a backend isn’t just storage. With locking enabled, Terraform avoids race conditions when two users apply changes at the same time. That capability alone makes remote state mandatory for any serious deployment workflow.</p>
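<p>For context, <code>use_lockfile</code> is the native S3 locking added in recent Terraform releases; the long-standing alternative, sketched below with a hypothetical table name, is a DynamoDB table for lock coordination:</p>
<pre><code class="lang-bash">terraform {
  backend "s3" {
    bucket         = "aditya-tf-day3-3d879655"
    key            = "dev/terraform.tfstate"
    region         = "us-east-1"
    # DynamoDB table must have a partition key named LockID (type: string)
    dynamodb_table = "terraform-locks"
  }
}
</code></pre>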
<h3 id="heading-takeaways-from-day-4">Takeaways from Day 4</h3>
<p>By the end of the day, I hadn’t just written more HCL; I had changed how Terraform relates to infrastructure. Moving state into S3 gave me shared, durable Terraform memory. I saw how backend configuration interacts with AWS during initialization and planning, and I deployed real resources while watching Terraform track them reliably through state.</p>
<p>Day 4 was about mastering state, not just creating resources, and that’s a turning point in understanding how Terraform scales beyond a personal lab.</p>
<p><a target="_blank" href="https://github.com/ars0a/30-Days-Of-Terraform.git"><strong>GitHub - ars0a/30-Days-Of-Terraform</strong></a><br />This repo contains my journey where I learn Terraform from scratch and share my progress every single day.</p>
<p><em>See you tomorrow…</em></p>
]]></content:encoded></item><item><title><![CDATA[Day 3 of Terraform: When Infrastructure Becomes Predictable]]></title><description><![CDATA[#30daysofawsterraform
By Day 3 of working with Terraform, the focus naturally shifts from writing configuration to understanding behavior. Provisioning an S3 bucket or a VPC is not the hard part. The real learning starts when something fails, partial...]]></description><link>https://blog.adityarajsingh.in/day-3-of-terraform-when-infrastructure-becomes-predictable-7a947d96c26a</link><guid isPermaLink="true">https://blog.adityarajsingh.in/day-3-of-terraform-when-infrastructure-becomes-predictable-7a947d96c26a</guid><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Aditya Raj Singh]]></dc:creator><pubDate>Sat, 07 Feb 2026 13:36:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124238988/b8ac6255-4257-48d2-b6be-f335e6d87efd.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>#30daysofawsterraform</p>
<p>By Day 3 of working with Terraform, the focus naturally shifts from <em>writing configuration</em> to <em>understanding behavior</em>. Provisioning an S3 bucket or a VPC is not the hard part. The real learning starts when something fails, partially succeeds, or behaves differently than expected.</p>
<p>This day was about building AWS infrastructure with Terraform, but more importantly, about understanding how Terraform thinks: how it plans, applies, tracks state, and reacts to failure. That mental model is what separates copy-paste infrastructure from reliable Infrastructure as Code.</p>
<h3 id="heading-the-architecture-simple-by-design-intentional-by-choice">The Architecture: Simple by Design, Intentional by Choice</h3>
<p>The architecture for Day 3 was deliberately minimal:</p>
<ul>
<li><p>An AWS VPC, representing foundational network infrastructure</p>
</li>
<li><p>A private S3 bucket, representing a globally scoped AWS resource</p>
</li>
<li><p>Terraform state managing both resources</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124237529/518958f8-9477-42a3-9faa-1469287e0130.png" alt /></p>
<p>Terraform acting as the control plane, provisioning and managing AWS VPC and S3 resources through the AWS provider.</p>
<p>There is no direct dependency between a VPC and an S3 bucket, and that’s precisely why this setup is useful. It allows Terraform’s execution model to be observed clearly. Each resource exists independently, yet both are governed by the same state file. That shared state is what ties everything together.</p>
<p>Terraform’s job is not to “run commands” against AWS. Its job is to continuously reconcile three things: configuration, state, and real infrastructure. Once you see Terraform through that lens, many confusing behaviors start making sense.</p>
<h3 id="heading-writing-the-configuration">Writing the Configuration</h3>
<p>The starting point was straightforward. An AWS provider and a basic VPC definition:</p>
<pre><code class="lang-bash">provider <span class="hljs-string">"aws"</span> {
  region = <span class="hljs-string">"us-east-1"</span>
}

resource <span class="hljs-string">"aws_vpc"</span> <span class="hljs-string">"example"</span> {
  cidr_block = <span class="hljs-string">"10.0.0.0/16"</span>
}
</code></pre>
<p>Nothing unusual here. The complexity emerged with S3.</p>
<p>Unlike most AWS resources, S3 buckets live in a global namespace. A name that looks unique locally may already exist somewhere else in the world. This is where many Terraform beginners get stuck, and where Terraform’s behavior becomes educational.</p>
<p>An initial hardcoded bucket name predictably failed with a <code>BucketAlreadyExists</code> error. Terraform stopped execution, but it did not roll anything back. The VPC, if already created, remained. This is not a bug. It’s a design decision.</p>
<p>Terraform is declarative, not transactional.</p>
<h3 id="heading-solving-the-s3-naming-problem-properly">Solving the S3 Naming Problem Properly</h3>
<p>The correct solution is not to keep guessing bucket names. The correct solution is to design uniqueness into the configuration.</p>
<p>Terraform provides a clean way to do this using the <code>random</code> provider:</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"random_id"</span> <span class="hljs-string">"suffix"</span> {
  byte_length = 4
}

resource <span class="hljs-string">"aws_s3_bucket"</span> <span class="hljs-string">"example"</span> {
  bucket = <span class="hljs-string">"aditya-tf-day3-<span class="hljs-variable">${random_id.suffix.hex}</span>"</span>

  tags = {
    Name        = <span class="hljs-string">"My bucket"</span>
    Environment = <span class="hljs-string">"Dev"</span>
    ManagedBy   = <span class="hljs-string">"Terraform"</span>
  }
}
</code></pre>
<p>This approach guarantees global uniqueness while keeping names readable and predictable. The important detail is that <code>random_id</code> is stable once created. Terraform does not regenerate it on every apply. That stability is what allows consistent infrastructure over time.</p>
<p>Adding this resource also surfaced another subtle concept: provider management. Introducing <code>random_id</code> required running <code>terraform init -upgrade</code> to sync the dependency lock file. Terraform is strict here by design, and that strictness pays off in long-term reproducibility.</p>
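<p>Concretely, the <code>required_providers</code> block now declares both providers. A sketch (the version constraints here are illustrative, not the exact ones I pinned):</p>
<pre><code class="lang-bash">terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&gt; 6.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~&gt; 3.0"
    }
  }
}
</code></pre>
<p>After adding a provider, <code>terraform init -upgrade</code> re-resolves versions and updates the <code>.terraform.lock.hcl</code> dependency lock file.</p>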
<h3 id="heading-state-is-the-real-engine">State Is the Real Engine</h3>
<p>One of the most important lessons from Day 3 was how Terraform state behaves during partial failures.</p>
<p>If Terraform successfully creates a VPC but fails to create an S3 bucket, the state file reflects exactly that. On the next run, Terraform does not “start over.” It resumes. The plan only includes what is missing or drifted.</p>
<p>The same applies to updates. When tags were added to the existing VPC, Terraform detected the difference and planned an in-place update. No recreation. No downtime. Just a controlled change reflected cleanly in state after apply.</p>
<p>This is why <code>terraform plan</code> is non-negotiable. It is not a suggestion. It is the single most accurate description of what Terraform is about to do.</p>
<h3 id="heading-destruction-is-just-another-state-transition">Destruction Is Just Another State Transition</h3>
<p>Running <code>terraform destroy</code> is not a special mode. It is simply another plan, with the desired end state being “nothing.” Terraform destroys only what exists in state, in a safe order, with explicit confirmation.</p>
<p>That predictability is what makes Terraform safe to use at scale.</p>
<p><a target="_blank" href="https://github.com/ars0a/30-Days-Of-Terraform.git"><strong>GitHub - ars0a/30-Days-Of-Terraform</strong></a><br />This repo contains my journey where I learn Terraform from scratch and share my progress every single day.</p>
<h3 id="heading-takeaway">Takeaway</h3>
<p>Day 3 reinforced a critical truth: Terraform is less about writing <code>.tf</code> files and more about understanding state-driven change.</p>
<p>By working through real failures, name collisions, provider mismatches, and in-place updates, the infrastructure stopped feeling magical and started feeling mechanical — in the best possible way. Once you trust the plan, respect the state, and design for uniqueness and clarity, Terraform becomes a reliable system rather than a fragile tool.</p>
<p>That shift in mindset is where real DevOps learning begins.</p>
<iframe src="https://www.youtube.com/embed/09HQ_R1P7Lw?feature=oembed" width="700" height="393"></iframe>]]></content:encoded></item><item><title><![CDATA[Day 2 Of 30 Days Of AWS Terraform: Providers and Version Management]]></title><description><![CDATA[Introduction
Continuing my journey into the #30DaysOfAWSTerraform challenge, Day 2 shifted from conceptual foundations into the first real Terraform building blocks — providers, configuration blocks, and declaring resources.
If you are new to this se...]]></description><link>https://blog.adityarajsingh.in/day-2-of-30-days-of-aws-terraform-providers-and-version-management-0838c4863cdf</link><guid isPermaLink="true">https://blog.adityarajsingh.in/day-2-of-30-days-of-aws-terraform-providers-and-version-management-0838c4863cdf</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[Devops]]></category><category><![CDATA[#IaC]]></category><category><![CDATA[Infrastructure as code]]></category><dc:creator><![CDATA[Aditya Raj Singh]]></dc:creator><pubDate>Fri, 06 Feb 2026 16:12:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124254152/49264aeb-d163-4cca-b6eb-d4d924274f54.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction">Introduction</h3>
<p>Continuing my journey into the #30DaysOfAWSTerraform challenge, Day 2 shifted from conceptual foundations into the first <em>real</em> Terraform building blocks — <strong>providers, configuration blocks, and declaring resources</strong>.</p>
<p>If you are new to this series, Day 1 focused on understanding what Terraform is, Infrastructure as Code, and the basic Terraform workflow.</p>
<p>Day 2 moves one level deeper. This is where Terraform stops being just configuration files and starts behaving like a system that expects discipline.</p>
<h3 id="heading-terraform-providers-how-terraform-talks-to-the-cloud">Terraform Providers: How Terraform Talks to the Cloud</h3>
<p>Terraform does not communicate with AWS, Azure, or any cloud platform directly.<br />It relies on <strong>providers</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124249002/917de8e2-d805-493d-800e-73955939cfb6.png" alt class="image--center mx-auto" /></p>
<p><em>Providers translate Terraform configuration into cloud API calls.</em></p>
<p>A provider is essentially a plugin that understands how to translate Terraform configuration into cloud-specific API calls. When we say Terraform is cloud-agnostic, providers are the reason why.</p>
<p><em>Without a provider</em> — Terraform is just a language and a workflow.<br /><em>With a provider</em> — Terraform becomes capable of creating real infrastructure.</p>
<h3 id="heading-a-simple-provider-configuration-looks-like-this">A simple provider configuration looks like this</h3>
<pre><code class="lang-bash">provider <span class="hljs-string">"aws"</span> {
  region = <span class="hljs-string">"us-east-1"</span>
}
</code></pre>
<p>This tells Terraform which cloud platform to talk to and where. At this point, Terraform still doesn’t know <em>which version</em> of the provider it should trust. That question becomes important very quickly.</p>
<h3 id="heading-terraform-core-version-vs-provider-version">Terraform Core Version vs Provider Version</h3>
<p>One of the first confusing things on Day 2 is realizing that <strong>Terraform itself has a version</strong>, and <strong>providers also have their own versions</strong>.</p>
<p>They are related, but they are not the same.</p>
<ul>
<li><p><strong>Terraform core version</strong><br />  This is the Terraform CLI you install on your machine. It controls how Terraform parses files, builds dependency graphs, and executes plans.</p>
</li>
<li><p><strong>Provider version</strong><br />  This controls how Terraform interacts with a specific platform, such as AWS APIs.</p>
</li>
</ul>
<blockquote>
<p>Terraform core is the engine.<br />Providers are the plugins.</p>
</blockquote>
<p>This separation gives flexibility, but it also introduces risk if versions are not managed properly.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124250528/0184090f-61d4-4136-b798-a798cb8d514a.png" alt class="image--center mx-auto" /></p>
<p>Terraform Core and providers are versioned independently.</p>
<h3 id="heading-why-versioning-matters-in-terraform">Why Versioning Matters in Terraform</h3>
<p>A natural question at this stage is:<br /><strong>Why not just use the latest version of everything?</strong></p>
<p>The answer becomes clear when you think about Infrastructure as Code seriously.</p>
<p>Cloud APIs evolve. Providers evolve. Behavior changes. If Terraform silently upgrades a provider, your infrastructure could behave differently without you changing a single line of code.</p>
<p>That breaks one of the core promises of IaC: <strong>predictability</strong>.</p>
<p>Terraform’s solution to this problem is explicit version control.</p>
<h3 id="heading-version-constraints-making-infrastructure-predictable">Version Constraints: Making Infrastructure Predictable</h3>
<p>Terraform allows you to define version constraints using the <code>terraform</code> block.</p>
<pre><code class="lang-bash">terraform {
  required_version = <span class="hljs-string">"&gt;= 1.0.0"</span>

  required_providers {
    aws = {
      <span class="hljs-built_in">source</span>  = <span class="hljs-string">"hashicorp/aws"</span>
      version = <span class="hljs-string">"~&gt; 6.0"</span>
    }
  }
}
</code></pre>
<p>This block does several important things at once:</p>
<ul>
<li><p>Ensures Terraform runs only on compatible CLI versions</p>
</li>
<li><p>Locks the provider to a safe version range</p>
</li>
<li><p>Prevents accidental upgrades that could introduce breaking changes</p>
</li>
</ul>
<p>Terraform evaluates these constraints during <code>terraform init</code>.<br />If versions don’t match, Terraform refuses to proceed.</p>
<p>This may feel strict at first, but it’s intentional.</p>
<h3 id="heading-understanding-version-constraint-operators">Understanding Version Constraint Operators</h3>
<p>Terraform supports several operators for version constraints, but the most commonly used ones are:</p>
<ul>
<li><p><code>=</code> → exact version</p>
</li>
<li><p><code>&gt;=</code> → minimum acceptable version</p>
</li>
<li><p><code>~&gt;</code> → pessimistic constraint</p>
</li>
</ul>
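<p>Side by side, with illustrative constraints (the exact version numbers are examples only):</p>
<pre><code class="lang-bash">version = "= 5.31.0"  # exactly this release, nothing else
version = "&gt;= 1.5.0"  # this release or anything newer
version = "~&gt; 5.0"    # any 5.x release, but never 6.0
</code></pre>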
<p>The <code>~&gt;</code> operator is especially important:</p>
<p>version = "~&gt; 5.0"</p>
<p>This allows updates within the same major version (for example, <code>5.1</code>, <code>5.2</code>) but blocks breaking changes like <code>6.0</code>.</p>
<p>In practice, this strikes a balance between:</p>
<ul>
<li><p>Stability</p>
</li>
<li><p>Receiving safe improvements and fixes</p>
</li>
</ul>
<p>This is why you’ll often see <code>~&gt;</code> recommended in real Terraform codebases.</p>
<h3 id="heading-resource-blocks-where-infrastructure-is-actually-defined">Resource Blocks: Where Infrastructure Is Actually Defined</h3>
<p>Once the provider is configured and version constraints are in place, Terraform is finally ready to describe real infrastructure. This is done using <strong>resource blocks</strong>.</p>
<p>A resource block defines <strong>what Terraform should create</strong> and <strong>what the desired state should look like</strong>. Terraform does not create anything immediately; it records intent and acts only when the workflow is executed.</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"example"</span> {
  ami           = <span class="hljs-string">"ami-0c02fb55956c7d316"</span> <span class="hljs-comment"># Amazon Linux 2 (us-east-1)</span>
  instance_type = <span class="hljs-string">"t2.micro"</span>

  tags = {
    Name        = <span class="hljs-string">"Terraform-Day2-EC2"</span>
    Environment = <span class="hljs-string">"Learning"</span>
  }
}
</code></pre>
<p>In this block:</p>
<ul>
<li><p><code>aws_instance</code> is the resource type provided by the AWS provider</p>
</li>
<li><p><code>example</code> is a local name Terraform uses to track this resource</p>
</li>
<li><p>The configuration inside the block describes the desired state</p>
</li>
</ul>
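<p>That local name also becomes the address other configuration uses to reference the resource. As a hypothetical addition (not in the original Day 2 files), an output could read an attribute off the instance:</p>
<pre><code class="lang-bash"># &lt;resource type&gt;.&lt;local name&gt;.&lt;attribute&gt;
output "instance_public_ip" {
  value = aws_instance.example.public_ip
}
</code></pre>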
<p>What’s important is the dependency chain here. Resource blocks rely on providers. Without the AWS provider being configured and versioned correctly, Terraform would not know how to interpret or create this resource.</p>
<p>This is where the earlier discussion about providers and versions becomes concrete.</p>
<blockquote>
<p>Providers define <strong>how</strong> Terraform talks to the cloud.</p>
<p>Resource blocks define <strong>what</strong> Terraform wants to exist.</p>
</blockquote>
<h3 id="heading-providers-and-modules-where-versions-belong">Providers and Modules: Where Versions Belong</h3>
<p>Once providers and versions enter the picture, another design question appears:<br /><strong>Should provider configuration live inside modules or outside?</strong></p>
<p>The recommended approach is:</p>
<ul>
<li><p>Provider configuration and version constraints live in the <strong>root module</strong></p>
</li>
<li><p>Reusable modules only declare which providers they require, without pinning versions</p>
</li>
</ul>
<p>Inside a module, you might see:</p>
<pre><code class="lang-bash">terraform {
  required_providers {
    aws = {
      <span class="hljs-built_in">source</span> = <span class="hljs-string">"hashicorp/aws"</span>
    }
  }
}
</code></pre>
<p>This keeps modules reusable across:</p>
<ul>
<li><p>Different regions</p>
</li>
<li><p>Different accounts</p>
</li>
<li><p>Different provider versions</p>
</li>
</ul>
<p>Version control stays centralized and intentional.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124252492/0deb8c57-49f1-4c64-b016-21ca7b172d5e.png" alt class="image--center mx-auto" /></p>
<p>Provider configuration belongs in the root module.</p>
<h3 id="heading-best-practices-that-start-making-sense-on-day-2">Best Practices That Start Making Sense on Day 2</h3>
<p>Day 2 makes several Terraform best practices feel logical rather than arbitrary:</p>
<ul>
<li><p>Pin provider versions early</p>
</li>
<li><p>Never rely on implicit “latest” versions</p>
</li>
<li><p>Keep provider configuration in the root module</p>
</li>
<li><p>Treat Terraform code like application code, not scripts</p>
</li>
<li><p>Let <code>terraform init</code> enforce consistency</p>
</li>
</ul>
<p>These practices matter more as infrastructure grows and more people interact with the same codebase.</p>
<h3 id="heading-what-changed-for-me-on-day-2">What Changed for Me on Day 2</h3>
<p>Day 1 made Terraform understandable.<br />Day 2 made Terraform serious.</p>
<p>Providers and version constraints introduced the idea that Infrastructure as Code is not just about automation, but about <strong>control and predictability over time</strong>.</p>
<p>Terraform started feeling less forgiving, and that’s exactly why it’s trusted in real systems.</p>
<h3 id="heading-closing-thoughts">Closing Thoughts</h3>
<p>Day 2 didn’t add much visible infrastructure, but it added something more important: structure.</p>
<p>Understanding providers and version management early prevents subtle problems later, when real resources and real costs are involved.</p>
<p>Huge thanks to <strong>Piyush Sachdeva</strong> for designing the challenge in a way that builds understanding before complexity.</p>
<iframe src="https://www.youtube.com/embed/JFiMmaktnuM?feature=oembed" width="700" height="393"></iframe>

<p>This challenge is slowly reinforcing the idea that good infrastructure isn’t just created, it’s <em>maintained intentionally</em>.</p>
]]></content:encoded></item><item><title><![CDATA[Day 1 of 30 Days of AWS Terraform: Understanding the Basics Before Writing Code]]></title><description><![CDATA[#30daysofawsterraform
Introduction
I’ve clicked buttons in the AWS console before. A lot of them. Sometimes things worked, sometimes they didn’t, and most of the time I couldn’t confidently explain why something was set up the way it was.
That’s one ...]]></description><link>https://blog.adityarajsingh.in/day-1-of-30-days-of-aws-terraform-understanding-the-basics-before-writing-code</link><guid isPermaLink="true">https://blog.adityarajsingh.in/day-1-of-30-days-of-aws-terraform-understanding-the-basics-before-writing-code</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[Devops]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Aditya Raj Singh]]></dc:creator><pubDate>Fri, 06 Feb 2026 10:12:27 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770372621398/2b0e20a7-766c-455e-9611-ccd07a9b319c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>#30daysofawsterraform</p>
<h3 id="heading-introduction">Introduction</h3>
<p>I’ve clicked buttons in the AWS console before. A lot of them.<br /> Sometimes things worked, sometimes they didn’t, and most of the time I couldn’t confidently explain <em>why</em> something was set up the way it was.</p>
<p>That’s one of the reasons I decided to start the #<strong>30DaysOfAWSTerraform</strong> challenge.</p>
<p>Day 1 wasn’t about deploying servers or writing complex code. It was about slowing down and understanding what Infrastructure as Code really means, and why tools like Terraform exist in the first place. This post is a reflection of what clicked for me on Day 1, and what still feels a little unclear.</p>
<h3 id="heading-what-is-terraform">What Is Terraform?</h3>
<p>Terraform is a tool that helps create and manage cloud infrastructure using code instead of manual steps in the cloud console.</p>
<p>Instead of repeatedly clicking through AWS to create servers, networks, or databases, we write configuration files that describe the infrastructure we want. Terraform then reads this code and takes care of creating or updating resources for us.</p>
<p>This approach reduces manual effort and makes infrastructure easier to manage, especially as systems grow.</p>
<h3 id="heading-infrastructure-as-code-iac">Infrastructure as Code (IaC)</h3>
<p>This way of managing infrastructure is known as <strong>Infrastructure as Code (IaC)</strong>.</p>
<p>IaC means treating infrastructure the same way we treat application code:</p>
<ul>
<li><p><em>It can be stored in version control</em></p>
</li>
<li><p><em>It can be reused</em></p>
</li>
<li><p><em>It can be recreated whenever needed</em></p>
</li>
</ul>
<p>If something breaks or needs to be rebuilt, the same code can be used again instead of setting everything up manually from scratch. This makes infrastructure more consistent and reliable.</p>
<h3 id="heading-cloud-agnostic-nature-of-terraform">Cloud-Agnostic Nature of Terraform</h3>
<p>One interesting thing about Terraform is that it is <strong>cloud-agnostic</strong>.</p>
<p>This means Terraform itself is not limited to AWS. It can also work with Azure, GCP, and many other platforms. Terraform does this using <strong>providers</strong>.</p>
<p>A provider tells Terraform which cloud platform it should communicate with. For example, the AWS provider allows Terraform to interact with AWS services. Providers act as a bridge between Terraform and the cloud.</p>
<h3 id="heading-how-terraform-thinks-and-works">How Terraform Thinks and Works</h3>
<p>Terraform follows a <strong>declarative approach</strong>. Instead of telling Terraform how to build infrastructure step by step, we describe the final state we want.</p>
<p>We define what resources should exist, and Terraform figures out how to reach that state.</p>
<p>Terraform also maintains a <strong>state file</strong>, which keeps track of the resources it manages. I dig into what that actually means a little later in this post.</p>
<h3 id="heading-terraform-workflow-the-core-commands">Terraform Workflow (The Core Commands)</h3>
<p>Every Terraform project follows a basic workflow:</p>
<blockquote>
<p><strong>terraform init</strong></p>
</blockquote>
<p>This command initializes the project. It downloads required provider plugins and prepares the directory for Terraform to work.</p>
<blockquote>
<p><strong>terraform plan</strong></p>
</blockquote>
<p>This command shows what Terraform <em>will do</em> before making any real changes. Think of it as a dry run or preview.</p>
<blockquote>
<p><strong>terraform apply</strong></p>
</blockquote>
<p>This command actually creates or updates infrastructure based on the configuration files.</p>
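<p>Put together, the loop is just three commands run from the directory that holds your <code>.tf</code> files (nothing changes in the cloud until <code>apply</code> is confirmed):</p>
<pre><code class="lang-bash">terraform init   # download provider plugins, prepare the working directory
terraform plan   # preview the changes Terraform would make
terraform apply  # create or update infrastructure (asks for confirmation)
</code></pre>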
<p>Even though Day 1 does not involve creating real infrastructure, understanding this workflow is essential because it is used in every Terraform project.</p>
<pre><code class="lang-bash">provider <span class="hljs-string">"aws"</span> {
  region = <span class="hljs-string">"us-east-1"</span>
}
</code></pre>
<p>This small piece of code tells Terraform:</p>
<ul>
<li><p><em>Use AWS as the cloud provider</em></p>
</li>
<li><p><em>Deploy resources in the specified region</em></p>
</li>
</ul>
<p>At this stage, the goal is not to write complex code but to understand how Terraform configurations are structured.</p>
<h3 id="heading-understanding-the-terraform-state-file">Understanding the Terraform State File</h3>
<p>Terraform maintains a <strong>state file</strong>, which keeps track of the resources it manages. This file helps Terraform understand:</p>
<ul>
<li><p><em>What already exists</em></p>
</li>
<li><p><em>What needs to change</em></p>
</li>
<li><p><em>What needs to be created or destroyed</em></p>
</li>
</ul>
<p>One thing that is still slightly confusing is how Terraform internally compares the state file with real infrastructure, and at this stage that’s okay. I’m treating it as something that will make sense once I start breaking and fixing things in later days.</p>
<h3 id="heading-how-terraform-fits-together"><strong>How Terraform Fits Together</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124243603/0ca42475-5530-4c57-a0b9-7c9a34e28350.png" alt /></p>
<p>How Terraform works with AWS</p>
<blockquote>
<p>This diagram shows how Terraform uses providers to communicate with AWS and manage infrastructure using code.</p>
</blockquote>
<h3 id="heading-key-learnings-from-day-1">Key Learnings from Day 1</h3>
<ul>
<li><p><em>Terraform allows infrastructure to be managed using code</em></p>
</li>
<li><p><em>Infrastructure as Code improves consistency and repeatability</em></p>
</li>
<li><p><em>Providers connect Terraform with cloud platforms</em></p>
</li>
<li><p><em>The</em> <code>init → plan → apply</code> <em>workflow is fundamental</em></p>
</li>
<li><p><em>Day 1 is about understanding concepts, not deploying resources</em></p>
</li>
</ul>
<h3 id="heading-conclusion">Conclusion</h3>
<p>Day 1 was less about doing and more about thinking. No infrastructure was created, but the foundation was set.</p>
<p>If you’re also starting with Terraform or thinking about Infrastructure as Code, I’d genuinely recommend spending time on these basics before rushing ahead. I’m sharing this journey publicly to stay consistent, learn from others, and hopefully connect with people who are on a similar path.</p>
<p>Thanks to <strong>Piyush Sachdeva</strong> for creating this challenge and pushing the community toward hands-on, structured learning.</p>
<iframe src="https://www.youtube.com/embed/s5fwSG_00P8?feature=oembed" width="700" height="393"></iframe>

<p>If you’re following the challenge too, I’d love to hear how Day 1 felt for you.</p>
]]></content:encoded></item><item><title><![CDATA[Microservices vs Monoliths: If Everyone Is “Going Back,” What Does That Mean for DevOps?]]></title><description><![CDATA[For a long time, microservices were treated almost like a rite of passage. If a company wanted to be taken seriously in modern engineering circles, it needed services, containers, and a distributed architecture. Monoliths, on the other hand, were fra...]]></description><link>https://blog.adityarajsingh.in/microservices-vs-monoliths-if-everyone-is-going-back-what-does-that-mean-for-devops-dc55a6ad5556</link><guid isPermaLink="true">https://blog.adityarajsingh.in/microservices-vs-monoliths-if-everyone-is-going-back-what-does-that-mean-for-devops-dc55a6ad5556</guid><category><![CDATA[Microservices]]></category><category><![CDATA[Devops]]></category><category><![CDATA[monolithic architecture]]></category><dc:creator><![CDATA[Aditya Raj Singh]]></dc:creator><pubDate>Thu, 05 Feb 2026 15:11:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124202022/b2f7be4b-e2b9-4136-a3bf-1a4d5d1d8d4e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For a long time, microservices were treated almost like a rite of passage. If a company wanted to be taken seriously in modern engineering circles, it needed services, containers, and a distributed architecture. Monoliths, on the other hand, were framed as something you eventually “outgrew.”</p>
<p>But lately, the conversation has shifted. More teams are openly talking about moving <em>away</em> from microservices and back toward monolithic or modular monolithic systems. That raises an uncomfortable question, especially for anyone learning DevOps:</p>
<p>If microservices are so important for DevOps, why are some companies abandoning them? And if monoliths are making a comeback, does DevOps become less relevant?</p>
<blockquote>
<p>The short answer is no.<br />The long answer is more interesting.</p>
</blockquote>
<h3 id="heading-why-microservices-became-so-closely-tied-to-devops">Why Microservices Became So Closely Tied to DevOps</h3>
<p>Microservices didn’t rise because monoliths were inherently flawed. They rose because <strong>certain problems started appearing at scale</strong>, and existing architectures struggled to handle them.</p>
<p>As teams grew, a single codebase became a coordination nightmare. Small changes required full application deployments. Release cycles slowed down, and failures became high-risk because everything was tightly coupled. From a DevOps perspective, this meant longer pipelines, bigger blast radiuses, and more stressful releases.</p>
<p>Microservices aligned perfectly with DevOps goals. They allowed teams to deploy independently, scale only what needed scaling, and recover from failures without impacting the entire system. Infrastructure could be automated around smaller, well-defined units, and ownership became clearer.</p>
<p>In large organizations with dozens or hundreds of engineers, this shift wasn’t optional. It was the only way to keep delivering quickly without breaking everything.</p>
<h3 id="heading-the-problems-microservices-actually-solve-and-the-ones-they-dont">The Problems Microservices Actually Solve (And the Ones They Don’t)</h3>
<p>At their best, microservices solve real, concrete problems. They reduce deployment bottlenecks. They allow teams to move independently. They make scaling more efficient and failures easier to isolate. For DevOps teams, this enables continuous delivery at scale and better control over reliability.</p>
<p>But microservices also introduce a different kind of complexity.</p>
<p>Every service needs a pipeline. Every service needs monitoring. Services need to find and talk to each other reliably. Logs and metrics are no longer centralized by default. Costs rise, not just in cloud spend, but in cognitive load.</p>
<p>This is where many teams ran into trouble. They adopted microservices early, expecting speed, but ended up spending most of their time managing the platform instead of delivering features.</p>
<h3 id="heading-why-some-teams-are-moving-back-to-monoliths">Why Some Teams Are Moving Back to Monoliths</h3>
<p>When companies talk about “going back to monoliths,” they’re usually not returning to the old, tightly coupled messes of the past. What they’re actually choosing is a <strong>modular monolith</strong>: one deployable unit with strong internal boundaries.</p>
<p>This approach works well when teams are small or medium-sized. It reduces operational overhead, simplifies deployments, and dramatically lowers infrastructure costs. Debugging is easier. Observability is simpler. On-call rotations are less exhausting.</p>
<p>In these environments, microservices weren’t solving a problem. They were creating one.</p>
<p>This isn’t a rejection of microservices as a concept. It’s a recognition that architecture should match <strong>scale, not ambition</strong>.</p>
<h3 id="heading-so-does-devops-matter-less-without-microservices">So… Does DevOps Matter Less Without Microservices?</h3>
<p>This is the wrong question, and it’s an easy trap to fall into.</p>
<p>DevOps does not exist because of microservices.<br />DevOps exists because software needs to be delivered reliably and repeatedly.</p>
<p>When systems move back to monoliths, DevOps doesn’t disappear. It shifts.</p>
<p>Instead of managing dozens of services, DevOps focuses on improving build times, making deployments safer, managing environments, controlling costs, and improving observability at the application level. Release strategies like blue-green or canary deployments become even more important because failures affect a larger surface area.</p>
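<p>To make the canary idea concrete, here is a minimal sketch of a traffic split at the edge using NGINX’s <code>split_clients</code> directive. The upstream addresses and the 10% weight are illustrative, not from any particular setup:</p>
<pre><code class="lang-nginx"># Hash each client address into a bucket: roughly 10% of clients hit the canary.
split_clients "${remote_addr}" $backend {
    10%     canary;
    *       stable;
}

upstream stable { server 10.0.1.10:5000; }  # current release
upstream canary { server 10.0.1.20:5000; }  # new release under observation

server {
    listen 80;
    location / {
        proxy_pass http://$backend;
    }
}
</code></pre>
<p><em>If error rates on the canary stay flat, the percentage goes up; if not, rolling back is a one-line config change.</em></p>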
<p>In some ways, DevOps becomes more visible in monolithic systems because mistakes are harder to hide.</p>
<h3 id="heading-devops-focus-shift-monolithic-vs-microservices-architecture">DevOps Focus Shift: Monolithic vs Microservices Architecture</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124199890/e3ea1b79-16d1-4b77-a89c-4388d905ab0d.png" alt="How DevOps responsibilities shift between monolithic and microservices architectures while the core principles remain the same." /></p>
<p><em>How DevOps responsibilities shift between monolithic and microservices architectures while the core principles remain the same.</em></p>
<h3 id="heading-how-devops-responsibilities-change-with-architecture">How DevOps Responsibilities Change with Architecture</h3>
<p>In microservice-heavy systems, DevOps work leans toward platform engineering. Kubernetes, service meshes, distributed tracing, and standardized pipelines dominate the landscape. The challenge is taming complexity without slowing teams down.</p>
<p>In monolithic systems, the focus shifts to stability and efficiency. Faster builds, safer releases, environment consistency, and cost control become central. The work is less flashy, but no less critical.</p>
<p>The goal doesn’t change. Only the tools and emphasis do.</p>
<h3 id="heading-the-real-lesson-behind-the-trend">The Real Lesson Behind the Trend</h3>
<p>The microservices-to-monolith conversation isn’t about right or wrong architecture. It’s about maturity.</p>
<p>Early-stage teams benefit from simplicity. Large organizations need decoupling. Most systems live somewhere in between. The mistake many teams made was adopting microservices as a default instead of a response to real constraints.</p>
<p>DevOps maturity isn’t measured by how distributed your system is. It’s measured by how well you can change it safely.</p>
<h3 id="heading-takeaway">Takeaway</h3>
<p>Microservices are powerful when the problem demands them. Monoliths are powerful when simplicity matters more than scale. DevOps thrives in both environments, because its core purpose never changes: shorten feedback loops, reduce risk, and keep systems reliable.</p>
<p>Architecture should follow needs, not trends.<br />DevOps isn’t about choosing sides — it’s about making whatever you choose work well.</p>
]]></content:encoded></item><item><title><![CDATA[How I Fell Into the Cloud (and What AWS Has Taught Me So Far)]]></title><description><![CDATA[I didn’t plan to get into DevOps. It just happened somewhere between writing small apps and wondering how those apps actually survive in the real world. Code felt like the whole story until the day I tried deploying something for the first time.
The ...]]></description><link>https://blog.adityarajsingh.in/how-i-fell-into-the-cloud-and-what-aws-has-taught-me-so-far-b936627d0866</link><guid isPermaLink="true">https://blog.adityarajsingh.in/how-i-fell-into-the-cloud-and-what-aws-has-taught-me-so-far-b936627d0866</guid><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Devops]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Aditya Raj Singh]]></dc:creator><pubDate>Tue, 03 Feb 2026 13:21:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770124246129/b643fec3-1d27-4099-afaa-e74099237108.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I didn’t plan to get into DevOps. It just happened somewhere between writing small apps and wondering how those apps actually survive in the real world. Code felt like the whole story until the day I tried deploying something for the first time.</p>
<h3 id="heading-the-rabbit-hole-begins">The Rabbit Hole Begins</h3>
<p>The first time I opened the AWS console, it felt like stepping into a control room with too many switches. EC2, IAM, VPC, S3, RDS… all these services with serious names and no hand-holding. Overwhelming, but also weirdly exciting.</p>
<p>So I started with the simplest possible thing: one EC2 instance and a tiny app. When it finally worked, something clicked. The world of software doesn’t end at code. It begins at deployment.</p>
<h3 id="heading-what-aws-actually-taught-me">What AWS Actually Taught Me</h3>
<p>Tutorials tell you where to click. AWS teaches you patience and curiosity. It forces you to care about networking, security, storage, and the consequences of misconfigurations. It rewires how you think.</p>
<h4 id="heading-ec2-servers-arent-just-machines">EC2: Servers Aren’t Just Machines</h4>
<p>Launching an EC2 instance sounds straightforward until you misconfigure a security group and spend an hour wondering why you can’t SSH into your own server. That’s how I learned about inbound/outbound rules, key pairs, and why opening port 22 isn’t just a “checkbox.”</p>
<p><em>Small mental model that saved me later:</em></p>
<pre><code class="lang-bash">[ Your Laptop ] ---&gt; [ Internet ] ---&gt; [ EC2 Instance ]
                                       (SG allows only port 22)
</code></pre>
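<p>For illustration, here’s roughly what that rule looks like once it’s written down as code rather than clicked in the console. This is a Terraform-style sketch; the security group reference and the IP address are hypothetical:</p>
<pre><code class="lang-hcl"># Allow SSH (port 22) inbound, but only from one known address.
# Opening it to 0.0.0.0/0 is the "checkbox" mistake.
resource "aws_security_group_rule" "ssh_in" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["203.0.113.10/32"]       # your IP only (example address)
  security_group_id = aws_security_group.web.id # hypothetical group
}
</code></pre>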
<h4 id="heading-iam-the-gatekeeper-you-cant-ignore">IAM: The Gatekeeper You Can’t Ignore</h4>
<p>IAM humbled me quickly. The first time I deployed a pipeline, it failed with a single word: <code>AccessDenied</code>. No explanation, no mercy. Later I discovered that IAM isn’t just about granting permissions; it’s about being precise. The “least privilege” principle exists for a reason.</p>
<p>Three concepts that changed how I read AWS:</p>
<ul>
<li>Users vs Roles</li>
<li>Policies vs Permissions</li>
<li>Why “AdministratorAccess” is not the solution (even when you’re tired)</li>
</ul>
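<p>A minimal example of what “precise” means in practice: instead of <code>AdministratorAccess</code>, a policy that grants exactly one action on exactly one bucket. The bucket name below is made up for illustration:</p>
<pre><code class="lang-json">{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOneBucketOnly",
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::my-app-assets/*"
    }
  ]
}
</code></pre>
<p><em>When the pipeline needs one more permission, it gets exactly one more statement, not a wildcard.</em></p>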
<h4 id="heading-s3-simplicity-with-hidden-depth">S3: Simplicity With Hidden Depth</h4>
<p>S3 looks harmless at first glance: drop files into a bucket and call it a day. Then you stumble onto bucket policies, object ACLs, static hosting, lifecycle rules, versioning, and replication. Suddenly S3 stops feeling like Dropbox and starts feeling like infrastructure.</p>
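<p>One example of that hidden depth: a lifecycle configuration is just a small JSON document, but it quietly manages storage for you. This sketch (the prefix and day counts are illustrative) moves old logs to Glacier and eventually deletes them:</p>
<pre><code class="lang-json">{
  "Rules": [
    {
      "ID": "archive-old-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
</code></pre>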
<h4 id="heading-vpc-the-invisible-skeleton">VPC: The Invisible Skeleton</h4>
<p>Networking was the moment I realized the cloud isn’t magic. It’s just layers of routing and boundaries you can no longer ignore. Subnets, route tables, NAT gateways, internet gateways — none of it sounds exciting until your private subnet can’t reach the internet and your app quietly breaks.</p>
<pre><code class="lang-markdown">[VPC]
 |
 |-- Public Subnet  (EC2 + IGW)
 |
 |-- Private Subnet (RDS + NAT)
</code></pre>
<p><em>Once I understood that, half of my networking confusion evaporated.</em></p>
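<p>Concretely, the difference between the two subnets lives in their route tables. The gateway IDs below are placeholders; the shape is what matters, since the only real difference is where <code>0.0.0.0/0</code> points:</p>
<pre><code class="lang-markdown">Public subnet route table:        Private subnet route table:
  10.0.0.0/16 -&gt; local             10.0.0.0/16 -&gt; local
  0.0.0.0/0   -&gt; igw-xxxx          0.0.0.0/0   -&gt; nat-xxxx
</code></pre>
<p><em>If the private table has no route to a NAT gateway, there is simply no way out, which is exactly how an app “quietly breaks.”</em></p>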
<h3 id="heading-the-real-lesson">The Real Lesson</h3>
<p>AWS didn’t just teach me services. It taught me how software leaves your laptop and becomes something people can actually use. It taught me that failures are part of the journey, and most of the learning hides inside those failures: SSH timeouts, IAM errors, containers refusing to talk, private subnets with no exit route, pipelines that fail without context.</p>
<p>And weirdly, that’s what made it fun.</p>
<p>I didn’t fall into the cloud because it was trendy. I fell in because it changed how I see software: not as files, but as systems that breathe, break, and evolve.</p>
<p>I’m still learning. Still deploying. Still debugging. And that’s the story I’ll keep telling here.</p>
]]></content:encoded></item></channel></rss>