Hybrid Cloud: A Lab Concept for a Simple AWS ↔ On Premise Bridge

Most tutorials assume you’ll expose services from your premises through a public static IP. But that’s risky: a small residential or business connection is easy to DDoS and has no redundancy.
Instead of publishing my on-premise LAN directly, I decided to operate public IPs from AWS and route the traffic through a site-to-site VPN back to my own infrastructure. This approach lets me:

  • use AWS Elastic IPs as the public face of my services,
  • keep all real workloads running safely on my local network,
  • add or move services without changing public DNS.

As a result, AWS is the IP gateway through which public traffic arrives at the on-premise network where all the workloads are deployed. In my case, I’ve opted for a WireGuard tunnel which joins these networks, so packets can travel in both directions as if they were on one extended LAN.

Networking pieces

Purpose      Subnet           Notes
AWS VPC      172.31.0.0/16    Where the EC2 and WireGuard gateway live
WireGuard    10.8.0.0/24      Point-to-point overlay between AWS and home
Home LAN     10.10.0.0/24     Where servers and clients actually reside

Routing logic

  • On the AWS side, the VPC route table points 0.0.0.0/0 to the WireGuard interface.
    That means any public traffic that reaches the Elastic IPs can be forwarded through the tunnel.
  • On the on-premise router, a static route for 172.31.0.0/16 points to the local WireGuard peer,
    which gives the LAN full reachability to the cloud (see the sketch after this list).
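
To make this work end to end, the AWS gateway instance has to forward packets, and the on-premise router needs the static route described above. A minimal sketch, assuming the local WireGuard peer sits at 10.10.0.2 (an address I picked purely for illustration); if you point the VPC route table at the instance, remember to also disable its source/destination check:

# On the AWS WireGuard gateway: let the instance forward packets
sudo sysctl -w net.ipv4.ip_forward=1

# On the on-premise router: send VPC-bound traffic to the local WireGuard peer
sudo ip route add 172.31.0.0/16 via 10.10.0.2

With routing in place, the WireGuard configuration ties everything together (the [Interface] section and the first [Peer] live on the AWS gateway; keys and endpoints redacted):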
[Interface]
Address = 10.8.0.2/24
ListenPort = 51820
PrivateKey = <redacted>

# Firewall rules for routing between interfaces
PostUp = ufw allow in on wg0 to any
PostUp = ufw route allow in on wg0 out on ens5
PostUp = ufw route allow in on ens5 out on wg0
PostUp = ufw route allow in on wg0 out on ens6
PostUp = ufw route allow in on ens6 out on wg0
PreDown = ufw route delete allow in on wg0 out on ens5
PreDown = ufw route delete allow in on ens5 out on wg0
PreDown = ufw route delete allow in on wg0 out on ens6
PreDown = ufw route delete allow in on ens6 out on wg0
PreDown = ufw delete allow in on wg0 to any

[Peer]
# The on-premise peer, as seen from the AWS gateway
PublicKey = <redacted>
PersistentKeepalive = 25
AllowedIPs = 10.8.0.0/24, 10.10.0.0/24
Endpoint = <redacted>:51820

# The corresponding peer entry on the on-premise side, pointing back at the
# AWS gateway: it routes the overlay subnet and the VPC CIDR through the tunnel
[Peer]
PublicKey = <redacted>
AllowedIPs = 10.8.0.0/24, 172.31.0.0/16
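
Bringing the tunnel up is just a matter of wg-quick on both ends (assuming the config is saved as /etc/wireguard/wg0.conf, and with 10.10.0.10 standing in for any host on the home LAN):

# Start the tunnel and check that a handshake happens
sudo wg-quick up wg0
sudo wg show wg0

# From the AWS gateway, reach a host on the home LAN through the tunnel
ping -c 3 10.10.0.10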

When the tunnel comes up:

  • AWS sees 10.10.0.0/24 as a reachable subnet through the WireGuard peer.
  • Your LAN knows to reach 172.31.0.0/16 through its WireGuard endpoint.
    Packets flow through AWS’s Elastic IPs → WireGuard gateway → your LAN router → internal hosts.

Connecting my on-premise network to AWS this way turned out to be an oddly liberating exercise. For the cost of a tiny EC2 instance and some routing rules, I now have a hybrid environment where my local machines appear to live in the cloud.

While it’s not a replacement for enterprise solutions like AWS Site-to-Site VPN or Transit Gateway (there’s no high availability and no built-in metrics), it works perfectly for my labs, testing, and small projects.

Thanks for reading!

Today I Learned: How to open etcd ports on Firewalld

Today I learned how to properly open etcd ports using firewalld. Here’s a quick guide:

firewalld ships with a set of predefined services, each mapping to one or more ports. Find the etcd-related ones:

sudo firewall-cmd --get-services | grep etcd

You’ll find two services: etcd-client (port 2379/tcp) and etcd-server (port 2380/tcp).
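
If you want to double-check exactly which ports a predefined service opens, firewall-cmd can describe it:

sudo firewall-cmd --info-service=etcd-client
sudo firewall-cmd --info-service=etcd-server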

Identify Active Firewall Zones:

sudo firewall-cmd --get-active-zones 

Example output:

FedoraServer (default)
  interfaces: enp0s20f2

Add these 2 services to your zone:

sudo firewall-cmd --zone=FedoraServer --add-service=etcd-client --permanent
sudo firewall-cmd --zone=FedoraServer --add-service=etcd-server --permanent
sudo firewall-cmd --reload

P.S. Make sure to check your active zone!

Verify the open services:

sudo firewall-cmd --list-services
etcd-client etcd-server

Today I learned: How to forward a port using ufw

To forward a port with ufw you have to edit the raw iptables rules that ufw loads from /etc/ufw/before.rules.

Add the following NAT rules to before.rules (they usually go near the top, before the *filter section):

*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 11.11.11.11:80
-A POSTROUTING ! -o lo -j MASQUERADE
# don't delete the 'COMMIT' line or these rules won't be processed
COMMIT

-A POSTROUTING ! -o lo -j MASQUERADE rewrites the source address of the forwarded traffic so the destination replies back through this host. The ! -o lo part keeps the loopback interface out of the masquerading, which would otherwise break DNS resolution.
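
For the DNAT rule to actually forward traffic to another host, the kernel also has to be allowed to route packets between interfaces. With ufw that usually means two more settings (a sketch; adjust to your own policy):

# /etc/ufw/sysctl.conf – uncomment (or add) this line
net/ipv4/ip_forward=1

# /etc/default/ufw – allow forwarded traffic by default
DEFAULT_FORWARD_POLICY="ACCEPT"

# Reload ufw so the NAT block and the new defaults are picked up
sudo ufw reload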

Today I learned: Fedora and HAProxy SELinux

I ran into a problem with HAProxy and SELinux: the backend seemed unreachable and I was getting 503 responses. Inspecting the SELinux logs made it clear that certain changes must be made when working in an SELinux-enforcing environment. I had no such issues on other systems, only on Red Hat family OSes.

  • haproxy_connect_any is the name of the SELinux boolean being toggled in the command below.
  • -P makes the change persistent across reboots.
  • 1 is the value being set, which enables the permission or behavior associated with the haproxy_connect_any boolean.

This configuration was necessary for me when using Certbot with HAProxy on Fedora. I could use each of them separately, yet I couldn’t have the Certbot server defined in HAProxy on a different port (e.g. port 380) for a more complex configuration.

setsebool -P haproxy_connect_any 1

This is necessary because HAProxy needs broader permissions than the default policy grants. You can see exactly what the boolean unlocks:

sesearch -A | grep haproxy_connect_any
allow haproxy_t packet_type:packet recv; [ haproxy_connect_any ]:True
allow haproxy_t packet_type:packet send; [ haproxy_connect_any ]:True
allow haproxy_t port_type:tcp_socket name_bind; [ haproxy_connect_any ]:True
allow haproxy_t port_type:tcp_socket name_connect; [ haproxy_connect_any ]:True
allow haproxy_t port_type:tcp_socket { recv_msg send_msg }; [ haproxy_connect_any ]:True
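
To confirm the boolean is set, you can query it directly:

getsebool haproxy_connect_any
haproxy_connect_any --> on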

Today I learned: How to add port 80 and 443 to Firewalld

First we need to find the active firewalld zones:

sudo firewall-cmd --get-active-zones

Then we add the http and https services to the zone permanently and reload:

sudo firewall-cmd --zone=public --add-service=http --permanent
sudo firewall-cmd --zone=public --add-service=https --permanent
sudo firewall-cmd --reload

Alternatively, we can open the ports directly:

sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
sudo firewall-cmd --zone=public --add-port=443/tcp --permanent
sudo firewall-cmd --reload
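
Verify what is now open in the zone:

sudo firewall-cmd --zone=public --list-all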

Today I learned: SSH permissions on Linux and ssh-agent

To ensure SSH works properly it’s necessary to set up the right permissions: the ~/.ssh directory must be 700, authorized_keys and private keys must be 600, and public keys should be 644 (see the commands below).
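
A quick way to apply all of that, assuming an ed25519 key pair (substitute your own key file names):

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys ~/.ssh/id_ed25519
chmod 644 ~/.ssh/id_ed25519.pub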

To start ssh-agent you must execute:

eval $(ssh-agent -s)

To add a key to the agent use:

ssh-add path/to/key

Today I learned: Firewalld Masquerade & Docker

While using Docker on Fedora 34 I ran into an issue where my containers would not communicate properly. I had a Docker Compose configuration with an internal network and a default bridge network: I could ping the various servers from inside the containers, but connections to specific ports kept failing. ICMP traffic worked while everything else didn’t, which points to a firewall configuration problem.

Fedora 34 uses firewalld as its firewall, with iptables underneath. Solving this particular problem requires enabling masquerading. First we need to find the active firewall zones:

sudo firewall-cmd --get-active-zones

FedoraServer
  interfaces: eth0
docker
  interfaces: br-0eb49bac0303 docker0

In my case I am interested in the FedoraServer zone:

sudo firewall-cmd --zone=FedoraServer --add-masquerade --permanent
sudo firewall-cmd --reload
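
You can confirm that masquerading is now active on the zone:

sudo firewall-cmd --zone=FedoraServer --query-masquerade
yes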

Then I would recommend restarting the Docker daemon:

sudo systemctl restart docker

This should do the trick!

Today I learned: Activating Varnish cache for Magento 2

Today I will do my best to explain how to configure Varnish cache for Magento 2.4.2.

I used the official Docker image for Varnish and built a simple Dockerfile, along with the Magento-recommended ENV value:

FROM varnish:6.6.0-1
COPY default.vcl /etc/varnish
ENV VARNISH_SIZE=2G

The default.vcl file is provided below:

vcl 4.0;

backend default {
  .host = "webserver";
  .port = "80";
  .first_byte_timeout = 600s;
}

As you can see, it’s a very simple configuration for bootstrapping the service. The host in this case is the DNS name of your Magento web server (e.g. Apache or NGINX). For this experiment we are “baking” the config into the image at build time. Once Magento is up and running we can connect to Varnish and access our site.
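
Building and running it could look like this; the image tag, the magento network name and the port mapping are my own placeholders (the official varnish image listens on port 80 inside the container), so adapt them to your Compose setup:

# Build the image with the baked-in default.vcl
docker build -t magento-varnish .

# Run it on the same Docker network as the "webserver" container
docker run -d --name varnish --network magento -p 80:80 magento-varnish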

But to actually use Varnish we have to configure it in the admin panel and adjust the default.vcl file. Log in to the admin panel, then go to Stores > Settings > Configuration > Advanced > System > Full Page Cache. On that page select Varnish Cache as the Caching Application, and under Varnish Configuration set your server host and port. Then press Export VCL for Varnish 6 and make sure to save your changes. Once you have downloaded the VCL file, we will have to adjust it.
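
The same switch can also be made from the command line, which is handy for scripted deployments. A sketch using bin/magento, where the value 2 selects Varnish as the full page cache backend and varnish:80 stands for wherever your Varnish container answers:

# Select Varnish as the full page cache application
bin/magento config:set system/full_page_cache/caching_application 2

# Tell Magento where to send cache purge requests
bin/magento setup:config:set --http-cache-hosts=varnish:80

# The base VCL can also be exported from the CLI instead of the admin panel
bin/magento varnish:vcl:generate --export-version=6 > default.vcl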

First of all, I should mention that starting with Magento 2.4.2 the webroot is supposed to be the pub folder. This is relevant for configuring Varnish because the base configuration generated by the admin panel includes the pub folder in various paths. The problem is that, since the content is already served from pub, Varnish has no idea what pub is: from its point of view it only sees the webroot. If the pub references stay in, Varnish can’t reach the health_check.php probe and you end up with a 503 error.

So simply remove the pub references from the config. For example, instead of:

backend default {
    .host = "webserver";
    .port = "80";
    .first_byte_timeout = 600s;
    .probe = {
        .url = "/pub/health_check.php";
        .timeout = 2s;
        .interval = 5s;
        .window = 10;
        .threshold = 5;
   }
}

Write it like this:

backend default {
    .host = "webserver";
    .port = "80";
    .first_byte_timeout = 600s;
    .probe = {
        .url = "/health_check.php";
        .timeout = 2s;
        .interval = 5s;
        .window = 10;
        .threshold = 5;
   }
}

Once you’ve done that, restart Varnish and the site should be available again. Thanks for reading.