Publish "New Home Network Architecture" #1
```diff
@@ -1,6 +1,6 @@
 ---
 title: "New Home Network Architecture"
-date: "2025-11-19T11:30:00-07:00"
+date: "2025-11-20T07:30:00-07:00"
 description: "The architecture of my current home network"
 summary: "I decided to rearchitect my home network, and this is the result."
 ---
```
```diff
@@ -43,9 +43,10 @@ result.
 
 
 I had VLANs for everything coming from the VM hosts to the physical switches.
-Traffic would go in and out and in again so everything would route through the
-tiers like they're supposed to. I had more VMs acting as routers than I had
-VMs doing productive activity.
+Traffic would loop through the tiers (in, out, and back in), routing
+everything like it's supposed to, but this redundancy introduced unnecessary
+complexity. I had more VMs acting as routers than I had VMs doing productive
+activity.
 
 When I decided that I would like to start using IPv6 in my network, everything
 doubled. I kept the IPv4 and IPv6 traffic on separate VLANs, and had separate
```
```diff
@@ -65,7 +66,7 @@ came up with several principles:
   responsibility for security on the network, it tends to get complicated. You
   make physical design choices around isolating packets on the right segments
   and getting them to the right firewalls, where the network should focus on
-  moving traffic through it as quickly as possible. This goal of simply
+  moving traffic through it as quickly as possible. This principle of simply
   getting packets to where they need to be is how the Internet scales so well.
   We can keep physical LANs simple and efficient like this, and leave the
   security concerns to the endpoints. The endpoints need to be secure anyways,
```
```diff
@@ -124,7 +125,7 @@ improvements.
   to make maintenance more transparent with the ability to fail over.
 - **Virtual Machines**. Most of my workloads were set up as virtual machines.
   I wanted to migrate to [Kubernetes](https://kubernetes.io/) as everything
-  I was running could be run there, plus many other benefits.
+  I was running could be run there, along with many other benefits.
 - **NAT64**. Here I was running IPv6 to get away from needing NAT, but I still
   needed NAT. This setup was mostly working fine, but there were a few small
   irritations:
```
```diff
@@ -169,13 +170,13 @@ The key changes are:
 - My recursive DNS servers providing DNS lookups for the network are on two
   anycast addresses. Each edge router runs an instance and advertises one of
   the addresses; this is so I can "bootstrap" the network from the edge
-  routers. I also run the DNS service under Kubernetes, this advertises the
+  routers. I also run the DNS service under Kubernetes, which advertises the
   same anycast addresses using ordinary `LoadBalancer` services.
 - **IS-IS and BGP**. I took a few passes at getting this right. I first tried
   to move fully from IS-IS to BGP only. This meant setting up peering
   using IPv6 link local addresses, which worked, but it was a bit flaky under
   [FRR](https://frrouting.org/). I settled on using IS-IS on the fabric
-  interfaces only to exhange the IPv6 loopback addresses of each node. I use
+  interfaces only to exchange the IPv6 loopback addresses of each node. I use
   the globally routable loopback addresses for the BGP peering, which is much
   easier in practice. All of the other routes (access subnets, Kubernetes
   networks, anycast addresses, defaults from the edge routers) are exchanged
```
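As an aside for readers following along: an anycast DNS instance exposed from Kubernetes as described in the bullet above could look roughly like the sketch below. The names, namespace, and the documentation-range address `2001:db8::53` are all hypothetical, and depending on the load-balancer implementation the address may need to be requested via an annotation rather than `loadBalancerIP`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: recursive-dns        # hypothetical name
  namespace: dns             # hypothetical namespace
spec:
  type: LoadBalancer
  # Hypothetical anycast address; the edge routers advertise the same
  # address, so clients reach whichever instance is closest.
  loadBalancerIP: "2001:db8::53"
  ipFamilies: ["IPv6"]
  selector:
    app: recursive-dns
  ports:
    - name: dns-udp
      protocol: UDP
      port: 53
      targetPort: 53
    - name: dns-tcp
      protocol: TCP
      port: 53
      targetPort: 53
```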
```diff
@@ -187,10 +188,10 @@ The key changes are:
 - **BGP Extended-Nexthop**. An added bonus to using BGP the way that I am is
   that I could make use of the [BGP extended-nexthop](https://datatracker.ietf.org/doc/html/rfc8950)
   capability. The old network with only IS-IS still required me to define IPv4
-  subnets on the switching fabrics, nodes would use IPv4 addresses as the next
-  hop gateway addresses for IPv4 routes. With the extended-nexthop capability
-  in BGP, it uses the IPv6 link-local addresses for the next hop under both
-  IPv4 and IPv6.
+  subnets on the switching fabrics, nodes used IPv4 addresses as the next hop
+  gateway addresses for IPv4 routes. With the extended-nexthop capability in
+  BGP, it uses the IPv6 link-local addresses for the next hop under both IPv4
+  and IPv6.
 
 ### High Availability
 
```
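For concreteness, the IS-IS-plus-BGP scheme described above could be sketched as an FRR configuration along these lines. Everything here (the AS number, NET, addresses, and interface name) is made up for illustration, and the exact capability syntax should be checked against the FRR release in use:

```
! isisd: run IS-IS on the fabric interfaces only, purely to distribute
! the IPv6 loopback address of each node.
router isis FABRIC
 net 49.0001.0000.0000.0001.00
 is-type level-2-only
!
interface eth1
 ipv6 router isis FABRIC
!
! bgpd: iBGP peering over the globally routable loopbacks. The
! extended-nexthop capability (RFC 8950) lets IPv4 routes carry an IPv6
! next hop, so no IPv4 subnets are needed on the switching fabric.
router bgp 64512
 neighbor 2001:db8::2 remote-as 64512
 neighbor 2001:db8::2 update-source lo
 neighbor 2001:db8::2 capability extended-nexthop
 address-family ipv4 unicast
  neighbor 2001:db8::2 activate
 exit-address-family
```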
```diff
@@ -227,12 +228,14 @@ advertisements to the subnets. Both routers are configured to advertise their
 presence, as well as the subnet prefixes and DNS information. Client machines
 will pick these up, and then have both routers as their default gateways.
 
-For IPv4, I need to run VRRP to share a `".1"` address on the subnet that DHCP
-advertises as the default gateway. This works, however much less elegantly
-than the configuration with IPv6.
+While IPv6 configuration is seamless, IPv4 relies on VRRP to share a `".1"`
+default gateway address, which, though functional, lacks the elegance of
+IPv6's stateless design.
 
 ## It Works
 
 After I got this all in place, it was finally possible to build myself a
-working Kubernetes cluster and migrate all of my old services over to it.
-I'll get into that adventure in the next series of articles.
+working Kubernetes cluster and migrate all of my old services over to it. The
+transition to Kubernetes not only streamlined service management but also laid
+the foundation for future scalability and automation. I'll get into that
+adventure in the next series of articles.
```
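To illustrate the VRRP arrangement mentioned in the high-availability section, a minimal keepalived sketch for sharing the IPv4 `".1"` gateway between the two routers might look like this. The interface name, virtual router ID, and the documentation-range subnet are all assumptions, not the author's actual settings:

```
vrrp_instance V4_GATEWAY {
    state BACKUP            # both routers start as BACKUP; priority elects the master
    interface eth0          # hypothetical subnet-facing interface
    virtual_router_id 10
    priority 150            # use a lower priority (e.g. 100) on the peer router
    advert_int 1
    virtual_ipaddress {
        192.0.2.1/24        # the shared ".1" address DHCP hands out as the gateway
    }
}
```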