edits to "New Home Network Architecture"
@@ -1,6 +1,6 @@
 ---
 title: "New Home Network Architecture"
-date: "2025-11-19T11:30:00-07:00"
+date: "2025-11-20T07:30:00-07:00"
 description: "The architecture of my current home network"
 summary: "I decided to rearchitect my home network, and this is the result."
 ---
@@ -43,9 +43,10 @@ result.
 
 
 I had VLANs for everything coming from the VM hosts to the physical switches.
-Traffic would go in and out and in again so everything would route through the
-tiers like they're supposed to. I had more VMs acting as routers than I had
-VMs doing productive activity.
+Traffic would loop through the tiers (in, out, and back in), routing
+everything like it's supposed to, but this redundancy introduced unnecessary
+complexity. I had more VMs acting as routers than I had VMs doing productive
+activity.
 
 When I decided that I would like to start using IPv6 in my network, everything
 doubled. I kept the IPv4 and IPv6 traffic on separate VLANs, and had separate
@@ -65,7 +66,7 @@ came up with several principles:
   responsibility for security on the network, it tends to get complicated. You
   make physical design choices around isolating packets on the right segments
   and getting them to the right firewalls, where the network should focus on
-  moving traffic through it as quickly as possible. This goal of simply
+  moving traffic through it as quickly as possible. This principle of simply
   getting packets to where they need to be is how the Internet scales so well.
   We can keep physical LANs simple and efficient like this, and leave the
   security concerns to the endpoints. The endpoints need to be secure anyways,
@@ -124,7 +125,7 @@ improvements.
   to make maintenance more transparent with the ability to fail over.
 - **Virtual Machines**. Most of my workloads were set up as virtual machines.
   I wanted to migrate to [Kubernetes](https://kubernetes.io/) as everything
-  I was running could be run there, plus many other benefits.
+  I was running could be run there, along with many other benefits.
 - **NAT64**. Here I was running IPv6 to get away from needing NAT, but I still
   needed NAT. This setup was mostly working fine, but there were a few small
   irritations:
@@ -169,13 +170,13 @@ The key changes are:
 - My recursive DNS servers providing DNS lookups for the network are on two
   anycast addresses. Each edge router runs an instance and advertises one of
   the addresses; this is so I can "bootstrap" the network from the edge
-  routers. I also run the DNS service under Kubernetes, this advertises the
+  routers. I also run the DNS service under Kubernetes, which advertises the
   same anycast addresses using ordinary `LoadBalancer` services.
 - **IS-IS and BGP**. I took a few passes at getting this right. I first tried
   to move fully from IS-IS to BGP only. This meant setting up peering
   using IPv6 link local addresses, which worked, but it was a bit flaky under
   [FRR](https://frrouting.org/). I settled on using IS-IS on the fabric
-  interfaces only to exhange the IPv6 loopback addresses of each node. I use
+  interfaces only to exchange the IPv6 loopback addresses of each node. I use
   the globally routable loopback addresses for the BGP peering, which is much
   easier in practice. All of the other routes (access subnets, Kubernetes
   networks, anycast addresses, defaults from the edge routers) are exchanged
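A minimal sketch of the anycast-DNS half of this hunk, pinning the in-cluster resolvers to one of the anycast addresses with an ordinary `LoadBalancer` service. The namespace, labels, and address below are hypothetical, and it assumes a BGP-speaking load-balancer implementation (such as MetalLB in BGP mode) that announces the address into the fabric:

```yaml
# Hypothetical Service pinning the recursive DNS pods to a fixed anycast
# address. Assumes a load-balancer implementation (e.g. MetalLB in BGP mode)
# that advertises the LoadBalancer IP to the routers; all names and
# addresses are illustrative, not taken from the actual network.
apiVersion: v1
kind: Service
metadata:
  name: dns-anycast
  namespace: dns
spec:
  type: LoadBalancer
  loadBalancerIP: 2001:db8::53    # one of the two anycast resolver addresses
  selector:
    app: recursive-dns
  ports:
    - name: dns-udp
      protocol: UDP
      port: 53
      targetPort: 53
    - name: dns-tcp
      protocol: TCP
      port: 53
      targetPort: 53
```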
@@ -187,10 +188,10 @@ The key changes are:
 - **BGP Extended-Nexthop**. An added bonus to using BGP the way that I am is
   that I could make use of the [BGP extended-nexthop](https://datatracker.ietf.org/doc/html/rfc8950)
   capability. The old network with only IS-IS still required me to define IPv4
-  subnets on the switching fabrics, nodes would use IPv4 addresses as the next
-  hop gateway addresses for IPv4 routes. With the extended-nexthop capability
-  in BGP, it uses the IPv6 link-local addresses for the next hop under both
-  IPv4 and IPv6.
+  subnets on the switching fabrics, nodes used IPv4 addresses as the next hop
+  gateway addresses for IPv4 routes. With the extended-nexthop capability in
+  BGP, it uses the IPv6 link-local addresses for the next hop under both IPv4
+  and IPv6.
 
 ### High Availability
 
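A minimal FRR sketch of the arrangement described in the last two hunks: IS-IS runs only on the fabric interfaces to exchange node loopbacks, BGP peers between those loopbacks, and the extended-nexthop capability lets the IPv4 address family resolve through IPv6 next hops. Interface names, addresses, and the AS number are illustrative:

```
! IS-IS on the fabric interfaces, carrying only the node loopbacks.
router isis fabric
 net 49.0001.0000.0000.0001.00
!
interface lo
 ipv6 address 2001:db8:0:1::1/128
 ipv6 router isis fabric
!
interface eth0
 ! Fabric link: IS-IS only, no IPv4 subnet defined here.
 ipv6 router isis fabric
!
! iBGP between globally routable loopbacks; extended-nexthop (RFC 8950)
! allows IPv4 routes to carry IPv6 next hops.
router bgp 64512
 neighbor 2001:db8:0:1::2 remote-as 64512
 neighbor 2001:db8:0:1::2 update-source lo
 neighbor 2001:db8:0:1::2 capability extended-nexthop
 address-family ipv4 unicast
  neighbor 2001:db8:0:1::2 activate
 exit-address-family
```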
@@ -227,12 +228,14 @@ advertisements to the subnets. Both routers are configured to advertise their
 presence, as well as the subnet prefixes and DNS information. Client machines
 will pick these up, and then have both routers as their default gateways.
 
-For IPv4, I need to run VRRP to share a `".1"` address on the subnet that DHCP
-advertises as the default gateway. This works, however much less elegantly
-than the configuration with IPv6.
+While IPv6 configuration is seamless, IPv4 relies on VRRP to share a `".1"`
+default gateway address, which, though functional, lacks the elegance of
+IPv6's stateless design.
 
 ## It Works
 
 After I got this all in place, it was finally possible to build myself a
-working Kubernetes cluster and migrate all of my old services over to it.
-I'll get into that adventure in the next series of articles.
+working Kubernetes cluster and migrate all of my old services over to it. The
+transition to Kubernetes not only streamlined service management but also laid
+the foundation for future scalability and automation. I'll get into that
+adventure in the next series of articles.
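Sketches of both halves of the high-availability hunk, with illustrative interfaces, prefixes, and addresses. On the IPv6 side, each router announces its presence, the prefix, and DNS via router advertisements (radvd shown here), so clients hold both routers as default gateways; on the IPv4 side, keepalived shares the `".1"` between the two routers via VRRP:

```
# Hypothetical /etc/radvd.conf for one of the two routers; the peer runs
# the same block, so clients learn both as default gateways.
interface eth1 {
    AdvSendAdvert on;
    prefix 2001:db8:10::/64 {
        AdvOnLink on;
        AdvAutonomous on;
    };
    RDNSS 2001:db8::53 {
    };
};
```

```
# Hypothetical /etc/keepalived/keepalived.conf: the two routers share the
# IPv4 ".1" that DHCP hands out as the default gateway. The peer runs the
# same block with state BACKUP and a lower priority.
vrrp_instance lan_v4 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 150
    advert_int 1
    virtual_ipaddress {
        192.0.2.1/24
    }
}
```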