I have recently had a chance to revisit my homelab network setup, which means playing around with jails, vnet, and backups. I also refreshed my network and got a hefty used desktop to use as a router. It's a bit overkill with an Intel i5-6500 CPU, 8GB of DDR4 RAM, a 500GB spinning HDD, and a 4-port Intel i350 network card, but it was also the perfect opportunity to play around with OPNSense.
Beyond the basic setup of OPNSense, I wanted to have a separate physical port which is isolated from my home network. This will plug right into my homelab server's second NIC, to be used by VMs that are meant to be public.
Overview
There are 2 machines in play. One is the homelab server named Topoli, and the other is my OPNSense router named Pashmak.
The server has 3 Intel NICs: 1 is for IPMI, and the other two are named igb0 and igb1. The router has 5 NICs: igb0 to igb3 (the Intel i350 4-port card) and one em0 (from the motherboard). So far I have configured igb0 as WAN on the router, and the LAN interface is a bridge that includes igb1, igb2, and igb3.
On the server I also run many jails with vnet which are bridged to igb0.
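For reference, a minimal vnet jail attached to such a bridge can be declared in /etc/jail.conf along these lines. This is a sketch, not my actual config: the jail name, path, epair unit, and the bridge name bridge0 are all assumptions; adjust them to your setup.

```
# Hypothetical vnet jail bridged to the bridge containing igb0
# (bridge assumed to be named bridge0; epair0a stays on the host side,
#  epair0b moves into the jail).
myjail {
    vnet;
    vnet.interface = "epair0b";
    exec.prestart  = "ifconfig epair0 create";
    exec.prestart += "ifconfig epair0a up";
    exec.prestart += "ifconfig bridge0 addm epair0a";
    exec.poststop  = "ifconfig epair0a destroy";
    path = "/usr/local/jails/myjail";
    exec.start = "/bin/sh /etc/rc";
    exec.stop  = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```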
Physically, the server's IPMI and igb0 are connected to the LAN bridge on the router, and igb1 on the server is connected to em0 on the router. I have named em0 "PUBLIC" (you might see that reference later).
To set up OPNSense, read their fantastic documentation.
vm-bhyve on FreeBSD
I use vm-bhyve to manage my VMs on a FreeBSD host. I have a few VMs that need to be connected to my LAN, and one VM which is going to be public. I have opted to run OpenBSD in the public VM, mostly because I've been meaning to play around with it. It's also purported to be the most secure operating system out of the box, which helps.
I've come to really enjoy working with OpenBSD so far. I highly encourage everyone to at least give it a shot. I may at some point end up installing it on my daily machine.
To separate the networks on the VMs, first create, in /etc/rc.conf, the two bridges which vm-bhyve will use. Please note that I have vnet jails competing for attention when it comes to these bridges. vm-bhyve can automatically create the bridges if you so desire, and if the bridges are only used by vm-bhyve, that will work just fine.
cloned_interfaces="bridge0 bridge1"
ifconfig_bridge0_name="private_if"
create_args_bridge0="addm igb0"
ifconfig_bridge1_name="public_if"
create_args_bridge1="addm igb1"
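If you want the bridges available without a reboot, the same configuration can be applied by hand. This is a sketch mirroring the rc.conf lines above; the `name` parameter performs the same rename that `ifconfig_bridgeN_name` does at boot.

```
# Create and populate the bridges immediately (run as root)
ifconfig bridge0 create
ifconfig bridge0 name private_if
ifconfig private_if addm igb0 up
ifconfig bridge1 create
ifconfig bridge1 name public_if
ifconfig public_if addm igb1 up
```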
Then have vm-bhyve use each bridge for different VMs.
vm switch create -t manual -b private_if private_bridge
vm switch create -t manual -b public_if public_bridge
Now you can configure each VM to use either of the bridges.
$ vm configure myvm
...
network0_switch="private_bridge"
...
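For the public OpenBSD VM, the only networking change is the switch name. A full guest config might look something like the following; the loader choice, sizing, and disk settings are assumptions for illustration, not my actual values.

```
# Hypothetical vm-bhyve guest config for the public OpenBSD VM
loader="uefi"
cpu=2
memory=2G
network0_type="virtio-net"
network0_switch="public_bridge"
disk0_type="virtio-blk"
disk0_name="disk0.img"
```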
OPNSense
Create a new interface (e.g. named "PUBLIC") from the physical network interface (e.g. em0). Assign a static IP (e.g. 10.10.0.1/24) which is in a different subnet than your LAN, and enable DHCP like so.
[x] Enable DHCP server on the PUBLIC interface
Range: 10.10.0.100 to 10.10.0.254
DNS Servers: <your favorite DNS server>
You may also provide an alternate domain name. Notice I have added external DNS servers. This is to protect my internal network and avoid giving away names and IPs needlessly. Since this network path is not intended to communicate with my internal network, there is no need for me to provide DNS services either. Some call it defense in depth and others security through obscurity.
Firewalling is the next issue. I have set up the following rules in Firewall > Rules > PUBLIC. (I'm open to suggestions regarding the rules.)
Action | Protocol | Source | Port | Destination | Port | Gateway | Schedule |
---|---|---|---|---|---|---|---|
Block in | IPv4 | PUBLIC net | * | This Firewall | * | * | * |
Block out | IPv4 | * | * | ! WAN net | * | * | * |
Pass in | IPv4 | * | * | * | * | * | * |
Pass out | IPv4 | * | * | * | * | * | * |
This set of rules blocks connections from the PUBLIC subnet to the firewall and the local networks, and allows everything else.
The last thing remaining is to forward ports from incoming WAN connections to the VM host.
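As an illustration, a port forward for HTTPS to the public VM would look roughly like the following under Firewall > NAT > Port Forward. The VM's address is an assumption here; substitute whatever lease your VM actually gets.

```
Interface:       WAN
Protocol:        TCP
Destination:     WAN address
Dest. port:      443 (HTTPS)
Redirect target: 10.10.0.50 (the public VM)
Redirect port:   443
```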
Multiple NICs and DNS
When requesting 2 IP leases from your DHCP server (OPNSense in my case), the same hostname is sent with both requests. A situation therefore arises where dig myhost can yield 2 different IP addresses which are, in the above case, on two separate subnets. With the firewall set up to not allow crosstalk between the two, this adds delay and frustration when you can't access the server because the DNS server is handing out the wrong IP.
To alleviate this, I have modified /etc/dhclient.conf (see the dhclient(5) manpage, examples section) to send a different hostname on the public igb1 interface when requesting an IP.
interface "igb1" {
send host-name "pubtopoli";
}
Now requests to the server's hostname return the private subnet and ones to pubtopoli return the public subnet. I have no need to ever request the public subnet, but it shows up properly in OPNSense's leases.
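To confirm each name maps to the right subnet, you can query the DNS server directly. A sketch, assuming your resolver appends the right search domain:

```
dig +short topoli      # should return the LAN-side lease
dig +short pubtopoli   # should return an address in 10.10.0.0/24
```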