Proxmox Containers: Manage network interfaces file yourself

Proxmox manages the network interfaces file (/etc/network/interfaces on Debian) for you inside containers. This is sometimes annoying – for example, when you want to add static routes with a post-up line.

If you simply touch /etc/network/.pve-ignore.interfaces inside the container, Proxmox will stop managing the file and let you manage it yourself – and any post-up lines that you add will thus be honoured.
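
For example, to take over the file and add a static route (the interface name, addresses and route below are just placeholders, not from a real setup):

  touch /etc/network/.pve-ignore.interfaces

  # /etc/network/interfaces inside the container
  auto eth0
  iface eth0 inet static
      address 192.0.2.10/24
      gateway 192.0.2.1
      post-up ip route add 198.51.100.0/24 via 192.0.2.254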

Hetzner: Routing IPv6 through a firewall such as pfSense

Hetzner offer very well specified physical servers at extremely low prices. I’ve used them for many years and they’ve proved extremely reliable. With each server, Hetzner give you a single IPv4 address and a /64 IPv6 subnet. You can also run virtualization software such as Proxmox, and it’s often desirable to run a firewall such as pfSense in a virtual machine to protect the other virtual machines.

All good in principle, but the /64 IPv6 subnet has caused some confusion. Surely you need some more address space to be able to route the /64 subnet? It turns out, no. Hetzner don’t use NDP or proper IPv6 routing… they seem to just deliver the address space to the server (probably via static NDP entries mapping your /64 to your server’s MAC address). This actually works to our advantage, because you do not need to assign the physical server any IPv6 addresses from the issued /64.

Broadly, the setup looks like this:

  • Physical server does not have an IPv6 address assigned to its physical interface
  • Physical server has IPv6 forwarding turned on
  • Proxmox (thus the physical server) has a private IPv6 address assigned to the bridge (vmbr) interface that it shares with pfSense
  • pfSense WAN interface has another private IPv6 address in the same subnet as the vmbr assigned to it
  • pfSense “LAN” interface has an address from your public /64 assigned to it
  • pfSense uses SLAAC to assign IPs in your /64 to the VMs behind it
  • Physical server has a route to your assigned /64, via the private IP you assigned to your pfSense WAN interface
  • Physical server has a default IPv6 route to fe80::1

Here’s a picture where the assigned /64 is 2a01:4f8:66b:12d9::/64 and the private IPv6 /64 used between Proxmox and pfSense is a unique local address (ULA) prefix that has been chosen fairly randomly:

[Diagram: Hetzner IPv6 Routing]

Here are the relevant parts of the network config on the physical Proxmox server:

  auto vmbr0
  iface vmbr0 inet static
      address 10.69.69.1
      netmask 255.255.255.240
      ovs_type OVSBridge
      post-up route -A inet6 add default gw fe80::1 dev enp0s31f6
      post-up route -A inet6 add 2a01:4f8:66b:12d9::/64 gw fda2:5d88:d5a3:1d4d::2
      post-up echo 1 > /proc/sys/net/ipv4/ip_forward
      post-up echo 1 > /proc/sys/net/ipv6/conf/all/forwarding

  iface vmbr0 inet6 static
      address fda2:5d88:d5a3:1d4d::1
      netmask 64
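
If you prefer, the forwarding settings from the two post-up echo lines can instead be made persistent via sysctl – a minimal sketch, with an arbitrary file name:

  # /etc/sysctl.d/99-forwarding.conf
  net.ipv4.ip_forward = 1
  net.ipv6.conf.all.forwarding = 1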

If you have any questions, leave a comment.

Chassis clustering a Juniper SRX firewall via a switch

Intro

It is recommended that clustered SRX devices are directly connected. To do this, you need to run two cables: one for the control plane and the other for the fabric. This is sometimes not easy (or cheap) in a data centre environment where the firewalls are in different racks – especially given that, on most SRX devices, the control link must be copper and is thus limited to 100m.

You can also cluster SRX devices by connecting the links into a switch. A common use for this would be to cluster two firewalls, each in a different rack, via your core switches.

tldr; (sorry, it’s still quite long)

You’ll need to read the chassis cluster guide. Here’s the one for the SRX 300, 320, 340, 345, 550 and 1500. On pages 44 and 45 you will see diagrams of how the devices must be connected. Most SRX devices enforce the use of a particular port for the control plane. When clustered, the control port will be renamed to something like fxp1. The fabric can usually be any port you like.

Connect the control and fabric ports of each SRX device into your switch.

The switch ports need to be configured like so:

  • MTU 8980
  • Access port (no VLAN tagging)
  • A unique VLAN – control and fabric need their own VLAN (e.g. control = 701, fabric = 702). The VLAN should have only 2 ports in it (e.g. firewall 1 control port and firewall 2 control port)
  • IGMP snooping turned off
  • CDP/LLDP/other junk turned off
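
For reference, on a Juniper EX switch the requirements for one of the control-link ports might look like this in set form (using the example port and VLAN numbers from this post – the full curly-brace config is further down):

  set vlans VLAN701 vlan-id 701
  set interfaces ge-0/0/17 description FW-01_Control_Link
  set interfaces ge-0/0/17 mtu 8980
  set interfaces ge-0/0/17 unit 0 family ethernet-switching port-mode access
  set interfaces ge-0/0/17 unit 0 family ethernet-switching vlan members VLAN701
  set protocols igmp-snooping vlan VLAN701 disable
  set protocols lldp interface ge-0/0/17.0 disable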

On each firewall, you must first delete any existing configuration for the control interface. If you don’t do this, you’ll be stuck in a strange state when the firewalls come back up, as they will error when loading the configuration. If you can, you may as well delete all interfaces:

  edit
  delete interfaces
  commit

Log into each firewall via its console port. On firewall 1:

  set chassis cluster cluster-id 1 node 0 reboot

On firewall 2:

  set chassis cluster cluster-id 1 node 1 reboot

Wait for the firewalls to finish rebooting. Check the status of the cluster like so:

  show chassis cluster status

One node should be primary and the other secondary. Make sure you wait for all the “Monitor-failures” to clear before continuing.

Now you can work solely on the primary node… so you can log out of the secondary. You’ll need to assign the physical ports that you connected up for the fabric to the interfaces fab0 and fab1. Note that the ports on the secondary device will have been re-numbered. That is to say, the on-board ports will no longer be ge-0/0/something, but will instead be something like ge-5/0/something. The number prefix depends on the model of SRX and, specifically, how many PIM slots it has. You’ll need to read the chassis clustering guide to work out what to do for your model.

  set interfaces fab0 fabric-options member-interfaces ge-0/0/2
  set interfaces fab1 fabric-options member-interfaces ge-5/0/2
  commit

Check the full cluster status:

  run show chassis cluster interfaces

You should see both control and fabric as Up.

Config for Juniper EX Series Switches

Below is the config for an EX series virtual chassis (VC). It’s simpler than if you had unclustered switches, as you don’t need to worry about carrying VLANs between switches. If you don’t have a VC, you’ll need to do a little more on top of this.

  vlans {
      VLAN701 {
          description fw_control_link;
          vlan-id 701;
      }
      VLAN702 {
          description fw_fabric_link;
          vlan-id 702;
      }
  }
  protocols {
      igmp-snooping {
          vlan VLAN701 {
              disable;
          }
          vlan VLAN702 {
              disable;
          }
      }
      lldp {
          interface ge-0/0/17.0 {
              disable;
          }
          interface ge-4/0/17.0 {
              disable;
          }
          interface ge-0/0/18.0 {
              disable;
          }
          interface ge-4/0/18.0 {
              disable;
          }
      }
  }
  interfaces {
      ge-0/0/17 {
          description FW-01_Control_Link;
          mtu 8980;
          unit 0 {
              family ethernet-switching {
                  port-mode access;
                  vlan {
                      members VLAN701;
                  }
              }
          }
      }
      ge-0/0/18 {
          description FW-01_Fabric_Link;
          mtu 8980;
          unit 0 {
              family ethernet-switching {
                  port-mode access;
                  vlan {
                      members VLAN702;
                  }
              }
          }
      }
      ge-4/0/17 {
          description FW-02_Control_Link;
          mtu 8980;
          unit 0 {
              family ethernet-switching {
                  port-mode access;
                  vlan {
                      members VLAN701;
                  }
              }
          }
      }
      ge-4/0/18 {
          description FW-02_Fabric_Link;
          mtu 8980;
          unit 0 {
              family ethernet-switching {
                  port-mode access;
                  vlan {
                      members VLAN702;
                  }
              }
          }
      }
  }

Debugging

Check the status of nodes in the cluster:

  show chassis cluster status

Find out which interfaces are in the cluster:

  show chassis cluster interfaces

This will show you if data is being sent/received over the control and fabric links:

  show chassis cluster statistics

Check whether the ARP table has entries for the other firewall (i.e. the two nodes have layer 2 connectivity):

  show arp | match fxp

Configuring Node Specific Things

When you change the configuration on one node, it will automatically be applied to the other node. However, you will want some settings that are specific to a single node – for example, the hostname and management IP. You can put these settings into groups named after each node, e.g. groups node0.

You’ll also need to set apply-groups "${node}" in order to have the node-specific configuration apply to the right node.

Example config below for configuring hostname and management IP:

  groups {
      node0 {
          system {
              host-name fw-01;
          }
          interfaces {
              fxp0 {
                  unit 0 {
                      family inet {
                          address 192.168.1.1/24;
                      }
                  }
              }
          }
      }
      node1 {
          system {
              host-name fw-02;
          }
          interfaces {
              fxp0 {
                  unit 0 {
                      family inet {
                          address 192.168.1.2/24;
                      }
                  }
              }
          }
      }
  }
  apply-groups "${node}";
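
The same thing in set-command form, if you’d rather paste it in that way:

  set groups node0 system host-name fw-01
  set groups node0 interfaces fxp0 unit 0 family inet address 192.168.1.1/24
  set groups node1 system host-name fw-02
  set groups node1 interfaces fxp0 unit 0 family inet address 192.168.1.2/24
  set apply-groups "${node}"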

Changing the outgoing SMTP (sending) IP address in Postfix

This is far easier than I thought it would be. I had to change it to get around some blacklisting that my primary IP picked up after an unfortunate spamming incident from a compromised user account.

Just add the following to your Postfix main.cf and restart Postfix:

  smtp_bind_address=1.2.3.4

Where 1.2.3.4 is your new outgoing IP address.
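
If you prefer, you can make the same change from the command line with postconf, then restart Postfix and check the value took effect (1.2.3.4 is again a placeholder):

  postconf -e smtp_bind_address=1.2.3.4
  systemctl restart postfix
  postconf smtp_bind_address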

How to detect Sonos devices on your network

A potential project requires me to detect Sonos devices on a network and find their IP addresses. Since Sonos only has two MAC address OUIs, an ARP scanner seemed the best way to do this. As such, I rightfully reinvented the wheel and wrote a slightly glorified ARP scanner with detection for the Sonos OUIs. Sadly, PHP wasn’t quite suitable for this due to its lack of AF_PACKET sockets, so it’s in C.

It’ll run on Linux and scan a network when given an interface name. Tweaks more than welcome as GitHub Pull Requests.

Code’s on my GitHub at https://github.com/phil-lavin/sonos-detector.
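
For illustration, here’s a rough sketch of the OUI-matching idea at the heart of it – this is not the code from the repo, and the OUI bytes below are placeholders rather than real Sonos prefixes:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  /* Placeholder OUIs – swap in the real Sonos prefixes. */
  static const uint8_t sonos_ouis[][3] = {
      {0x00, 0x11, 0x22},
      {0xAA, 0xBB, 0xCC},
  };

  /* Return true if the first three bytes (the OUI) of the MAC match a known prefix. */
  static bool is_sonos_mac(const uint8_t mac[6])
  {
      for (size_t i = 0; i < sizeof(sonos_ouis) / sizeof(sonos_ouis[0]); i++) {
          if (memcmp(mac, sonos_ouis[i], 3) == 0)
              return true;
      }
      return false;
  }

  int main(void)
  {
      /* In the real tool this MAC would come from an ARP reply read off an AF_PACKET socket. */
      const uint8_t mac[6] = {0x00, 0x11, 0x22, 0x33, 0x44, 0x55};

      printf("%s\n", is_sonos_mac(mac) ? "Looks like a Sonos device" : "Not a Sonos device");
      return 0;
  }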