
Chassis clustering a Juniper SRX firewall via a switch


Intro

It is recommended that clustered SRX devices be directly connected. To do this, you need to run 2 cables: one for the control plane and one for the fabric. That's not always easy (or cheap) in a data centre environment where the firewalls are in different racks – especially given that the control link must be copper on most SRX devices, and is thus limited to 100m.

You can also cluster SRX devices by connecting the links into a switch. A common use for this would be to cluster 2 firewalls, each in a different rack, via your core switch virtual chassis.

tldr; (sorry, it’s still quite long)

You'll need to read the chassis cluster guide. Here's the one for the SRX300, SRX320, SRX340, SRX345, SRX550M and SRX1500. On pages 44 and 45 you will see diagrams of how the devices must be connected. Most SRX devices enforce the use of a particular port for the control plane. When clustered, the control port is renamed to something like fxp1 (on the SRX340/345, for example, ge-0/0/1 becomes fxp1). The fabric can usually be any port you like.

Connect the control and fabric ports of each SRX device into your switch.

The switch ports need to be configured like so:

  • MTU 8980
  • Access port (no VLAN tagging)
  • A dedicated VLAN per link – control and fabric each need their own VLAN (e.g. control = 701, fabric = 702), and each VLAN should contain only the 2 ports for that link (e.g. firewall 1's control port and firewall 2's control port)
  • IGMP snooping turned off
  • CDP/LLDP/other junk turned off

Before enabling clustering, you must delete any existing configuration for the control interface on each firewall. If you don't, the firewalls will error when loading the configuration as they come back up, leaving you stuck in a strange state. If you can, you may as well delete all interfaces:

edit
delete interfaces
commit
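
If you'd rather not wipe everything, deleting just the control port's configuration is enough. As a sketch, assuming an SRX340/345 (where the control port is ge-0/0/1):

edit
delete interfaces ge-0/0/1
commit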

Log into each firewall via its console port. Clustering is enabled with an operational-mode command (i.e. run it from the > prompt, not configure mode). On firewall 1:

set chassis cluster cluster-id 1 node 0 reboot

On firewall 2:

set chassis cluster cluster-id 1 node 1 reboot

Wait for the firewalls to finish rebooting. Check the status of the cluster like so:

show chassis cluster status

One node should be primary and the other secondary. Make sure you wait for all the “Monitor-failures” to clear before continuing.
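
For reference, a healthy cluster looks something like this (illustrative output – redundancy groups, priorities and failover counts will differ per deployment):

{primary:node0}
admin@fw-01> show chassis cluster status
Cluster ID: 1
Node   Priority  Status     Preempt  Interval  Monitor-failures

Redundancy group: 0 , Failover count: 1
node0  100       primary    no       no        None
node1  1         secondary  no       no        None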

Now you can work solely on the primary node… so you can log out of the secondary. You'll need to assign the physical ports that you connected up for the fabric to the interfaces fab0 and fab1. Note that the ports on the secondary device will have been renumbered: the on-board ports will no longer be ge-0/0/something, but rather something like ge-5/0/something. The number prefix depends on the model of SRX and, specifically, how many PIM slots it has. You'll need to read the chassis clustering guide to work out what to do for your model.

set interfaces fab0 fabric-options member-interfaces ge-0/0/2
set interfaces fab1 fabric-options member-interfaces ge-5/0/2
commit

Check the full cluster status (the run prefix is needed because you're still in configuration mode):

run show chassis cluster interfaces

You should see both control and fabric as Up.
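
Something like this (again illustrative – the fabric children match the fab0/fab1 members configured above):

admin@fw-01> show chassis cluster interfaces
Control link status: Up

Control interfaces:
    Index   Interface   Status
    0       fxp1        Up

Fabric link status: Up

Fabric interfaces:
    Name    Child-interface    Status
    fab0    ge-0/0/2           Up / Up
    fab1    ge-5/0/2           Up / Up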

Config for Juniper EX Series Switches

The below is the config for an EX series virtual chassis (VC). It's simpler than with unclustered switches, as you don't need to worry about carrying VLANs between switches. If you don't have a VC, you'll need to do a little more on top of this – see the trunk sketch after the config.

vlans {
    VLAN701 {
        description fw_control_link;
        vlan-id 701;
    }
    VLAN702 {
        description fw_fabric_link;
        vlan-id 702;
    }
}
protocols {
    igmp-snooping {
        vlan VLAN701 {
            disable;
        }
        vlan VLAN702 {
            disable;
        }
    }
    lldp {
        interface ge-0/0/17.0 {
            disable;
        }
        interface ge-4/0/17.0 {
            disable;
        }
        interface ge-0/0/18.0 {
            disable;
        }
        interface ge-4/0/18.0 {
            disable;
        }
    }
}
interfaces {
    ge-0/0/17 {
        description FW-01_Control_Link;
        mtu 8980;
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan {
                    members VLAN701;
                }
            }
        }
    }
    ge-0/0/18 {
        description FW-01_Fabric_Link;
        mtu 8980;
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan {
                    members VLAN702;
                }
            }
        }
    }
    ge-4/0/17 {
        description FW-02_Control_Link;
        mtu 8980;
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan {
                    members VLAN701;
                }
            }
        }
    }
    ge-4/0/18 {
        description FW-02_Fabric_Link;
        mtu 8980;
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan {
                    members VLAN702;
                }
            }
        }
    }
}
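
If your switches aren't in a VC, the extra work is mostly carrying the two VLANs across the inter-switch link. A minimal sketch, assuming ge-0/0/23 is the uplink port on each switch (a hypothetical port name – adjust it and add your other VLANs as needed):

interfaces {
    ge-0/0/23 {
        description inter_switch_trunk;
        mtu 9216;
        unit 0 {
            family ethernet-switching {
                port-mode trunk;
                vlan {
                    members [ VLAN701 VLAN702 ];
                }
            }
        }
    }
}

The trunk MTU just needs to be at least the 8980 configured on the access ports.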

Debugging

Check the status of nodes in the cluster:

show chassis cluster status

Find out which interfaces are in the cluster:

show chassis cluster interfaces

This will show you if data is being sent/received over the control and fabric links:

show chassis cluster statistics
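
Healthy output shows the heartbeat and probe counters incrementing, with no errors. Roughly (illustrative):

Control link statistics:
    Control link 0:
        Heartbeat packets sent: 1258
        Heartbeat packets received: 1255
        Heartbeat packet errors: 0
Fabric link statistics:
    Child link 0
        Probes sent: 1258
        Probes received: 1255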

Check whether the ARP table has entries for the other firewall (i.e. that they have layer 2 connectivity):

show arp | match fxp

Configuring Node-Specific Things

When you change the configuration on one node, it is automatically applied to the other node. However, some settings need to be specific to a single node – for example the hostname and management IP. You can put these settings into configuration groups named after the node, i.e. groups node0 and groups node1.

You'll also need to set apply-groups "${node}" so that the node-specific configuration is applied to the right node.

Example config below for configuring hostname and management IP:

groups {
    node0 {
        system {
            host-name fw-01;
        }
        interfaces {
            fxp0 {
                unit 0 {
                    family inet {
                        address 192.168.1.1/24;
                    }
                }
            }
        }
    }
    node1 {
        system {
            host-name fw-02;
        }
        interfaces {
            fxp0 {
                unit 0 {
                    family inet {
                        address 192.168.1.2/24;
                    }
                }
            }
        }
    }
}
apply-groups "${node}";
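
To confirm that each node is picking up its own group, the | display inheritance pipe annotates where inherited statements came from. Illustrative output on node0:

admin@fw-01> show configuration system host-name | display inheritance
##
## 'fw-01' was inherited from group 'node0'
##
host-name fw-01;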
