Maxing out the spec of an HPE ML350 Gen9 server

Intro

I recently did a post on the ML350 Gen9 as a home server. This server was incredibly configurable when originally purchased from HP, and upgrading it isn’t quite as simple as just adding more parts: sometimes you’ll need to upgrade existing parts to support the new ones.

This post talks about how you would spec this server to the max. Unless you have money to burn, you probably won’t do all of these things but hopefully it’ll provide some guidance for the particular thing you were googling for when you found this post.

The HP ML350 Gen9 user manual is a good source of information for this stuff.

Maxing the Power

I mention power first as it’s a key consideration of upgrading everything else. You will need to make sure you have enough power (watts) and connectors to support your upgrades.

The ML350 Gen9 usually has a power backplane which supports up to 2 PSUs. PSUs are normally 500W (HP part numbers 723595-101 and 754377-001). You can upgrade these to 800W PSUs (HP part number 723600-101).

In order to get even more power, you can upgrade the power backplane to a 4 PSU model (HP part number 802988-B21). Picture below. You’ll obviously also need more PSUs if you want to fully populate this. You will need to do this if you want to run more than 3 drive bays or 2 GPUs as the 2 PSU backplane doesn’t have enough connectors.

Remember to account for redundancy as well as raw power. If you have 2x 500W PSUs, you really have 500W of usable power, as you should budget for losing one of those PSUs. If you have 4x 500W PSUs connected to two redundant power feeds, you have 1000W of usable power, as you should budget for losing an entire feed.

That said, redundancy isn’t essential. You can get 3200W of power if you install 4x 800W PSUs and aren’t bothered about redundancy. You may need to do this if you want to run lots of high-powered GPUs.
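The redundancy rules above can be sketched in a few lines of Python as a sanity check (the function name and the N+1 / two-feed framing are my own, not HP's):

```python
# Rough power-budget sketch for the ML350 Gen9 PSU options discussed above.
def usable_watts(psu_watts, psu_count, redundant=True, feeds=1):
    """Usable power after setting aside capacity for a PSU or feed failure."""
    total = psu_watts * psu_count
    if not redundant:
        return total                        # no failover headroom at all
    if feeds == 2:
        return total // 2                   # survive losing a whole power feed
    return psu_watts * (psu_count - 1)      # N+1: survive losing one PSU

print(usable_watts(500, 2))                  # 500  (2x 500W, N+1)
print(usable_watts(500, 4, feeds=2))         # 1000 (4x 500W across two feeds)
print(usable_watts(800, 4, redundant=False)) # 3200 (no redundancy)
```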

Maxing the Fans

A single CPU ML350 Gen9 needs 3 fans (slots 2-4) whereas dual CPU needs 4 fans (slots 1-4).

The ML350 Gen9 supports a redundant fan configuration. By default, the server will shut down if a fan fails. You can disable this in the BIOS, though it’s not advised as the server may overheat. Adding another 3 fans (slots 6-8) for a single CPU, or another 4 fans (slots 5-8) for dual CPUs, makes the fans redundant. This allows cooling to continue when a fan fails and, as such, the server won’t shut down.

To upgrade to redundant fans, you technically need the HP redundant fan kit (HP part number 725878-B21). This includes 4 fans and 2 PCI-E air baffles. The air baffles clip onto the main air baffle that sits over the CPUs and keep air flowing over the PCI-E cards while the case is open for a fan change. Realistically, you can get away with just the fans (HP part numbers 768954-001 and 780976-001) as long as you’re not running super-hot PCI-E cards and don’t take too long to change the fans. That said, check for fan kits first, as they may be cheaper than 4 individual fans if you can find one.

Maxing the CPUs

The ML350 Gen9 supports up to two E5-2600 series v3 or v4 processors. The biggest CPUs you can get are a pair of 22-core E5-2699 v4s. In practice, E5-2699 v4s seem very rare and expensive. The best bang for your buck is a pair of 14-core E5-2690 v4s.

To install 2x CPUs, you’ll need another heatsink (HP part number 769018-001) and another fan (HP part numbers 768954-001 and 780976-001) as you need at least 4 fans for a dual CPU configuration.

You will also need to either distribute your RAM evenly across the slots for both CPUs or buy more RAM. If you have an odd number of DIMMs, you’ll need to make it even so that it can be split equally across both CPUs.

Check the power draw of your CPUs on the Intel website. You may need bigger PSUs if you want to run bigger CPUs alongside other power-hungry components. See the Maxing the Power section above for info.

Maxing the Memory

Your CPU dictates the maximum memory you can have. The table below shows the finer details.

The table states that you can have 1TB of RAM if you have E5-2600 v4 CPUs and use 16x 64GB quad-rank LRDIMMs. That said, the motherboard has 24 slots and the Kingston website claims you can have up to 1.5TB if you use 24x 64GB LRDIMMs and 3TB if you use 24x 128GB LRDIMMs. How true this is, I don’t know.
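For what it’s worth, the capacities quoted above do follow from simple multiplication. A quick loop over the slot counts and DIMM sizes mentioned in the text:

```python
# DIMM configurations discussed above: slot count x DIMM size -> total capacity.
for slots, dimm_gb in [(16, 64), (24, 64), (24, 128)]:
    total_tb = slots * dimm_gb / 1024
    print(f"{slots} x {dimm_gb}GB = {total_tb:.1f}TB")
```

This prints 1.0TB, 1.5TB and 3.0TB respectively, matching the HP and Kingston figures (whether the board actually supports the larger two is another matter, as noted above).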

To max the memory, you will also need 2x E5-2600 v4 series CPUs. See Maxing the CPUs section above for more info.

You must also consider power consumption. You should budget about 3W of power per 8GB of RAM, so if you have 1TB of RAM, you should budget 384W for the memory alone. Combined with the 270W draw of a pair of E5-2690 v4s, you will need bigger PSUs to handle this much RAM. See the Maxing the Power section above for more info.
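The 3W-per-8GB rule of thumb is easy to script if you want to try other capacities (the helper name is mine):

```python
# Memory power estimate using the ~3W per 8GB rule of thumb from the text.
def ram_watts(total_gb, watts_per_8gb=3):
    return total_gb / 8 * watts_per_8gb

print(ram_watts(1024))        # 384.0W for 1TB of RAM
print(ram_watts(1024) + 270)  # 654.0W with a pair of E5-2690 v4s (2x 135W TDP)
```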

Maxing the Disks

As mentioned in my previous post, disk configuration on the ML350 Gen9 is a complex topic. Check that post if you want all of the details for SFF and LFF chassis.

The SFF chassis supports the most disks. You can have six bays of eight 2.5″ drives (48 total). At time of writing, the largest SSD you can practically get is 8TB. This would give you up to 384TB of SSD capacity, depending on your RAID level.

To add more disks to the SFF chassis you will need additional drive cages and backplanes to bring your chassis up to 6. Picture below. The HPE part numbers seem to be 747592-001 and 780971-001.

You will also need 2x Mini-SAS 12Gb/s cables per backplane. If you’re running more than one drive cage, you will also need a 12Gb/s SAS expander (HPE part number 761879-001). Picture below. The SAS expander cannot be connected to H240ar or P440ar controller cards (daughterboard-type controllers) – your controller must be an HP PCI-E type.

If you want 3 drive bays, you will need a single SAS expander and either a P440 PCI-E RAID controller (HP part numbers 726823-001, 749797-001) or a H240 PCI-E HBA (HP part numbers 779134-001, 761873-B21, 726907-B21).

If you want more than 3 drive bays (in the SFF chassis), you will need two SAS expander cards and either two P440 or H240 controllers, or a single P840 RAID controller (HP part number 761880-001) if you need a single RAID array spanning all drives.

Each drive backplane needs a power connector. The chassis normally comes with a 2 PSU power backplane with enough connectors for 3 drive backplanes. To get power for 6 drive backplanes, you’ll need to upgrade the power backplane. See Maxing the Power section above for info.

An SSD usually consumes about 5.5-6W; check the datasheet for your particular drives. As such, if you want to run 48 of them, you’ll need around 288W of power. This probably means you’ll need to upgrade your PSUs. See the Maxing the Power section above for info.
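Putting the figures from this post together gives a useful back-of-envelope total for a maxed build (GPU, fan and board draw excluded; the per-component numbers are the ones quoted in earlier sections):

```python
# Rough draw for a maxed ML350 Gen9 build, using figures from this post:
# 2x E5-2690 v4 (135W TDP each), 1TB of RAM at ~3W/8GB, 48 SSDs at ~6W each.
cpu_w = 2 * 135          # 270W
ram_w = 1024 // 8 * 3    # 384W
ssd_w = 48 * 6           # 288W
print(cpu_w + ram_w + ssd_w)  # 942W before GPUs, fans and boards
```

Even with nothing else installed, that already exceeds what a pair of 500W PSUs can deliver redundantly.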

Maxing the PCI-Express Cards (including GPUs)

The ML350 Gen9 has nine PCI-E slots. The table below shows their specifications:

To use more than 4 slots (or to have more than one PCI-E 16x GPU) you’ll need 2 CPUs. See Maxing the CPUs section above for info.

There are four PCI-E 16x slots which can all support GPUs, as long as you have 2 CPUs. Each GPU typically needs a power connector. The chassis normally comes with a 2 PSU power backplane with enough connectors for 2 GPUs. To get power for more than 2 GPUs, you will need to upgrade the power backplane to one which supports 4 PSUs. See Maxing the Power section above for info.

You must also account for the power draw of your GPUs. The RTX 4090, for example, has an official power consumption of 450W. Running 4 of these draws nearly 2000W. If you had 4x 800W PSUs you could support this load, but you would lose redundancy. See the Maxing the Power section above for info.
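A quick check of the GPU scenario above, using the same numbers (450W per card, 4x 800W PSUs):

```python
# Can 4x 800W PSUs carry four 450W GPUs? Only if you give up redundancy.
gpu_w = 4 * 450                    # 1800W of GPU load
psu_total = 4 * 800                # 3200W with no redundancy
psu_redundant = psu_total // 2     # 1600W if you keep two redundant feeds
print(gpu_w <= psu_total)          # True: fits without redundancy
print(gpu_w <= psu_redundant)      # False: doesn't fit redundantly
```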

Maxing the Networking

At the time of writing, you can get dual-port PCI-E 16x 100Gbit/s network cards. You could run 4 of these in the ML350 Gen9, but you’ll need 2 CPUs. See the Maxing the PCI-Express Cards section above for info.

Note: a PCIe 3.0 x16 slot can supply a maximum of 128Gbit/s raw (roughly 126Gbit/s after encoding overhead), so you could really only use 1 of these ports, per card, at the full 100Gbit/s.
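The arithmetic behind that note, assuming PCIe 3.0's 8 GT/s per lane and 128b/130b encoding:

```python
# Why a dual-port 100Gb/s NIC can't run both ports flat out on PCIe 3.0 x16.
lanes = 16
gt_per_lane = 8                            # GT/s per lane, PCIe 3.0
usable = lanes * gt_per_lane * 128 / 130   # ~126 Gbit/s after encoding overhead
print(round(usable, 1))                    # 126.0
print(usable >= 2 * 100)                   # False: two 100Gb/s ports exceed the bus
```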

These cards seem to consume up to 25W each, which must be accounted for in your power budget. See the Maxing the Power section above if you need to upgrade your PSUs.
