Proxmox: Shrinking disk of an LVM backed container

First, shut down the container and ensure it’s not running.

List out the LVM logical volumes:

  lvdisplay | grep "LV Path\|LV Size"

Choose the disk you want to resize – you’ll see they’re named by container ID and disk number. For example, /dev/vg0/vm-205-disk-1.

Check and fix the filesystem, just in case:

  e2fsck -fy /dev/vg0/vm-205-disk-1

Resize the file system. It is advisable, at this point, to set this to 1GB smaller than you actually want it (e.g. 7G when you want 8G):

  resize2fs /dev/vg0/vm-205-disk-1 7G

Resize the LVM LV to the actual size you want it to be:

  lvreduce -L 8G /dev/vg0/vm-205-disk-1

Resize the file system to fill the LVM LV:

  resize2fs /dev/vg0/vm-205-disk-1

Finally, edit the container's configuration so that Proxmox reports the correct size for the disk. You will find this at /etc/pve/lxc/205.conf, where 205 is your container ID.
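The line to change is the one defining the disk. A sketch of what it might look like after the resize, assuming the LVM storage is named vg0 in Proxmox (adjust the storage name, disk name and size to match your setup):

```
# /etc/pve/lxc/205.conf (excerpt) -- update size= to the new LV size
rootfs: vg0:vm-205-disk-1,size=8G
```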

You can now start your container and check the disk sizes:

  df -h

Proxmox Containers: Manage network interfaces file yourself

On containers, Proxmox manages the network interfaces file (/etc/network/interfaces on Debian) for you. This is sometimes annoying – for example, when you want to add static routes with a post-up directive.

If you simply touch /etc/network/.pve-ignore.interfaces inside the container then it will let you manage it yourself and will thus obey any post-up lines that you add.
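For example, a self-managed /etc/network/interfaces inside the container might look like this (the addresses, interface name and route below are placeholders for your own):

```
auto eth0
iface eth0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    # honoured now that /etc/network/.pve-ignore.interfaces exists
    post-up ip route add 198.51.100.0/24 via 192.0.2.254
```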

SOLVED – “mount error(13): Permission denied” when doing cifs mount on LXC container (Proxmox)

When trying to do a command like this on a system running inside an LXC container on Proxmox:

  mount -t cifs '\\\downloads' -o username=myuser,password=mypass /mnt/downloads


Linux threw the error mount error(13): Permission denied. `tcpdump` showed that no traffic was leaving the container and `strace` didn’t throw up a lot of useful info.

dmesg said this:

  [171150.670602] audit: type=1400 audit(1471291773.083:167): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default" name="/run/shm/" pid=59433 comm="mount" flags="rw, nosuid, nodev, noexec, remount, relatime"

This Reddit post finally yielded the answer. You need to edit /etc/apparmor.d/lxc/lxc-default and, below the last deny mount line, add this:

  allow mount fstype=cifs,

The final config file will look something like this:

  # Do not load this file. Rather, load /etc/apparmor.d/lxc-containers, which
  # will source all profiles under /etc/apparmor.d/lxc

  profile lxc-container-default flags=(attach_disconnected,mediate_deleted) {
    #include <abstractions/lxc/container-base>

    # the container may never be allowed to mount devpts. If it does, it
    # will remount the host's devpts. We could allow it to do it with
    # the newinstance option (but, right now, we don't).
    deny mount fstype=devpts,
    allow mount fstype=cifs,
  }

Now restart apparmor:

  systemctl restart apparmor.service

Shut down your container and start it again.

Your mount command might well work now. If not, check logs again to be sure it’s not a secondary problem (e.g. incorrect hashing algorithm).

HP ProLiant MicroServer Gen8 as a home server

I picked up an HPE ProLiant Gen8 G1610T with the intention of turning it into a virtual machine host to run a number of things, chief among them a Plex media server. I then got a bit carried away and did a few upgrades to it. Details are below:

The Base Server

HP regularly have cashback deals on these servers. That makes them ludicrously cheap. I got mine on eBuyer for £114.98 after cash back. They come, stock, configured as follows:

  • CPU: Intel Celeron G1610T (Dual Core) @ 2.3 GHz
  • RAM: 4GB PC3-12800 Unbuffered ECC
  • Network: 2 x 1Gbit interfaces
  • Remote Management: iLO 4 Essentials presented as a separate (third) 1Gbit network interface
  • RAID controller: HP Dynamic Smart Array B120i with 4 removable (not hotswap) front bays. It’s fake RAID, but it supports 0, 1 and 10
  • Expansion card slots: 1 x PCI-E 16 slot
  • PSU: 1 x 200W. Bit of a shame there’s no option to have a redundant pair… but there’s just not enough space
HPE ProLiant Gen8 G1610T

Doing the Work

All of the work was ludicrously easy to do. The motherboard is on a tray that slides out the back of the server. You just need to unplug the 4 cables first.

There’s a tool on the front of the server which can be used to remove the heatsink from the CPU… though a standard flat-head screwdriver will also do the job.

HPE ProLiant Gen8 G1610T Without Case

HPE ProLiant Gen8 G1610T Motherboard Removal

CPU Upgrade

The CPU that ships with the server is absolutely fine for most workloads. It benches at 2349 on CPU Benchmark. It supports ECC RAM, which makes it great for a file server, and packs enough punch to run Plex with a single transcoding channel. It supports VT-x making it good for a low-usage virtualization server, however it doesn’t support VT-d so you cannot pass the disk controller straight through to a VM.
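On Linux you can check for these capabilities yourself. A quick sketch (note that VT-d must also be enabled in the BIOS to actually be usable):

```shell
# VT-x shows up as the "vmx" CPU flag ("svm" on AMD); a nonzero count
# means the CPU supports hardware virtualization
grep -c -E 'vmx|svm' /proc/cpuinfo || true

# VT-d shows up as DMAR/IOMMU messages in the kernel log, if enabled
dmesg 2>/dev/null | grep -c -i -e DMAR -e IOMMU || true
```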

The stock CPU is 45W and can, apparently, be swapped for any low-power Sandy Bridge i3, i5 or i7. I tried an Ivy Bridge i5 but this didn’t work particularly well.

I swapped mine for an Intel Xeon E3-1240 V2 (quad core with HT) @ 3.40GHz. This is actually a 69W part, but the board handles it fine. I turned off auto-management of the chassis cooling fan and instead opted to have it run at full power, to head off any heat-related issues. This CPU benchmarks at a whopping 9264 – almost 4x the stock chip. It also supports VT-d.

RAM Upgrade

4GB is a bit low, by today’s standards. I opted to upgrade to 16GB (2x8GB) of Kingston PC3-12800 Unbuffered ECC RAM.

If you swap out the CPU for a Sandy Bridge i3/i5/i7, I understand you still need to use ECC RAM despite the CPU not explicitly supporting it.

HPE ProLiant Gen8 G1610T RAM Upgrade

Real RAID Controller

I used the spare PCI-E slot to run a 3WARE 9650SE-8LPML RAID controller, with write cache and BBU. This can be passed directly to a VM, using the VT-d functionality of the CPU. The HBA cable inside the Microserver plugs directly into this, so installation was simple. This gives phenomenal disk write performance as well as the ability to hot swap hard drives, which the native “fake” RAID doesn’t support.

3WARE 9650SE-8LPML RAID controller with BBU in HPE ProLiant Gen8 G1610T


An SSD can be mounted in the optical drive bay that comes in the server. Many people I saw online were using gaffer tape to attach it. I used a cheap 9.5mm SATA hard disk caddy to mount a 1TB Samsung SSD. There is a spare SATA port on the board that this can plug into, and a 4-pin floppy disk power connector that it can be powered from.

These caddies convert SATA to slimline SATA, so to do this you’ll need a slimline SATA adaptor which takes its power from a 4-pin floppy disk connector. These are, in this day and age, like rocking horse shit. You may need to butcher one together using a few other readily available connectors and some heat shrink.

In order to boot from this SSD, you need to change the RAID mode of the on-board controller to AHCI SATA. This presents a bit of a problem if you’re using it with other drives, as it’ll only see this SSD. Apparently you can change it to Legacy SATA which will allow you to access the other drives, but not in RAID. If you don’t want to boot from the SSD, you don’t need to change anything.

SSD in 9.5mm caddy in HPE ProLiant Gen8 G1610T

Spinning Disks

I simply fitted 4x 3TB disks into the bays at the front. Because of the real RAID controller, these have become hot-swappable… unlike if they were on the stock controller. I opted for faster 7200 RPM disks, in the hope of getting a bit more speed out of them. These are configured in RAID5, giving just over 8TB total storage.

Front Disk Slots of HPE ProLiant Gen8 G1610T

iLO 4 Advanced

The iLO that ships with the server isn’t great. It doesn’t do remote management, remote media, etc. If you look hard enough (or not very hard at all), you might be able to find a way to upgrade it to iLO 4 advanced. This supports full remote administration.


HP iLO 4


The server comes with 2 x 10/100/1000 network interfaces, plus a separate 10/100/1000 interface for the iLO. I have used an LACP bond across the primary pair of NICs to provide additional throughput and redundancy.
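On a Proxmox host this can be configured in /etc/network/interfaces. A sketch, assuming the NICs are eno1/eno2 and the usual bond0/vmbr0 naming (addresses are placeholders; the switch ports must also be configured for LACP/802.3ad):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.2/24
    gateway 192.0.2.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```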


I have used Proxmox as the virtualization platform. It’s free and full-featured. Having used it for a few years in production environments at work, I’ve found it reliable and useful.

What Now?

I use the box to run various VMs including a NAS running Samba, Plex, Asterisk, a Radius server and some home automation stuff. It’s been reliable and the server is hardly sweating yet.

Resolving poor network throughput performance on pfSense running on Proxmox

There is a bug in the FreeBSD VirtIO network drivers that massively degrades network throughput on a pfSense server. VirtIO is the interface of choice for Proxmox users, so this problem can become troublesome.

The solution is to disable hardware checksum offloading in pfSense, under System -> Advanced, on the Networking tab. Tick the Disable hardware checksum offload box, then reboot pfSense for the change to take effect.
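If you manage pfSense configs as backups, the GUI checkbox persists as an element in config.xml. A sketch of what to look for – the element name here is from memory, so verify it against your own config export before relying on it:

```xml
<!-- pfSense config.xml excerpt (assumed element name): the checkbox
     is stored as an empty element under <system> -->
<system>
  <disablechecksumoffloading></disablechecksumoffloading>
</system>
```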