Proxmox VE is based on the famous Debian Linux distribution. That means that you have access to the whole world of Debian packages, and the base system is well documented. The Debian Administrator's Handbook is available online, and provides a comprehensive introduction to the Debian operating system (see [Hertzog13]).
A standard Proxmox VE installation uses the default repositories from Debian, so you get bug fixes and security updates through that channel. In addition, we provide our own package repository to roll out all Proxmox VE related packages. This includes updates to some Debian packages when necessary.
We also deliver a specially optimized Linux kernel, where we enable all required virtualization and container features. That kernel includes drivers for ZFS, and several hardware drivers. For example, we ship Intel network card drivers to support their newest hardware.
The following sections concentrate on virtualization related topics. They either explain things that are different on Proxmox VE, or describe tasks that are commonly performed on Proxmox VE. For other topics, please refer to the standard Debian documentation.
System requirements
For production servers, high quality server equipment is needed. Keep in mind that if you run 10 virtual servers on one machine and that machine then suffers a hardware failure, 10 services are lost. Proxmox VE supports clustering: multiple Proxmox VE installations can be centrally managed thanks to the included cluster functionality.
Proxmox VE can use local storage (DAS), SAN, NAS and also distributed storage (Ceph RBD). For details see chapter storage.
Minimum requirements, for evaluation
- CPU: 64bit (Intel EMT64 or AMD64)
- RAM: 1 GB RAM
- Hard drive
- One NIC
Recommended system requirements
- CPU: 64bit (Intel EMT64 or AMD64), multi-core CPU recommended
- RAM: 8 GB is good, more is better
- Hardware RAID with battery-protected write cache (BBU) or flash-based protection
- Fast hard drives; best results with 15k rpm SAS drives in RAID10
- At least two NICs; depending on the storage technology used, you may need more
Getting Help
Proxmox VE Wiki
The primary source of information is the Proxmox VE wiki. It combines the reference documentation with user-contributed content.
Community Support Forum
Proxmox VE itself is fully open source, so we always encourage our users to discuss and share their knowledge using the Community Support Forum. The forum is fully moderated by the Proxmox support team, and has a large user base from around the world. Needless to say, such a large forum is a great place to get information.
Mailing Lists
This is a fast way to communicate with the Proxmox VE community via email.
- Mailing list for users: PVE User List
The primary communication channel for developers is:
- Mailing list for developers: PVE development discussion
Commercial Support
Proxmox Server Solutions GmbH also offers a commercial support channel. Proxmox VE server subscriptions can be ordered online, see Proxmox VE Shop. For all details see Proxmox VE Subscription Service Plans. Please contact the Proxmox sales team for commercial support requests or volume discounts.
Bug Tracker
We also run a public bug tracker at https://bugzilla.proxmox.com. If you ever detect a bug, you can file a bug report there. This makes it easy to track the bug status, and you will get notified as soon as the bug is fixed.
Package Repositories
All Debian based systems use APT as their package management tool. The list of repositories is defined in /etc/apt/sources.list and in .list files found in /etc/apt/sources.list.d/. Updates can be installed directly using apt-get, or via the GUI.
APT sources.list files list one package repository per line, with the most preferred source listed first. Empty lines are ignored, and a # character anywhere on a line marks the remainder of that line as a comment. The information available from the configured sources is acquired by apt-get update. For example, the default Debian entries look like this:
deb http://ftp.debian.org/debian jessie main contrib
# security updates
deb http://security.debian.org jessie/updates main contrib
In addition, Proxmox VE provides three different package repositories.
Proxmox VE Enterprise Repository
This is the default, stable and recommended repository, available for
all Proxmox VE subscription users. It contains the most stable packages,
and is suitable for production use. You need a valid subscription key
to access this repository. The pve-enterprise
repository is enabled
by default:
deb https://enterprise.proxmox.com/debian jessie pve-enterprise
| You can disable this repository by commenting out the above line using a # (at the start of the line). This prevents error messages if you do not have a subscription key. Please configure the pve-no-subscription repository in that case. |
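A disabled entry then simply looks like this:
# deb https://enterprise.proxmox.com/debian jessie pve-enterprise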
As soon as updates are available, the root@pam user is notified via email about the available new packages. Through the GUI, the change-log of each package can be viewed (if available), showing all details of the update. So you will never miss important security fixes.
Proxmox VE No-Subscription Repository
As the name suggests, you do not need a subscription key to access this repository. It can be used for testing and non-production use. It is not recommended for production servers, as these packages are not always heavily tested and validated.
We recommend configuring this repository in /etc/apt/sources.list.
deb http://ftp.debian.org/debian jessie main contrib
# PVE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian jessie pve-no-subscription
# security updates
deb http://security.debian.org jessie/updates main contrib
Proxmox VE Test Repository
Finally, there is a repository called pvetest. It contains the latest packages and is heavily used by developers to test new features. As usual, you can configure this in /etc/apt/sources.list by adding the following line:
deb http://download.proxmox.com/debian jessie pvetest
| The pvetest repository should (as the name implies) only be used for testing new features or bug fixes. |
Installing Proxmox VE
Proxmox VE ships as a set of Debian packages, so you can simply install it on top of a normal Debian installation. After configuring the repositories, you need to run:
apt-get update
apt-get install proxmox-ve
While this looks easy, it presumes that you have correctly installed the base system, and you know how you want to configure and use the local storage. Network configuration is also completely up to you.
In general, this is not trivial, especially when you use LVM or ZFS. This is why we provide an installation CD-ROM for Proxmox VE. The installer just asks you a few questions, then partitions the local disk(s), installs all required packages, and configures the system including a basic network setup. You can get a fully functional system within a few minutes, including the following:
- Complete operating system (Debian Linux, 64-bit)
- Partitioning of the hard drive with ext4 (alternatively ext3 or xfs) or ZFS
- Proxmox VE kernel with LXC and KVM support
- Complete toolset
- Web-based management interface
|
By default, the complete server is used and all existing data is removed. |
Using the Proxmox VE Installation CD-ROM
Please insert the installation CD-ROM, then boot from that drive. Immediately afterwards you can choose the following menu options:
- Install Proxmox VE: Start the normal installation.
- Install Proxmox VE (Debug mode): Start the installation in debug mode. A shell console is opened at several installation steps, so that you can debug things if something goes wrong. Press CTRL-D to exit those debug consoles and continue the installation. This option is mostly for developers and not meant for general use.
- Rescue Boot: This option allows you to boot an existing installation. It searches all attached hard disks, and if it finds an existing installation, boots directly from that disk using the existing Linux kernel. This can be useful if there are problems with the boot block (GRUB), or if the BIOS is unable to read the boot block from the disk.
- Test Memory: Runs memtest86+. This is useful to check whether your memory is functional and error free.
You normally select Install Proxmox VE to start the installation.
After that you get prompted to select the target hard disk(s). The Options button lets you select the target file system, which defaults to ext4. The installer uses LVM if you select ext3, ext4 or xfs as the file system, and offers additional options to restrict LVM space (see below).
If you have more than one disk, you can also use ZFS as the file system. ZFS supports several software RAID levels, so this is especially useful if you do not have a hardware RAID controller. The Options button lets you select the ZFS RAID level, and you can choose the disks there.
The next pages just ask for basic configuration options like time zone and keyboard layout. You also need to specify your email address and select a superuser password.
The last step is the network configuration. Please note that you can use either IPv4 or IPv6 here, but not both. If you want to configure a dual stack node, you can easily do that after installation, as sketched below.
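For reference, adding an IPv6 address later only requires an extra inet6 stanza for the bridge in /etc/network/interfaces. The following is a minimal sketch using placeholder addresses from the documentation prefix, not values produced by the installer:
iface vmbr0 inet6 static
address 2001:db8::2
netmask 64
gateway 2001:db8::1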
If you press Next now, the installation starts to format the disks and copies packages to the target. Please wait until this is finished, then reboot the server.
Further configuration is done via the Proxmox web interface. Just point your browser to the IP address given during installation (https://youripaddress:8006). Proxmox VE is tested for IE9, Firefox 10 and higher, and Google Chrome.
Advanced LVM configuration options
The installer creates a Volume Group (VG) called pve, and additional Logical Volumes (LVs) called root, data and swap. The size of these volumes can be controlled with:
- hdsize: Defines the total hard disk size to be used. This way you can reserve free space on the hard disk for further partitioning, for example for an additional PV and VG on the same hard disk that can be used for LVM storage (see the sketch after this list).
- swapsize: Defines the size of the swap volume. The default is the same size as the installed RAM, with a minimum of 4 GB and a maximum of hdsize/8.
- maxroot: Defines the maximum size of the root volume. The root volume stores the whole operating system.
- maxvz: Defines the size of the data volume, which is mounted at /var/lib/vz.
- minfree: Defines the amount of free space left in the LVM volume group pve. The default is 16 GB if the available storage is larger than 128 GB, hdsize/8 otherwise. LVM requires free space in the VG for snapshot creation (not required for lvmthin snapshots).
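As an illustration of the hdsize option: space left unallocated by the installer can later be turned into an additional physical volume and volume group for LVM storage. A minimal sketch, assuming a hypothetical partition /dev/sda4 was created in that free space:
# pvcreate /dev/sda4
# vgcreate vmdata /dev/sda4
The new volume group (here called vmdata, an arbitrary name) can then be added as LVM storage through the GUI or the storage configuration.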
ZFS Performance Tips
ZFS uses a lot of memory, so it is best to add an additional 8-16 GB of RAM if you want to use ZFS.
ZFS also provides the ability to use a fast SSD drive as a write cache. The write cache is called the ZFS Intent Log (ZIL). You can add it after installation using the following command:
zpool add <pool-name> log </dev/path_to_fast_ssd>
System Software Updates
We provide regular package updates on all repositories. You can install those updates using the GUI, or directly on the command line with apt-get:
apt-get update
apt-get dist-upgrade
| The apt package management system is extremely flexible and provides many features - see man apt-get or [Hertzog13] for additional information. |
You should do such updates at regular intervals, or when we release versions with security related fixes. Major system upgrades are announced in the Forum. Those announcements also contain detailed upgrade instructions.
| We recommend running regular upgrades, because it is important to get the latest security updates. |
Network Configuration
Proxmox VE uses a bridged networking model. Each host can have up to 4094 bridges. Bridges are like physical network switches implemented in software. All VMs can share a single bridge, as if virtual network cables from each guest were all plugged into the same switch. But you can also create multiple bridges to separate network domains.
For connecting VMs to the outside world, bridges are attached to physical network cards. For further flexibility, you can configure VLANs (IEEE 802.1q) and network bonding, also known as "link aggregation". That way it is possible to build complex and flexible virtual networks.
Debian traditionally uses the ifup and ifdown commands to configure the network. The file /etc/network/interfaces contains the whole network setup. Please refer to the manual page (man interfaces) for a complete format description.
|
Proxmox VE does not write changes directly to /etc/network/interfaces. Instead, we write into a temporary file called /etc/network/interfaces.new, and commit those changes when you reboot the node. |
It is worth mentioning that you can directly edit the configuration file. All Proxmox VE tools try hard to keep such direct user modifications. Using the GUI is still preferable, because it protects you from errors.
Naming Conventions
We currently use the following naming conventions for device names:
- Ethernet devices: eth[N], where 0 ≤ N (eth0, eth1, …)
- Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (vmbr0 - vmbr4094)
- Bonds: bond[N], where 0 ≤ N (bond0, bond1, …)
- VLANs: Simply add the VLAN number to the device name, separated by a period (eth0.50, bond1.30)
This makes it easier to debug network problems, because the device name implies the device type.
Default Configuration using a Bridge
The installation program creates a single bridge named vmbr0, which is connected to the first ethernet card, eth0. The corresponding configuration in /etc/network/interfaces looks like this:
auto lo
iface lo inet loopback
iface eth0 inet manual
auto vmbr0
iface vmbr0 inet static
address 192.168.10.2
netmask 255.255.255.0
gateway 192.168.10.1
bridge_ports eth0
bridge_stp off
bridge_fd 0
Virtual machines behave as if they were directly connected to the physical network. The network, in turn, sees each virtual machine as having its own MAC, even though there is only one network cable connecting all of these VMs to the network.
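If you need link aggregation (mentioned earlier), the bridge can be attached to a bond instead of a single NIC. The following is a minimal sketch, assuming eth0 and eth1 as the physical ports and active-backup mode; the addresses are the same placeholders as above, and this configuration is not created by the installer:
auto bond0
iface bond0 inet manual
slaves eth0 eth1
bond_miimon 100
bond_mode active-backup
auto vmbr0
iface vmbr0 inet static
address 192.168.10.2
netmask 255.255.255.0
gateway 192.168.10.1
bridge_ports bond0
bridge_stp off
bridge_fd 0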
Routed Configuration
Most hosting providers do not support the above setup. For security reasons, they disable networking as soon as they detect multiple MAC addresses on a single interface.
|
Some providers allow you to register additional MACs through their management interface. This avoids the problem, but is clumsy to configure, because you need to register a MAC for each of your VMs. |
You can avoid the problem by "routing" all traffic via a single interface. This makes sure that all network packets use the same MAC address.
A common scenario is that you have a public IP (assume 192.168.10.2 for this example), and an additional IP block for your VMs (10.10.10.1/255.255.255.0). We recommend the following setup for such situations:
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 192.168.10.2
netmask 255.255.255.0
gateway 192.168.10.1
post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
auto vmbr0
iface vmbr0 inet static
address 10.10.10.1
netmask 255.255.255.0
bridge_ports none
bridge_stp off
bridge_fd 0
Masquerading (NAT) with iptables
In some cases you may want to use private IPs behind your Proxmox host’s true IP, and masquerade the traffic using NAT:
auto lo
iface lo inet loopback
auto eth0
#real IP address
iface eth0 inet static
address 192.168.10.2
netmask 255.255.255.0
gateway 192.168.10.1
auto vmbr0
#private sub network
iface vmbr0 inet static
address 10.10.10.1
netmask 255.255.255.0
bridge_ports none
bridge_stp off
bridge_fd 0
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
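After bringing up the interfaces, you can verify that the masquerading rule is in place by listing the NAT table (a quick sanity check; the output depends on your setup):
# iptables -t nat -L POSTROUTING -n -v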
Logical Volume Manager (LVM)
Most people install Proxmox VE directly on a local disk. The Proxmox VE installation CD offers several options for local disk management, and the current default setup uses LVM. The installer lets you select a single disk for such a setup, and uses that disk as the physical volume for the Volume Group (VG) pve. The following output is from a test installation using a small 8GB disk:
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda3 pve lvm2 a-- 7.87g 876.00m
# vgs
VG #PV #LV #SN Attr VSize VFree
pve 1 3 0 wz--n- 7.87g 876.00m
The installer allocates three Logical Volumes (LV) inside this VG:
# lvs
LV VG Attr LSize Pool Origin Data% Meta%
data pve twi-a-tz-- 4.38g 0.00 0.63
root pve -wi-ao---- 1.75g
swap pve -wi-ao---- 896.00m
- root: Formatted as ext4, contains the operating system.
- swap: Swap partition.
- data: This volume uses LVM-thin, and is used to store VM images. LVM-thin is preferable for this task, because it offers efficient support for snapshots and clones.
Hardware
We highly recommend using a hardware RAID controller (with BBU) for such setups. This increases performance, provides redundancy, and makes disk replacement easier (hot-pluggable).
LVM itself does not need any special hardware, and memory requirements are very low.
Bootloader
We install two boot loaders by default. The first partition contains the standard GRUB boot loader. The second partition is an EFI System Partition (ESP), which makes it possible to boot on EFI systems.
ZFS on Linux
ZFS is a combined file system and logical volume manager designed by Sun Microsystems. Starting with Proxmox VE 3.4, the native Linux kernel port of the ZFS file system is introduced as an optional file system, and also as an additional selection for the root file system. There is no need to manually compile ZFS modules - all packages are included.
By using ZFS, it is possible to achieve enterprise-grade features with low-budget hardware, but also high-performance systems by leveraging SSD caching or even SSD-only setups. ZFS can replace costly hardware RAID cards at the price of moderate CPU and memory load, combined with easy management. General advantages of ZFS:
- Easy configuration and management with Proxmox VE GUI and CLI
- Reliable
- Protection against data corruption
- Data compression on file system level
- Snapshots
- Copy-on-write clone
- Various raid levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2 and RAIDZ-3
- Can use SSD for cache
- Self healing
- Continuous integrity checking
- Designed for high storage capacities
- Asynchronous replication over network
- Open Source
- Encryption
- …
Hardware
ZFS depends heavily on memory, so you need at least 8GB to start. In practice, use as much as you can get for your hardware/budget. To prevent data corruption, we recommend the use of high quality ECC RAM.
If you use a dedicated cache and/or log disk, you should use an enterprise-class SSD (e.g. Intel SSD DC S3700 Series). This can increase the overall performance significantly.
| Do not use ZFS on top of a hardware controller which has its own cache management. ZFS needs to communicate directly with the disks. An HBA adapter is the way to go, or something like an LSI controller flashed in IT mode. |
If you are experimenting with an installation of Proxmox VE inside a VM (nested virtualization), do not use virtio for the disks of that VM, since virtio disks are not supported by ZFS. Use IDE or SCSI instead (this also works with the virtio SCSI controller type).
Installation as root file system
When you install using the Proxmox VE installer, you can choose ZFS for the root file system. You need to select the RAID type at installation time:
RAID0: Also called "striping". The capacity of such a volume is the sum of the capacities of all disks. But RAID0 does not add any redundancy, so the failure of a single drive makes the volume unusable.
RAID1: Also called "mirroring". Data is written identically to all disks. This mode requires at least 2 disks of the same size. The resulting capacity is that of a single disk.
RAID10: A combination of RAID0 and RAID1. Requires at least 4 disks.
RAIDZ-1: A variation on RAID-5, single parity. Requires at least 3 disks.
RAIDZ-2: A variation on RAID-5, double parity. Requires at least 4 disks.
RAIDZ-3: A variation on RAID-5, triple parity. Requires at least 5 disks.
The installer automatically partitions the disks, creates a ZFS pool called rpool, and installs the root file system on the ZFS subvolume rpool/ROOT/pve-1.
Another subvolume called rpool/data is created to store VM images. In order to use that with the Proxmox VE tools, the installer creates the following configuration entry in /etc/pve/storage.cfg:
zfspool: local-zfs
pool rpool/data
sparse
content images,rootdir
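To check that Proxmox VE has picked up this storage definition, you can list the configured storages and their status with the pvesm tool (output omitted here):
# pvesm status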
After installation, you can view your ZFS pool status using the zpool command:
# zpool status
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
sda2 ONLINE 0 0 0
sdb2 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
sdc ONLINE 0 0 0
sdd ONLINE 0 0 0
errors: No known data errors
The zfs command is used to configure and manage your ZFS file systems. The following command lists all file systems after installation:
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 4.94G 7.68T 96K /rpool
rpool/ROOT 702M 7.68T 96K /rpool/ROOT
rpool/ROOT/pve-1 702M 7.68T 702M /
rpool/data 96K 7.68T 96K /rpool/data
rpool/swap 4.25G 7.69T 64K -
Bootloader
The default ZFS disk partitioning scheme does not use the first 2048 sectors. This gives enough room to install a GRUB boot partition. The Proxmox VE installer automatically allocates that space, and installs the GRUB boot loader there. If you use a redundant RAID setup, it installs the boot loader on all disks required for booting. So you can boot even if some disks fail.
|
It is not possible to use ZFS as root partition with UEFI boot. |
ZFS Administration
This section gives you some usage examples for common tasks. ZFS itself is really powerful and provides many options. The main commands to manage ZFS are zfs and zpool. Both commands come with great manual pages, which are worth reading:
# man zpool
# man zfs
To create a new pool, at least one disk is needed. The ashift value should match the sector size of the underlying disk or be larger; the pool uses sectors of 2^ashift bytes, so ashift=12 corresponds to 4096-byte sectors.
zpool create -f -o ashift=12 <pool> <device>
To activate compression:
zfs set compression=lz4 <pool>
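You can verify the setting afterwards with zfs get (a quick check, not strictly required):
# zfs get compression <pool>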
Create a new pool with RAID-0 (minimum 1 disk):
zpool create -f -o ashift=12 <pool> <device1> <device2>
Create a new pool with RAID-1 (minimum 2 disks):
zpool create -f -o ashift=12 <pool> mirror <device1> <device2>
Create a new pool with RAID-10 (minimum 4 disks):
zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>
Create a new pool with RAIDZ-1 (minimum 3 disks):
zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>
Create a new pool with RAIDZ-2 (minimum 4 disks):
zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
Create a new pool with cache (L2ARC): It is possible to use a dedicated cache drive partition to increase performance (use an SSD). As <device>, it is possible to use multiple devices, as shown in the RAID examples above.
zpool create -f -o ashift=12 <pool> <device> cache <cache_device>
Create a new pool with log (ZIL): It is possible to use a dedicated log drive partition to increase performance (use an SSD). As <device>, it is possible to use multiple devices, as shown in the RAID examples above.
zpool create -f -o ashift=12 <pool> <device> log <log_device>
Add cache and log to an existing pool: If you have a pool without cache and log, first partition the SSD into two partitions with parted or gdisk.
|
Always use GPT partition tables (gdisk or parted). |
The maximum size of a log device should be about half the size of physical memory, so this is usually quite small. The rest of the SSD can be used as cache.
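For example, a sketch of such a partitioning with parted, assuming a hypothetical SSD /dev/sdc (adjust the device and the log partition size to your system):
# parted -s /dev/sdc mklabel gpt
# parted -s /dev/sdc mkpart primary 1MiB 4GiB
# parted -s /dev/sdc mkpart primary 4GiB 100%
Then add both partitions to the existing pool: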
zpool add -f <pool> log <device-part1> cache <device-part2>
Changing a failed device:
zpool replace -f <pool> <old-device> <new-device>
Activate E-Mail Notification
ZFS comes with an event daemon, which monitors events generated by the ZFS kernel module. The daemon can also send emails on ZFS events like pool errors.
To activate the daemon, edit /etc/zfs/zed.d/zed.rc with your favorite editor and uncomment the ZED_EMAIL_ADDR setting:
ZED_EMAIL_ADDR="root"
Please note that Proxmox VE forwards mail addressed to root to the email address configured for the root user.
| The only setting that is required is ZED_EMAIL_ADDR. All other settings are optional. |
Limit ZFS memory usage
It is good to use at most 50 percent (which is the default) of the system memory for the ZFS ARC, to prevent performance degradation of the host. Use your preferred editor to change the configuration in /etc/modprobe.d/zfs.conf and insert:
options zfs zfs_arc_max=8589934592
This example setting limits the usage to 8 GB; the value is given in bytes (8 * 1024^3 = 8589934592).
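If you want to try the new limit without rebooting, the zfs_arc_max module parameter can usually also be changed at runtime (a hedged sketch, run as root):
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max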
|
If your root file system is ZFS, you must update your initramfs every time this value changes.
|
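On a standard Debian installation (using initramfs-tools) this can be done with:
# update-initramfs -u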
Swap on ZFS on Linux may cause problems, such as blocking the server or generating a high IO load. This is often seen when starting a backup to an external storage.
We strongly recommend using enough memory, so that you normally do not run into low memory situations. Additionally, you can lower the swappiness value. A good value for servers is 10:
sysctl -w vm.swappiness=10
To make the swappiness value persistent, open /etc/sysctl.conf with an editor of your choice and add the following line:
vm.swappiness = 10
Value | Strategy
---|---
vm.swappiness = 0 | The kernel will swap only to avoid an out of memory condition
vm.swappiness = 1 | Minimum amount of swapping without disabling it entirely
vm.swappiness = 10 | This value is sometimes recommended to improve performance when sufficient memory exists in a system
vm.swappiness = 60 | The default value
vm.swappiness = 100 | The kernel will swap aggressively