NAME

pvesm - Proxmox VE Storage Manager

SYNOPSIS

pvesm <COMMAND> [ARGS] [OPTIONS]

pvesm add <type> <storage> [OPTIONS]

Create a new storage.

<type> (dir | drbd | glusterfs | iscsi | iscsidirect | lvm | lvmthin | nfs | rbd | sheepdog | zfs | zfspool)

Storage type.

<storage> string

The storage identifier.

-authsupported string

Authsupported.

-base string

Base volume. This volume is automatically activated.

-blocksize string

block size

-comstar_hg string

host group for comstar views

-comstar_tg string

target group for comstar views

-content string

Allowed content types.

Note The value rootdir is used for Containers, and the value images for VMs.
-disable boolean

Flag to disable the storage.

-export string

NFS export path.

-format string

Default image format.

-iscsiprovider string

iSCSI provider

-krbd boolean

Access rbd through krbd kernel module.

-maxfiles integer (0 - N)

Maximum number of backup files per VM. Use 0 for unlimited.

-monhost string

Monitor daemon IPs.

-nodes string

List of cluster node names.

-nowritecache boolean

disable write caching on the target

-options string

NFS mount options (see man nfs)

-path string

File system path.

-pool string

Pool.

-portal string

iSCSI portal (IP or DNS name with optional port).

-redundancy integer (1 - 16) (default=2)

The redundancy count specifies the number of nodes to which the resource should be deployed. It must be at least 1 and at most the number of nodes in the cluster.

-saferemove boolean

Zero-out data when removing LVs.

-saferemove_throughput string

Wipe throughput (cstream -t parameter value).

-server string

Server IP or DNS name.

-server2 string

Backup volfile server IP or DNS name.

Note Requires option(s): server
-shared boolean

Mark storage as shared.

-sparse boolean

use sparse volumes

-target string

iSCSI target.

-thinpool string

LVM thin pool LV name.

-transport (rdma | tcp | unix)

GlusterFS transport: tcp, unix or rdma

-username string

RBD Id.

-vgname string

Volume group name.

-volume string

Glusterfs Volume.

pvesm alloc <storage> <vmid> <filename> <size> [OPTIONS]

Allocate disk images.

<storage> string

The storage identifier.

<vmid> integer (1 - N)

Specify owner VM

<filename> string

The name of the file to create.

<size> \d+[MG]?

Size in kilobytes (1024 bytes). Optional suffixes M (megabyte, 1024K) and G (gigabyte, 1024M).

-format (qcow2 | raw | subvol)

no description available

Note Requires option(s): size

pvesm free <volume> [OPTIONS]

Delete volume

<volume> string

Volume identifier

-storage string

The storage identifier.

pvesm glusterfsscan <server>

Scan remote GlusterFS server.

<server> string

no description available

pvesm help [<cmd>] [OPTIONS]

Get help about specified command.

<cmd> string

Command name

-verbose boolean

Verbose output format.

pvesm iscsiscan -portal <string> [OPTIONS]

Scan remote iSCSI server.

-portal string

no description available

pvesm list <storage> [OPTIONS]

List storage content.

<storage> string

The storage identifier.

-content string

Only list content of this type.

-vmid integer (1 - N)

Only list images for this VM

pvesm lvmscan

List local LVM volume groups.

pvesm lvmthinscan <vg>

List local LVM Thin Pools.

<vg> [a-zA-Z0-9\.\+\_][a-zA-Z0-9\.\+\_\-]+

no description available

pvesm nfsscan <server>

Scan remote NFS server.

<server> string

no description available

pvesm path <volume>

Get filesystem path for specified volume

<volume> string

Volume identifier

pvesm remove <storage>

Delete storage configuration.

<storage> string

The storage identifier.

pvesm set <storage> [OPTIONS]

Update storage configuration.

<storage> string

The storage identifier.

-blocksize string

block size

-comstar_hg string

host group for comstar views

-comstar_tg string

target group for comstar views

-content string

Allowed content types.

Note The value rootdir is used for Containers, and the value images for VMs.
-delete string

A list of settings you want to delete.

-digest string

Prevent changes if current configuration file has different SHA1 digest. This can be used to prevent concurrent modifications.

-disable boolean

Flag to disable the storage.

-format string

Default image format.

-krbd boolean

Access rbd through krbd kernel module.

-maxfiles integer (0 - N)

Maximum number of backup files per VM. Use 0 for unlimited.

-nodes string

List of cluster node names.

-nowritecache boolean

disable write caching on the target

-options string

NFS mount options (see man nfs)

-pool string

Pool.

-redundancy integer (1 - 16) (default=2)

The redundancy count specifies the number of nodes to which the resource should be deployed. It must be at least 1 and at most the number of nodes in the cluster.

-saferemove boolean

Zero-out data when removing LVs.

-saferemove_throughput string

Wipe throughput (cstream -t parameter value).

-server string

Server IP or DNS name.

-server2 string

Backup volfile server IP or DNS name.

Note Requires option(s): server
-shared boolean

Mark storage as shared.

-sparse boolean

use sparse volumes

-transport (rdma | tcp | unix)

GlusterFS transport: tcp, unix or rdma

-username string

RBD Id.

pvesm status [OPTIONS]

Get status for all datastores.

-content string

Only list stores which support this content type.

-enabled boolean (default=0)

Only list stores which are enabled (not disabled in config).

-storage string

Only list status for specified storage

-target string

If target is different from node, we only list shared storages whose content is accessible on both this node and the specified target node.

pvesm zfsscan

Scan zfs pool list on local node.

DESCRIPTION

The Proxmox VE storage model is very flexible. Virtual machine images can either be stored on one or several local storages, or on shared storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may configure as many storage pools as you like. You can use all storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to live-migrate running machines without any downtime, as all nodes in the cluster have direct access to VM disk images. There is no need to copy VM image data, so live migration is very fast in that case.

The storage library (package libpve-storage-perl) uses a flexible plugin system to provide a common interface to all storage types. This can easily be adapted to include further storage types in the future.

Storage Types

There are basically two different classes of storage types:

Block level storage

Allows you to store large raw images. It is usually not possible to store other files (ISO, backups, ...) on such storage types. Most modern block level storage implementations support snapshots and clones. RADOS, Sheepdog and DRBD are distributed systems, replicating storage data to different nodes.

File level storage

These allow access to a full-featured (POSIX) file system. They are more flexible and allow you to store content of any type. ZFS is probably the most advanced system, and it has full support for snapshots and clones.

Table 1. Available storage types
Description      PVE type      Level   Shared   Snapshots   Stable
ZFS (local)      zfspool       file    no       yes         yes
Directory        dir           file    no       no          yes
NFS              nfs           file    yes      no          yes
GlusterFS        glusterfs     file    yes      no          yes
LVM              lvm           block   no       no          yes
LVM-thin         lvmthin       block   no       yes         yes
iSCSI/kernel     iscsi         block   yes      no          yes
iSCSI/libiscsi   iscsidirect   block   yes      no          yes
Ceph/RBD         rbd           block   yes      yes         yes
Sheepdog         sheepdog      block   yes      yes         beta
DRBD9            drbd          block   yes      yes         beta
ZFS over iSCSI   zfs           block   yes      yes         yes

Tip It is possible to use LVM on top of an iSCSI storage. That way you get a shared LVM storage.

Thin provisioning

A number of storages, and the Qemu image format qcow2, support thin provisioning. With thin provisioning activated, only the blocks that the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32GB hard disk, and after installing the guest OS, the root filesystem of the VM contains 3GB of data. In that case only 3GB are written to the storage, even though the guest VM sees a 32GB hard drive. In this way thin provisioning allows you to create disk images which are larger than the currently available storage blocks. You can create large disk images for your VMs, and when the need arises, add more disks to your storage without resizing the VMs' filesystems.
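
As a rough illustration (the VM ID, image name and sizes below are hypothetical), you can compare the virtual size of a qcow2 image with the space it actually occupies on the storage local using qemu-img:

# pvesm alloc local 100 vm-100-disk-2.qcow2 32G --format qcow2
# qemu-img info $(pvesm path local:100/vm-100-disk-2.qcow2)

The qemu-img info output shows both the virtual size (32G) and the much smaller disk size that is actually allocated.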

All storage types which have the Snapshots feature also support thin provisioning.

Caution If a storage runs full, all guests using volumes on that storage receive I/O errors. This can cause file system inconsistencies and may corrupt your data. So it is advisable to avoid over-provisioning of your storage resources, or to carefully observe free space to avoid such conditions.

Storage Configuration

All Proxmox VE related storage configuration is stored within a single text file at /etc/pve/storage.cfg. As this file is within /etc/pve/, it gets automatically distributed to all cluster nodes. So all nodes share the same storage configuration.

Sharing the storage configuration makes perfect sense for shared storage, because the same shared storage is accessible from all nodes. But it is also useful for local storage types. In this case, such local storage is available on all nodes, but it is physically different and can have totally different content.

Storage Pools

Each storage pool has a <type>, and is uniquely identified by its <STORAGE_ID>. A pool configuration looks like this:

<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...

The <type>: <STORAGE_ID> line starts the pool definition, which is then followed by a list of properties. Most properties require a value, but some of them come with reasonable defaults. In that case you can omit the value.

To be more specific, take a look at the default storage configuration after installation. It contains one special local storage pool named local, which refers to the directory /var/lib/vz and is always available. The Proxmox VE installer creates additional storage entries depending on the storage type chosen at installation time.

Default storage configuration (/etc/pve/storage.cfg)
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir

Common Storage Properties

A few storage properties are common among different storage types.

nodes

List of cluster node names where this storage is usable/accessible. One can use this property to restrict storage access to a limited set of nodes.

content

A storage can support several content types, for example virtual disk images, cdrom iso images, container templates or container root directories. Not all storage types support all content types. One can set this property to select what this storage is used for.

images

KVM-Qemu VM images.

rootdir

Allows you to store container data.

vztmpl

Container templates.

backup

Backup files (vzdump).

iso

ISO images

shared

Mark storage as shared.

disable

You can use this flag to disable the storage completely.

maxfiles

Maximum number of backup files per VM. Use 0 for unlimited.

format

Default image format (raw|qcow2|vmdk)
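
For illustration, several of the common properties above can be combined in a single pool definition. The storage ID, path and node names below are made-up examples:

dir: slow-backup
        path /mnt/backup2
        content backup
        maxfiles 3
        nodes node1,node2

This pool only stores backups, keeps at most 3 backup files per VM, and is only usable on the nodes node1 and node2.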

Warning It is not advisable to use the same storage pool on different Proxmox VE clusters. Some storage operations need exclusive access to the storage, so proper locking is required. While this is implemented within a cluster, it does not work between different clusters.

Volumes

We use a special notation to address storage data. When you allocate data from a storage pool, it returns such a volume identifier. A volume is identified by the <STORAGE_ID>, followed by a storage type dependent volume name, separated by a colon. A valid <VOLUME_ID> looks like:

local:230/example-image.raw
local:iso/debian-501-amd64-netinst.iso
local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz
iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the filesystem path for a <VOLUME_ID> use:

pvesm path <VOLUME_ID>

Volume Ownership

There exists an ownership relation for image type volumes. Each such volume is owned by a VM or Container. For example volume local:230/example-image.raw is owned by VM 230. Most storage backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all associated volumes which are owned by that VM or Container.

Using the Command Line Interface

It is recommended to familiarize yourself with the concept behind storage pools and volume identifiers, but in real life, you are not forced to do any of those low level operations on the command line. Normally, allocation and removal of volumes is done by the VM and Container management tools.

Nevertheless, there is a command line tool called pvesm (Proxmox VE storage manager), which is able to perform common storage management tasks.

Examples

Add storage pools

pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
pvesm add dir <STORAGE_ID> --path <PATH>
pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

pvesm set <STORAGE_ID> --disable 1

Enable storage pools

pvesm set <STORAGE_ID> --disable 0

Change/set storage options

pvesm set <STORAGE_ID> <OPTIONS>
pvesm set <STORAGE_ID> --shared 1
pvesm set local --format qcow2
pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not disconnect or unmount anything. It just removes the storage configuration.

pvesm remove <STORAGE_ID>

Allocate volumes

pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if you pass an empty string as <name>.

pvesm alloc local <VMID> '' 4G

Free volumes

pvesm free <VOLUME_ID>
Warning This really destroys all volume data.

List storage status

pvesm status

List storage contents

pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

pvesm list <STORAGE_ID> --vmid <VMID>

List iso images

pvesm list <STORAGE_ID> --iso

List container templates

pvesm list <STORAGE_ID> --vztmpl

Show filesystem path for a volume

pvesm path <VOLUME_ID>

Directory Backend

Storage pool type: dir

Proxmox VE can use local directories or locally mounted shares for storage. A directory is a file level storage, so you can store any content type like virtual disk images, containers, templates, ISO images or backup files.

Note You can mount additional storages via standard linux /etc/fstab, and then define a directory storage for that mount point. This way you can use any file system supported by Linux.
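
For example (the device, filesystem and storage ID below are placeholders), you could add a line like this to /etc/fstab:

/dev/sdb1 /mnt/data ext4 defaults 0 2

and then register the mount point as a directory storage:

pvesm add dir data --path /mnt/data --content images,backup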

This backend assumes that the underlying directory is POSIX compatible, but nothing else. This implies that you cannot create snapshots at the storage level. But there exists a workaround for VM images using the qcow2 file format, because that format supports snapshots internally.

Tip Some storage types do not support O_DIRECT, so you can’t use cache mode none with such storages. Simply use cache mode writeback instead.

We use a predefined directory layout to store different content types into different sub-directories. This layout is used by all file level storage backends.

Table 2. Directory layout
Content type Subdir

VM images

images/<VMID>/

ISO images

template/iso/

Container templates

template/cache

Backup files

dump/

Configuration

This backend supports all common storage properties, and adds an additional property called path to specify the directory. This needs to be an absolute file system path.

Configuration Example (/etc/pve/storage.cfg)
dir: backup
        path /mnt/backup
        content backup
        maxfiles 7

The above configuration defines a storage pool called backup. That pool can be used to store up to 7 backups (maxfiles 7) per VM. The real path for the backup files is /mnt/backup/dump/….

File naming conventions

This backend uses a well defined naming scheme for VM images:

vm-<VMID>-<NAME>.<FORMAT>
<VMID>

This specifies the owner VM.

<NAME>

This can be an arbitrary name (ASCII) without white space. The backend uses disk-[N] as default, where [N] is replaced by an integer to make the name unique.

<FORMAT>

Specifies the image format (raw|qcow2|vmdk).

When you create a VM template, all VM images are renamed to indicate that they are now read-only, and can be used as a base image for clones:

base-<VMID>-<NAME>.<FORMAT>
Note Such base images are used to generate cloned images. So it is important that those files are read-only, and never get modified. The backend changes the access mode to 0444, and sets the immutable flag (chattr +i) if the storage supports that.
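
If you want to check this on a given base image (the VMID and file name below are hypothetical), you can inspect the file mode and attributes:

# stat -c '%a %n' /var/lib/vz/images/9000/base-9000-disk-1.raw
# lsattr /var/lib/vz/images/9000/base-9000-disk-1.raw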

Storage Features

As mentioned above, most file systems do not support snapshots out of the box. To work around that problem, this backend is able to use qcow2 internal snapshot capabilities.

Same applies to clones. The backend uses the qcow2 base image feature to create clones.

Table 3. Storage features for backend dir
Content types                      Image formats           Shared   Snapshots   Clones
images rootdir vztmpl iso backup   raw qcow2 vmdk subvol   no       qcow2       qcow2

Examples

Please use the following command to allocate a 4GB image on storage local:

# pvesm alloc local 100 vm-100-disk10.raw 4G
Formatting '/var/lib/vz/images/100/vm-100-disk10.raw', fmt=raw size=4294967296
successfully created 'local:100/vm-100-disk10.raw'
Note The image name must conform to the above naming conventions.

The real file system path is shown with:

# pvesm path local:100/vm-100-disk10.raw
/var/lib/vz/images/100/vm-100-disk10.raw

And you can remove the image with:

# pvesm free local:100/vm-100-disk10.raw

NFS Backend

Storage pool type: nfs

The NFS backend is based on the directory backend, so it shares most properties. The directory layout and the file naming conventions are the same. The main advantage is that you can directly configure the NFS server properties, so the backend can mount the share automatically. There is no need to modify /etc/fstab. The backend can also test if the server is online, and provides a method to query the server for exported shares.

Configuration

The backend supports all common storage properties, except the shared flag, which is always set. Additionally, the following properties are used to configure the NFS server:

server

Server IP or DNS name. To avoid DNS lookup delays, it is usually preferable to use an IP address instead of a DNS name - unless you have a very reliable DNS server, or list the server in the local /etc/hosts file.

export

NFS export path (as listed by pvesm nfsscan).

You can also set NFS mount options:

path

The local mount point (defaults to /mnt/pve/<STORAGE_ID>/).

options

NFS mount options (see man nfs).

Configuration Example (/etc/pve/storage.cfg)
nfs: iso-templates
        path /mnt/pve/iso-templates
        server 10.0.0.10
        export /space/iso-templates
        options vers=3,soft
        content iso,vztmpl
Tip After an NFS request times out, NFS requests are retried indefinitely by default. This can lead to unexpected hangs on the client side. For read-only content, it is worth considering the NFS soft option, which limits the number of retries to three.

Storage Features

NFS does not support snapshots, but the backend uses qcow2 features to implement snapshots and cloning.

Table 4. Storage features for backend nfs
Content types                      Image formats           Shared   Snapshots   Clones
images rootdir vztmpl iso backup   raw qcow2 vmdk subvol   yes      qcow2       qcow2

Examples

You can get a list of exported NFS shares with:

# pvesm nfsscan <server>

GlusterFS Backend

Storage pool type: glusterfs

GlusterFS is a scalable network file system. The system uses a modular design, runs on commodity hardware, and can provide highly available enterprise storage at low cost. Such a system is capable of scaling to several petabytes, and can handle thousands of clients.

Note After a node/brick crash, GlusterFS does a full rsync to make sure data is consistent. This can take a very long time with large files, so this backend is not suitable to store large VM images.

Configuration

The backend supports all common storage properties, and adds the following GlusterFS specific options:

server

GlusterFS volfile server IP or DNS name.

server2

Backup volfile server IP or DNS name.

volume

GlusterFS Volume.

transport

GlusterFS transport: tcp, unix or rdma

Configuration Example (/etc/pve/storage.cfg)
glusterfs: Gluster
        server 10.2.3.4
        server2 10.2.3.5
        volume glustervol
        content images,iso
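
Analogous to the NFS backend, you can ask a GlusterFS server which volumes it exports with:

# pvesm glusterfsscan <server>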

File naming conventions

The directory layout and the file naming conventions are inherited from the dir backend.

Storage Features

The storage provides a file level interface, but no native snapshot/clone implementation.

Table 5. Storage features for backend glusterfs
Content types              Image formats    Shared   Snapshots   Clones
images vztmpl iso backup   raw qcow2 vmdk   yes      qcow2       qcow2

Local ZFS Pool Backend

Storage pool type: zfspool

This backend allows you to access local ZFS pools (or ZFS filesystems inside such pools).

Configuration

The backend supports the common storage properties content, nodes, disable, and the following ZFS specific properties:

pool

Select the ZFS pool/filesystem. All allocations are done within that pool.

blocksize

Set ZFS blocksize parameter.

sparse

Use ZFS thin-provisioning. A sparse volume is a volume whose reservation is not equal to the volume size.

Configuration Example (/etc/pve/storage.cfg)
zfspool: vmdata
        pool tank/vmdata
        content rootdir,images
        sparse
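
With the sparse flag set as above, newly created volumes get no space reservation. For a hypothetical volume tank/vmdata/vm-100-disk-1 you could verify this with:

# zfs get volsize,refreservation tank/vmdata/vm-100-disk-1

A sparse volume reports refreservation as none, whereas a fully provisioned volume reserves roughly its entire volsize.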

File naming conventions

The backend uses the following naming scheme for VM images:

vm-<VMID>-<NAME>      // normal VM images
base-<VMID>-<NAME>    // template VM image (read-only)
subvol-<VMID>-<NAME>  // subvolumes (ZFS filesystem for containers)
<VMID>

This specifies the owner VM.

<NAME>

This can be an arbitrary name (ASCII) without white space. The backend uses disk[N] as default, where [N] is replaced by an integer to make the name unique.

Storage Features

ZFS is probably the most advanced storage type regarding snapshot and cloning. The backend uses ZFS datasets for both VM images (format raw) and container data (format subvol). ZFS properties are inherited from the parent dataset, so you can simply set defaults on the parent dataset.

Table 6. Storage features for backend zfs
Content types    Image formats   Shared   Snapshots   Clones
images rootdir   raw subvol      no       yes         yes

Examples

It is recommended to create an extra ZFS filesystem to store your VM images:

# zfs create tank/vmdata

To enable compression on that newly allocated filesystem:

# zfs set compression=on tank/vmdata

You can get a list of available ZFS filesystems with:

# pvesm zfsscan

LVM Backend

Storage pool type: lvm

LVM is a thin software layer on top of hard disks and partitions. It can be used to split available disk space into smaller logical volumes. LVM is widely used on Linux and makes managing hard drives easier.

Another use case is to put LVM on top of a big iSCSI LUN. That way you can easily manage space on that iSCSI LUN, which would not be possible otherwise, because the iSCSI specification does not define a management interface for space allocation.

Configuration

The LVM backend supports the common storage properties content, nodes, disable, and the following LVM specific properties:

vgname

LVM volume group name. This must point to an existing volume group.

base

Base volume. This volume is automatically activated before accessing the storage. This is mostly useful when the LVM volume group resides on a remote iSCSI server.

saferemove

Zero-out data when removing LVs. When removing a volume, this makes sure that all data gets erased.

saferemove_throughput

Wipe throughput (cstream -t parameter value).

Configuration Example (/etc/pve/storage.cfg)
lvm: myspace
        vgname myspace
        content rootdir,images
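
To implement the iSCSI use case described above, you can layer an LVM storage on top of an existing iSCSI storage using the base and shared properties. The storage IDs, the volume group and the base volume name below are placeholders:

lvm: mynas-lvm
        vgname vg-on-mynas
        base mynas:0.0.0.scsi-example-lun
        shared
        content rootdir,images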

File naming conventions

The backend uses basically the same naming conventions as the ZFS pool backend.

vm-<VMID>-<NAME>      // normal VM images

Storage Features

LVM is a typical block storage, but this backend does not support snapshots and clones. Unfortunately, normal LVM snapshots are quite inefficient, because they interfere with all writes on the whole volume group for the lifetime of the snapshot.

One big advantage is that you can use it on top of a shared storage, for example an iSCSI LUN. The backend itself implements proper cluster-wide locking.

Tip The newer LVM-thin backend allows snapshots and clones, but does not support shared storage.
Table 7. Storage features for backend lvm
Content types    Image formats   Shared     Snapshots   Clones
images rootdir   raw             possible   no          no

Examples

List available volume groups:

# pvesm lvmscan

LVM thin Backend

Storage pool type: lvmthin

LVM normally allocates blocks when you create a volume. LVM thin pools instead allocate blocks when they are written. This behaviour is called thin-provisioning, because volumes can be much larger than physically available space.

You can use the normal LVM command line tools to manage and create LVM thin pools (see man lvmthin for details). Assuming you already have an LVM volume group called pve, the following commands create a new LVM thin pool (size 100G) called data:

lvcreate -L 100G -n data pve
lvconvert --type thin-pool pve/data
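
To make the new thin pool usable by Proxmox VE, you can then register it as a storage (the storage ID is arbitrary; this mirrors the configuration example below):

pvesm add lvmthin local-lvm --thinpool data --vgname pve --content rootdir,images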

Configuration

The LVM thin backend supports the common storage properties content, nodes, disable, and the following LVM specific properties:

vgname

LVM volume group name. This must point to an existing volume group.

thinpool

The name of the LVM thin pool.

Configuration Example (/etc/pve/storage.cfg)
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

File naming conventions

The backend uses basically the same naming conventions as the ZFS pool backend.

vm-<VMID>-<NAME>      // normal VM images

Storage Features

LVM thin is a block storage, but fully supports snapshots and clones efficiently. New volumes are automatically initialized with zero.

It must be mentioned that LVM thin pools cannot be shared across multiple nodes, so you can only use them as local storage.

Table 8. Storage features for backend lvmthin
Content types    Image formats   Shared   Snapshots   Clones
images rootdir   raw             no       yes         yes

Examples

List available LVM thin pools on volume group pve:

# pvesm lvmthinscan pve

Open-iSCSI initiator

Storage pool type: iscsi

iSCSI is a widely employed technology used to connect to storage servers. Almost all storage vendors support iSCSI. There are also open source iSCSI target solutions available, e.g. OpenMediaVault, which is based on Debian.

To use this backend, you need to install the open-iscsi package. This is a standard Debian package, but it is not installed by default to save resources.

# apt-get install open-iscsi

Low-level iSCSI management tasks can be done using the iscsiadm tool.
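
For example, you can list the currently active iSCSI sessions with:

# iscsiadm -m session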

Configuration

The backend supports the common storage properties content, nodes, disable, and the following iSCSI specific properties:

portal

iSCSI portal (IP or DNS name with optional port).

target

iSCSI target.

Configuration Example (/etc/pve/storage.cfg)
iscsi: mynas
     portal 10.10.10.1
     target iqn.2006-01.openfiler.com:tsn.dcb5aaaddd
     content none
Tip If you want to use LVM on top of iSCSI, it makes sense to set content none. That way it is not possible to create VMs using iSCSI LUNs directly.

File naming conventions

The iSCSI protocol does not define an interface to allocate or delete data. Instead, that needs to be done on the target side and is vendor specific. The target simply exports the allocated storage as numbered LUNs. So Proxmox VE iSCSI volume names just encode some information about the LUN as seen by the Linux kernel.

Storage Features

iSCSI is a block level storage type, and provides no management interface. So it is usually best to export one big LUN, and set up LVM on top of that LUN. You can then use the LVM plugin to manage the storage on that iSCSI LUN.

Table 9. Storage features for backend iscsi
Content types   Image formats   Shared   Snapshots   Clones
images none     raw             yes      no          no

Examples

Scan a remote iSCSI portal, and return a list of possible targets:

pvesm iscsiscan -portal <HOST[:PORT]>

User Mode iSCSI Backend

Storage pool type: iscsidirect

This backend provides basically the same functionality as the Open-iSCSI backend, but uses a user-level library (package libiscsi2) to implement it.

It should be noted that there are no kernel drivers involved, so this can be viewed as a performance optimization. But this comes with the drawback that you cannot use LVM on top of such an iSCSI LUN. So you need to manage all space allocations at the storage server side.

Configuration

The user mode iSCSI backend uses the same configuration options as the Open-iSCSI backend.

Configuration Example (/etc/pve/storage.cfg)
iscsidirect: faststore
     portal 10.10.10.1
     target iqn.2006-01.openfiler.com:tsn.dcb5aaaddd

Storage Features

Note This backend works with VMs only. Containers cannot use this driver.
Table 10. Storage features for backend iscsidirect
Content types   Image formats   Shared   Snapshots   Clones
images          raw             yes      no          no

Ceph RADOS Block Devices (RBD)

Storage pool type: rbd

Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability. RADOS block devices implement a feature rich block level storage, and you get the following advantages:

  • thin provisioning

  • resizable volumes

  • distributed and redundant (striped over multiple OSDs)

  • full snapshot and clone capabilities

  • self healing

  • no single point of failure

  • scalable to the exabyte level

  • kernel and user space implementations available

Note For smaller deployments, it is also possible to run Ceph services directly on your Proxmox VE nodes. Recent hardware has plenty of CPU power and RAM, so running storage services and VMs on the same node is possible.

Configuration

This backend supports the common storage properties nodes, disable, content, and the following rbd specific properties:

monhost

List of monitor daemon IPs.

pool

Ceph pool name.

username

RBD user Id.

krbd

Access rbd through krbd kernel module. This is required if you want to use the storage for containers.

Configuration Example (/etc/pve/storage.cfg)
rbd: ceph3
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph3
        content images
        username admin
Tip You can use the rbd utility to do low-level management tasks.
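
For example, assuming the node has a working /etc/ceph/ceph.conf and admin keyring, you could list the images in the pool from the example above with:

# rbd ls ceph3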

Authentication

If you use cephx authentication, you need to copy the keyfile from Ceph to the Proxmox VE host.

Create the directory /etc/pve/priv/ceph with

mkdir /etc/pve/priv/ceph

Then copy the keyring

scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring

The keyring must be named to match your <STORAGE_ID>. Copying the keyring generally requires root privileges.

Storage Features

The rbd backend is a block level storage, and implements full snapshot and clone functionality.

Table 11. Storage features for backend rbd
Content types    Image formats   Shared   Snapshots   Clones
images rootdir   raw             yes      yes         yes

Copyright © 2007-2016 Proxmox Server Solutions GmbH

This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details.

You should have received a copy of the GNU Affero General Public License along with this program. If not, see http://www.gnu.org/licenses/