kowabunga.cloud.ceph role – Install, configure, and start Ceph storage cluster.
Note
This role is part of the kowabunga.cloud collection (version 0.1.0).
It is not included in ansible-core.
To check whether it is installed, run: ansible-galaxy collection list.
To install it, use: ansible-galaxy collection install kowabunga.cloud.
To use it in a playbook, specify: kowabunga.cloud.ceph.
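As a quick sketch, the role can be pulled into a playbook like the one below. The `ceph_nodes` host group is only an assumption for illustration; role variables (see Parameters below) would be supplied through group or host vars.

```yaml
---
# Minimal sketch: apply the role to an assumed "ceph_nodes" inventory group.
# Role variables (see the Parameters section) are expected in group_vars/host_vars.
- name: Deploy Ceph storage cluster
  hosts: ceph_nodes
  become: true
  roles:
    - role: kowabunga.cloud.ceph
```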
Entry point main – Install, configure, and start Ceph storage cluster.
Synopsis
Install, configure, and start Ceph storage cluster.
Parameters
- List of client keyrings to be supported (see the sketch below). Must be handed over to client applications further on; a typical client would be libvirt or Kubernetes, trying to access Ceph RBD block devices. Default:
  - Map of client capabilities. Refer to https://docs.ceph.com/en/latest/cephfs/client-auth/ for help.
  - Ceph client name.
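As a hedged sketch, a client keyring entry might be declared along these lines; the `ceph_clients` variable and key names are assumptions rather than the role's documented interface, while the caps values follow upstream Ceph client-auth syntax.

```yaml
# group_vars sketch: variable and key names below are assumptions,
# check the role's argument spec for the actual ones.
ceph_clients:
  - name: libvirt            # Ceph client name
    caps:                    # client capabilities, per Ceph client-auth docs
      mon: "profile rbd"
      osd: "profile rbd pool=rbd"
```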
- Defines whether support for CephFS must be enabled. Only useful if you intend to provide filesystem-as-a-service (e.g. with Kylo). Choices:
- List of Ceph filesystems to be created on the storage cluster (see the sketch below). Requires CephFS support to be enabled. Default:
  - Name of one of the previously created OSD pools which will serve to host the filesystem's data.
  - Defines whether this filesystem is to be set as the default one. Choices:
  - Filesystem target type. Choices:
  - Name of one of the previously created OSD pools which will serve to host the filesystem's metadata.
  - Filesystem name. No whitespace or special characters are allowed.
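A hedged sketch of how CephFS filesystems could be declared, assuming hypothetical `ceph_fs_enabled` and `ceph_fs` variable names and key names; the pool names simply reference previously created OSD pools.

```yaml
# Illustrative sketch only: variable and key names are assumptions.
ceph_fs_enabled: true
ceph_fs:
  - name: shared             # filesystem name, no whitespace/special characters
    data: fs_data            # previously created OSD pool hosting file data
    metadata: fs_metadata    # previously created OSD pool hosting metadata
    default: true            # set as the default filesystem
```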
- Ceph cluster filesystem ID (fsid). Must be unique across your entire network (usually the case, unless you have multiple Ceph clusters). Consists of a UUID, which can be generated through the 'uuidgen' command (see the sketch below).
- Name of the group of machines that are part of the Ceph cluster.
- Defines the local directory (relative to the playbook execution one) where generated keyrings are stored. Keyring files are generated once (and for all) per regional cluster and are then further deployed on cluster peers. Once generated, keep these files under source control.
- Password for the 'admin' user. Can be used to connect to the Ceph web client on port 8080. Recommended to be kept safe and encrypted with Ansible Vault or SOPS. Defaults to encrypted
- Defines whether the Ceph Manager component must be deployed on the node. Ceph Manager provides monitoring and serviceability features. It is recommended to enable it on at least one cluster node; a group of 2 (or 3) is recommended for high availability (active-passive). Choices:
- Defines whether the Ceph Monitor component must be deployed on the node. Ceph monitors implement the API and take the client workload. At least one cluster node must have a monitor present; a group of 3 is recommended for load balancing and high availability, and having more than 3 is usually not useful. Choices:
- Network address for the Ceph monitor to listen on. The private LAN one is used if unspecified.
- Defines whether Ceph OSD components must be deployed on the node. An OSD (Object Storage Daemon) targets an atomic cluster storage entity (usually a disk or a partition); there are as many OSD instances as disks to be part of the cluster. Should be enabled unless the node does not feature any data disk to be part of the cluster. Choices:
- Maximum number of PGs (Placement Groups) per OSD. Default:
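A hedged sketch of the cluster-level and per-node settings described above, assuming hypothetical variable names; the fsid shown is just a placeholder UUID generated once with `uuidgen`.

```yaml
# Illustrative sketch only: variable names are assumptions.
ceph_fsid: "b1946ac9-2f7e-4a8b-9c3d-5e6f7a8b9c0d"   # unique cluster UUID, e.g. from `uuidgen`
ceph_mon_enabled: true    # this node runs a Ceph monitor
ceph_mgr_enabled: true    # this node runs a Ceph manager
ceph_osd_enabled: true    # this node contributes data disks as OSDs
```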
- List of storage pools to be created on the Ceph cluster (see the sketch below). Refer to https://linuxkidd.com/ceph/pgcalc.html for per-pool PG value calculation. Values of at least 256 (or 512) are recommended for multi-node clusters, 128 for a single node. Warning: pg_autoscaler is enabled by default, see https://docs.ceph.com/en/latest/rados/operations/placement-groups Default:
  - Data compression settings.
    - Compression algorithm to be used. Choices:
    - Compression mode, a tradeoff between storage size and CPU usage. Choices:
  - Pool name. No whitespace or special characters are allowed.
  - Number of PGs to be allocated to the pool.
  - Type of storage pool. Use 'rbd' for block devices or 'fs' for filesystems. Choices:
  - Defines the pool data replication factor. Data fragments are copied over multiple OSDs for redundancy: the bigger the factor, the more resilient your cluster is to failures, but the less usable space you'll get.
    - Minimum replicas to be alive for the cluster to be safe.
    - Target replica count.
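A hedged sketch of a pool declaration, with assumed variable and key names; the compression values shown (lz4, aggressive) are standard Ceph options, not necessarily the role's defaults.

```yaml
# Illustrative sketch only: variable and key names are assumptions.
ceph_pools:
  - name: rbd                # pool name, no whitespace/special characters
    type: rbd                # 'rbd' for block devices, 'fs' for filesystems
    pgs: 256                 # placement groups, see the pgcalc reference above
    replication:
      min: 2                 # minimum live replicas for the cluster to be safe
      count: 3               # target replica count
    compression:
      algorithm: lz4
      mode: aggressive
```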
- List of instance OSD definitions (see the sketch below). Each disk/partition from the host must be declared in this list. Default:
  - Unique Linux special device file representing the disk/partition to be mapped by the OSD. Example: /dev/disk/by-id/nvme-SAMSUNG_MZQL21T9HCJR-00A07_S64GNS0X101300. WARNING: the device WILL be formatted for Ceph usage.
  - OSD unique identifier across the whole Ceph cluster. Usually iterates incrementally over disks and instances (e.g. 0 for the first disk of the first instance, 1 for the second disk of the first instance, …).
  - Weight of the OSD in the Ceph CRUSH map. The value determines object placement and priority: the bigger the value, the more chances a disk has to be elected to store data fragments. Usually defined as the disk size in TB (e.g. a 1.92 TB SSD would be assigned a weight of 1.92). Can be overridden to enforce placement (for example, if some disks are faster than others).
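A hedged per-host sketch of OSD declarations, with an assumed `ceph_osds` variable name; the device path reuses the example above.

```yaml
# host_vars sketch: variable and key names are assumptions.
ceph_osds:
  - device: /dev/disk/by-id/nvme-SAMSUNG_MZQL21T9HCJR-00A07_S64GNS0X101300
    id: 0                    # cluster-wide unique OSD identifier
    weight: 1.92             # CRUSH weight, usually the disk size in TB
```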