{"id":2575,"date":"2020-10-06T11:13:18","date_gmt":"2020-10-06T09:13:18","guid":{"rendered":"https:\/\/tech.lobobrothers.com\/proxmox-and-ceph-from-0-to-100-part-iii\/"},"modified":"2025-02-06T15:09:32","modified_gmt":"2025-02-06T14:09:32","slug":"proxmox-and-ceph-from-0-to-100-part-iii","status":"publish","type":"post","link":"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/","title":{"rendered":"Proxmox and Ceph from 0 to 100 Part III"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"2575\" class=\"elementor elementor-2575 elementor-1701\" data-elementor-post-type=\"post\">\n\t\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-65d6287 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"65d6287\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-7082610a\" data-id=\"7082610a\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-7a7d7b76 elementor-widget elementor-widget-text-editor\" data-id=\"7a7d7b76\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><\/p>\n<h2>Installing and configuring CEPH in Proxmox<\/h2>\n<p>In the next post, Proxmox and Ceph from 0 to 100 part III, we will discuss the installation and configuration of Ceph in Proxmox.<\/p>\n<p>First of all we have to know that Ceph is a free, scalable, high performance and robust distributed file system, designed to have no single point of failure, error free and fault tolerant. 
In an infrastructure we have to know the elements that make up a Ceph cluster:<\/p>\n<p style=\"padding-left: 40px;\"><strong>Cluster monitors<\/strong> (ceph-mon), which are in charge of maintaining and managing the activity of the nodes, monitoring the manager, object storage and metadata server components in order to achieve the Ceph objective. In any production cluster the minimum is three monitors, so in our lab all three nodes will be monitors.<\/p>\n<p style=\"padding-left: 40px;\"><strong>Manager<\/strong>, in charge of managing space utilization, metrics and cluster status. At least two of the nodes should have this role.<\/p>\n<p style=\"padding-left: 40px;\"><strong>OSD (object storage daemon)<\/strong>, responsible for storing data, replication and recovery. As with the monitors, a minimum of three is recommended.<\/p>\n<p style=\"padding-left: 40px;\"><strong>Metadata server<\/strong>, which stores metadata and allows basic POSIX filesystem commands. It would allow us to create a CephFS.<\/p>\n<p>Ceph would not be possible without its <strong>CRUSH<\/strong> algorithm, which determines how to store and retrieve data by calculating data storage locations. It requires a map of your cluster containing a list of OSDs and rules for how data should be replicated, and uses that map to store and retrieve data pseudo-randomly across the OSDs with an even distribution of data throughout the cluster.<\/p>\n<p>Regarding the requirements, we have to take into account:<\/p>\n<p style=\"padding-left: 40px;\">Create one OSD per disk<\/p>\n<p style=\"padding-left: 40px;\">Assign one thread per OSD.<\/p>\n<p style=\"padding-left: 40px;\">Size RAM at a minimum ratio of 1 GB per TB of disk storage on the OSD node.<\/p>\n<p style=\"padding-left: 40px;\">For production, use 10 Gigabit network cards.<\/p>\n<p>With all of the above covered, let&#8217;s move on to the installation and configuration. 
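To make the sizing rules above concrete, here is a minimal sketch in Python (a hypothetical helper for illustration only, not part of Proxmox or Ceph), assuming the rules of thumb just listed: one OSD per disk, one thread per OSD, and 1 GB of RAM per TB of OSD storage:

```python
# Minimal OSD-node sizing sketch based on the rules of thumb above.
# Hypothetical helper for illustration only; not a Proxmox/Ceph tool.
def size_osd_node(disk_sizes_tb):
    osds = len(disk_sizes_tb)        # create one OSD per disk
    threads = osds                   # assign one thread per OSD
    min_ram_gb = sum(disk_sizes_tb)  # 1 GB of RAM per TB of OSD storage
    return {"osds": osds, "threads": threads, "min_ram_gb": min_ram_gb}

# Example: a node with three 4 TB disks
print(size_osd_node([4, 4, 4]))  # {'osds': 3, 'threads': 3, 'min_ram_gb': 12}
```

Real nodes also need RAM and CPU headroom for the monitor, manager, and Proxmox itself, so treat these numbers as a floor, not a target.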
Once logged into our cluster, let&#8217;s start with PVE1 in the Ceph section, where a message indicates that Ceph is not installed and asks if we would like to install it now.<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1708 size-large\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-25-1024x399.png\" alt=\"proxmox install ceph\" width=\"800\" height=\"312\" srcset=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-25-1024x399.png 1024w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-25-300x117.png 300w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-25-768x299.png 768w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-25-1536x598.png 1536w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-25-700x272.png 700w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-25.png 1914w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/><\/p>\n<p>We click on Install Ceph-nautilus and it shows us a brief introduction to Ceph and a link to the documentation.<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1709 size-full\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-24.png\" alt=\"proxmox setup ceph\" width=\"698\" height=\"512\" srcset=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-24.png 698w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-24-300x220.png 300w\" sizes=\"(max-width: 698px) 100vw, 698px\" \/><\/p>\n<p>We click on Start installation and after a few seconds we will see the following.<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1710 size-full\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-23.png\" alt=\"proxmox install ceph packages\" width=\"702\" height=\"514\" 
srcset=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-23.png 702w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-23-300x220.png 300w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-23-700x513.png 700w\" sizes=\"(max-width: 702px) 100vw, 702px\" \/><\/p>\n<p>We say &#8220;Y&#8221; and wait for it to finish and Installed ceph nautilus successfully, to click next.<\/p>\n<p>In the following screen you will find the configuration<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1712 size-full\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-22.png\" alt=\"proxmox configure ceph network\" width=\"700\" height=\"509\" srcset=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-22.png 700w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-22-300x218.png 300w\" sizes=\"(max-width: 700px) 100vw, 700px\" \/><\/p>\n<p>In this first part we can call our attention, public network and cluster network, where the Ceph documentation itself tells us that using a public network and a cluster network would complicate the configuration of both hardware and software and usually does not have a significant impact on performance, so it is better to have a bond of cards so that the NICs are active \/ active and configure that both networks are the same network, ie, simply in the Public Network we select our internal interface, in this case the 10. 
0.0.0.221\/24 and in Cluster Network &#8220;Same as Public Network&#8221;, but if we want to be fine and separate it in these 2 networks, the Cluster Network would be the OSD replication and the heartbeat traffic and in the Public Network the rest of Ceph traffic.<\/p>\n<p>In the replicas part we will configure the number of replicas that we will have for each object, the more replicas, the more space consumed but we will increase the number of allowed failures. Regarding the Minimum replicas it establishes the minimum number of replicas required for I\/O, that is to say, how many have to be OK to have access to the data, if we put 3 in Number of replicas and Minimum 3, as soon as a replica falls we will stop having access to the data therefore minimum we have to put one less so that everything continues working. Keep in mind when configuring that you can always increase the number of replicas later, but you will not be able to decrease, you will have to create a new pool, move the data and then delete the old one to be able to reduce the number of replicas.<\/p>\n<p>And finally as it is the first Ceph monitor it indicates us that more monitors are recommended, in fact as I say minimum 3 in production not to have problems, we give to create and in the following screen to Finish.<\/p>\n<p>Once this part is finished we go to the OSD section and click on Create: OSD to add our OSD.<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1717 size-large\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-21-1024x396.png\" alt=\"proxmox osd ceph\" width=\"800\" height=\"309\" srcset=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-21-1024x396.png 1024w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-21-300x116.png 300w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-21-768x297.png 768w, 
https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-21-1536x594.png 1536w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-21-700x271.png 700w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-21.png 1911w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/><\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1718 size-full\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-20.png\" alt=\"proxmox create osd ceph\" width=\"601\" height=\"273\" srcset=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-20.png 601w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-20-300x136.png 300w\" sizes=\"(max-width: 601px) 100vw, 601px\" \/><\/p>\n<p>Note that Ceph warns us that it is not compatible with hardware RAID, with a link to more details.<\/p>\n<p>In Disk we select the disk we are going to use, in this case our 40 GB one. Then we have two fields, DB Disk and WAL Disk. Before Ceph Luminous, Filestore was the default storage backend for Ceph OSDs. As of Ceph Nautilus, Proxmox no longer supports creating Filestore OSDs, although they can still be created from the console using the ceph-volume command. It now uses Bluestore, which needs both parameters: DB for the internal metadata and WAL for the internal journal or write-ahead log, so, as we have pointed out on several occasions, SSDs are recommended. We can let it manage the space automatically or set an amount, taking into account that the DB will need about 10% of the OSD size and the WAL about 1%, and we can even select a different disk for them. 
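As a quick sketch of the sizing rule just mentioned (DB roughly 10% of the OSD, WAL roughly 1%), here is a hypothetical Python helper, for illustration only:

```python
# Bluestore DB/WAL sizing sketch from the rule of thumb above:
# the DB needs about 10% of the OSD size, the WAL about 1%.
def bluestore_db_wal_gb(osd_size_gb):
    db_gb = osd_size_gb * 0.10   # DB partition for internal metadata
    wal_gb = osd_size_gb * 0.01  # WAL, the internal journal / write-ahead log
    return db_gb, wal_gb

# For the 40 GB disk used in this lab:
db, wal = bluestore_db_wal_gb(40)
print(db, wal)  # roughly 4 GB for the DB and 0.4 GB for the WAL
```

These are only rules of thumb; Proxmox can also size both automatically if we leave the fields empty.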
Regarding Encrypt OSD, as its name indicates, it encrypts the OSDs if we enable it.<\/p>\n<p>In this lab we will leave it as shown in the image; configure it according to the scenario and architecture you are going to deploy.<\/p>\n<p>We repeat the steps to install Ceph on PVE2 and PVE3, with the only difference that in the configuration step it will tell us that the configuration is already initialized.<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1722 size-full\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-19.png\" alt=\"proxmox ceph configuration initialized\" width=\"706\" height=\"515\" srcset=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-19.png 706w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-19-300x219.png 300w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-19-700x511.png 700w\" sizes=\"(max-width: 706px) 100vw, 706px\" \/><\/p>\n<p>Click on Next and Finish. 
We go to PVE1\/Ceph\/Monitor and we click on Create<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1723 size-large\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-18-1024x396.png\" alt=\"proxmox monitors and managers ceph\" width=\"800\" height=\"309\" srcset=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-18-1024x396.png 1024w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-18-300x116.png 300w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-18-768x297.png 768w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-18-1536x594.png 1536w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-18-700x271.png 700w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-18.png 1914w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/><\/p>\n<p>Select PVE2 in the next screen and create<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1724 size-full\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-17.png\" alt=\"proxmox create monitor ceph\" width=\"305\" height=\"125\" srcset=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-17.png 305w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-17-300x123.png 300w\" sizes=\"(max-width: 305px) 100vw, 305px\" \/><\/p>\n<p>Repeat for PVE3 and do the same in Create in the Manager section for both nodes.<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1727 size-full\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-16.png\" alt=\"proxmox create manager ceph\" width=\"301\" height=\"123\" \/><\/p>\n<p>The result is as follows<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1729 size-large\" 
src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-15-1024x394.png\" alt=\"proxmox status monitors and manager ceph\" width=\"800\" height=\"308\" srcset=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-15-1024x394.png 1024w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-15-300x116.png 300w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-15-768x296.png 768w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-15-1536x592.png 1536w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-15-700x270.png 700w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-15.png 1911w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/><\/p>\n<p>Next we will have to add the OSD of PVE2 and PVE3 following the steps we did to add the OSD of PVE1 going to the Ceph\/OSD section of each node, having as a result<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1732 size-large\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-14-1024x395.png\" alt=\"proxmox status osd ceph\" width=\"800\" height=\"309\" srcset=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-14-1024x395.png 1024w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-14-300x116.png 300w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-14-768x296.png 768w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-14-1536x592.png 1536w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-14-700x270.png 700w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-14.png 1912w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/><\/p>\n<p>Now perhaps comes the most complicated part of CEPH, the crush rules to make osd groups and to be able to mount 
the pools. For example, if we have normal disks, SSD and NVRAM we will have to create three rules; or, if we have 50 disks, all SSD, but want to make several pools, we select the OSDs for each pool.<\/p>\n<p>To do so, we will have to obtain the CRUSH map, decompile it, modify it and compile it again. We go to the shell and type the following:<\/p>\n<p style=\"padding-left: 40px;\">To obtain it<\/p>\n<p style=\"padding-left: 80px;\">ceph osd getcrushmap -o {compiled-crushmap-filename}<\/p>\n<p style=\"padding-left: 80px;\">In other words,<\/p>\n<p style=\"padding-left: 80px;\">ceph osd getcrushmap -o cephrulescompiled<\/p>\n<p style=\"padding-left: 40px;\">To decompile it<\/p>\n<p style=\"padding-left: 80px;\">crushtool -d cephrulescompiled -o cephrules.txt<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1736 size-large\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-12-1024x189.png\" alt=\"proxmox get ceph crush map\" width=\"800\" height=\"148\" srcset=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-12-1024x189.png 1024w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-12-300x55.png 300w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-12-768x142.png 768w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-12-700x129.png 700w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-12.png 1251w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/><\/p>\n<p>We edit the cephrules.txt file and modify the following<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1738 size-full\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-11.png\" alt=\"proxmox edit crush ceph map\" width=\"477\" height=\"84\" srcset=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-11.png 477w, 
https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-11-300x53.png 300w\" sizes=\"(max-width: 477px) 100vw, 477px\" \/><\/p>\n<p>Replacing it with this, for example<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1739 size-full\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-10.png\" alt=\"proxmox crushmap devices\" width=\"540\" height=\"69\" srcset=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-10.png 540w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-10-300x38.png 300w\" sizes=\"(max-width: 540px) 100vw, 540px\" \/><\/p>\n<p>That is, we define a class. Now, in the rules section<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1740 size-full\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-9.png\" alt=\"proxmox crushmap rules\" width=\"541\" height=\"160\" srcset=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-9.png 541w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-9-300x89.png 300w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-9-539x160.png 539w\" sizes=\"(max-width: 541px) 100vw, 541px\" \/><\/p>\n<p>We create a new rule with another name and another id, and in the &#8220;step take default&#8221; line we add our class hddpool1, so it looks like this<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1743 size-full\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-7.png\" alt=\"proxmox crushmap add rule\" width=\"540\" height=\"302\" srcset=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-7.png 540w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-7-300x168.png 300w\" sizes=\"(max-width: 540px) 100vw, 540px\" \/><\/p>\n<p>The next step would be to recompile with<\/p>\n<p style=\"padding-left: 40px;\">crushtool -c 
cephrules.txt -o cephrulesnew<\/p>\n<p>And set the new map with<\/p>\n<p style=\"padding-left: 40px;\">ceph osd setcrushmap -i cephrulesnew<\/p>\n<p>Once this is done, if we go to Ceph\/OSD in the GUI, we can see that the Category column now shows hddpool1<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1744 size-large\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-6-1024x397.png\" alt=\"proxmox osd view class on gui\" width=\"800\" height=\"310\" srcset=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-6-1024x397.png 1024w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-6-300x116.png 300w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-6-768x298.png 768w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-6-1536x596.png 1536w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-6-700x272.png 700w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-6.png 1910w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/><\/p>\n<p>As I have mentioned, this is perhaps the most complicated part of Ceph, although if you make a mistake while editing, because you are missing a { for example, or the file is badly constructed, it simply will not let you recompile; in a production environment, however, a mistake in the OSD assignment, for example, can indeed cause problems.<\/p>\n<p>The next step is to create the pools where the data of the virtual machines will be stored. We go to Ceph\/Pools and click on Create<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1747 size-large\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-5-1024x398.png\" alt=\"proxmox create pool ceph\" width=\"800\" height=\"311\" srcset=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-5-1024x398.png 1024w, 
https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-5-300x117.png 300w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-5-768x298.png 768w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-5-1536x597.png 1536w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-5-700x272.png 700w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-5.png 1918w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/><\/p>\n<p>We have to give it a name, &#8220;Pool1&#8221; to follow the nomenclature. Size is the number of replicas, as we saw before, and Min. Size is the minimum number of replicas for I\/O. Now you may wonder what the replicas are distributed over, hosts or OSDs; that is, if we set 3, we will have the original and two more copies, but on different hosts or OSDs. The answer to this question is in the same rules file we have created, in the types section.<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1751 size-full\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-4.png\" alt=\"proxmox type buckets of ceph\" width=\"136\" height=\"195\" \/><\/p>\n<p>We have all these types, and in the rule we define which type the replication is based on<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1752 size-full\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-3.png\" alt=\"proxmox rule pool1 ceph\" width=\"325\" height=\"142\" srcset=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-3.png 325w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-3-300x131.png 300w\" sizes=\"(max-width: 325px) 100vw, 325px\" \/><\/p>\n<p>In this case we have defined it at host level, so we will have the original and a copy on two different hosts.<\/p>\n<p>Next, in Crush rules we select the one we have created in the Map for Pool1, which 
we have called Pool1, and the pg_num, which in versions prior to Nautilus was the frustration of many, since choosing an incorrect value could not always be corrected: it could be increased but never decreased. This value was calculated with the formula (OSDs * 100)\/Replicas, and we had to take into account that if we increased it later, we should also increase the value of pgp_num to the same value as pg_num by launching the following commands<\/p>\n<p style=\"padding-left: 40px;\">ceph osd pool set {pool name} pg_num {new value}<\/p>\n<p style=\"padding-left: 40px;\">ceph osd pool set {pool name} pgp_num {same value as pg_num}<\/p>\n<p>With Nautilus this problem disappears, since pg_num can now be reduced, and to forget about it we can activate the pg_autoscaler from the console with<\/p>\n<p style=\"padding-left: 40px;\">ceph mgr module enable pg_autoscaler<\/p>\n<p>and check the autoscale with<\/p>\n<p style=\"padding-left: 40px;\">ceph osd pool autoscale-status<\/p>\n<p>Finally, leave the &#8220;Add as Storage&#8221; checkbox checked to create the RBD type storage. If we now launch the above command, we will get<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1754 size-full\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-2.png\" alt=\"proxmox status autoscale ceph\" width=\"794\" height=\"53\" srcset=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-2.png 794w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-2-300x20.png 300w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-2-768x51.png 768w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-2-700x47.png 700w\" sizes=\"(max-width: 794px) 100vw, 794px\" \/><\/p>\n<p>Notice that in the AUTOSCALE column we have a warn. What does it mean? Nothing to worry about; it is the default autoscale mode in Nautilus. 
We have three possible modes:<\/p>\n<p style=\"padding-left: 40px;\"><strong>off<\/strong>: disables automatic scaling for this pool. It is up to the administrator to choose an appropriate PG count for each pool.<\/p>\n<p style=\"padding-left: 40px;\"><strong>on<\/strong>: enables automatic PG count adjustments for the given pool.<\/p>\n<p style=\"padding-left: 40px;\"><strong>warn:<\/strong> generates health alerts when the PG count should be adjusted.<\/p>\n<p>To change our Pool1 to <strong>on<\/strong> mode and forget about it completely, we write the following in the shell:<\/p>\n<p style=\"padding-left: 40px;\">ceph osd pool set Pool1 pg_autoscale_mode on<\/p>\n<p>We do this for each pool we want, replacing Pool1 with the name of the pool. These are the console outputs.<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1758 size-full\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-1.png\" alt=\"proxmox status autoscale pool\" width=\"784\" height=\"89\" srcset=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-1.png 784w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-1-300x34.png 300w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-1-768x87.png 768w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-ceph-1-700x79.png 700w\" sizes=\"(max-width: 784px) 100vw, 784px\" \/><\/p>\n<p>As we can see, the data has changed and pg_num has been lowered to 32 without any problem, something that previously was not possible.<\/p>\n<p>If we go to Datacenter\/Storage in the GUI, we will see that Pool1 is already available for disk images and containers, mounted on the three nodes.<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1761 size-large\" src=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-43-en-1024x396.png\" alt=\"proxmox see ceph storage\" width=\"800\" height=\"309\" 
srcset=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-43-en-1024x396.png 1024w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-43-en-300x116.png 300w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-43-en-768x297.png 768w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-43-en-1536x594.png 1536w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-43-en-700x270.png 700w, https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2020\/10\/proxmox-43-en.png 1915w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/><\/p>\n<p>So far we have mounted our cluster with HA and Ceph, in the next post we will see how to create an HA group, create a vm, a container and we will test HA.<\/p>\n<p>I hope you liked it, enjoy life. If you want to purchase any of the licenses please contact us, we are a Proxmox partner.<\/p>\n<p><strong>Continue <a href=\"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iv\/\">Proxmox and Ceph from 0 to 100 Part IV<\/a><\/strong><\/p>\n<p>TL.<\/p>\n<p><\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-af6f5aa elementor-widget elementor-widget-heading\" data-id=\"af6f5aa\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">FAQS<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-5510119 elementor-widget elementor-widget-toggle\" data-id=\"5510119\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"toggle.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<div class=\"elementor-toggle\">\n\t\t\t\t\t\t\t<div class=\"elementor-toggle-item\">\n\t\t\t\t\t<div id=\"elementor-tab-title-8911\" class=\"elementor-tab-title\" 
data-tab=\"1\" role=\"button\" aria-controls=\"elementor-tab-content-8911\" aria-expanded=\"false\">\n\t\t\t\t\t\t\t\t\t\t\t\t<span class=\"elementor-toggle-icon elementor-toggle-icon-left\" aria-hidden=\"true\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<span class=\"elementor-toggle-icon-closed\"><svg class=\"e-font-icon-svg e-fas-caret-right\" viewBox=\"0 0 192 512\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M0 384.662V127.338c0-17.818 21.543-26.741 34.142-14.142l128.662 128.662c7.81 7.81 7.81 20.474 0 28.284L34.142 398.804C21.543 411.404 0 402.48 0 384.662z\"><\/path><\/svg><\/span>\n\t\t\t\t\t\t\t\t<span class=\"elementor-toggle-icon-opened\"><\/span>\n\t\t\t\t\t\t\t\t\t\t\t\t\t<\/span>\n\t\t\t\t\t\t\t\t\t\t\t\t<a class=\"elementor-toggle-title\" tabindex=\"0\">What is Ceph and why integrate it with Proxmox?<\/a>\n\t\t\t\t\t<\/div>\n\n\t\t\t\t\t<div id=\"elementor-tab-content-8911\" class=\"elementor-tab-content elementor-clearfix\" data-tab=\"1\" role=\"region\" aria-labelledby=\"elementor-tab-title-8911\"><p>Ceph is a free, scalable, distributed file system designed to offer high performance and fault tolerance with no single points of failure. 
When integrated with Proxmox, you get robust distributed storage that improves the resilience and scalability of virtual machines and containers.<\/p>\n<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<div class=\"elementor-toggle-item\">\n\t\t\t\t\t<div id=\"elementor-tab-title-8912\" class=\"elementor-tab-title\" data-tab=\"2\" role=\"button\" aria-controls=\"elementor-tab-content-8912\" aria-expanded=\"false\">\n\t\t\t\t\t\t\t\t\t\t\t\t<span class=\"elementor-toggle-icon elementor-toggle-icon-left\" aria-hidden=\"true\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<span class=\"elementor-toggle-icon-closed\"><svg class=\"e-font-icon-svg e-fas-caret-right\" viewBox=\"0 0 192 512\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M0 384.662V127.338c0-17.818 21.543-26.741 34.142-14.142l128.662 128.662c7.81 7.81 7.81 20.474 0 28.284L34.142 398.804C21.543 411.404 0 402.48 0 384.662z\"><\/path><\/svg><\/span>\n\t\t\t\t\t\t\t\t<span class=\"elementor-toggle-icon-opened\"><\/span>\n\t\t\t\t\t\t\t\t\t\t\t\t\t<\/span>\n\t\t\t\t\t\t\t\t\t\t\t\t<a class=\"elementor-toggle-title\" tabindex=\"0\">What are the main components of a Ceph cluster?<\/a>\n\t\t\t\t\t<\/div>\n\n\t\t\t\t\t<div id=\"elementor-tab-content-8912\" class=\"elementor-tab-content elementor-clearfix\" data-tab=\"2\" role=\"region\" aria-labelledby=\"elementor-tab-title-8912\"><p>A Ceph cluster is made up of several key elements:<\/p>\n<ul>\n<li><strong>Cluster Monitors (ceph-mon)<\/strong>: Manage and monitor node activity, ensuring cluster health and coherence.<\/li>\n<li><strong>Managers (ceph-mgr):<\/strong> Manage used space, metrics, and overall cluster health.<\/li>\n<li><strong>OSDs (Object Storage Daemons):<\/strong> Responsible for the actual storage of data, as well as its replication and recovery.<\/li>\n<li><strong>Metadata Servers (ceph-mds)<\/strong>: Store metadata and enable basic POSIX file system operations, facilitating the creation of CephFS.<\/li>\n<\/ul>\n<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<div 
class=\"elementor-toggle-item\">\n\t\t\t\t\t<div id=\"elementor-tab-title-8913\" class=\"elementor-tab-title\" data-tab=\"3\" role=\"button\" aria-controls=\"elementor-tab-content-8913\" aria-expanded=\"false\">\n\t\t\t\t\t\t\t\t\t\t\t\t<span class=\"elementor-toggle-icon elementor-toggle-icon-left\" aria-hidden=\"true\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<span class=\"elementor-toggle-icon-closed\"><svg class=\"e-font-icon-svg e-fas-caret-right\" viewBox=\"0 0 192 512\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M0 384.662V127.338c0-17.818 21.543-26.741 34.142-14.142l128.662 128.662c7.81 7.81 7.81 20.474 0 28.284L34.142 398.804C21.543 411.404 0 402.48 0 384.662z\"><\/path><\/svg><\/span>\n\t\t\t\t\t\t\t\t<span class=\"elementor-toggle-icon-opened\"><\/span>\n\t\t\t\t\t\t\t\t\t\t\t\t\t<\/span>\n\t\t\t\t\t\t\t\t\t\t\t\t<a class=\"elementor-toggle-title\" tabindex=\"0\">What is the CRUSH algorithm in Ceph?<\/a>\n\t\t\t\t\t<\/div>\n\n\t\t\t\t\t<div id=\"elementor-tab-content-8913\" class=\"elementor-tab-content elementor-clearfix\" data-tab=\"3\" role=\"region\" aria-labelledby=\"elementor-tab-title-8913\"><p>CRUSH is the algorithm that determines how and where data is stored and retrieved in Ceph. 
It calculates storage locations based on the CRUSH map, ensuring even distribution and eliminating single points of failure.<\/p>\n<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<div class=\"elementor-toggle-item\">\n\t\t\t\t\t<div id=\"elementor-tab-title-8914\" class=\"elementor-tab-title\" data-tab=\"4\" role=\"button\" aria-controls=\"elementor-tab-content-8914\" aria-expanded=\"false\">\n\t\t\t\t\t\t\t\t\t\t\t\t<span class=\"elementor-toggle-icon elementor-toggle-icon-left\" aria-hidden=\"true\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<span class=\"elementor-toggle-icon-closed\"><svg class=\"e-font-icon-svg e-fas-caret-right\" viewBox=\"0 0 192 512\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M0 384.662V127.338c0-17.818 21.543-26.741 34.142-14.142l128.662 128.662c7.81 7.81 7.81 20.474 0 28.284L34.142 398.804C21.543 411.404 0 402.48 0 384.662z\"><\/path><\/svg><\/span>\n\t\t\t\t\t\t\t\t<span class=\"elementor-toggle-icon-opened\"><\/span>\n\t\t\t\t\t\t\t\t\t\t\t\t\t<\/span>\n\t\t\t\t\t\t\t\t\t\t\t\t<a class=\"elementor-toggle-title\" tabindex=\"0\">What are replicas in Ceph and how are they configured?<\/a>\n\t\t\t\t\t<\/div>\n\n\t\t\t\t\t<div id=\"elementor-tab-content-8914\" class=\"elementor-tab-content elementor-clearfix\" data-tab=\"4\" role=\"region\" aria-labelledby=\"elementor-tab-title-8914\"><p>Replicas in Ceph determine how many copies of each object are stored in the cluster. A higher number of replicas increases fault tolerance but also consumes more storage space. 
It is essential to balance the number of replicas based on redundancy needs and available storage capacity.<\/p>\n<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t<script type=\"application\/ld+json\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@type\":\"FAQPage\",\"mainEntity\":[{\"@type\":\"Question\",\"name\":\"What is Ceph and why integrate it with Proxmox?\",\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"<p>Ceph is a free, scalable, distributed file system designed to offer high performance and fault tolerance with no single points of failure. When integrated with Proxmox, you get robust distributed storage that improves the resilience and scalability of virtual machines and containers.<\\\/p>\\n\"}},{\"@type\":\"Question\",\"name\":\"What are the main components of a Ceph cluster?\",\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"<p>A Ceph cluster is made up of several key elements:<\\\/p>\\n<ul>\\n<li><strong>Cluster Monitors (ceph-mon)<\\\/strong>: Manage and monitor node activity, ensuring cluster health and coherence.<\\\/li>\\n<li><strong>Managers (ceph-mgr):<\\\/strong> Manage used space, metrics, and overall cluster health.<\\\/li>\\n<li><strong>OSDs (Object Storage Daemons):<\\\/strong> Responsible for the actual storage of data, as well as its replication and recovery.<\\\/li>\\n<li><strong>Metadata Servers (ceph-mds)<\\\/strong>: Store metadata and enable basic POSIX file system operations, facilitating the creation of CephFS.<\\\/li>\\n<\\\/ul>\\n\"}},{\"@type\":\"Question\",\"name\":\"What is the CRUSH algorithm in Ceph?\",\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"<p>CRUSH is the algorithm that determines how and where data is stored and retrieved in Ceph. 
It calculates storage locations based on the CRUSH map, ensuring even distribution and eliminating single points of failure.<\\\/p>\\n\"}},{\"@type\":\"Question\",\"name\":\"What are replicas in Ceph and how are they configured?\",\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"<p>Replicas in Ceph determine how many copies of each object are stored in the cluster. A higher number of replicas increases fault tolerance but also consumes more storage space. It is essential to balance the number of replicas based on redundancy needs and available storage capacity.<\\\/p>\\n\"}}]}<\/script>\n\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>Installing and configuring CEPH in Proxmox In the next post, Proxmox and Ceph from 0 to 100 part III, we will discuss the installation and configuration of Ceph in Proxmox. First of all we have to know that Ceph is a free, scalable, high performance and robust distributed file system, designed to have no single [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":7552,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[48,49,50],"tags":[],"class_list":["post-2575","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-cloud-infraestructures","category-linux-world","category-open-source"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.8 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Proxmox and Ceph from 0 to 100 Part III - LBT<\/title>\n<meta name=\"description\" content=\"We continue with Proxmox and Ceph from 0 to 100 part III. 
In this third part we will start working with Ceph....\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Proxmox and Ceph from 0 to 100 Part III - LBT\" \/>\n<meta property=\"og:description\" content=\"We continue with Proxmox and Ceph from 0 to 100 part III. In this third part we will start working with Ceph....\" \/>\n<meta property=\"og:url\" content=\"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/\" \/>\n<meta property=\"og:site_name\" content=\"Blog sobre linux y el mundo opensource\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/LoboBrothers\/\" \/>\n<meta property=\"article:published_time\" content=\"2020-10-06T09:13:18+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-02-06T14:09:32+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2025\/02\/proxmox-ceph-cero-a-a100.parte-3-scaled.jpg.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"2048\" \/>\n\t<meta property=\"og:image:height\" content=\"1365\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"TL\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"TL\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"14 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/\"},\"author\":{\"name\":\"TL\",\"@id\":\"https:\/\/tech.lobobrothers.com\/en\/#\/schema\/person\/11c359ab9896aa196007651fa6208beb\"},\"headline\":\"Proxmox and Ceph from 0 to 100 Part III\",\"datePublished\":\"2020-10-06T09:13:18+00:00\",\"dateModified\":\"2025-02-06T14:09:32+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/\"},\"wordCount\":2152,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/tech.lobobrothers.com\/en\/#organization\"},\"image\":{\"@id\":\"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2025\/02\/proxmox-ceph-cero-a-a100.parte-3-scaled.jpg.webp\",\"articleSection\":[\"Cloud Infraestructures\",\"Linux World\",\"Open Source\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/\",\"url\":\"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/\",\"name\":\"Proxmox and Ceph from 0 to 100 Part III - 
LBT\",\"isPartOf\":{\"@id\":\"https:\/\/tech.lobobrothers.com\/en\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2025\/02\/proxmox-ceph-cero-a-a100.parte-3-scaled.jpg.webp\",\"datePublished\":\"2020-10-06T09:13:18+00:00\",\"dateModified\":\"2025-02-06T14:09:32+00:00\",\"description\":\"We continue with Proxmox and Ceph from 0 to 100 part III. In this third part we will start working with Ceph....\",\"breadcrumb\":{\"@id\":\"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/#primaryimage\",\"url\":\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2025\/02\/proxmox-ceph-cero-a-a100.parte-3-scaled.jpg.webp\",\"contentUrl\":\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2025\/02\/proxmox-ceph-cero-a-a100.parte-3-scaled.jpg.webp\",\"width\":2048,\"height\":1365,\"caption\":\"proxmox 0 to 100 part 3\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Portada\",\"item\":\"https:\/\/tech.lobobrothers.com\/en\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Proxmox and Ceph from 0 to 100 Part III\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/tech.lobobrothers.com\/en\/#website\",\"url\":\"https:\/\/tech.lobobrothers.com\/en\/\",\"name\":\"Tech LBT\",\"description\":\"Como 
apasionados de la tecnolog\u00eda y amantes del open source creamos este blog con art\u00edculos interesantes obre linux, cloud, open source, criptomonedas y ciberseguridad\",\"publisher\":{\"@id\":\"https:\/\/tech.lobobrothers.com\/en\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/tech.lobobrothers.com\/en\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/tech.lobobrothers.com\/en\/#organization\",\"name\":\"Lobo Brothers Technology\",\"url\":\"https:\/\/tech.lobobrothers.com\/en\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/tech.lobobrothers.com\/en\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2019\/06\/logo_red.png\",\"contentUrl\":\"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2019\/06\/logo_red.png\",\"width\":110,\"height\":50,\"caption\":\"Lobo Brothers Technology\"},\"image\":{\"@id\":\"https:\/\/tech.lobobrothers.com\/en\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/LoboBrothers\/\",\"https:\/\/www.linkedin.com\/company\/lobobrothers\/about\/?viewAsMember=true\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/tech.lobobrothers.com\/en\/#\/schema\/person\/11c359ab9896aa196007651fa6208beb\",\"name\":\"TL\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/tech.lobobrothers.com\/en\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/a2d3b9e0b67bd28fe8248346c09cbe07?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/a2d3b9e0b67bd28fe8248346c09cbe07?s=96&d=mm&r=g\",\"caption\":\"TL\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Proxmox and Ceph from 0 to 100 Part III - LBT","description":"We continue with Proxmox and Ceph from 0 to 100 part III. In this third part we will start working with Ceph....","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/","og_locale":"en_US","og_type":"article","og_title":"Proxmox and Ceph from 0 to 100 Part III - LBT","og_description":"We continue with Proxmox and Ceph from 0 to 100 part III. In this third part we will start working with Ceph....","og_url":"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/","og_site_name":"Blog sobre linux y el mundo opensource","article_publisher":"https:\/\/www.facebook.com\/LoboBrothers\/","article_published_time":"2020-10-06T09:13:18+00:00","article_modified_time":"2025-02-06T14:09:32+00:00","og_image":[{"width":2048,"height":1365,"url":"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2025\/02\/proxmox-ceph-cero-a-a100.parte-3-scaled.jpg.webp","type":"image\/jpeg"}],"author":"TL","twitter_card":"summary_large_image","twitter_misc":{"Written by":"TL","Est. 
reading time":"14 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/#article","isPartOf":{"@id":"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/"},"author":{"name":"TL","@id":"https:\/\/tech.lobobrothers.com\/en\/#\/schema\/person\/11c359ab9896aa196007651fa6208beb"},"headline":"Proxmox and Ceph from 0 to 100 Part III","datePublished":"2020-10-06T09:13:18+00:00","dateModified":"2025-02-06T14:09:32+00:00","mainEntityOfPage":{"@id":"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/"},"wordCount":2152,"commentCount":0,"publisher":{"@id":"https:\/\/tech.lobobrothers.com\/en\/#organization"},"image":{"@id":"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/#primaryimage"},"thumbnailUrl":"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2025\/02\/proxmox-ceph-cero-a-a100.parte-3-scaled.jpg.webp","articleSection":["Cloud Infraestructures","Linux World","Open Source"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/","url":"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/","name":"Proxmox and Ceph from 0 to 100 Part III - 
LBT","isPartOf":{"@id":"https:\/\/tech.lobobrothers.com\/en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/#primaryimage"},"image":{"@id":"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/#primaryimage"},"thumbnailUrl":"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2025\/02\/proxmox-ceph-cero-a-a100.parte-3-scaled.jpg.webp","datePublished":"2020-10-06T09:13:18+00:00","dateModified":"2025-02-06T14:09:32+00:00","description":"We continue with Proxmox and Ceph from 0 to 100 part III. In this third part we will start working with Ceph....","breadcrumb":{"@id":"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/#primaryimage","url":"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2025\/02\/proxmox-ceph-cero-a-a100.parte-3-scaled.jpg.webp","contentUrl":"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2025\/02\/proxmox-ceph-cero-a-a100.parte-3-scaled.jpg.webp","width":2048,"height":1365,"caption":"proxmox 0 to 100 part 3"},{"@type":"BreadcrumbList","@id":"https:\/\/tech.lobobrothers.com\/en\/proxmox-and-ceph-from-0-to-100-part-iii\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Portada","item":"https:\/\/tech.lobobrothers.com\/en\/"},{"@type":"ListItem","position":2,"name":"Proxmox and Ceph from 0 to 100 Part III"}]},{"@type":"WebSite","@id":"https:\/\/tech.lobobrothers.com\/en\/#website","url":"https:\/\/tech.lobobrothers.com\/en\/","name":"Tech LBT","description":"Como apasionados de la tecnolog\u00eda y amantes del open source creamos este blog con art\u00edculos interesantes obre linux, cloud, open 
source, criptomonedas y ciberseguridad","publisher":{"@id":"https:\/\/tech.lobobrothers.com\/en\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/tech.lobobrothers.com\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/tech.lobobrothers.com\/en\/#organization","name":"Lobo Brothers Technology","url":"https:\/\/tech.lobobrothers.com\/en\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/tech.lobobrothers.com\/en\/#\/schema\/logo\/image\/","url":"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2019\/06\/logo_red.png","contentUrl":"https:\/\/tech.lobobrothers.com\/wp-content\/uploads\/2019\/06\/logo_red.png","width":110,"height":50,"caption":"Lobo Brothers Technology"},"image":{"@id":"https:\/\/tech.lobobrothers.com\/en\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/LoboBrothers\/","https:\/\/www.linkedin.com\/company\/lobobrothers\/about\/?viewAsMember=true"]},{"@type":"Person","@id":"https:\/\/tech.lobobrothers.com\/en\/#\/schema\/person\/11c359ab9896aa196007651fa6208beb","name":"TL","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/tech.lobobrothers.com\/en\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/a2d3b9e0b67bd28fe8248346c09cbe07?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a2d3b9e0b67bd28fe8248346c09cbe07?s=96&d=mm&r=g","caption":"TL"}}]}},"post_mailing_queue_ids":[],"_links":{"self":[{"href":"https:\/\/tech.lobobrothers.com\/en\/wp-json\/wp\/v2\/posts\/2575","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/tech.lobobrothers.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/tech.lobobrothers.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/tech.lobobrothers.com\/
en\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/tech.lobobrothers.com\/en\/wp-json\/wp\/v2\/comments?post=2575"}],"version-history":[{"count":3,"href":"https:\/\/tech.lobobrothers.com\/en\/wp-json\/wp\/v2\/posts\/2575\/revisions"}],"predecessor-version":[{"id":8119,"href":"https:\/\/tech.lobobrothers.com\/en\/wp-json\/wp\/v2\/posts\/2575\/revisions\/8119"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/tech.lobobrothers.com\/en\/wp-json\/wp\/v2\/media\/7552"}],"wp:attachment":[{"href":"https:\/\/tech.lobobrothers.com\/en\/wp-json\/wp\/v2\/media?parent=2575"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/tech.lobobrothers.com\/en\/wp-json\/wp\/v2\/categories?post=2575"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/tech.lobobrothers.com\/en\/wp-json\/wp\/v2\/tags?post=2575"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}