PVE/Ceph RBD | Proxmox Support Forum
forum.proxmox.com › threads › pve-ceph-rbd · Jun 11, 2020 · Either make your disk image name fit this structure and run qm rescan (you will get a message like "VM 101: add unreferenced volume 'rbd:vm-101-disk-4' as 'unused0' to config"), or use qm importdisk (same link). For Ceph the first alternative is better. If you run rbd list you should see something like vm-100-disk-0. If that does not help you, please post.
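A minimal command-line sketch of the two alternatives described in that thread; the storage ID and pool name "rbd", VM ID 101, and the image name "myimage" are assumptions for illustration, not values from the post:

    # Alternative 1: rename the RBD image to match the vm-<VMID>-disk-<N>
    # naming scheme, then let Proxmox pick it up as an unused volume.
    rbd ls rbd                                  # list images in the (assumed) pool "rbd"
    rbd rename rbd/myimage rbd/vm-101-disk-4    # rename to the expected pattern
    qm rescan --vmid 101                        # adds it as "unused0" to the VM config
    qm set 101 --scsi1 rbd:vm-101-disk-4        # attach the volume to VM 101

    # Alternative 2: import an existing disk image file into the Ceph-backed storage.
    qm importdisk 101 /path/to/image.qcow2 rbd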
Datasheet for Proxmox Virtual Environment
www.proxmox.com › en › downloads
• Integrated Ceph, a distributed object store and file system.
• Management via GUI or CLI.
• Run Ceph RBD and CephFS directly on the Proxmox VE cluster nodes.
• Easy-to-use installation wizard.
• Proxmox delivers its own Ceph packages.
• Ceph support is included in the support agreement.
• Configure external Ceph clusters via the ...
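As a rough sketch of what the last point can look like on the Proxmox VE side, an RBD storage entry in /etc/pve/storage.cfg pointing at an external Ceph cluster; the storage ID "ceph-external", the monitor addresses, and the pool name are placeholders:

    rbd: ceph-external
            monhost 10.0.0.1 10.0.0.2 10.0.0.3
            pool rbd
            content images
            username admin
            krbd 0

For an external cluster, the client keyring is typically expected at /etc/pve/priv/ceph/<STORAGE_ID>.keyring (here, ceph-external.keyring); for Ceph managed on the PVE nodes themselves, no monhost entry is needed.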
Ceph Advice : Proxmox
https://www.reddit.com/r/Proxmox/comments/rkxgcr/ceph_advice
M.2 512GB SSD (Ceph pool for VMs), 120GB SATA SSD (Proxmox installation)
node3:
CPU: Xeon E5-2630L V4
HDDs: 6x 6TB -> CephFS pool
M.2 512GB SSD (Ceph pool for VMs)
120GB SATA SSD (Proxmox installation)
As mentioned, this Ceph cluster is already in production with data, so I can't destroy all OSDs and start from scratch :) cheers
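Since that cluster is already in production, any advice would normally start from a read-only view of the current layout. These are standard Ceph inspection commands (not from the thread itself) and do not change anything:

    ceph -s                      # overall cluster health
    ceph df                      # per-pool capacity and usage
    ceph osd df tree             # OSD utilization, grouped by host
    ceph osd pool ls detail      # pool settings (size, pg_num, crush rule)
    ceph osd crush rule dump     # which rules map pools to the HDD vs SSD OSDs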
Ceph Server - Proxmox VE
https://pve.proxmox.com/wiki/Ceph_Server
Install Ceph Server on Proxmox VE; Proxmox YouTube channel. You can subscribe to our Proxmox VE channel on YouTube to get updates about new videos. Ceph Misc ... (not LSI), see VM options. I remember some panic when using LSI recently, but I did not debug it further, as a modern OS should use virtio-scsi anyway. https: ...
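As a small sketch of the VM option referred to above (the VM ID 101 is a placeholder), the SCSI controller model can be checked and switched from an LSI type to virtio-scsi with qm:

    qm config 101 | grep scsihw             # show the current SCSI controller model
    qm set 101 --scsihw virtio-scsi-pci     # switch the VM to the virtio-scsi controller

The guest needs virtio drivers before the switch takes effect cleanly; Linux guests usually ship them, while Windows guests typically need the virtio-win driver ISO.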