Ceph Benchmark - proxmox.com
www.proxmox.com › en › downloads
…and integrated Ceph Storage. To optimize performance in hyper-converged deployments with Proxmox VE and Ceph storage, the appropriate hardware setup is essential. This benchmark presents possible setups and their performance outcomes, with the intention of supporting Proxmox users in making better decisions.
Proxmox VE Ceph Benchmark 2020
www.proxmox.com › en › downloads
Oct 14, 2020 · To optimize performance in hyper-converged deployments with Proxmox VE and Ceph storage, the appropriate hardware setup is essential. This benchmark presents possible setups and their performance outcomes, with the intention of supporting Proxmox users in making better decisions.
Proxmox VE Ceph and BCache Performance - IORUX
www.iorux.com › wp-content › uploads
At the time of benchmarking (May 2020), Proxmox VE was at version 6.2-1 with pve kernel 5.4.41-1, Ceph version 14.2.9 Nautilus, and bcache-tools 1.0.8-3. Storage for OSDs: all storage attached to the Ceph cluster is datacenter and enterprise class. It features power-loss protection, high performance, and high endurance.
Ceph performance | Proxmox Support Forum
forum.proxmox.com › threads › ceph-performance
Oct 19, 2018 · Now, I also have an old Proxmox 4 jewel-based Ceph cluster with old SAS HDDs that gives me this performance:
Code:
# rados bench -p IMB-test 60 rand -t 1
Total time run:       60.179736
Total reads made:     788
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   52.3764
Average IOPS:         13
Stddev IOPS:          2
Max IOPS:             19
Min IOPS:             7
Average Latency (s ...
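When comparing several clusters, it helps to pull the summary figures out of `rados bench` output programmatically. Below is a minimal sketch of such a parser; `parse_rados_bench` is a hypothetical helper written for illustration, not part of Ceph's tooling, and it assumes the plain `key: value` summary format shown in the forum post above.

```python
import re

def parse_rados_bench(output: str) -> dict:
    """Extract numeric 'key: value' summary lines from rados bench output.

    Hypothetical helper: assumes the plain-text summary format shown in
    the forum post (e.g. 'Bandwidth (MB/sec):   52.3764').
    """
    results = {}
    for line in output.splitlines():
        # Keys may contain spaces, parentheses, and slashes, e.g. "Bandwidth (MB/sec)".
        m = re.match(r"^\s*([A-Za-z ()/]+?):\s+([\d.]+)\s*$", line)
        if m:
            key, value = m.group(1).strip(), m.group(2)
            results[key] = float(value) if "." in value else int(value)
    return results

# Sample taken from the benchmark output quoted above.
sample = """\
Total time run:       60.179736
Total reads made:     788
Bandwidth (MB/sec):   52.3764
Average IOPS:         13
"""
stats = parse_rados_bench(sample)
print(stats["Bandwidth (MB/sec)"])  # 52.3764
print(stats["Average IOPS"])        # 13
```

Collecting these dictionaries across runs makes it easy to tabulate bandwidth and IOPS side by side for different pools or hardware setups.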