For example, you could use Ceph for local high-performance storage while Swift serves as a multi-region Glance backend where replication management is important but speed is not critical. In this article, we'll discuss why Ceph is a strong fit for OpenStack. The objective is to compare two storage systems for the cloud, both usable with OpenStack and both offering an object-based interface, and to evaluate the performance of Ceph against Swift, a system that is considered very mature and already counts many production deployments. For the purposes of this comparison, Ceph is viewed only as an object store serving objects via the Swift REST API (not native RADOS objects); its file- and block-based interfaces are ignored here.

Ceph delivers unified storage, supporting file, block, and object access. It is a Swiss army knife, complete with the Swiss army knife's array of potential use cases: corkscrew, screwdriver, saw, bottle opener, even a needle. The obvious appeal is file, block, and object in the same wrapper: the Ceph RADOS Gateway exposes Amazon S3- and OpenStack Swift-compatible object APIs on top of the same cluster. Ceph provides unified scale-out storage, using commodity x86 hardware, that is self-healing and intelligently anticipates failures, and it integrates with three prominent OpenStack use cases: Cinder (block storage), Glance (images), and Nova (VM virtual disks). Swift, by contrast, is object only.

Both systems are open source, and anybody in the proprietary camp will tell you that the money you save by avoiding software costs can come back as additional engineering costs: paying for support contracts or the skilled headcount required, and keeping that headcount up to speed with developments, comes at a price.

Ceph isn't wrinkle-free, though. Some parts, such as the object storage daemon (OSD) code, are still under major renovation, and Ceph has its own set of issues, especially in a cloud context. In light of those drawbacks, you might ask why we don't just build a single Ceph cluster that spans two regions. The answer lies in Ceph's stronger consistency model: a write is acknowledged only once the replicas are safely stored, so a stretched cluster pays cross-region latency on every write, and in a worst-case scenario such a configuration can corrupt the cluster. That is not a problem in Swift. In the same two-region architecture, Swift's eventual-consistency design lets it write locally first and then replicate to the remote region over a period of time. Swift also requires a write quorum, but the write_affinity setting can configure the cluster to force a quorum of writes to the local region, so the write returns a success status as soon as the local writes are finished. Swift's multi-region capabilities may therefore trump Ceph's speed and stronger consistency model.
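As a concrete illustration of that setting, here is a minimal proxy-server.conf sketch. The region name r1 is an assumption for the example; the options themselves (sorting_method, read_affinity, write_affinity, write_affinity_node_count) are standard Swift proxy settings.

```ini
# /etc/swift/proxy-server.conf (sketch; assumes the proxy's local region is r1)
[app:proxy-server]
use = egg:swift#proxy

# Prefer nodes in the local region when reading.
sorting_method = affinity
read_affinity = r1=100

# Write the initial quorum of copies to the local region; the object
# replicators then move data to the remote region asynchronously.
write_affinity = r1
write_affinity_node_count = 2 * replicas
```

The trade-off is a window during which the remote region does not yet hold the newest copy of an object, which is exactly the eventual-consistency behavior described above.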
Under the hood, the Ceph and OpenStack Swift object storage systems both reassemble data on the fly when reading. Because a Ceph cluster is a distributed architecture, a solution was needed to distribute data efficiently across the multiple OSDs in the cluster. That technique is called CRUSH, or Controlled Replication Under Scalable Hashing: clients use it to compute an object's placement and connect directly to the storage nodes, eliminating any central bottleneck, while a quorum of monitor nodes maintains the cluster map and journals and cache tiers handle write performance.

In the Swift vs. Ceph race for OpenStack storage, it would seem that Ceph is winning -- at least right now. If you want to try Ceph's object interface for yourself, the short sketch below is one place to start.
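Because the RADOS Gateway speaks the S3 protocol, any standard S3 client can talk to a Ceph cluster. The following is a minimal sketch using Python and boto3; the endpoint URL and credentials are placeholders you would replace with your own gateway address and a user created with radosgw-admin.

```python
import boto3

# Placeholders: substitute your RADOS Gateway endpoint and the keys
# returned by `radosgw-admin user create --uid=demo --display-name=Demo`.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",  # 7480 is the default RGW port
    aws_access_key_id="ACCESS_KEY_PLACEHOLDER",
    aws_secret_access_key="SECRET_KEY_PLACEHOLDER",
)

# Create a bucket, write an object, and read it back.
s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"stored in RADOS")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```

The same objects could just as well be served over the gateway's Swift-compatible API; that interchangeability is the point of the unified design.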