
By Federico Lucifredi, Red Hat Storage

 

 

Red Hat Ceph Storage 3 is our annual major release of Red Hat Ceph Storage, and it brings great new features to customers in the areas of containers, usability, and raw technology horsepower. It includes support for CephFS, giving us a comprehensive, all-in-one storage solution in Ceph spanning block, object, and file alike. It introduces iSCSI support to provide storage to platforms like VMware ESX and Windows Server that currently lack native Ceph drivers. And it adds support for client-side caching with dm-cache.

On the usability front, we're introducing new automation to manage the cluster with less user intervention (dynamic bucket sharding), a troubleshooting tool to analyze and flag invalid cluster configurations (Ceph Medic), and a new monitoring dashboard (Ceph Metrics) that brings enhanced insight into the state of the cluster.

Last, but definitely not least, containerized storage daemons (CSDs) drive a significant improvement in TCO through better hardware utilization.

Containers, containers, never enough containers!

We graduated to fully supporting our Ceph distribution running containerized in Docker application containers in June 2017 with the 2.3 release, after more than a year of open testing of tech preview images.

Red Hat Ceph Storage 3 raises the bar by introducing colocated CSDs as a supported configuration. CSDs drive a significant TCO improvement through better hardware utilization: the baseline object store cluster we recommend to new users spans 10 OSD storage nodes, 3 MON controllers, and 3 RGW S3 gateways. By allowing colocation, the smaller MON and RGW daemons can now run on the OSDs’ hardware, letting users avoid not only the capital expense of those extra servers but also the ongoing operational cost of managing them. Pricing that configuration with a popular hardware vendor, we estimate that users could see a 24% hardware cost reduction or, alternatively, add 30% more raw storage for the same initial hardware invoice.

“All nodes are storage nodes now!”

We are accomplishing this improvement by colocating any of the Ceph scale-out daemons on the storage servers, one per host. Containers reserve RAM and CPU allocations that protect both the OSD and the co-located daemon from resource starvation during rebalancing or recovery load spikes. We can currently colocate all the scale-out daemons except the new iSCSI gateway, but we expect that in the short term MON, MGR, RGW, and the newly supported MDS should take the lion's share of these configurations.
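Under the hood, these reservations are ordinary container resource limits applied per daemon. As a rough illustration only, and not the actual ceph-ansible implementation, here is what pinning a containerized MON to a CPU and memory budget might look like with the Docker SDK for Python; the image reference, core pinning, and memory cap are hypothetical placeholders.

```python
import docker

client = docker.from_env()

# Run a containerized MON with explicit CPU and memory reservations so a
# colocated OSD cannot starve it during recovery or rebalancing load spikes.
# Image tag and limit values are hypothetical; ceph-ansible computes and
# applies the real values for a supported deployment.
mon = client.containers.run(
    "rhceph/rhceph-3-rhel7",      # hypothetical image reference
    name="ceph-mon-node1",
    detach=True,
    network_mode="host",
    mem_limit="3g",               # cap the MON's RAM footprint
    cpuset_cpus="0",              # pin the MON to one core, leaving the rest to OSDs
    volumes={
        "/etc/ceph": {"bind": "/etc/ceph", "mode": "rw"},
        "/var/lib/ceph": {"bind": "/var/lib/ceph", "mode": "rw"},
    },
    environment={"CEPH_DAEMON": "MON"},
)
print(mon.status)
```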

As my marketing manager is fond of saying, all nodes are storage nodes now! Just as important, we can field a containerized deployment using the very same ceph-ansible playbooks our customers are familiar with and have come to love. Users can conveniently learn how to operate with containerized storage while still relying on the same tools, and we continue to support RPM-based deployments. So if you would rather see others cross the chasm first, that is totally okay as well: you can continue operating with RPMs and Ansible as you are accustomed to.

CephFS: now fully supported (and awesome)

The Ceph filesystem, CephFS for friends, is the Ceph interface providing the abstraction of a POSIX-compliant filesystem backed by the storage of a RADOS object storage cluster. CephFS already achieved reliability and stability last year, but with this new version the MDS directory metadata service is fully scale-out, eliminating our last remaining concern about its production use. In Sage Weil’s own words, it is now fully awesome!

“CephFS is now fully awesome!” —Sage Weil

With this new version, CephFS is now fully supported by Red Hat. For details about CephFS, see the Ceph File System Guide for Red Hat Ceph Storage 3. While I am on the subject, I'd like to give a shout-out to the unsung heroes in our awesome storage documentation team: they have steadily introduced high-quality guides with every release, and our customers are surely taking notice.
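If you prefer to drive CephFS programmatically rather than through a kernel or FUSE mount, the python-cephfs binding exposes the same filesystem. Below is a minimal sketch, assuming the binding is installed and a client keyring is configured in /etc/ceph; the directory path and file contents are placeholders.

```python
import os
import cephfs

# Connect to the cluster and mount the Ceph filesystem as the configured client.
fs = cephfs.LibCephFS(conffile="/etc/ceph/ceph.conf")
fs.mount()
try:
    fs.mkdirs("/projects/demo", 0o755)                 # create nested directories
    fd = fs.open("/projects/demo/hello.txt",
                 os.O_CREAT | os.O_WRONLY, 0o644)      # open a new file for writing
    fs.write(fd, b"hello from CephFS\n", 0)            # write at offset 0
    fs.close(fd)
    print("wrote /projects/demo/hello.txt")
finally:
    fs.unmount()
    fs.shutdown()
```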

iSCSI and NFS: compatibility trifecta

Earlier this year, we introduced the first version of our NFS gateway, allowing a user to mount an S3 bucket as if it were an NFS folder, for quick bulk import and export of data from the cluster, as virtually every device out there speaks NFS natively. In this release, we're enhancing the NFS gateway with support for NFSv3 alongside the existing NFSv4 support.
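Because the gateway exposes ordinary S3 buckets, anything loaded over S3 shows up on the NFS side and vice versa. Here is a minimal sketch of seeding a bucket with boto3, assuming a reachable RGW endpoint; the endpoint URL, credentials, bucket, and key names are placeholders.

```python
import boto3

# Hypothetical RGW endpoint and credentials; substitute your own.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="bulk-import")
s3.put_object(Bucket="bulk-import", Key="datasets/sample.csv", Body=b"id,value\n1,42\n")

# The same object is then visible under the bucket's NFS export.
for obj in s3.list_objects_v2(Bucket="bulk-import").get("Contents", []):
    print(obj["Key"], obj["Size"])
```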

The remaining leg of our legacy compatibility plan is iSCSI. While iSCSI is not ideally suited to a scale-out system like Ceph, the use of multipathing makes the fit smoother than one would expect, as no explicit HA layer is needed to manage failover.

With Red Hat Ceph Storage 3, we're bringing to GA the iSCSI gateway service that we have been previewing over the past year. While we continue to favor the LibRBD interface as it is more featureful and delivers better performance, iSCSI makes perfect sense as a fallback to connect VMware and Windows servers to Ceph storage, and generally anywhere a native Ceph block driver is not yet available. With this initial release, we are supporting VMware ESX 6.5, Windows Server 2016, and RHV 4.x over an iSCSI interface, and you can expect to see us adding more platforms to the list of supported clients next year as we plan to increase the reach of our automated testing infrastructure.
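For comparison, this is what the native LibRBD path looks like from Python, using the rados and rbd bindings; it is a sketch rather than a recommended provisioning workflow, and the pool and image names are placeholders.

```python
import rados
import rbd

# Open the cluster and an I/O context on a pool (pool name is a placeholder).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")
    try:
        # Create a 10 GiB image and write to it through librbd directly,
        # with no iSCSI gateway in the data path.
        rbd.RBD().create(ioctx, "vm-disk-01", 10 * 1024**3)
        with rbd.Image(ioctx, "vm-disk-01") as image:
            image.write(b"bootstrap data", 0)
            print("image size:", image.size())
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```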

¡Arriba, arriba! ¡Ándale, ándale!

Red Hat’s famous Performance and Scale team has revisited client-side caching tuning with the new codebase and blessed an optimized configuration for dm-cache that can now be easily configured with ceph-volume, the new up-and-coming tool that the community has slated to eventually give the aging ceph-disk a well-deserved retirement.

Making things faster is important, but equally important is insight into performance metrics. The new dashboard deserves a blog post in its own right, but suffice it to say that it delivers a significant leap in performance monitoring for Ceph environments, starting with the cluster as a whole and drilling into individual metrics or individual nodes as needed to track down performance issues. Select users have been patiently testing our early builds with Luminous this summer, and their consistently positive feedback makes us confident you will love the results.
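If you want to pull a few of those numbers programmatically alongside the dashboard, the librados Python binding can report capacity and health directly. A minimal sketch, assuming a client keyring with monitor access is configured:

```python
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    # High-level capacity counters for the whole cluster.
    stats = cluster.get_cluster_stats()
    print("used/total (kB):", stats["kb_used"], "/", stats["kb"])

    # Ask the monitors for overall health, the same signal the dashboard surfaces.
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "health", "format": "json"}), b""
    )
    if ret == 0:
        print(json.loads(outbuf.decode("utf-8")))
finally:
    cluster.shutdown()
```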

Linux monitoring has many flavors, and while we supply tools as part of the product, customers often want to integrate their existing infrastructure, whether it is Nagios alerting in very binary tones that something seems to be wrong, or another tool. For this reason, we joined forces with our partners at Datadog to introduce a joint configuration for SaaS monitoring of Ceph using Datadog’s impressive tools.

Get the stats

More than 30 features are landing in this release alongside our rebasing of the enterprise product to the Luminous codebase. These map to almost 500 bugs in our downstream tracking system and hundreds more upstream in the Luminous 12.2.1 release we started from. I'd like to briefly call attention to about 20 of them that our very dedicated global support team prioritized as the most impactful way to further smooth out the experience of new users and to keep moving Ceph toward being ever more enterprise-ready and easy to use. This is our biggest release yet, and its timely delivery three months after the upstream freeze is an impressive achievement for our Development and Quality Assurance engineering teams.

As always, those of you with an insatiable thirst for detail should read the release notes next—and feel free to ping me on Twitter if you have any questions!

The third one's a charm

Better economics through improved hardware utilization; great value-add for customers in the form of new file access, iSCSI, and NFS compatibility; and improved usability with across-the-board technological advancement: these are the themes we tried to hit with this release. I think we delivered… but we aren’t done yet. We plan to send more your way this year! Stay tuned.

But if you can't wait to hear more about the new object storage features in Red Hat Ceph Storage 3, read this blog post by Uday Boppana.


About the author

Federico Lucifredi is the Product Management Director for Ceph Storage at Red Hat and a co-author of O'Reilly's "Peccary Book" on AWS System Administration.
