Distributed File System Ceph 0.94 Released
Ceph is a new-generation free-software distributed file system designed by Sage Weil (co-founder of DreamHost) for his doctoral dissertation at the University of California, Santa Cruz. Since graduating in 2007, Sage has worked on Ceph full time to make it suitable for production use. Ceph's main goal is to be a POSIX-based distributed file system with no single point of failure, in which data is fault-tolerant and replicated seamlessly. In March 2010, Linus Torvalds merged the Ceph client into kernel 2.6.34. An article on IBM developerWorks explores Ceph's architecture, its fault-tolerance implementation, and the features that simplify managing massive amounts of data.
Ceph 0.94 has been released. The main updates in this version are as follows:
- RADOS Performance: a range of improvements have been made in the OSD and client-side librados code that improve the throughput on flash backends and improve parallelism and scaling on fast machines (a minimal librados sketch follows this list).
- Simplified RGW deployment: the ceph-deploy tool now has a new ‘ceph-deploy rgw create HOST’ command that quickly deploys an instance of the S3/Swift gateway using the embedded Civetweb server. This is vastly simpler than the previous Apache-based deployment. There are a few rough edges (e.g., around SSL support), but we encourage users to try the new method.
- RGW object versioning: RGW now supports the S3 object versioning API, which preserves old versions of objects instead of overwriting them (a client sketch follows this list).
- RGW bucket sharding: RGW can now shard the bucket index for large buckets across multiple shards, improving performance for very large buckets.
- RBD object maps: RBD now has an object map function that tracks which parts of the image are allocated, improving performance for clones and for commands like export and delete.
- RBD mandatory locking: RBD has a new mandatory locking framework (still disabled by default) that adds additional safeguards to prevent multiple clients from using the same image at the same time.
- RBD copy-on-read: RBD now supports copy-on-read for image clones, improving performance for some workloads (an RBD clone sketch follows this list).
- CephFS snapshot improvements: Many bugs have been fixed with CephFS snapshots. Although they are still disabled by default, stability has improved significantly.
- CephFS Recovery tools: We have built some journal recovery and diagnostic tools. Stability and performance of single-MDS systems is vastly improved in Giant, and more improvements have been made in Hammer. Although we still recommend caution when storing important data in CephFS, we do encourage testing for non-critical workloads so that we can better gauge the feature, usability, performance, and stability gaps.
- CRUSH improvements: We have added a new straw2 bucket algorithm that reduces the amount of data migration required when changes are made to the cluster.
- RADOS cache tiering: A series of changes have been made in the cache tiering code that improve performance and reduce latency.
- Experimental RDMA support: There is now experimental support for RDMA via the Accelio (libxio) library.
- New administrator commands: The ‘ceph osd df’ command shows pertinent details on OSD disk utilization. The ‘ceph pg ls …’ command makes it much simpler to query PG states while diagnosing cluster issues (a scripting sketch follows this list).
For more details, see the release notes page.
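To make the librados item above concrete, here is a minimal sketch of client-side I/O through the python-rados bindings, the code path targeted by the client-side improvements. The pool name, object names, and ceph.conf path are placeholders for this example.

```python
# Minimal librados sketch using the python-rados bindings.
# Assumes a reachable cluster, /etc/ceph/ceph.conf, and an existing pool named "data".
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('data')          # pool name is a placeholder
    try:
        # Queue several asynchronous writes; issuing I/O in parallel is where
        # the client-side parallelism and scaling improvements are most visible.
        completions = [ioctx.aio_write_full('obj-%d' % i,
                                            ('payload-%d' % i).encode())
                       for i in range(8)]
        for c in completions:
            c.wait_for_complete()               # block until the writes finish
        print(ioctx.read('obj-0'))              # synchronous read-back
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```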
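For the object versioning item, a rough client-side sketch using the boto (v2) S3 library against RGW. The endpoint, port, credentials, and bucket name are placeholders; adjust them for your gateway.

```python
# Sketch of exercising the S3 object versioning API on RGW with boto (v2).
# Endpoint, port, and credentials below are placeholders.
import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='rgw.example.com', port=7480, is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat())

bucket = conn.create_bucket('versioned-bucket')
bucket.configure_versioning(True)             # enable versioning on the bucket

key = bucket.new_key('greeting.txt')
key.set_contents_from_string('hello v1')
key.set_contents_from_string('hello v2')      # the overwrite creates a new version

# Old versions are preserved and remain listable instead of being destroyed.
for version in bucket.list_versions():
    print(version.name, version.version_id)
```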
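For the RBD items (object maps, locking, copy-on-read), a sketch using the python-rbd bindings that creates a parent image with the newer feature bits and clones it. The pool name "rbd" is an assumption, and the rbd_clone_copy_on_read client option named below is our understanding of the copy-on-read switch; check the documentation for your version.

```python
# Sketch of creating a format-2 image with exclusive-lock and object-map
# features enabled and cloning it, using the python-rbd bindings.
# Pool name "rbd" and the rbd_clone_copy_on_read option are assumptions.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                      conf={'rbd_clone_copy_on_read': 'true'})  # assumed copy-on-read switch
cluster.connect()
ioctx = cluster.open_ioctx('rbd')
try:
    features = (rbd.RBD_FEATURE_LAYERING |
                rbd.RBD_FEATURE_EXCLUSIVE_LOCK |
                rbd.RBD_FEATURE_OBJECT_MAP)

    r = rbd.RBD()
    r.create(ioctx, 'parent', 1 << 30, old_format=False, features=features)

    parent = rbd.Image(ioctx, 'parent')
    try:
        parent.create_snap('base')
        parent.protect_snap('base')     # clones require a protected snapshot
    finally:
        parent.close()

    # The clone carries an object map, so operations such as export, flatten,
    # and delete know which objects exist without scanning the whole image.
    r.clone(ioctx, 'parent', 'base', ioctx, 'child', features=features)
finally:
    ioctx.close()
    cluster.shutdown()
```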
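Finally, for the new administrator commands, a small scripting sketch. It assumes the ceph CLI is on PATH and that ‘ceph osd df’, like most ceph commands, accepts ‘--format json’; the JSON field names used here are assumptions and may differ between releases.

```python
# Sketch of consuming 'ceph osd df' output from a script.
# Assumes the ceph CLI is available and accepts '--format json';
# the field names ("nodes", "name", "utilization") are assumptions.
import json
import subprocess

raw = subprocess.check_output(['ceph', 'osd', 'df', '--format', 'json'])
report = json.loads(raw.decode())

for node in report.get('nodes', []):
    print(node.get('name'), node.get('utilization'))
```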
This release is now available for download:
https://github.com/ceph/ceph/archive/v0.94.zip