dm-clone to Be Included in the Linux Kernel

Starting with the upcoming 5.4 release, the kernel will include a new module designed to clone block devices, e.g. remote archive devices
23 September 2019

Linus Torvalds has accepted into the kernel branch from which release 5.4 will be formed the dm-clone module, which implements a new Device-Mapper target that allows an existing block device to be cloned. The module makes it possible to create a local copy of a read-only block device, and that copy can be written to while cloning is still in progress.

A typical application of dm-clone is cloning, over the network, a remote read-only archive device that handles I/O with high latency onto a fast local device that is writable and serves requests with minimal latency.
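As an illustration, the sketch below shows how such a mapping could be set up with dmsetup from Python, following the dm-clone table format described in the kernel documentation ("start size clone metadata-dev destination-dev source-dev region-size"). The device paths and sizes are hypothetical placeholders, not values from the announcement, and the commands require root privileges.

```python
import subprocess

# Hypothetical devices: a slow, read-only source (e.g. a network block device),
# a fast local destination of the same size, and a small metadata device.
SOURCE_DEV   = "/dev/nbd0"
DEST_DEV     = "/dev/vg0/local_copy"
METADATA_DEV = "/dev/vg0/clone_meta"
SIZE_SECTORS = 1048576000   # size of the source device in 512-byte sectors
REGION_SIZE  = 8            # copy granularity in sectors (8 * 512 B = 4 KiB)

# dm-clone table line: "<start> <size> clone <metadata> <destination> <source> <region size>"
table = f"0 {SIZE_SECTORS} clone {METADATA_DEV} {DEST_DEV} {SOURCE_DEV} {REGION_SIZE}"

# Create the mapped device /dev/mapper/cloned_disk; background copying of the
# source onto the destination starts immediately, while the new device already
# accepts both reads and writes.
subprocess.run(["dmsetup", "create", "cloned_disk", "--table", table], check=True)

# Copy progress can be inspected via the target status; once all regions have
# been copied, the table can be replaced with a plain "linear" target to DEST_DEV.
print(subprocess.run(["dmsetup", "status", "cloned_disk"],
                     capture_output=True, text=True, check=True).stdout)
```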

The key difference from solutions based on Unionfs and OverlayFS is that dm-clone operates at the block-device level, regardless of the file system used on the device, and produces a complete copy of the source device rather than adding an extra layer in which changes are tracked. Unlike dm-mirror, the dm-clone module was designed from the start to treat the source device as read-only and never forwards write operations to it. dm-snapshot does not create a full copy and has no support for background copying. dm-cache also does not create a full copy, forwards write operations, and essentially only caches frequently accessed data. The closest in functionality is dm-thin, but it does not support background copy operations and works only with thin-provisioned volumes.

LizardFS 3.13.0-rc2 Rolled Out

LizardFS 3.13.0, whose main innovation is the Raft consensus algorithm, is scheduled for release in late December
12 November 2019

After a year-long pause in development, work on the new branch of the fault-tolerant distributed file system LizardFS 3.13 has resumed and the second release candidate has been published. The company developing LizardFS recently changed ownership, new leadership took over, and the developers changed. Over the past two years the project had drifted away from the community and did not pay due attention to it, but the new team intends to revive the previous relationship with the community and establish close interaction with it. The project code is written in C and C++ and is distributed under the GPLv3 license.

LizardFS is a distributed cluster file system that spreads data across different servers while presenting it as a single large volume, which is used in the same way as a traditional disk partition. A mounted LizardFS volume supports POSIX file attributes, ACLs, locks, sockets, pipes, device files, and symbolic and hard links. The system has no single point of failure; all components are redundant. Data operations can run in parallel (several clients can access the same files simultaneously).
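For context, mounting such a volume on a client could look like the hedged sketch below. It assumes the classic FUSE-based mfsmount client with its -H master-host option; the master host name and mount point are placeholders, and the exact client binary and options may differ between LizardFS versions.

```python
import subprocess

# Hypothetical values: adjust the master host and mount point to your cluster.
MASTER_HOST = "mfsmaster.example.com"   # host running the LizardFS master server
MOUNT_POINT = "/mnt/lizardfs"

# mfsmount is the FUSE-based client; -H selects the master server to connect to.
subprocess.run(["mfsmount", MOUNT_POINT, "-H", MASTER_HOST], check=True)

# After mounting, the volume behaves like an ordinary POSIX file system:
# files written here are distributed across the cluster's chunk servers.
with open(f"{MOUNT_POINT}/hello.txt", "w") as f:
    f.write("stored on LizardFS\n")
```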

LizardFS 3.13.0 is scheduled for release in late December. The main innovation of LizardFS 3.13 is the use of the Raft consensus algorithm (via uRaft, the project's own implementation previously used in commercial products) to ensure fault tolerance (switching master servers in the event of a failure). Using uRaft simplifies setup and reduces recovery latency after a failure, but requires at least three working nodes, one of which is used only for quorum.

Other changes include a new client based on the FUSE3 subsystem, fixes for error-correction issues, and a rewrite of the nfs-ganesha plugin in C. The 3.13.0-rc2 update fixes several critical errors that made the previous test releases of the 3.13 branch unsuitable for use (patches for the 3.12 branch have not yet been published, and upgrading from 3.12 to 3.13 still leads to complete data loss).

In 2020, work will focus on developing Agama, a new, completely rewritten core of LizardFS that, according to the developers, will provide a three-fold increase in performance compared to the 3.12 branch. Agama will move to an event-driven architecture with asynchronous I/O based on asio and will work primarily in user space (to reduce dependence on kernel caching mechanisms). In addition, a new debugging subsystem and a network activity analyzer with support for performance tuning will be offered.

The LizardFS client will gain full support for versioned write operations, which will improve the reliability of disaster recovery, solve the problems that arise when different clients share the same data, and deliver a significant performance increase. The client will also be moved to its own network subsystem running in user space. The first working prototype of LizardFS based on Agama is planned for the second quarter of 2020; at the same time, the developers promise to provide tools for integrating LizardFS with the Kubernetes platform.

Get more info at the official website.