pkgsrc-Changes archive


CVS commit: pkgsrc/filesystems/glusterfs



Module Name:    pkgsrc
Committed By:   manu
Date:           Tue Jun  2 03:44:16 UTC 2015

Modified Files:
        pkgsrc/filesystems/glusterfs: Makefile PLIST distinfo
Added Files:
        pkgsrc/filesystems/glusterfs/patches: patch-10963
Removed Files:
        pkgsrc/filesystems/glusterfs/patches: patch-rpc_rpc-lib_src_rpcsvc.c
            patch-xlator_storage_posix_src_posix.c
            patch-xlators_mgmt_glusterd_src_Makefile.in

Log Message:
* Bitrot Detection

Bitrot detection is a technique used to identify an "insidious"
type of disk error where data is silently corrupted with no indication
from the disk to the storage software layer that an error has
occurred. When bitrot detection is enabled on a volume, gluster
performs signing of all files/objects in the volume and scrubs data
periodically for signature verification. All anomalies observed
will be noted in log files.
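
On a running 3.7 cluster, bitrot detection is managed per volume
through the gluster CLI. A sketch (the volume name myvol is a
placeholder; the scrubber tuning options follow the 3.7
documentation, so verify them against your build):

```shell
# Enable signing and periodic scrubbing on a volume
gluster volume bitrot myvol enable

# Optional scrubber tuning
gluster volume bitrot myvol scrub-frequency monthly
gluster volume bitrot myvol scrub-throttle lazy

# Turn the feature off again
gluster volume bitrot myvol disable
```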

* Multi threaded epoll for performance improvements

Gluster 3.7 introduces multiple threads to dequeue and process more
requests from epoll queues. This improves performance by processing
more I/O requests. Workloads that involve read/write operations on
a lot of small files can benefit from this enhancement.
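
The thread counts are exposed as volume options. A sketch (volume
name and thread counts are illustrative):

```shell
# Number of epoll worker threads on clients and on brick servers
gluster volume set myvol client.event-threads 4
gluster volume set myvol server.event-threads 4
```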

* Volume Tiering [Experimental]

Policy based tiering for placement of files. This feature will serve
as a foundational piece for building support for data classification.

Volume Tiering is marked as an experimental feature for this release.
It is expected to be fully supported in a 3.7.x minor release.
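
A rough sketch of the experimental tiering CLI (syntax per the 3.7
release notes; hosts, brick paths and the volume name are
placeholders):

```shell
# Attach a hot tier (e.g. SSD-backed bricks) to an existing volume
gluster volume attach-tier myvol replica 2 \
    node1:/ssd/brick1 node2:/ssd/brick2

# Detach it again in two phases
gluster volume detach-tier myvol start
gluster volume detach-tier myvol commit
```
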

* Trashcan

This feature will enable administrators to temporarily store deleted
files from Gluster volumes for a specified time period.
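
The trashcan is a per-volume option. A sketch (the size limit shown
is illustrative):

```shell
# Keep deleted/truncated files under the volume's .trashcan directory
gluster volume set myvol features.trash on

# Optional: only trash files below a size threshold
gluster volume set myvol features.trash-max-filesize 200MB
```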

* Efficient Object Count and Inode Quota Support

This improvement enables an easy mechanism to retrieve the number
of objects per directory or volume. Count of objects/files within
a directory hierarchy is stored as an extended attribute of a
directory. The extended attribute can be queried to retrieve the
count.

This feature has been utilized to add support for inode quotas.
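
A sketch of the corresponding CLI (directory path and limit values
are placeholders):

```shell
# Enable usage quota, then the new inode/object-count quota
gluster volume quota myvol enable
gluster volume inode-quota myvol enable

# Cap the number of files/objects below a directory
gluster volume quota myvol limit-objects /projects 100000

# Inspect configured object limits and current counts
gluster volume quota myvol list-objects
```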

* Pro-active Self healing for Erasure Coding

Gluster 3.7 adds pro-active self healing support for erasure coded
volumes.

* Exports and Netgroups Authentication for NFS

This feature adds Linux-style exports & netgroups authentication
to the native NFS server. This enables administrators to restrict
access to specific clients & netgroups for volume/sub-directory
NFSv3 exports.
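
A sketch of the expected file formats, which mirror Linux
/etc/exports and /etc/netgroup syntax (the file locations under
/var/lib/glusterd/nfs/ follow the 3.7 documentation; verify against
your install):

```shell
# /var/lib/glusterd/nfs/exports: restrict who may mount the export
#   /myvol 10.0.0.0/24(rw) @trusted(rw)

# /var/lib/glusterd/nfs/netgroups: define the netgroup
#   @trusted (client1,,) (client2,,)

# Authenticate mount requests against these files
gluster volume set myvol nfs.exports-auth-enable on
```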

* GlusterFind

GlusterFind is a new tool that provides a mechanism to monitor data
events within a volume. Detection of events like modified files is
made easier without having to traverse the entire volume.
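
A typical session, per the 3.7 documentation (session name, volume
name and output file are placeholders):

```shell
# Create a session tied to a volume
glusterfind create mysession myvol

# List changes since the session's last checkpoint into an output file
glusterfind pre mysession myvol /tmp/changes.txt

# Acknowledge the checkpoint once the output has been consumed
glusterfind post mysession myvol
```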

* Rebalance Performance Improvements

Rebalance and remove-brick operations in Gluster get a performance
boost from faster identification of files needing movement and from
a multi-threaded mechanism that moves all such files.

* NFSv4 and pNFS support

Gluster 3.7 supports export of volumes through NFSv4, NFSv4.1 and
pNFS. This support is enabled via NFS Ganesha. Infrastructure changes
done in Gluster 3.7 to support this feature include:

  - Addition of upcall infrastructure for cache invalidation.
  - Support for lease locks and delegations.
  - Support for enabling Ganesha through Gluster CLI.
  - Corosync and pacemaker based implementation providing resource
    monitoring and failover to accomplish NFS HA.

pNFS support for Gluster volumes and NFSv4 delegations are in beta
for this release. Infrastructure changes to support Lease locks and
NFSv4 delegations are targeted for a 3.7.x minor release.

* Snapshot Scheduling

With this enhancement, administrators can schedule volume snapshots.
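
Scheduling is driven by the snap_scheduler.py helper. A sketch (job
name, cron expression and volume are placeholders; per the 3.7
documentation the tool also expects a shared storage volume to be
set up beforehand):

```shell
# One-time initialisation on each node, then enable the scheduler
snap_scheduler.py init
snap_scheduler.py enable

# Snapshot myvol every day at 02:00 (cron syntax)
snap_scheduler.py add "daily-myvol" "0 2 * * *" myvol

# Review configured jobs
snap_scheduler.py list
```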

* Snapshot Cloning

Volume snapshots can now be cloned to create a new writeable volume.
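
A sketch of the CLI flow (snapshot, clone and volume names are
placeholders):

```shell
# Take a snapshot, clone it into a new volume, and start the clone
gluster snapshot create snap1 myvol
gluster snapshot clone myclone snap1
gluster volume start myclone
```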

* Sharding [Experimental]

Sharding addresses the problem of fragmentation of space within a
volume. This feature adds support for files that are larger than
the size of an individual brick. Sharding works by chunking files
into blobs of a configurable size.

Sharding is an experimental feature for this release. It is expected
to be fully supported in a 3.7.x minor release.
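
A sketch of enabling it (the block size shown is illustrative):

```shell
# Chunk large files into fixed-size shards
gluster volume set myvol features.shard on
gluster volume set myvol features.shard-block-size 64MB
```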

* RCU in glusterd

Thread synchronization and critical section access have been improved
by introducing userspace RCU (read-copy-update) in glusterd.

* Arbiter Volumes

Arbiter volumes are 3 way replicated volumes where the 3rd brick
of the replica is automatically configured as an arbiter. The 3rd
brick contains only metadata which provides network partition
tolerance and prevents split-brains from happening.
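
Creation uses the replica/arbiter keywords introduced with this
feature. A sketch (hosts and brick paths are placeholders):

```shell
# Two data bricks plus one metadata-only arbiter brick
gluster volume create myvol replica 3 arbiter 1 \
    node1:/bricks/b1 node2:/bricks/b2 node3:/bricks/arbiter
gluster volume start myvol
```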

Update to GlusterFS 3.7.1

* Better split-brain resolution

Split-brain resolution can now also be driven by users without
administrative intervention.
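
The CLI-driven resolution looks roughly like this (volume, brick and
file names are placeholders; the resolution policies follow the
3.7.1 documentation):

```shell
# List files currently in split-brain
gluster volume heal myvol info split-brain

# Resolve a file by picking the bigger copy...
gluster volume heal myvol split-brain bigger-file /path/to/file

# ...or by declaring one brick's copy authoritative
gluster volume heal myvol split-brain source-brick node1:/bricks/b1 /path/to/file
```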

* Geo-replication improvements

There have been several improvements in geo-replication for stability
and performance.

* Minor Improvements

  - Message ID based logging has been added for several translators.
  - Quorum support for reads.
  - Snapshot names contain timestamps by default. Subsequent access
    to the snapshots should be done by the name listed in "gluster
    snapshot list".
  - Support for "gluster volume get <volname>" has been added.
  - libgfapi has added handle based functions to get/set POSIX ACLs
    based on common libacl structures.


To generate a diff of this commit:
cvs rdiff -u -r1.50 -r1.51 pkgsrc/filesystems/glusterfs/Makefile
cvs rdiff -u -r1.23 -r1.24 pkgsrc/filesystems/glusterfs/PLIST
cvs rdiff -u -r1.37 -r1.38 pkgsrc/filesystems/glusterfs/distinfo
cvs rdiff -u -r0 -r1.1 pkgsrc/filesystems/glusterfs/patches/patch-10963
cvs rdiff -u -r1.3 -r0 \
    pkgsrc/filesystems/glusterfs/patches/patch-rpc_rpc-lib_src_rpcsvc.c
cvs rdiff -u -r1.1 -r0 \
    pkgsrc/filesystems/glusterfs/patches/patch-xlator_storage_posix_src_posix.c
cvs rdiff -u -r1.2 -r0 \
    pkgsrc/filesystems/glusterfs/patches/patch-xlators_mgmt_glusterd_src_Makefile.in

Please note that diffs are not public domain; they are subject to the
copyright notices on the relevant files.



