
GlusterFS version 3.7.12 released


GlusterFS, the scalable network file system, was released in version 3.7.12 five days ago. This minor release fixes more than 100 bugs, including issues in quota, geo-replication, tiering, NFS-Ganesha, the worker processes, SMB, RPC, POSIX, the heal process, and more.
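
For anyone planning the update, the installed version and the overall volume state can be checked from the command line before and after upgrading. A minimal sketch, assuming an RPM-based distribution (the package names and the yum call are assumptions; adjust them to your packaging):

  # Show the currently installed GlusterFS version
  gluster --version

  # Check the state of all volumes before and after the upgrade
  gluster volume status

  # Example update on an RPM-based system (package names assumed)
  yum update glusterfs-server glusterfs-fuse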

GlusterFS Release Notes 3.7.12

Bugs Fixed

The following bugs have been fixed in this release.

  • 1063506 – No xml output on gluster volume heal info command with --xml
  • 1212676 – NetBSD port
  • 1257894 – "rm -rf *" from multiple mount points fails to remove directories on all the subvolumes
  • 1268125 – glusterd memory overcommit
  • 1283972 – dht must avoid fresh lookups when a single replica pair goes offline
  • 1294077 – uses deprecated find -perm +xxx syntax
  • 1294675 – Healing queue rarely empty
  • 1312721 – tar complains: : file changed as we read it
  • 1312722 – when any brick/sub-vol is down and rebalance is not performing any action (fixing lay-out or migrating data) it should not say 'Starting rebalance on volume has been successful'.
  • 1316533 – RFE: Provide a mechanism to disable some tests in regression
  • 1316808 – Data Tiering:tier volume status shows as in-progress on all nodes of a cluster even if the node is not part of volume
  • 1319380 – trash xlator : trash_unlink_mkdir_cbk() enters an infinite loop which results in a segfault
  • 1322523 – Fd based fops should not be logging ENOENT/ESTALE
  • 1323017 – Ordering query results from libgfdb
  • 1323564 – [scale] Brick process does not start after node reboot
  • 1324510 – Continuous nfs_grace_monitor log messages observed in /var/log/messages
  • 1325843 – [HC] Add disk in a Hyper-converged environment fails when glusterfs is running in directIO mode
  • 1325857 – Multi-threaded SHD support
  • 1326174 – Volume stop is failing when one of brick is down due to underlying filesystem crash
  • 1326212 – gluster volume heal info shows conservative merge entries as in split-brain
  • 1326413 – /var/lib/glusterd/$few-directories not owned by any package, causing it to remain after glusterfs-server is uninstalled
  • 1327863 – cluster/afr: Fix partial heals in 3-way replication
  • 1327864 – assert failure happens when parallel rm -rf is issued on nfs mounts
  • 1328410 – SAMBA+TIER : Wrong message display. On detach tier success the message reflects Tier command failed.
  • 1328473 – DHT : If Directory creation is in progress and rename of that Directory comes from another mount point then after both operation few files are not accessible and not listed on mount and more than one Directory have same gfid
  • 1328706 – [geo-rep]: geo status shows $MASTER Nodes always with hostname even if volume is configured with IP
  • 1328836 – glusterfs-libs postun ldconfig: relative path `1' used to build cache
  • 1329062 – Inconsistent directory structure on dht subvols caused by parent layouts going stale during entry create operations because of fix-layout
  • 1329115 – quota : fix null dereference issue
  • 1329492 – Peers goes to rejected state after reboot of one node when quota is enabled on cloned volume.
  • 1329779 – Inode leaks found in data-self-heal
  • 1329989 – snapshot-clone: clone volume doesn’t start after node reboot
  • 1330018 – RFE Sort volume quota list output alphabetically by path
  • 1330132 – Disperse volume fails on high load and logs show some assertion failures
  • 1330241 – Backport 'nuke' (fast tree deletion) functionality to 3.7
  • 1330249 – glusterd: SSL certificate depth volume option is incorrect
  • 1330428 – [tiering]: during detach tier operation, Input/output error is seen with new file writes on NFS mount
  • 1330450 – [geo-rep]: schedule_georep.py doesn’t touch the mount in every iteration
  • 1330529 – [DHT-Rebalance]: with few brick process down, rebalance process isn’t killed even after stopping rebalance process
  • 1330545 – setting lower op-version should throw failure message
  • 1330739 – [RFE] We need more debug info from stack wind and unwind calls
  • 1330765 – Migration does not work when EC is used as a tiered volume.
  • 1330855 – A replicated volume takes too much to come online when one server is down
  • 1330892 – nfs-ganesha crashes with segfault error while doing refresh config on volume.
  • 1331263 – SAMBA+TIER : File size is not getting updated when created on windows samba share mount
  • 1331264 – libgfapi:Setting need_lookup on wrong list
  • 1331342 – self-heal does fsyncs even after setting ensure-durability off
  • 1331502 – Under high read load, sometimes the message "XDR decoding failed" appears in the logs and read fails
  • 1331759 – runner: extract and return actual exit status of child
  • 1331772 – glusterd: fix max pmap alloc to GF_PORT_MAX
  • 1331924 – [geo-rep]: schedule_georep.py doesn’t work when invoked using cron
  • 1331933 – rm -rf to a dir gives directory not empty(ENOTEMPTY) error
  • 1331934 – glusterd restart is failing if volume brick is down due to underlying FS crash.
  • 1331938 – glusterfsd: return actual exit status on mount process
  • 1331941 – rpc: fix gf_process_reserved_ports
  • 1332072 – values for Number of Scrubbed files, Number of Unsigned files, Last completed scrub time and Duration of last scrub are shown as zeros in bit rot scrub status
  • 1332074 – Marker: Lot of dict_get errors in brick log!!
  • 1332372 – Do not succeed mkdir without gfid-req
  • 1332397 – posix: Set correct d_type for readdirp() calls
  • 1332404 – the wrong variable was being checked for gf_strdup
  • 1332433 – Ganesha+Tiering: Continuous "0-glfs_h_poll_cache_invalidation: invalid argument" messages getting logged in ganesha-gfapi logs.
  • 1332776 – glusterd + bitrot : Creating clone of snapshot. error "xlator.c:148:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/3.7.9/xlator/features/bitrot.so: cannot open shared object file:
  • 1332790 – quota: client gets IO error instead of disk quota exceed when the limit is exceeded
  • 1332838 – rpc: assign port only if it is unreserved
  • 1333237 – DHT: Once remove brick start failed in between Remove brick commit should not be allowed
  • 1333239 – [AFR]: "volume heal info" command is failing during in-service upgrade to latest.
  • 1333241 – Fix excessive logging due to NULL dict in dht
  • 1333268 – SMB:while running I/O on cifs mount and doing graph switch causes cifs mount to hang.
  • 1333528 – Unexporting a volume sometimes fails with "Dynamic export addition/deletion failed".
  • 1333645 – NFS+attach tier:IOs hang while attach tier is issued
  • 1333661 – ganesha exported volumes doesn’t get synced up on shutdown node when it comes up.
  • 1333934 – Detach tier fire before the background fixlayout is complete may result in failure
  • 1334204 – Test open-behind.t failing fairly often on NetBSD
  • 1334441 – SAMBA-VSS : Permission denied issue while restoring the directory from windows client 1 when files are deleted from windows client 2
  • 1334566 – Self heal shows different information for the same volume from each node
  • 1334700 – readdir-ahead does not fetch xattrs that md-cache needs in its internal calls
  • 1334750 – stop all gluster processes should also include glusterfs mount process
  • 1335016 – set errno in case of inode_link failures
  • 1335686 – Self Heal fails on a replica3 volume with 'disk quota exceeded'
  • 1335728 – [geo-rep]: Multiple geo-rep session to the same slave is allowed for different users
  • 1335729 – mount/fuse: Logging improvements
  • 1335792 – [Tiering]: Detach tier commit is allowed before rebalance is complete
  • 1335813 – rpc: change client insecure port ceiling from 65535 to 49151
  • 1335821 – Revert "features/shard: Make o-direct writes work with sharding: http://review.gluster.org/#/c/13846/"
  • 1335836 – Heal info shows split-brain for .shard directory though only one brick was down
  • 1336148 – [Tiering]: Files remain in hot tier even after detach tier completes
  • 1336199 – failover is not working with latest builds.
  • 1336284 – Worker dies with [Errno 5] Input/output error upon creation of entries at slave
  • 1336331 – Unexporting a volume sometimes fails with "Dynamic export addition/deletion failed".
  • 1336470 – [Tiering]: The message 'Max cycle time reached..exiting migration' incorrectly displayed as an 'error' in the logs
  • 1336948 – [NFS-Ganesha] : stonith-enabled option not set with new versions of cman,pacemaker,corosync and pcs
  • 1337022 – DHT : few Files are not accessible and not listed on mount + more than one Directory have same gfid + (sometimes) attributes has ?? in ls output after renaming Directories from multiple client at same time
  • 1337113 – Modified volume options are not syncing once glusterd comes up.
  • 1337653 – log flooded with Could not map name=xxxx to a UUID when config’d with long hostnames
  • 1337779 – tests/bugs/write-behind/1279730.t fails spuriously
  • 1337831 – one of vm goes to paused state when network goes down and comes up back
  • 1337837 – Files present in the .shard folder even after deleting all the vms from the UI
  • 1337872 – Some of VMs go to paused state when there is concurrent I/O on vms
  • 1338668 – AFR : fuse,nfs mount hangs when directories with same names are created and deleted continuously
  • 1338969 – common-ha: ganesha.nfsd not put into NFS-GRACE after fail-back
  • 1339226 – gfapi: set mem_acct for the variables created for upcall
  • 1339446 – ENOTCONN error during parallel rmdir
  • 1340992 – Directory creation(mkdir) fails when the remove brick is initiated for replicated volumes accessing via nfs-ganesha
  • 1341068 – [geo-rep]: Monitor crashed with [Errno 3] No such process
  • 1341121 – [geo-rep]: If the session is renamed, geo-rep configuration are not retained
  • 1341478 – [geo-rep]: Snapshot creation having geo-rep session is broken
  • 1341952 – changelog: changelog_rollover breaks when number of fds opened is more than 1024
  • 1342348 – Log parameters such as the gfid, fd address, offset and length of the reads upon failure for easier debugging
  • 1342374 – [quota+snapshot]: Directories are inaccessible from activated snapshot, when the snapshot was created during directory creation
  • 1342431 – [georep]: Stopping volume fails if it has geo-rep session (Even in stopped state)
  • 1342453 – upgrade path when slave volume uuid used in geo-rep session
  • 1342903 – O_DIRECT support for sharding
  • 1342964 – self heal daemon killed due to oom kills on a dist-disperse volume using nfs ganesha
  • 1343362 – Input / Output when chmoding files on NFS mount point
  • 1344422 – fd leak in disperse
  • 1344561 – conservative merge happening on a x3 volume for a deleted file
  • 1344595 – [disperse] mkdir after rebalance gives Input/Output Error
  • 1344605 – [geo-rep]: Add-Brick use case: create push-pem force on existing geo-rep fails
  • 1346132 – tiering : Multiple brick processes crashed on tiered volume while taking snapshots
  • 1346184 – quota : rectify quota-deem-statfs default value in gluster v set help command
  • 1346751 – Unsafe access to inode->fd_list

Known Issues

  • Commit b33f3c9, which introduced changes to improve IPv6 support in GlusterFS, has been reverted because it exposed problems in network encryption that could cause a GlusterFS cluster to fail to operate correctly when management network encryption is used.
  • Network encryption has an issue that could sometimes prevent reconnections from happening correctly; a sketch for checking whether a cluster uses encryption follows below.
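
Because both known issues concern network encryption, it may be worth verifying whether a cluster actually uses it before upgrading. A minimal sketch, assuming the standard GlusterFS SSL setup (marker file path and option names as commonly used; verify against your own deployment):

  # Management encryption is considered active when this marker file exists on the nodes
  ls /var/lib/glusterd/secure-access

  # I/O path encryption is controlled per volume via these options
  gluster volume get <VOLNAME> client.ssl
  gluster volume get <VOLNAME> server.ssl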

 
