GlusterFS version 3.7.9 released

The developers of GlusterFS, the open-source distributed file system, have released version 3.7.9. It fixes more than 120 bugs, particularly in the data tiering feature, and thus improves stability and usability.
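
Data tiering pairs a fast "hot" tier (for example SSD bricks) with the existing slower "cold" bricks of a volume; files are then promoted and demoted between the two tiers automatically. As a minimal sketch of the CLI workflow that several of the fixed bugs touch, assuming the 3.7-era tiering commands (the volume name tiervol and the brick paths are placeholders):

    # Attach two SSD-backed bricks as a hot tier to an existing volume
    gluster volume attach-tier tiervol server1:/bricks/ssd1 server2:/bricks/ssd2

    # Detaching is a two-step operation: migrate data back to the cold
    # tier first, then commit once the migration has finished
    gluster volume detach-tier tiervol start
    gluster volume detach-tier tiervol status
    gluster volume detach-tier tiervol commit

Bug 1314617 in the list below addresses exactly this sequence: a detach-tier commit is no longer allowed while the detach-tier start phase has failed to complete.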

List of changes in GlusterFS 3.7.9

  • 1317959 – inode ref leaks with perf-test.sh
  • 1318203 – Tiering should break out of iterating query file once cycle time completes.
  • 1314680 – Speed up regression tests
  • 1301030 – [Snapshot]: Snapshot restore gets stuck in post validation.
  • 1311377 – Memory leak in glusterd
  • 1309462 – Upgrade from 3.7.6 to 3.7.8 causes massive drop in write performance. Fresh install of 3.7.8 also has low write performance
  • 1315935 – glusterfs-libs postun scriptlet fails: /sbin/ldconfig: relative path `1' used to build cache
  • 1315552 – glusterfs brick process crashed
  • 1299712 – [HC] Implement fallocate, discard and zerofill with sharding
  • 1315557 – SEEK_HOLE and SEEK_DATA should return EINVAL when protocol support is missing
  • 1315639 – Glusterfind hook script failing if /var/lib/glusterd/glusterfind dir was absent
  • 1312762 – [geo-rep]: Session goes to faulty with Errno 13: Permission denied
  • 1314641 – Encrypted rpc clients do not reconnect sometimes
  • 1296175 – geo-rep: hard-link rename issue on changelog replay
  • 1315939 – 'gluster volume get' returns 0 value for server-quorum-ratio
  • 1315562 – setting lower op-version should throw failure message (see the sketch after this list)
  • 1297209 – no-mtab (-n) mount option ignores the next mount option
  • 1296208 – Geo-Replication Session goes "FAULTY" when application logs rolled on master
  • 1315582 – Geo-replication CPU usage is 100%
  • 1314617 – Data Tiering: Don't allow a detach-tier commit if detach-tier start has failed to complete
  • 1305749 – Vim commands from a non-root user fail to execute on fuse mount with trash feature enabled
  • 1309191 – [RFE] Schedule Geo-replication
  • 1313309 – Handle Rsync/Tar errors effectively
  • 1313310 – [RFE] Add --no-encode option to the glusterfind pre command
  • 1313311 – Dist-geo-rep : geo-rep worker crashed while init with [Errno 34] Numerical result out of range.
  • 1302979 – [georep+tiering]: Hardlink sync is broken if master volume is tiered
  • 1311865 – Data Tiering: lots of Promotions/Demotions failed error messages
  • 1257012 – Fix the tests infra
  • 1313131 – quarantine folder becomes empty and bitrot status does not list any files which are corrupted
  • 1313623 – [georep+disperse]: Geo-Rep session went to faulty with errors "[Errno 5] Input/output error"
  • 1313921 – Incorrect file size on mount if stat is served from the arbiter brick.
  • 1315140 – Statedump crashes in open-behind because of NULL dereference
  • 1315142 – Move away from gf_log completely to gf_msg
  • 1315008 – glusterfs-server %post script is not quiet, prints "success" during installation
  • 1302202 – Unable to get the client statedump, as /var/run/gluster directory is not available by default
  • 1313233 – Wrong permissions set on previous copy of truncated files inside trash directory
  • 1313313 – Gluster manpage doesn’t show georeplication options
  • 1315009 – glusterd: coverity warning in glusterd-snapshot-utils.c copy_nfs_ganesha_file()
  • 1313315 – [HC] glusterfs mount crashed
  • 1283757 – EC: File healing promotes it to hot tier
  • 1314571 – rsyslog can’t be completely removed due to dependency in libglusterfs
  • 1313302 – quota: reduce latency for testcase ./tests/bugs/quota/bug-1293601.t
  • 1312954 – quota: xattr trusted.glusterfs.quota.limit-objects not healed on a root of newly added brick
  • 1314164 – glusterd: does not start
  • 1313339 – features.sharding is not available in 'gluster volume set help'
  • 1314548 – tier: GCC throws Unused variable warning for conf in tier_link_cbk function
  • 1314204 – nfs-ganesha setup fails on fedora
  • 1312878 – Glusterd: Creation of volume is failing if one of the bricks is down on the server
  • 1313776 – ec-read-policy.t can report a false-failure
  • 1293224 – Disperse: Disperse volume (cold vol) crashes while writing files on tier volume
  • 1313448 – Readdirp op_ret is modified by client xlator in case of xdata_rsp presence
  • 1311822 – rebalance: output of rebalance status should show 'run time' in proper format (day, hour:min:sec)
  • 1312721 – tar complains: file changed as we read it
  • 1311445 – Implement inode_forget_cbk() similar fops in gfapi
  • 1312623 – "gluster vol get volname user.metadata-text" command fails with "volume get option: failed: Did you mean cluster.metadata-self-heal?"
  • 1311572 – tests: remove-brick command execution displays success even when one of the bricks is down.
  • 1311451 – Add missing release-notes for release-3.7
  • 1311441 – Fix mem leaks related to gfapi applications
  • 1309238 – Issues with refresh-config when the „.export_added“ has different values on different nodes
  • 1312200 – Handle negative fcntl flock->l_len values
  • 1306131 – Attach tier : Creates fail with invalid argument errors
  • 1311836 – [Tier]: Ends up with multiple entries of the same file on client after renaming a file which had hardlinks
  • 1306129 – promotions not happening when space is created on previously full hot tier
  • 1290865 – nfs-ganesha server do not enter grace period during failover/failback
  • 1304889 – Memory leak in dht
  • 1311411 – gfapi : listxattr is broken for handle ops.
  • 1311041 – Tiering status and rebalance status stops getting updated
  • 1309233 – cd to .snaps fails with "transport endpoint not connected" after force start of the volume.
  • 1311043 – [RFE] While creating a snapshot the timestamp has to be appended to the snapshot name.
  • 1310999 – "Failed to aggregate response from node/brick"
  • 1310969 – glusterd logs are filled with "readv on /var/run/a30ad20ae7386a2fe58445b1a2b1359c.socket failed (Invalid argument)"
  • 1310972 – After GlusterD restart, Remove-brick commit happening even though data migration not completed.
  • 1310632 – Newly created volume start, starting the bricks when server quorum not met
  • 1296007 – libgfapi: Errno incorrectly set to EINVAL even on success
  • 1308415 – DHT : for many operation directory/file path is ‚(null)‘ in brick log
  • 1308410 – for each Directory which was self healed
  • 1310544 – DHT: Take blocking locks while renaming files
  • 1295359 – tiering: T files getting created, even after disk quota exceeds
  • 1295360 – [Tier]: can not delete symlinks from client using rm
  • 1295347 – [Tier]: "Bad file descriptor" on removal of symlink only on tiered volume
  • 1282388 – Data Tiering: delete command rm -rf not deleting the linkto files (hashed) which are under migration; possible split-brain observed and possible disk wastage
  • 1295365 – [Tier]: Killing glusterfs tier process doesn’t reflect as failed/faulty in tier status
  • 1289031 – RFE: nfs-ganesha: prompt the nfs-ganesha disable cli to let user provide "yes or no" option
  • 1302528 – Remove-brick command execution displays success even when one of the bricks is down.
  • 1254430 – glusterfs dead when user creates a rdma volume
  • 1308414 – AFR: cluster options like data-self-heal, metadata-self-heal and entry-self-heal should not be allowed to be set if the volume is not a distribute-replicate volume
  • 1285829 – AFR: 3-way-replication: Transport endpoint not connected error message not displayed when one of the replica pairs is down
  • 1308400 – dht: NULL layouts referenced while the I/O is going on tiered volume
  • 1304963 – [GlusterD]: After log rotate of cmd_history.log file, the next executed gluster commands are not present in the cmd_history.log file.
  • 1308800 – access-control : spurious error log message on every setxattr call
  • 1288352 – Few snapshot creations fail with pre-validation failed message on tiered volume.
  • 1304692 – hardcoded gsyncd path causes geo-replication to fail on non-redhat systems
  • 1306922 – Self heal command gives error "Launching heal operation to perform index self heal on volume vol0 has been unsuccessful"
  • 1305256 – GlusterD restart, starting the bricks when server quorum not met
  • 1246121 – Disperse volume : client glusterfs crashed while running IO
  • 1293534 – guest paused due to IO error from gluster based storage doesn’t resume automatically or manually
  • 1302962 – Rebalance process crashed during cleanup_and_exit
  • 1306163 – [USS]: If .snaps already exists, ls -la lists it even after enabling USS
  • 1305868 – [USS]: Need defined rules for snapshot-directory; setting it to a/b works, but in Linux a/b means b is a subdirectory of a
  • 1305742 – Lot of assertion failures are seen in nfs logs with disperse volume
  • 1306514 – promotions not balanced across hot tier sub-volumes
  • 1306302 – Data Tiering:Change the default tiering values to optimize tiering settings
  • 1306738 – gluster volume heal info takes extra 2 seconds
  • 1305029 – [quota]: Incorrect disk usage shown on a tiered volume
  • 1306136 – Able to create files when quota limit is set to 0
  • 1306138 – [tiering]: Quota object limits not adhered to, in a tiered volume
  • 1296040 – [tiering]: Tiering isn’t started after attaching hot tier and hence no promotion/demotion
  • 1305755 – Start self-heal and display correct heal info after replace brick
  • 1303033 – tests : Modifying tests for crypt xlator
  • 1305428 – [Fuse]: crash while --attribute-timeout and --entry-timeout are set to 0
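
A note on the op-version fix (bug 1315562 above): the cluster-wide op-version determines which features the peers may use, and it can only ever be raised, never lowered; this release makes the CLI report a proper failure message on a downgrade attempt instead of failing silently. A minimal sketch of how the op-version is inspected and raised after an upgrade, assuming the 3.7-era mechanism (the value 30707 is a placeholder; use the op-version matching the installed release):

    # Check the current operating version of a peer
    grep operating-version /var/lib/glusterd/glusterd.info

    # Raise the cluster op-version once every peer runs the new release
    # (30707 is a placeholder, not necessarily the value for 3.7.9)
    gluster volume set all cluster.op-version 30707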

Source: https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.9.md
