search for: zlog

Displaying 7 results from an estimated 7 matches for "zlog".

2017 Sep 21
1
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
...replica 2 volume of 3.7.x gluster. Then I would like to change replica 2 to replica 3 in order to correct the quorum issue that the infrastructure currently has. Infrastructure setup: - all clients running on the same nodes as the servers (FUSE mounts) - under gluster there is a ZFS pool running as raidz2 with an SSD ZLOG/ZIL cache - both hypervisors running as GlusterFS nodes and also as Qemu compute nodes (Ubuntu 16.04 LTS) - we are running Qemu VMs that access VM disks via gfapi (OpenNebula) - we currently run: 1x2, Type: Replicate volume Current Versions: glusterfs-* [package] 3.7.6-1ubuntu1 qemu-* [packa...
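
For context on the operation this thread asks about: moving a replicate volume from replica 2 to replica 3 is done by adding one brick per subvolume while raising the replica count in the same command. A minimal sketch, assuming a hypothetical third node gl3 with a brick at /gluster/brick and a volume named myvol:

    # join the new node to the trusted pool
    gluster peer probe gl3
    # raise the replica count while adding the third brick
    gluster volume add-brick myvol replica 3 gl3:/gluster/brick
    # kick off a full self-heal so the new brick receives complete copies
    gluster volume heal myvol full

With three bricks per replica set online, server/client quorum can then be enforced without a single node failure freezing writes, which is the quorum problem the poster describes.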
2017 Sep 20
3
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
...replica 2 volume of 3.7.x gluster. Then I would like to change replica 2 to replica 3 in order to correct the quorum issue that the infrastructure currently has. Infrastructure setup: - all clients running on the same nodes as the servers (FUSE mounts) - under gluster there is a ZFS pool running as raidz2 with an SSD ZLOG/ZIL cache - both hypervisors running as GlusterFS nodes and also as Qemu compute nodes (Ubuntu 16.04 LTS) - we are running Qemu VMs that access VM disks via gfapi (OpenNebula) - we currently run: 1x2, Type: Replicate volume Current Versions: glusterfs-* [package] 3.7.6-1ubuntu1 qemu-* [packa...
2017 Sep 21
1
Fwd: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
...replica 2 volume of 3.7.x gluster. Then I would like to change replica 2 to replica 3 in order to correct the quorum issue that the infrastructure currently has. Infrastructure setup: - all clients running on the same nodes as the servers (FUSE mounts) - under gluster there is a ZFS pool running as raidz2 with an SSD ZLOG/ZIL cache - both hypervisors running as GlusterFS nodes and also as Qemu compute nodes (Ubuntu 16.04 LTS) - we are running Qemu VMs that access VM disks via gfapi (OpenNebula) - we currently run: 1x2, Type: Replicate volume Current Versions: glusterfs-* [package] 3.7.6-1ubuntu1 qemu-* [packag...
2017 Sep 22
0
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
...eplica 2 volume of 3.7.x gluster. Then I would like to change replica 2 to replica 3 in order to correct the quorum issue that the infrastructure currently has. *Infrastructure setup:* - all clients running on the same nodes as the servers (FUSE mounts) - under gluster there is a ZFS pool running as raidz2 with an SSD ZLOG/ZIL cache - both hypervisors running as GlusterFS nodes and also as Qemu compute nodes (Ubuntu 16.04 LTS) - we are running Qemu VMs that access VM disks via gfapi (OpenNebula) - we currently run: 1x2, Type: Replicate volume *Current Versions:* glusterfs-* [package] 3.7.6-1ubuntu1 qemu-* [pack...
2017 Sep 16
0
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
...replica 2 volume of 3.7.x gluster. Then I would like to change replica 2 to replica 3 in order to correct the quorum issue that the infrastructure currently has. Infrastructure setup: - all clients running on the same nodes as the servers (FUSE mounts) - under gluster there is a ZFS pool running as raidz2 with an SSD ZLOG/ZIL cache - both hypervisors running as GlusterFS nodes and also as Qemu compute nodes (Ubuntu 16.04 LTS) - we are running Qemu VMs that access VM disks via gfapi (OpenNebula) - we currently run: 1x2, Type: Replicate volume Current Versions: glusterfs-* [package] 3.7.6-1ubuntu1 qemu-* [packa...
2008 Jan 30
18
ZIL controls in Solaris 10 U4?
Is it true that Solaris 10 U4 does not have any of the nice ZIL controls that exist in the various recent OpenSolaris flavors? I would like to move my ZIL to solid-state storage, but I fear I can't do it until I have another update. Heck, I would be happy just to be able to turn the ZIL off to see how my NFS-on-ZFS performance is affected before spending the $'s. Anyone
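
For what the poster wants to try, later ZFS releases expose both controls directly. A sketch assuming a hypothetical pool tank, an SSD at c0t5d0, and an NFS-exported dataset tank/nfs:

    # attach a dedicated log device (slog) so the ZIL moves off the main vdevs
    zpool add tank log c0t5d0
    # bypass the ZIL per dataset to measure its effect on sync-heavy NFS traffic
    zfs set sync=disabled tank/nfs

On Solaris 10 of that era, the usual workaround was the system-wide zil_disable tunable (set zfs:zil_disable=1 in /etc/system), which requires a reboot and affects every pool on the host.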
2017 Sep 22
2
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
...x gluster. > Then I would like to change replica 2 to replica 3 in order to correct the quorum issue that the infrastructure currently has. > > Infrastructure setup: > - all clients running on the same nodes as the servers (FUSE mounts) > - under gluster there is a ZFS pool running as raidz2 with an SSD ZLOG/ZIL cache > - both hypervisors running as GlusterFS nodes and also as Qemu compute nodes (Ubuntu 16.04 LTS) > - we are running Qemu VMs that access VM disks via gfapi (OpenNebula) > - we currently run: 1x2, Type: Replicate volume > > Current Versions: > glusterfs-* [package]...
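
On the upgrade half of the question: after every node has been taken through the package upgrade, the cluster's operating version still has to be raised before 3.12 behavior takes effect. A hedged sketch, assuming 31200 is the op-version constant matching Gluster 3.12:

    # run once, on any node, after all peers are running 3.12
    gluster volume set all cluster.op-version 31200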