Displaying 20 results from an estimated 44 matches for "syncops".
2023 Jun 01
3
Using glusterfs for virtual machines with qcow2 images
Hi,
we'd like to use glusterfs for Proxmox and virtual machines with qcow2
disk images. We have a three-node glusterfs setup with one volume; Proxmox
is attached and VMs are created, but after some time, I think after heavy
i/o inside a VM, the data in the virtual machine gets corrupted. When I
copy files from or to our glusterfs directly everything is OK. I've
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi Chris,
here is a link to the settings needed for VM storage: https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4
You can also ask in ovirt-users for real-world settings. Test well before changing production!!!
IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED !!!
Best Regards,
Strahil Nikolov
On Mon, Jun 5, 2023 at 13:55,
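The linked file is the "virt" settings group shipped with glusterfs. Assuming a stock install where the group files are present under /var/lib/glusterd/groups/, the whole profile can be applied in one step rather than option by option (VOLNAME is a placeholder, not a name from this thread):

# apply the shipped virt profile to the volume
gluster volume set VOLNAME group virt
# review what is now set, e.g. sharding and caching related options
gluster volume get VOLNAME all | grep -E 'shard|remote-dio|quick-read|eager-lock'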
2023 Jun 01
1
Using glusterfs for virtual machines with qcow2 images
Chris:
While I don't know what the issue is or its root cause when using GlusterFS with Proxmox, I am going to guess that you might already know that Proxmox "natively" supports Ceph, which the Wikipedia article describes as a distributed object storage system.
Maybe that might work better with Proxmox?
Hope this helps.
Sorry that I wasn't able
2023 Jun 01
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
We use qcow2 with libvirt-based kvm on many small clusters and have
found it to be extremely reliable, though maybe not the fastest; part of
that is that most of our storage is SATA SSDs in a software RAID1
config for each brick.
What problems are you running into?
You just mention 'problems'
-wk
On 6/1/23 8:42 AM, Christian Schoepplein wrote:
> Hi,
>
> we'd like to use
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi everybody
Regarding the issue with the mount, I usually use this systemd service to
bring up the mount points:
/etc/systemd/system/glusterfsmounts.service
[Unit]
Description=Glustermounting
Requires=glusterd.service
Wants=glusterd.service
After=network.target network-online.target glusterd.service
[Service]
Type=simple
RemainAfterExit=true
ExecStartPre=/usr/sbin/gluster volume list
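The snippet above is cut off in the archive; a complete unit of this shape, assuming the volume is already listed in /etc/fstab as a glusterfs mount (names and paths below are illustrative placeholders, not values from the original message), could look like:

# /etc/systemd/system/glusterfsmounts.service -- sketch only, adjust to your setup
[Unit]
Description=Mount glusterfs volumes once glusterd is reachable
Requires=glusterd.service
Wants=glusterd.service
After=network.target network-online.target glusterd.service

[Service]
Type=simple
RemainAfterExit=true
# check that the local glusterd answers; the unit fails here if it does not
ExecStartPre=/usr/sbin/gluster volume list
# mount every fstab entry of type glusterfs,
# e.g. "node1:/gv0 /mnt/gv0 glusterfs defaults,_netdev 0 0"
ExecStart=/bin/mount -a -t glusterfs

[Install]
WantedBy=multi-user.target

Enable it with "systemctl daemon-reload" followed by "systemctl enable --now glusterfsmounts.service".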
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
On oVirt / Redhat Virtualization,
the following Gluster volume settings are recommended to be applied
(preferably at the creation of the volume).
These settings are important for data reliability (note that Replica 3 or
Replica 2+1 is expected):
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
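The list above is cut off by the archive; only the options shown are repeated here. Assuming they are applied with the standard CLI (VOLNAME is a placeholder), that would be:

gluster volume set VOLNAME performance.quick-read off
gluster volume set VOLNAME performance.read-ahead off
gluster volume set VOLNAME performance.io-cache off
gluster volume set VOLNAME performance.low-prio-threads 32
gluster volume set VOLNAME network.remote-dio enable
# confirm the values under "Options Reconfigured"
gluster volume info VOLNAME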
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
Try turning off these options:
performance.write-behind
performance.flush-behind
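With the stock gluster CLI that would be something like (VOLNAME is a placeholder):

gluster volume set VOLNAME performance.write-behind off
gluster volume set VOLNAME performance.flush-behind off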
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Fri, Jun 2, 2023 at 07:55, Guillaume Pavese <
guillaume.pavese at interactiv-group.com> wrote:
> On oVirt / Redhat Virtualization,
> the following Gluster volumes settings are recommended to be applied
> (preferably at
2023 Jun 05
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
Hi Gilberto, hi all,
thanks a lot for all your answers.
At first I changed both settings mentioned below and the first tests look good.
Before changing the settings I was able to crash a newly installed VM every
time after a fresh installation by producing a lot of i/o, e.g. when installing
LibreOffice. This always resulted in corrupt files inside the VM, but
researching the qcow2 file with the
2016 Jul 06
1
"No previous versions" - GPFS 3.5 and shadow_copy2
...Can someone please have a look at the smb.conf to see if any glaring mistakes are present, or suggest how I can troubleshoot the problem?
[global]
netbios name = store
workgroup = IC
security = ads
realm = IC.AC.UK
kerberos method = secrets and keytab
vfs objects = shadow_copy2 syncops gpfs fileid
ea support = yes
store dos attributes = yes
map readonly = no
map archive = no
map system = no
map hidden = no
unix extensions = no
allocation roundup size = 4096
disable netbios = yes
smb ports = 445
# server signing = mandatory
template shell = /bi...
2019 Jul 18
2
Samba async performance - bottleneck or bug?
...conf, just listing the lines I edited (the rest is default):
[global]
netbios name = prox
case sensitive = no
server min protocol = SMB3
client min protocol = SMB3
[Media]
path = /zfs/synctest
valid users = myuser
read only = no
writeable = yes
guest ok = no
create mode = 770
directory mode = 770
syncops:disable = true # I tried the test with both this option enabled as well as commented out, and it didn't seem to make a difference.
Thanks in advance.
2019 Jul 19
0
Samba async performance - bottleneck or bug?
...> netbios name = prox
> case sensitive = no
> server min protocol = SMB3
> client min protocol = SMB3
>
> [Media]
> path = /zfs/synctest
> valid users = myuser
> read only = no
> writeable = yes
> guest ok = no
> create mode = 770
> directory mode = 770
> syncops:disable = true # I tried the test with both this option enabled as well as commented out, and it didn't seem to make a difference.
syncops parameters only have an effect if the syncops VFS module is
enabled.
Cheers, David
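In other words, syncops:disable only does anything once the module appears in the share's VFS stack; a minimal sketch, assuming the share from the earlier post:

[Media]
path = /zfs/synctest
# syncops:* options are ignored unless the module is loaded for the share
vfs objects = syncops
syncops:disable = true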
2010 Nov 28
3
Rebuilding samba3x rpms results in size doubled
...shadow_copy2.so
-rwxr-xr-x 1 root root 9668 Mar 31 2010 shadow_copy.so
-rwxr-xr-x 1 root root 13844 Mar 31 2010 smb_traffic_analyzer.so
-rwxr-xr-x 1 root root 13796 Mar 31 2010 streams_depot.so
-rwxr-xr-x 1 root root 17984 Mar 31 2010 streams_xattr.so
-rwxr-xr-x 1 root root 5560 Mar 31 2010 syncops.so
-rwxr-xr-x 1 root root 17744 Mar 31 2010 xattr_tdb.so
-rwxr-xr-x 1 root root 22052 Mar 31 2010 /usr/lib/samba/vfs/acl_tdb.so
-rwxr-xr-x 1 root root 17920 Mar 31 2010 /usr/lib/samba/vfs/acl_xattr.so
-rwxr-xr-x 1 root root 9668 Mar 31 2010 /usr/lib/samba/vfs/audit.so
-rwxr-xr-x 1 root root 140...
2013 Aug 08
2
not able to restart the brick for distributed volume
Hi All,
I am facing issues restarting the gluster volume. When I start the volume
after stopping it, gluster fails to start the volume.
Below is the message that I get on the CLI:
/root> gluster volume start _home
volume start: _home: failed: Commit failed on localhost. Please check the
log file for more details.
The logs say that it was unable to start the brick:
[2013-08-08
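Typical places and commands to dig further (the log paths are the usual defaults and vary by version; the volume name _home comes from the post above):

gluster volume status _home
# brick and management daemon logs
less /var/log/glusterfs/bricks/*.log
less /var/log/glusterfs/*glusterd*.log
# if the brick process simply did not come up, the start can be retried with force
gluster volume start _home force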
2019 Jul 19
3
Samba async performance - bottleneck or bug?
...o
> > server min protocol = SMB3
> > client min protocol = SMB3
> > [Media]
> > path = /zfs/synctest
> > valid users = myuser
> > read only = no
> > writeable = yes
> > guest ok = no
> > create mode = 770
> > directory mode = 770
> > syncops:disable = true # I tried the test with both this option enabled as well as commented out, and it didn't seem to make a difference.
>
> syncops parameters only have an effect if the syncops VFS module is
> enabled.
>
> Cheers, David
2023 Feb 23
1
Big problems after update to 9.6
Hello,
We have a cluster with two nodes, "sg" and "br", which were running
GlusterFS 9.1, installed via the Ubuntu package manager. We updated the
Ubuntu packages on "sg" to version 9.6, and now have big problems. The "br"
node is still on version 9.1.
Running "gluster volume status" on either host gives "Error : Request timed
out". On
2013 Jan 31
2
ACLs on a directory on GPFS
...me permissions on a file, I can open it / modify it without any
problems.
Here are my settings:
mmlsfs sb
-D nfs4 File locking semantics in effect
-k all ACL semantics in effect
/etc/samba/smb.conf :
---------------------
clustering = yes
fileid:mapping = fsname
vfs objects = shadow_copy2 syncops gpfs fileid
shadow:snapdir = .snapshots
shadow:fixinodes = yes
gpfs:sharemodes = Yes
gpfs:leases = Yes
posix locking = Yes
kernel oplocks = Yes
level2 oplocks = no
force unknown acl user = Yes
nfs4:mode = special
nfs4:chown = yes
nfs4:acedup = merge
[share]
read only = No
browseable = yes
path =...
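When directory ACLs behave differently from file ACLs on GPFS, it can help to compare what GPFS itself stores for both objects. mmgetacl is the standard GPFS tool for this; the paths below are placeholders:

mmgetacl /gpfs/share/somedir
mmgetacl /gpfs/share/somedir/somefile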
2018 Feb 06
5
strange hostname issue on volume create command with famous Peer in Cluster state error message
Hello,
I installed glusterfs version 3.11.3 on 3 Ubuntu 16.04 nodes. All machines have the same /etc/hosts.
node1 hostname
pri.ostechnix.lan
node2 hostname
sec.ostechnix.lan
node3 hostname
third.ostechnix.lan
51.15.77.14 pri.ostechnix.lan pri
51.15.90.60 sec.ostechnix.lan sec
163.172.151.120 third.ostechnix.lan third
volume create command is
root at
2012 Feb 21
2
bootstrap in time dependent Cox model
Dear R-list,
I am wondering how to perform a bootstrap in R for the weighted
time-dependent Cox model (Andersen-Gill format, with multiple observations
from each patient) to obtain the bootstrap standard error of the
treatment effect.
Below is an example dataset. Would 'censboot' be appropriate to use in
this context? Any suggestions/references/directions to an R package will
be highly
2016 Jul 05
1
GPFS AFM Export Problem
...server string = Samba Server Version %v
socket options = TCP_NODELAY SO_KEEPALIVE TCP_KEEPCNT=4 TCP_KEEPIDLE=240 TCP_KEEPINTVL=15
store dos attributes = yes
strict allocate = yes
strict locking = no
unix extensions = no
vfs objects = shadow_copy2 syncops fileid streams_xattr gpfs
gpfs:dfreequota = yes
gpfs:hsm = yes
gpfs:leases = yes
gpfs:prealloc = yes
gpfs:sharemodes = yes
gpfs:winattr = yes
nfs4:acedup = merge
nfs4:chow...
2018 Feb 06
0
strange hostname issue on volume create command with famous Peer in Cluster state error message
Did you do gluster peer probe? Check out the documentation:
http://docs.gluster.org/en/latest/Administrator%20Guide/Storage%20Pools/
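Assuming the host names from the original post, peering and verification would look like this, run from pri:

gluster peer probe sec.ostechnix.lan
gluster peer probe third.ostechnix.lan
# both peers should show "Peer in Cluster (Connected)" before creating the volume
gluster peer status
gluster pool list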
On Tue, Feb 6, 2018 at 5:01 PM, Ercan Aydoğan <ercan.aydogan at gmail.com> wrote:
> Hello,
>
> I installed glusterfs version 3.11.3 on 3 Ubuntu 16.04 nodes. All
> machines have the same /etc/hosts.
>
> node1 hostname
> pri.ostechnix.lan