Displaying 20 results from an estimated 44 matches for "syncopate".
2023 Jun 01
3
Using glusterfs for virtual machines with qcow2 images
Hi,
we'd like to use glusterfs for Proxmox and virtual machines with qcow2
disk images. We have a three-node glusterfs setup with one volume;
Proxmox is attached and VMs are created, but after some time, and I think
after much I/O inside a VM, the data in the virtual machine
gets corrupted. When I copy files from or to our glusterfs
directly, everything is OK. I've
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi Chris,
here is a link to the settings needed for VM storage: https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4
You can also ask in ovirt-users for real-world settings. Test well before changing production!!!
IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED!!!
Best Regards, Strahil Nikolov
On Mon, Jun 5, 2023 at 13:55,
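For reference, the linked settings can be applied in one step via the built-in "virt" profile group that ships with glusterfs; a minimal sketch, assuming a volume named gv0 (placeholder):
# Apply the virt profile (the settings from group-virt.example), then review them
gluster volume set gv0 group virt
gluster volume info gv0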
2023 Jun 01
1
Using glusterfs for virtual machines with qcow2 images
Chris:
Whilst I don't know the issue nor its root cause, I am going to guess that you might already know that Proxmox "natively" supports Ceph, which the Wikipedia article describes as a distributed object storage system.
Maybe that might work better with Proxmox?
Hope this helps.
Sorry that I wasn't able
2023 Jun 01
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
We use qcow2 with libvirt-based KVM on many small clusters and have
found it to be extremely reliable, though maybe not the fastest; part of
that is that most of our storage is SATA SSDs in a software RAID1
config for each brick.
What problems are you running into?
You just mention 'problems'
-wk
On 6/1/23 8:42 AM, Christian Schoepplein wrote:
> Hi,
>
> we'd like to use
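As a sketch of the brick layout described above (device names and mount point are hypothetical, not from the original message):
# Two SATA SSDs mirrored with mdadm, then formatted and mounted as a brick
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mkfs.xfs /dev/md0
mount /dev/md0 /data/brick1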
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi everybody
Regarding the issue with mount, I usually use this systemd service to
bring up the mount points:
/etc/systemd/system/glusterfsmounts.service
[Unit]
Description=Glustermounting
Requires=glusterd.service
Wants=glusterd.service
After=network.target network-online.target glusterd.service
[Service]
Type=simple
RemainAfterExit=true
ExecStartPre=/usr/sbin/gluster volume list
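# (The excerpt ends here; a plausible completion of such a unit,
# assuming the gluster mounts are listed in /etc/fstab:)
ExecStart=/bin/mount -a -t glusterfs
Restart=on-failure
RestartSec=3
[Install]
WantedBy=multi-user.target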
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
On oVirt / Redhat Virtualization,
the following Gluster volume settings are recommended
(preferably applied at volume creation).
These settings are important for data reliability. (Note that Replica 3 or
Replica 2+1 is expected.)
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
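The list is truncated in the excerpt; for reference, each such option is applied per volume, e.g. (gv0 is a placeholder name):
gluster volume set gv0 performance.quick-read off
gluster volume set gv0 network.remote-dio enable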
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
Try turning off these options:
performance.write-behind
performance.flush-behind
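As a sketch, assuming a volume named gv0 (placeholder):
gluster volume set gv0 performance.write-behind off
gluster volume set gv0 performance.flush-behind off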
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Fri, Jun 2, 2023 at 07:55, Guillaume Pavese <
guillaume.pavese at interactiv-group.com> wrote:
> On oVirt / Redhat Virtualization,
> the following Gluster volume settings are recommended to be applied
> (preferably at
2023 Jun 05
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
Hi Gilberto, hi all,
thanks a lot for all your answers.
At first I changed both settings mentioned below, and the first tests look good.
Before changing the settings I was able to crash a newly installed VM every
time after a fresh installation by producing much I/O, e.g. when installing
LibreOffice. This always resulted in corrupt files inside the VM, but
researching the qcow2 file with the
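For reference, a qcow2 image can be inspected for corruption from the host with qemu-img; a sketch with a placeholder path:
qemu-img check /var/lib/vz/images/100/vm-100-disk-0.qcow2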
2016 Jul 06
1
"No previous versions" - GPFS 3.5 and shadow_copy2
Hi all,
At some point recently my customers stopped being able to see GPFS snapshots under the Windows Previous Versions tab. It simply says "No previous versions available". If a fileset is exported with the flag "force user = root" then Previous Versions *are* displayed.
[2016/07/06 10:07:35.602080, 3] ../source3/smbd/vfs.c:1322(check_reduced_name)
check_reduced_name:
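For comparison, a typical shadow_copy2 share section; the share name, path, snapshot directory, and format string here are assumptions, not the poster's actual config:
[share]
path = /gpfs/fs0/share
vfs objects = shadow_copy2
shadow:snapdir = .snapshots
shadow:format = @GMT-%Y.%m.%d-%H.%M.%S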
2019 Jul 18
2
Samba async performance - bottleneck or bug?
Hi,
I have a ZFS dataset that has sync writes disabled (setting sync=disabled) which means that it will only do async writes, and sync requests get converted to async writes. The ZFS dataset is hosted on a single Samsung 840 Pro 512GB SATA SSD.
I have this same dataset served as a Samba share, using Proxmox VE 6. Samba version 4.9.5-Debian (Buster), protocol SMB3_11. Kernel version 5.0.15.
To
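For reference, that sync behaviour is set per dataset; a sketch with a placeholder pool/dataset name:
zfs set sync=disabled tank/share
zfs get sync tank/share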
2019 Jul 19
0
Samba async performance - bottleneck or bug?
Hi,
On Thu, 18 Jul 2019 19:04:47 +0000, douxevip via samba wrote:
> Hi,
>
> I have a ZFS dataset that has sync writes disabled (setting sync=disabled) which means that it will only do async writes, and sync requests get converted to async writes. The ZFS dataset is hosted on a single Samsung 840 Pro 512GB SATA SSD.
> I have this same dataset served as a Samba share, using Proxmox VE
2010 Nov 28
3
Rebuilding samba3x rpms results in size doubled
Hi,
I have rebuilt the samba3x SRPM on CentOS 5.5. The resulting RPMs are nearly
triple the size of the original RPMs. I have installed them and checked that the binary
files are stripped. What can cause such a difference in RPM sizes?
I have not changed anything in the build and install sections of the spec file.
Regards,
Some files and sizes in original samba3x rpm:
-rwxr-xr-x 1 root root 17904 Mar 31
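One common culprit when rebuilt RPMs grow is unstripped binaries or embedded debug info; a quick check, with a hypothetical binary path:
file /usr/sbin/smbd    # "not stripped" in the output would point at missing stripping
rpm -qp --queryformat '%{SIZE}\n' samba3x-common-*.rpm    # compare installed payload sizes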
2013 Aug 08
2
not able to restart the brick for distributed volume
Hi All,
I am facing issues restarting a gluster volume. When I start the volume
after stopping it, gluster fails to start the volume.
Below is the message that I get on the CLI:
/root> gluster volume start _home
volume start: _home: failed: Commit failed on localhost. Please check the
log file for more details.
Logs says that it was unable to start the brick
[2013-08-08
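When a brick refuses to start, the usual next steps are to read the brick log and, if appropriate, force the start; a sketch (the log path is the typical default, not from the original message):
tail -n 50 /var/log/glusterfs/bricks/*.log
gluster volume start _home force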
2019 Jul 19
3
Samba async performance - bottleneck or bug?
Hi David,
Thanks for your reply.
> Hmm, so this "async" (sync=disabled?) ZFS tunable means that it
> completely ignores O_SYNC and O_DIRECT and runs the entire workload in
> RAM? I know nothing about ZFS, but that sounds like a mighty dangerous
> setting for production deployments.
Yes, you are correct: sync writes will flush to RAM, just like async writes, and will stay in RAM for
2023 Feb 23
1
Big problems after update to 9.6
Hello,
We have a cluster with two nodes, "sg" and "br", which were running
GlusterFS 9.1, installed via the Ubuntu package manager. We updated the
Ubuntu packages on "sg" to version 9.6, and now have big problems. The "br"
node is still on version 9.1.
Running "gluster volume status" on either host gives "Error : Request timed
out". On
2013 Jan 31
2
ACLs on a directory on GPFS
Hello,
I am using the vfs_gpfs samba module to map ACLs through samba. It works
fine on files, but directory ACLs are ignored. Ex:
getfacl /sb/share/myplace/
# file: sb/share/myplace/
# owner: root
# group: root
user::rwx
user:afrankel:rwx
group::---
mask::rwx
other::---
When I try to access this folder in Windows, I get permission denied.
With the same permissions on a file, I can open it / modify it
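For reference, a minimal vfs_gpfs share section of the kind implied above; the share name and path are assumptions based on the getfacl output:
[myplace]
path = /sb/share/myplace
vfs objects = gpfs
gpfs:acl = yes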
2018 Feb 06
5
strange hostname issue on volume create command with famous Peer in Cluster state error message
Hello,
I installed glusterfs 3.11.3 on 3 nodes running Ubuntu 16.04. All machines have the same /etc/hosts.
node1 hostname: pri.ostechnix.lan
node2 hostname: sec.ostechnix.lan
node3 hostname: third.ostechnix.lan
51.15.77.14 pri.ostechnix.lan pri
51.15.90.60 sec.ostechnix.lan sec
163.172.151.120 third.ostechnix.lan third
volume create command is
root at
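The command itself is truncated in the excerpt; a typical replica-3 create matching the hosts above would look like this (brick paths are placeholders):
gluster volume create vol1 replica 3 \
  pri.ostechnix.lan:/data/brick1 \
  sec.ostechnix.lan:/data/brick1 \
  third.ostechnix.lan:/data/brick1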
2012 Feb 21
2
bootstrap in time dependent Cox model
Dear R-list,
I am wondering how to perform a bootstrap in R for the weighted time-dependent
Cox model (Andersen-Gill format, with multiple observations
from each patient) to obtain the bootstrap standard error of the
treatment effect.
Below is an example dataset. Would 'censboot' be appropriate to use in
this context? Any suggestions, references, or directions to an R package will
be highly
2016 Jul 05
1
GPFS AFM Export Problem
Hi All,
I'm having a frustrating time exporting a GPFS Independent Writer AFM fileset through Samba.
Native GPFS directories exported through Samba seem to work properly, but when creating an export which points to an AFM IW fileset, I get "Access Denied" errors when trying to create files from an SMB client, and, even more unusual, "Failed to enumerate objects in the container:
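When an AFM fileset misbehaves under SMB, a first check is its cache state on the GPFS side; a sketch, with placeholder filesystem and fileset names:
mmafmctl fs0 getstate -j mycache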
2018 Feb 06
0
strange hostname issue on volume create command with famous Peer in Cluster state error message
Did you do gluster peer probe? Check out the documentation:
http://docs.gluster.org/en/latest/Administrator%20Guide/Storage%20Pools/
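As a sketch of that step, using the hostnames from the original message:
gluster peer probe sec.ostechnix.lan
gluster peer probe third.ostechnix.lan
gluster peer status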
On Tue, Feb 6, 2018 at 5:01 PM, Ercan Aydoğan <ercan.aydogan at gmail.com> wrote:
> Hello,
>
> I installed glusterfs 3.11.3 on 3 nodes running Ubuntu 16.04. All
> machines have the same /etc/hosts.
>
> node1 hostname: pri.ostechnix.lan