2012 May 02
1
File size diff between NFS mount and local disk
Hi all,
I never really paid attention to this, but a file on an NFS mount shows 64M in size, while the same file copied to a local drive shows 2.5MB.
My NFS server is hardware RAIDed with a volume stripe size of 128K, where the volume size is 20TB.
My NFS clients run the same distro as the server, CentOS.
Is this due to my stripe size?
Nuggets are appreciated.
- aurf
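A neutral first check is to compare apparent size against allocated blocks on both sides, since those two numbers diverge for sparse or preallocated files (a hedged sketch; the paths are hypothetical):
```
# Apparent size in bytes vs. 512-byte blocks actually allocated
stat -c 'size=%s blocks=%b' /nfs/share/file
stat -c 'size=%s blocks=%b' /local/copy/file

# The same comparison via ls (apparent) and du (allocated)
ls -lh /nfs/share/file
du -h  /nfs/share/file
```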
2023 May 02
1
'error=No space left on device' but there is plenty of space on all nodes
Hi Gluster users,
We are seeing an 'error=No space left on device' issue and hoping someone might
be able to advise?
We have been using a 12-node glusterfs v10.4 distributed vsftpd backup cluster for
years (not new) and 2 weeks ago upgraded from v9 to v10.4. I do not
know if the upgrade is related to this new issue.
We are seeing a new 'error=No space left on device' error
2023 May 04
1
'error=No space left on device' but there is plenty of space on all nodes
Hi, have you checked inode usage (df -i /lvbackups/brick)?
Best Regards, Strahil Nikolov
On Tuesday, May 2, 2023, 3:05 AM, brandon at thinkhuge.net wrote:
Hi Gluster users,
We are seeing an 'error=No space left on device' issue and hoping someone might
be able to advise?
We have been using a 12-node glusterfs v10.4 distributed vsftpd backup cluster for
years (not new) and 2 weeks ago
2023 Jun 01
3
Using glusterfs for virtual machines with qcow2 images
Hi,
we'd like to use glusterfs for Proxmox and virtual machines with qcow2
disk images. We have a three-node glusterfs setup with one volume;
Proxmox is attached and VMs are created, but after some time, I think
after much I/O on a VM, the data inside the virtual machine
gets corrupted. When I copy files from or to our glusterfs
directly everything is OK, I've
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi Chris,
here is a link to the settings needed for VM storage: https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4
You can also ask in ovirt-users for real-world settings. Test well before changing production!!!
IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED !!!
Best Regards, Strahil Nikolov
On Mon, Jun 5, 2023 at 13:55,
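For reference, the linked file corresponds to the "virt" option group, which can typically be applied in one step rather than option by option (a hedged sketch; the volume name is hypothetical):
```
# Apply the bundled virt option group to a volume (name is hypothetical)
gluster volume set myvol group virt

# Review the resulting options
gluster volume info myvol
```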
2023 Jun 01
1
Using glusterfs for virtual machines with qcow2 images
Chris:
While I don't know the issue nor the root cause of your problem using GlusterFS with Proxmox, I am going to guess that you might already know that Proxmox "natively" supports Ceph, which the Wikipedia article describes as a distributed object storage system.
Maybe that might work better with Proxmox?
Hope this helps.
Sorry that I wasn't able
2023 Jun 01
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
We use qcow2 with libvirt-based KVM on many small clusters and have
found it to be extremely reliable, though maybe not the fastest; part of
that is that most of our storage is SATA SSDs in a software RAID1
config for each brick.
What problems are you running into?
You just mention 'problems'
-wk
On 6/1/23 8:42 AM, Christian Schoepplein wrote:
> Hi,
>
> we'd like to use
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi everybody
Regarding the issue with mounting, I usually use this systemd service to
bring up the mount points:
/etc/systemd/system/glusterfsmounts.service
[Unit]
Description=Glustermounting
Requires=glusterd.service
Wants=glusterd.service
After=network.target network-online.target glusterd.service
[Service]
Type=simple
RemainAfterExit=true
ExecStartPre=/usr/sbin/gluster volume list
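The archived post is cut off at the ExecStartPre line. A unit of this shape commonly finishes roughly as follows (a hedged sketch, not the poster's actual file; the mount strategy is an assumption):
```
# Hypothetical continuation of the unit above (the archive truncates it):
# mount all glusterfs fstab entries once glusterd responds, then stay active.
ExecStart=/bin/mount -a -t glusterfs

[Install]
WantedBy=multi-user.target
```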
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
On oVirt / Redhat Virtualization,
the following Gluster volume settings are recommended to be applied
(preferably at the creation of the volume).
These settings are important for data reliability. (Note that Replica 3 or
Replica 2+1 is expected.)
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
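The list is truncated in the archive. As a sketch of how such a set of options is usually applied on the CLI (the volume name "vmstore" is hypothetical):
```
# Apply the options listed above to a volume (name is hypothetical)
for opt in performance.quick-read=off performance.read-ahead=off \
           performance.io-cache=off performance.low-prio-threads=32 \
           network.remote-dio=enable; do
    gluster volume set vmstore "${opt%%=*}" "${opt#*=}"
done
```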
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
Try turning off these options:
performance.write-behind
performance.flush-behind
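On the CLI that would look like the following (a hedged sketch; the volume name is hypothetical):
```
gluster volume set myvol performance.write-behind off
gluster volume set myvol performance.flush-behind off
```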
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Fri, Jun 2, 2023 at 07:55, Guillaume Pavese <
guillaume.pavese at interactiv-group.com> wrote:
> On oVirt / Redhat Virtualization,
> the following Gluster volume settings are recommended to be applied
> (preferably at
2023 Jun 05
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
Hi Gilberto, hi all,
thanks a lot for all your answers.
First I changed both settings mentioned below, and the first tests look good.
Before changing the settings I was able to crash a newly installed VM every
time after a fresh installation by producing a lot of I/O, e.g. when installing
LibreOffice. This always resulted in corrupt files inside the VM, but
researching the qcow2 file with the
2012 Nov 06
2
I am very confused about stripe: in what way does it hold space?
I have 4 Dell 2970 servers; three servers have 146G x 6 hard disks, and one has 72G x 6.
Each server's mount info is:
/dev/sda4 on /exp1 type xfs (rw)
/dev/sdb1 on /exp2 type xfs (rw)
/dev/sdc1 on /exp3 type xfs (rw)
/dev/sdd1 on /exp4 type xfs (rw)
/dev/sde1 on /exp5 type xfs (rw)
/dev/sdf1 on /exp6 type xfs (rw)
I created a gluster volume with 4 stripes:
gluster volume create test-volume3 stripe 4
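The command is cut off in the archive; a complete invocation of this shape lists one brick per stripe, e.g. (hostnames and brick paths are hypothetical):
```
# 4-way stripe across four servers, one brick each (names are hypothetical)
gluster volume create test-volume3 stripe 4 \
    server1:/exp1 server2:/exp1 server3:/exp1 server4:/exp1
```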
2011 Apr 18
1
rhel nfs bug with 5.5 - nfsd: blocked for more than 120 sec
Hi all,
I ran into this bug on my NFS server which is serving an XFS fs;
https://bugzilla.redhat.com/show_bug.cgi?id=616833
It was suggested to use bind mounts.
My current fstab on my server is;
/dev/sdc1 /SHARE xfs defaults,noatime,nodiratime,logbufs=8,uquota 1 2
Unsure how to integrate bind mounts in this scheme to see if I can
avoid this bug until it is fixed.
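For illustration, a bind mount is just an extra fstab entry re-exposing a subtree (a hedged sketch; the export paths are hypothetical):
```
# /etc/fstab -- existing XFS mount plus a hypothetical bind mount of a
# subdirectory, which could then be exported over NFS instead of /SHARE
/dev/sdc1      /SHARE    xfs   defaults,noatime,nodiratime,logbufs=8,uquota  1 2
/SHARE/export  /srv/nfs  none  bind                                          0 0
```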
Any ideas?
- aurf
2015 Aug 26
5
[PATCH] Call ExitBootServices twice
From: Sylvain Gault <sylvain.gault at gmail.com>
On some architectures, including my hardware, the function ExitBootServices may
need to be called twice in order to successfully exit the boot services. As
stated by the UEFI spec, the first call to ExitBootServices may perform a
partial shutdown of the services. It seems that during this partial shutdown,
the memory map can be modified, thus
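The description is cut off in the archive. As a hedged illustration of the protocol (gnu-efi style C, not the actual syslinux patch; function and variable names are mine):
```
/* Hedged illustration only -- not the actual syslinux patch. */
#include <efi.h>
#include <efilib.h>

EFI_STATUS exit_boot_services_twice(EFI_HANDLE image)
{
    UINTN size = 0, key, desc_size;
    UINT32 desc_ver;
    EFI_MEMORY_DESCRIPTOR *map = NULL;
    EFI_STATUS status;

    /* Probe the map size, then allocate with headroom so the buffer
     * still fits if the map grows between the two calls. */
    status = uefi_call_wrapper(BS->GetMemoryMap, 5,
                               &size, map, &key, &desc_size, &desc_ver);
    if (status != EFI_BUFFER_TOO_SMALL)
        return status;
    size += 4 * desc_size;
    status = uefi_call_wrapper(BS->AllocatePool, 3,
                               EfiLoaderData, size, (void **)&map);
    if (EFI_ERROR(status))
        return status;

    /* After a failed ExitBootServices only GetMemoryMap and
     * ExitBootServices may be called, so the retry uses nothing else. */
    for (int attempt = 0; attempt < 2; attempt++) {
        UINTN sz = size;
        status = uefi_call_wrapper(BS->GetMemoryMap, 5,
                                   &sz, map, &key, &desc_size, &desc_ver);
        if (EFI_ERROR(status))
            break;
        status = uefi_call_wrapper(BS->ExitBootServices, 2, image, key);
        if (status == EFI_SUCCESS)
            break;  /* boot services are gone; we own the machine */
    }
    return status;
}
```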
2023 May 04
1
'error=No space left on device' but there is plenty of space on all nodes
Hi Strahil and Gluster users,
Yes, I had checked, but checked again: only 1% inode usage, 99% free. Same on every node.
Example:
[root at nybaknode1 ]# df -i /lvbackups/brick
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/vgbackups-lvbackups 3108921344 93602 3108827742 1% /lvbackups
[root at nybaknode1 ]#
I neglected to clarify in
2017 Apr 30
2
allocsize: change from 3.9 to 4.0
Hi all,
I added support for the allocsize function attribute to our compiler
(LDC), thinking that it would enable removal of function calls when the
allocated memory is not used.
For example:
```
declare i8* @my_malloc(i32) allocsize(0)
define void @test_malloc() {
%1 = call i8* @my_malloc(i32 100)
ret void
}
```
I thought the my_malloc call in test_malloc would be removed, but `opt -O3`
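The message is truncated here; the expectation can be tested by running the IR above through opt (the file name is hypothetical):
```
# Print the optimized IR; if the call is treated as a removable
# allocation, @my_malloc should disappear from @test_malloc.
opt -O3 -S test_malloc.ll
```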
2012 Apr 25
1
KVM - Virtio drivers for Centos 5.1
Hi all,
Really enjoying KVM as I was a long-time user of Xen. Both are cool, just enjoying the new thing.
Wondering if anyone could share some nuggets on how to get a CentOS 5.1 VM guest to use virtio?
Trying to use virtio instead of the IDE driver for the disks.
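For illustration, with libvirt the disk bus is selected in the domain XML; a minimal sketch (the image path is hypothetical, and the guest kernel must have virtio drivers):
```
<!-- Hypothetical disk stanza: bus='virtio' instead of bus='ide' -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/centos51.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
```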
Thanks in advance,
- aurf
2020 Apr 15
2
question on the signature of malloc
Hi all,
consider the following function from Core.cpp in LLVM 9.0.0:
LLVMValueRef LLVMBuildMalloc(LLVMBuilderRef B, LLVMTypeRef Ty,
                             const char *Name) {
  Type* ITy = Type::getInt32Ty(unwrap(B)->GetInsertBlock()->getContext());
  Constant* AllocSize = ConstantExpr::getSizeOf(unwrap(Ty));
  AllocSize = ConstantExpr::getTruncOrBitCast(AllocSize, ITy);
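The snippet is cut off in the archive. From memory, the LLVM 9 function continues roughly as below (treat this as approximate, not a verbatim quote); the point relevant to the question is that the allocation size is truncated to the 32-bit ITy:
```
  // Approximate continuation (LLVM 9 era): build the malloc call with the
  // 32-bit size computed above, then insert it at the current position.
  Instruction* Malloc = CallInst::CreateMalloc(unwrap(B)->GetInsertBlock(),
                                               ITy, unwrap(Ty), AllocSize,
                                               nullptr, nullptr, "");
  return wrap(unwrap(B)->Insert(Malloc, Twine(Name)));
}
```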
2015 Nov 02
3
[PATCH] efi: Call ExitBootServices at least twice
On Tue, Aug 25, 2015 at 11:54 PM, celelibi--- via Syslinux
<syslinux at zytor.com> wrote:
> From: Sylvain Gault <sylvain.gault at gmail.com>
>
> Some firmware implementations may need ExitBootServices to be called
> twice. The second time with an updated memory map key.
>
> Signed-off-by: Sylvain Gault <sylvain.gault at gmail.com>
> ---
> efi/main.c | 75
2015 Sep 16
1
[PATCH] efi: Call ExitBootServices at least twice
On Wed, 26 Aug 2015 05:54:04 +0200
celelibi--- via Syslinux <syslinux at zytor.com> wrote:
> From: Sylvain Gault <sylvain.gault at gmail.com>
>
> Some firmware implementations may need ExitBootServices to be called
> twice. The second time with an updated memory map key.
>
> Signed-off-by: Sylvain Gault <sylvain.gault at gmail.com>
> ---
> efi/main.c |