Displaying 20 results from an estimated 400 matches similar to: "LVM + XFS + external log + snapshots"
2023 May 04
1
'error=No space left on device' but there is plenty of space on all nodes
Hi Strahil and Gluster users,
Yes, I had checked, but I checked again: only 1% inode usage, 99% free. Same on every node.
Example:
[root@nybaknode1 ]# df -i /lvbackups/brick
Filesystem                        Inodes IUsed      IFree IUse% Mounted on
/dev/mapper/vgbackups-lvbackups 3108921344 93602 3108827742 1% /lvbackups
[root@nybaknode1 ]#
I neglected to clarify in
2023 May 02
1
'error=No space left on device' but there is plenty of space on all nodes
Hi Gluster users,
We are seeing an 'error=No space left on device' issue and hoping someone
might be able to advise?
We have been using a 12-node glusterfs distributed vsftpd backup cluster for
years (it is not new), and 2 weeks ago we upgraded from v9 to v10.4. I do not
know if the upgrade is related to this new issue.
We are seeing a new 'error=No space left on device' error
2023 May 04
1
'error=No space left on device' but there is plenty of space on all nodes
Hi,
Have you checked inode usage (df -i /lvbackups/brick)?
Best Regards,
Strahil Nikolov
On Tuesday, May 2, 2023, 3:05 AM, brandon at thinkhuge.net wrote:
Hi Gluster users,
We are seeing an 'error=No space left on device' issue and hoping someone
might be able to advise?
We have been using a 12-node glusterfs distributed vsftpd backup cluster for
years (it is not new), and 2 weeks ago
2013 Jan 15
1
Sluggish server with big array
Hi!
I am experiencing strange behavior with my new CentOS 6.3
installation (system up to date). I have a big 11 TB EXT4 array mounted
on /home/data. The server is a Tyan 2U (TA26-B3992-E) with dual Opteron
2216; one CPU has 4 GB of RAM and the other has 8 GB (so 12 GB
total). The server uses 8 x Seagate 2 TB SAS hard disks (7200 RPM), and
all the disks are grouped together to form 1
2010 Feb 06
1
shadow_copy2 prob? FSCTL..GET..DATA: max_data_count(114) too small (118) bytes needed!
I have "/home" as a logical volume. I have snapshots:
LV                  VG   Attr   LSize  Origin Snap%  Move Log Copy% Convert
2010.02.05-01.26.19 Home swi-ao 10.00G lvol0  39.81
2010.02.06-02.37.52 Home swi-ao  5.00G lvol0   0.25
lvol0               Home owi-ao  1.00T
and they are mounted:
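[The archive cuts the mount listing off here. For illustration, creating and mounting such an LVM snapshot generally looks like the sketch below; the snapshot name, size, and mount point are examples, not the poster's actual values:

# create a snapshot of lvol0 in VG "Home" (name and size are examples)
lvcreate -s -L 10G -n 2010.02.07-01.00.00 /dev/Home/lvol0
# mount it read-only so shadow_copy2 can expose it
mkdir -p /home/.snapshots/2010.02.07-01.00.00
mount -o ro /dev/Home/2010.02.07-01.00.00 /home/.snapshots/2010.02.07-01.00.00]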
2011 May 25
1
Hook script to preserve one partition untouched during install
This hook script tries to address the fact that a RHEV-H installation
will format all the storage devices available in the machine in order to
create HostVG and AppVG with all the available space. It may be the case
that RHEV-H needs to respect and co-exist with a proposed partitioning
scheme, not getting all the storage space for HostVG and AppVG volume
groups.
The proposed solution adds the
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi Chris,
here is a link to the settings needed for VM storage: https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4
You can also ask in ovirt-users for real-world settings. Test well before changing production!!!
IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED !!!
Best Regards,
Strahil Nikolov
On Mon, Jun 5, 2023 at 13:55,
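[For reference, that whole group of settings can typically be applied in one command with the gluster CLI; a minimal sketch, assuming a hypothetical volume named vmstore:

gluster volume set vmstore group virt

This applies every option listed in the virt group file shipped under /var/lib/glusterd/groups/, the same file linked above.]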
2023 Jun 01
1
Using glusterfs for virtual machines with qcow2 images
Chris:
Whilst I don't know the issue nor the root cause of your problem using GlusterFS with Proxmox, I am going to guess that you might already know that Proxmox "natively" supports Ceph, which the Wikipedia article describes as a distributed object storage system.
Maybe that might work better with Proxmox?
Hope this helps.
Sorry that I wasn't able
2023 Jun 01
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
We use qcow2 with libvirt-based KVM on many small clusters and have
found it to be extremely reliable, though maybe not the fastest; some of
that is because most of our storage is SATA SSDs in a software RAID1
config for each brick.
What problems are you running into?
You just mention 'problems'
-wk
On 6/1/23 8:42 AM, Christian Schoepplein wrote:
> Hi,
>
> we'd like to use
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi everybody
Regarding the issue with the mount, I usually use this systemd service to
bring up the mount points:
/etc/systemd/system/glusterfsmounts.service
[Unit]
Description=Glustermounting
Requires=glusterd.service
Wants=glusterd.service
After=network.target network-online.target glusterd.service
[Service]
Type=simple
RemainAfterExit=true
ExecStartPre=/usr/sbin/gluster volume list
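[The unit above is cut off by the archive. A minimal complete version along these lines might look like the sketch below; the ExecStart line, the Install section, and the reliance on /etc/fstab glusterfs entries are assumptions for illustration, not part of the original mail:

[Unit]
Description=Glustermounting
Requires=glusterd.service
Wants=glusterd.service
After=network.target network-online.target glusterd.service

[Service]
Type=simple
RemainAfterExit=true
# wait until glusterd is actually answering before attempting any mounts
ExecStartPre=/usr/sbin/gluster volume list
# mount every glusterfs entry from /etc/fstab (assumed approach)
ExecStart=/bin/mount -a -t glusterfs

[Install]
WantedBy=multi-user.target]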
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
On oVirt / Red Hat Virtualization, the following Gluster volume settings
are recommended (preferably applied at the creation of the volume).
These settings are important for data reliability. (Note that Replica 3
or Replica 2+1 is expected.)
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
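[A hedged sketch of applying these with the gluster CLI, assuming a hypothetical volume named vmstore:

gluster volume set vmstore performance.quick-read off
gluster volume set vmstore performance.read-ahead off
gluster volume set vmstore performance.io-cache off
gluster volume set vmstore performance.low-prio-threads 32
gluster volume set vmstore network.remote-dio enable

The list in the archive snippet is truncated; the full recommended set is in the group-virt.example file linked earlier in the thread.]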
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
Try turning off these options:
performance.write-behind
performance.flush-behind
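[For reference, these can be disabled per volume with the gluster CLI; a sketch using the same hypothetical volume name as above:

gluster volume set vmstore performance.write-behind off
gluster volume set vmstore performance.flush-behind off]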
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Fri, Jun 2, 2023 at 07:55, Guillaume Pavese <
guillaume.pavese at interactiv-group.com> wrote:
> On oVirt / Redhat Virtualization,
> the following Gluster volumes settings are recommended to be applied
> (preferably at
2012 Aug 21
1
[PATCH] xfs: add a new api xfs_repair
Add a new API, xfs_repair, for repairing an XFS filesystem.
Signed-off-by: Wanlong Gao <gaowanlong at cn.fujitsu.com>
---
daemon/xfs.c                   | 116 +++++++++++++++++++++++++++++++++++++++++
generator/generator_actions.ml |  23 ++++++++
gobject/Makefile.inc           |   6 ++-
po/POTFILES                    |   1 +
src/MAX_PROC_NR                |   2 +-
5 files changed, 145
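[For context, the command-line tool this libguestfs API wraps can also be run directly against an unmounted filesystem; a brief sketch, with an example device path:

# dry run: inspect the filesystem and report problems without modifying it
xfs_repair -n /dev/sdb1
# actual repair; the filesystem must not be mounted
xfs_repair /dev/sdb1]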
2015 Oct 01
0
req->nr_phys_segments > queue_max_segments (was Re: kernel BUG at drivers/block/virtio_blk.c:172!)
On Thu, Oct 01, 2015 at 03:10:14AM +0200, Thomas D. wrote:
> Hi,
>
> I have a virtual machine which fails to boot linux-4.1.8 while mounting
> file systems:
>
> > * Mounting local filesystem ...
> > ------------[ cut here ]------------
> > kernel BUG at drivers/block/virtio_blk.c:172!
> > invalid opcode: 0000 [#1] SMP
> > Modules linked in: pcspkr
2006 Jan 13
1
How to disconnect from a database?
Hi,
We have experienced some problems with the
ActiveRecord::Base class of Ruby on Rails. We are
building a web application based on the Ruby on Rails
framework, and the web application needs to access
different databases, so we do not pre-define our
database connections in the database.yml file. In fact,
we are using
ActiveRecord::Base.establish_connection() to connect
to our database, the function
2023 Jun 05
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
Hi Gilberto, hi all,
thanks a lot for all your answers.
At first I changed both settings mentioned below, and the first tests look good.
Before changing the settings I was able to crash a newly installed VM every
time after a fresh installation by producing a lot of I/O, e.g. when installing
LibreOffice. This always resulted in corrupt files inside the VM, but
examining the qcow2 file with the
2023 Jun 01
3
Using glusterfs for virtual machines with qcow2 images
Hi,
we'd like to use glusterfs for Proxmox and virtual machines with qcow2
disk images. We have a three-node glusterfs setup with one volume, and
Proxmox is attached and VMs are created, but after some time, and I think
after a lot of I/O is going on in a VM, the data inside the virtual machine
gets corrupted. When I copy files from or to our glusterfs
directly everything is OK, I've
2012 Oct 05
0
No subject
for all three nodes:
Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data
Which node are you trying to mount to /data? If it is not the
gluster-data node, then it will fail if there is not a /data directory.
In this case, it is a good thing, since mounting to /data on gluster-0-0
or gluster-0-1 would not accomplish what you need.
To clarify, there
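[For illustration, a client mount of the volume generally looks like the sketch below; the volume name and mount point are placeholders (note that on the gluster-data node /data is already the brick itself, so a different mount point would be needed there):

mkdir -p /mnt/gluster
mount -t glusterfs gluster-data:/VOLNAME /mnt/gluster]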
2011 Apr 18
1
rhel nfs bug with 5.5 - nfsd: blocked for more than 120 sec
Hi all,
I ran into this bug on my NFS server, which is serving an XFS filesystem:
https://bugzilla.redhat.com/show_bug.cgi?id=616833
It was suggested to use bind mounts.
The current fstab on my server is:
/dev/sdc1 /SHARE xfs defaults,noatime,nodiratime,logbufs=8,uquota 1 2
I'm unsure how to integrate bind mounts into this scheme to see if I can
avoid this bug until it is fixed (see the sketch after this message).
Any ideas?
- aurf
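[A sketch of how the bind mounts could be integrated, assuming the NFS exports are switched to bind-mounted subdirectories of /SHARE; all paths are illustrative:

/dev/sdc1 /SHARE xfs defaults,noatime,nodiratime,logbufs=8,uquota 1 2
# bind-mount the shared subtree to a dedicated export point
/SHARE/data /export/data none bind 0 0

The NFS export would then point at /export/data rather than at the XFS filesystem directly.]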