similar to: Centos 6.6, apparent xfs corruption

Displaying 20 results from an estimated 200 matches similar to: "Centos 6.6, apparent xfs corruption"

2015 Dec 30
1
hostname service?
> The service you are referring to is hostnamed [1]. hostnamed is
> designed to start on request and terminate after an idle period.
> Programs on your computer are probably querying the service to
> determine if your hostname has changed.

I see that I couldn't previously find it with systemctl because it is a "static" service, neither enabled nor disabled. What is
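For reference, a static unit can still be inspected and queried directly. A minimal sketch, assuming the unit is named systemd-hostnamed as on typical systemd distributions:

    systemctl status systemd-hostnamed   # show whether the service is currently running
    systemctl cat systemd-hostnamed      # print the unit file; static units lack an [Install] section
    hostnamectl                          # query hostnamed over D-Bus, starting it on demand

"Static" here just means the unit has no [Install] section, so there is nothing to enable or disable; it is pulled in on demand.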
2023 May 04
1
'error=No space left on device' but, there is plenty of space all nodes
Hi Strahil and Gluster users, yes I had checked, but I checked again: only 1% inode usage, 99% free, the same on every node. Example:

[root at nybaknode1 ]# df -i /lvbackups/brick
Filesystem                      Inodes     IUsed IFree      IUse% Mounted on
/dev/mapper/vgbackups-lvbackups 3108921344 93602 3108827742    1% /lvbackups
[root at nybaknode1 ]#

I neglected to clarify in
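To rule inode exhaustion out across the whole cluster in one pass, the same check can be run against every brick from a single shell. A hedged sketch; the host names beyond nybaknode1 are hypothetical:

    for h in nybaknode1 nybaknode2 nybaknode3; do
        echo "== $h =="
        ssh "$h" df -i /lvbackups/brick
    done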
2023 May 02
1
'error=No space left on device' but, there is plenty of space all nodes
Hi Gluster users, we are seeing an 'error=No space left on device' issue and hoping someone might be able to advise? We have been using a 12-node GlusterFS v10.4 distributed vsftpd backup cluster for years (not new), and 2 weeks ago we upgraded from v9 to v10.4. I do not know if the upgrade is related to this new issue. We are seeing a new 'error=No space left on device' error
2023 May 04
1
'error=No space left on device' but, there is plenty of space all nodes
Hi, have you checked inode usage (df -i /lvbackups/brick)? Best Regards, Strahil Nikolov

On Tuesday, May 2, 2023, 3:05 AM, brandon at thinkhuge.net wrote:
> Hi Gluster users, we are seeing an 'error=No space left on device' issue and hoping
> someone might be able to advise? We have been using a 12-node GlusterFS v10.4
> distributed vsftpd backup cluster for years (not new) and recently, 2 weeks ago,
2006 Mar 20
1
Asterisk with Realtime
Hi, I am working with Asterisk with MySQL Realtime. When I have configured and run Asterisk I am getting the following error. I have dug everywhere for help and could not find a solution; could someone help me with what is wrong? I am using 1.2.5 on FC4.

Mar 20 23:04:52 NOTICE[2054] cdr.c: CDR simple logging enabled.
Mar 20 23:04:52 NOTICE[2054] indications.c: Removed default indication country
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi Chris, here is a link to the settings needed for VM storage: https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4 You can also ask in ovirt-users for real-world settings. Test well before changing production!!! IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED !!! Best Regards, Strahil Nikolov On Mon, Jun 5, 2023 at 13:55,
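Rather than setting each option from the linked file by hand, gluster's group mechanism can apply them in one step. A minimal sketch; "myvol" is a hypothetical volume name:

    gluster volume set myvol group virt   # applies the options from /var/lib/glusterd/groups/virt

Note that this is the step that enables sharding, which, as stated above, cannot be disabled afterwards, so test on a scratch volume first.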
2023 Jun 01
1
Using glusterfs for virtual machines with qcow2 images
Chris: While I don't know the issue nor the root cause of your problem using GlusterFS with Proxmox, I am going to guess that you might already know that Proxmox "natively" supports Ceph, which the Wikipedia article describes as a distributed object storage system. Maybe that might work better with Proxmox? Hope this helps. Sorry that I wasn't able
2023 Jun 01
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
We use qcow2 with libvirt-based KVM on many small clusters and have found it to be extremely reliable, though maybe not the fastest; some of that is because most of our storage is SATA SSDs in a software RAID1 config for each brick. What problems are you running into? You just mention 'problems'. -wk

On 6/1/23 8:42 AM, Christian Schoepplein wrote:
> Hi,
>
> we'd like to use
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi everybody. Regarding the issue with the mount, usually I am using this systemd service to bring up the mount points:

/etc/systemd/system/glusterfsmounts.service

[Unit]
Description=Glustermounting
Requires=glusterd.service
Wants=glusterd.service
After=network.target network-online.target glusterd.service

[Service]
Type=simple
RemainAfterExit=true
ExecStartPre=/usr/sbin/gluster volume list
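Once a unit file like this is in place, it is activated with the usual systemd steps; a short sketch of the standard commands:

    systemctl daemon-reload                          # pick up the new unit file
    systemctl enable --now glusterfsmounts.service   # enable at boot and start immediately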
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
On oVirt / Red Hat Virtualization, the following Gluster volume settings are recommended to be applied (preferably at the creation of the volume). These settings are important for data reliability (note that Replica 3 or Replica 2+1 is expected):

performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
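Each of these is applied with gluster volume set. A hedged sketch that loops over the list above; "myvol" is a hypothetical volume name:

    VOL=myvol
    for opt in performance.quick-read=off performance.read-ahead=off \
               performance.io-cache=off performance.low-prio-threads=32 \
               network.remote-dio=enable; do
        gluster volume set "$VOL" "${opt%%=*}" "${opt#*=}"
    done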
2015 Jul 09
0
built kernel-3.10.0-229.7.2.el7 OK but install fails
On Thu, Jul 9, 2015 at 3:05 PM, Nicholas Geovanis <nickgeovanis at gmail.com> wrote:
> Hi all -
> First the boilerplate:
> On centos-release.x86_64 7-0.1406.el7.centos.2.3
> [root at localhost x86_64]# uname -a
> Linux localhost.localdomain 3.10.0-123.el7.x86_64 #1 SMP Mon Jun 30
> 12:09:22 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
> [root at localhost
2015 Jul 10
0
built kernel-3.10.0-229.7.2.el7 OK but install fails
Thanks! Up on kernel 3.10.0-229.7.2.el7.centos. Page http://wiki.centos.org/HowTos/Custom_Kernel specifically states using rpm and not yum for the new kernel install, so perhaps that page needs a slight revision for 7... Nick G

On Thu, Jul 9, 2015 at 5:05 PM, Nicholas Geovanis <nickgeovanis at gmail.com> wrote:
> Hi all -
> First the boilerplate:
> On centos-release.x86_64
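The rpm-not-yum advice matters because rpm -ivh installs the new kernel alongside the running one, leaving a bootable fallback, whereas an upgrade would replace it. A sketch; the exact package file name is an assumption based on the version in this thread:

    rpm -ivh kernel-3.10.0-229.7.2.el7.x86_64.rpm   # install, don't upgrade, so the old kernel remains bootable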
2015 Dec 13
0
Re: Need firewalld clue
On Sun, 13 Dec 2015 01:46, Nicholas Geovanis <nickgeovanis at ...> wrote:
> I don't really understand the intent behind firewalld. The RHEL7 Security
> Guide states "A graphical configuration tool, *firewall-config*, is used to
> configure firewalld, which in turn uses the *iptables tool* to communicate with
> *Netfilter* in the kernel which implements packet
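In day-to-day use that layering is driven through firewall-cmd, firewalld's command-line front end; a minimal sketch of typical usage:

    firewall-cmd --permanent --add-service=ssh   # persist a rule in the default zone
    firewall-cmd --reload                        # load the permanent config into the running firewall
    firewall-cmd --list-all                      # show the active zone's configuration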
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
Try turning off these options:

performance.write-behind
performance.flush-behind

---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram

On Fri, Jun 2, 2023 at 07:55, Guillaume Pavese <guillaume.pavese at interactiv-group.com> wrote:
> On oVirt / Redhat Virtualization,
> the following Gluster volumes settings are recommended to be applied
> (preferably at
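Assuming the standard gluster CLI, the two options above would be disabled like this (volume name "myvol" is hypothetical):

    gluster volume set myvol performance.write-behind off
    gluster volume set myvol performance.flush-behind off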
2023 Jun 05
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
Hi Gilberto, hi all, thanks a lot for all your answers. At first I changed both settings mentioned below, and the first tests look good. Before changing the settings I was able to crash a newly installed VM every time after a fresh installation by producing a lot of I/O, e.g. when installing LibreOffice. This always resulted in corrupt files inside the VM, but researching the qcow2 file with the
2023 Jun 01
3
Using glusterfs for virtual machines with qcow2 images
Hi, we'd like to use GlusterFS for Proxmox and virtual machines with qcow2 disk images. We have a three-node GlusterFS setup with one volume; Proxmox is attached, and VMs are created, but after some time, and I think after much I/O is going on in a VM, the data inside the virtual machine gets corrupted. When I copy files from or to our GlusterFS directly everything is OK, I've
2012 Oct 09
2
Mount options for NFS
We're experiencing problems with some legacy software when it comes to NFS access. Even though files are visible in a terminal and can be accessed with standard shell tools and vi, this software typically complains that the files are empty or not syntactically correct. The NFS filesystems in question are 8TB+ XFS filesystems mounted with
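A hedged example of what such a mount might look like in /etc/fstab; server name, export path, and option values are hypothetical, chosen to show the knobs (NFS version, transport, and read/write sizes) usually examined first when legacy software misreads files:

    nfsserver:/export/data  /mnt/data  nfs  vers=3,tcp,hard,rsize=32768,wsize=32768  0 0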
2015 Jul 09
5
built kernel-3.10.0-229.7.2.el7 OK but install fails
Hi all - First the boilerplate: On centos-release.x86_64 7-0.1406.el7.centos.2.3

[root at localhost x86_64]# uname -a
Linux localhost.localdomain 3.10.0-123.el7.x86_64 #1 SMP Mon Jun 30 12:09:22 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
[root at localhost x86_64]# rpm -qa kernel\* | sort
kernel-3.10.0-123.el7.x86_64
kernel-devel-3.10.0-123.el7.x86_64
2017 Aug 18
4
Problem with softwareraid
Hello all, I have already had a discussion on the software RAID mailing list and I want to switch to this one :) I am having a really strange problem with my md0 device running CentOS 7. After a restart of my server the md0 was gone. Now, while trying to find the problem, I detected the following: booting any installed kernel gives me NO md0 device (ls /dev/md* doesn't return anything). A 'cat
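Commands commonly used to diagnose a missing md device; the member device names are assumptions for illustration:

    cat /proc/mdstat                       # the kernel's view of assembled arrays
    mdadm --examine /dev/sda1 /dev/sdb1    # read the RAID superblocks on the member partitions
    mdadm --assemble --scan                # try to assemble arrays from mdadm.conf and superblocks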
2013 Dec 10
0
Re: gentoo linux, problem starting VMs when cache=none
On Tue, Dec 10, 2013 at 03:21:59PM +0100, Marko Weber | ZBF wrote:
> Hello Daniel,
>
> On 2013-12-10 11:23, Daniel P. Berrange wrote:
> > On Tue, Dec 10, 2013 at 11:20:35AM +0100, Marko Weber | ZBF wrote:
> >>
> >> hello mailinglist,
> >>
> >> on gentoo system with qemu-1.6.1, libvirt 1.1.4, libvirt-glib-0.1.7,
> >> virt-manager 0.10.0-r1
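For context, cache=none is set per disk in the libvirt domain XML. A minimal sketch; the image path and target device are hypothetical:

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/guest.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>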