similar to: CentOS-announce Digest, Vol 9, Issue 13

Displaying 20 results from an estimated 2000 matches similar to: "CentOS-announce Digest, Vol 9, Issue 13"

2005 Nov 20
0
CEBA-2005:1120-1 CentOS 4 x86_64 xfsprogs - bugfix update (CENTOSPLUS Only)
CentOS Errata and Bugfix Advisory 2005:1120-1 CentOS 4 x86_64 xfsprogs - bugfix update (CENTOSPLUS Only) This package is for the version of xfsprogs that is included in the centosplus repo for CentOS-4 ... this is not an update to the main CentOS-4 repo. The SGI team has released a new xfsprogs SRPM for the XFS project. The following items have changed since the last release of this
2005 Nov 20
0
CEBA-2005:1120-1 CentOS 4 i386 xfsprogs - bugfix update (CENTOSPLUS Only)
CentOS Errata and Bugfix Advisory 2005:1120-1 CentOS 4 i386 xfsprogs - bugfix update (CENTOSPLUS Only) This package is for the version of xfsprogs that is included in the centosplus repo for CentOS-4 ... this is not an update to the main CentOS-4 repo. The SGI team has released a new xfsprogs SRPM for the XFS project. The following items have changed since the last release of this package:
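The centosplus repo is disabled by default on CentOS-4, so picking up this xfsprogs build means enabling it explicitly. A minimal sketch, assuming the stock repo id "centosplus":

  # enable centosplus just for this transaction
  yum --enablerepo=centosplus update xfsprogs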
2012 Jul 09
1
[PATCH] NEW API: add new api xfs_info
Add xfs_info to show the geometry of the XFS filesystem. Signed-off-by: Wanlong Gao <gaowanlong at cn.fujitsu.com> --- Hi Rich, This patch adds xfs_info and starts the XFS support work. I'd like to add XFS support: xfs_growfs, xfs_io, xfs_db, xfs_repair, etc. Any thoughts? Thanks, Wanlong Gao daemon/Makefile.am | 1 + daemon/xfs.c | 69
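Assuming the binding follows the usual libguestfs naming (daemon API xfs_info exposed in guestfish as xfs-info), a usage sketch might look like this; disk.img and /dev/sda1 are placeholders:

  guestfish --ro -a disk.img <<'EOF'
  run
  # xfs-info takes a mounted path or a device containing an XFS filesystem
  mount /dev/sda1 /
  xfs-info /
  EOF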
2012 Aug 21
1
[PATCH] xfs: add a new api xfs_repair
Add a new API, xfs_repair, for repairing an XFS filesystem. Signed-off-by: Wanlong Gao <gaowanlong at cn.fujitsu.com> --- daemon/xfs.c | 116 +++++++++++++++++++++++++++++++++++++++++ generator/generator_actions.ml | 23 ++++++++ gobject/Makefile.inc | 6 ++- po/POTFILES | 1 + src/MAX_PROC_NR | 2 +- 5 files changed, 145
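A hedged sketch of invoking the corresponding guestfish command, assuming it is exposed as xfs-repair and, like the underlying xfs_repair tool, operates on an unmounted device (disk.img and /dev/sda1 are placeholders):

  guestfish -a disk.img <<'EOF'
  run
  # the filesystem must not be mounted while being repaired
  xfs-repair /dev/sda1
  EOF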
2009 Dec 05
1
Building xfsprogs can't find valid UUID library
On CentOS 5.3 x86_64, I'm trying to build xfsprogs-2.9.4-1.el5.centos.x86_64 with -ggdb so I can use it with gdb and examine the data structures when using xfs_db. I've installed the src rpm as a non-root user, and when I run rpmbuild -bc, during the ./configure stage I get this error: . . . checking uuid.h usability... no checking uuid.h presence... no checking for uuid.h... no checking
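The usual cause on CentOS 5 is simply a missing development package: uuid.h ships in e2fsprogs-devel there (libuuid-devel on later releases). A sketch of the likely fix, assuming that is indeed what configure is missing:

  # provides /usr/include/uuid/uuid.h on CentOS 5
  yum install e2fsprogs-devel
  # then re-run the build from your rpmbuild topdir
  rpmbuild -bc SPECS/xfsprogs.spec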
2015 Aug 06
1
xfs quota weirdness
Hi all, I have a quota problem with XFS (xfsprogs 3.1.7+b1 on Debian GNU/Linux 7 -- wheezy) and samba-4.1.19. If I set a user quota to, say, 10GB, Windows Explorer reports a 20GB quota of which none is used. If I change the quota to x, Windows Explorer reports 2x space of which none is used. So I assume Samba is somehow getting (albeit incomplete and incorrect) XFS quota info from the operating system. disks
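When debugging a mismatch like this, it helps to confirm what XFS itself reports before blaming Samba; a sketch, with /srv/share as a placeholder mount point:

  # human-readable per-user block quota report straight from XFS
  xfs_quota -x -c 'report -h -u' /srv/share

If XFS reports the expected 10GB limit, the doubling is happening in Samba's translation of the limits into the Windows quota fields rather than in the filesystem.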
2017 Mar 12
0
[PATCH v4 3/7] New API: yara_load
The yara_load API allows loading a set of YARA rules contained within a file on the host. Rules can be in binary format, as when compiled with the yarac command, or in source code format. In the latter case, the rules will be compiled first and then loaded. Subsequent calls of the yara_load API will discard the previously loaded rules. Signed-off-by: Matteo Cafasso
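Assuming the guestfish binding follows the usual naming (yara_load exposed as yara-load, taking a rules file that lives on the host), a minimal usage sketch; disk.img and rules.yar are placeholders:

  guestfish --ro -a disk.img <<'EOF'
  run
  # rules.yar is read from the host; compiled (yarac) or source rules per the description above
  yara-load /tmp/rules.yar
  EOF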
2006 Sep 19
1
Hung XFS filesystems on Samba server
This is probably a hardware problem, but I am posting here in case anyone else has seen it or it is actually a software issue. If you have seen anything like it, please let me know. Chuck. For the last 1.5 years I have had occasional problems on a large (6.8 TB) Samba server. Two of the mounted filesystems will partially dismount at intervals between 3 days and 3 months. Files will still be open but any
2017 Apr 04
0
[PATCH v5 3/7] New API: yara_load
The yara_load API allows loading a set of YARA rules contained within a file on the host. Rules can be in binary format, as when compiled with the yarac command, or in source code format. In the latter case, the rules will be compiled first and then loaded. Subsequent calls of the yara_load API will discard the previously loaded rules. Signed-off-by: Matteo Cafasso
2024 Oct 13
1
XFS corruption reported by QEMU virtual machine with image hosted on gluster
Hello all! We are experiencing a strange problem with QEMU virtual machines where the virtual machine image is hosted on a gluster volume. Access via fuse. (Our GFAPI attempt failed, it doesn't seem to work properly with current QEMU/distro/gluster). We have the volume tuned for 'virt'. So we use qemu-img to create a raw image. You can use sparse or falloc with equal results. We start a virtual
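For context, a sketch of the image-creation step described, assuming the gluster volume is FUSE-mounted at /mnt/gv0 (path, image name, and size are placeholders):

  # raw image on the FUSE-mounted volume; the report says sparse and falloc behave the same
  qemu-img create -f raw -o preallocation=falloc /mnt/gv0/vm01.img 40G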
2024 Oct 14
1
XFS corruption reported by QEMU virtual machine with image hosted on gluster
Hey Erik, I am running a similar setup with no issues, with Ubuntu host systems on HPE DL380 Gen 10. However, I used to run libvirt/qemu via nfs-ganesha on top of gluster flawlessly. Recently I upgraded to the native GFAPI implementation, which is poorly documented, with snippets all over the internet. I cannot provide a direct solution for your issue, but I suggest trying
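For anyone attempting the native GFAPI path, the libvirt disk definition is roughly the following sketch (volume, image name, and host are placeholders, and the exact syntax may vary by libvirt version):

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='gluster' name='gv0/vm01.img'>
      <host name='gluster1.example.com' port='24007'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>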
2024 Oct 14
1
XFS corruption reported by QEMU virtual machine with image hosted on gluster
First, a heartfelt thanks for writing back. In a solution (not having this issue) we do use nfs-ganesha to host squashfs root-FS objects for compute nodes. It is working great. We also have fuse-through-LIO. The setup here is 3 servers making up the cluster, with an admin node. The XFS issue is only observed when we try to replace an existing one with another XFS on top, and only with RAW,
2020 Nov 06
1
Centos 8 and xfs_quota
Folks, I'm trying to use xfs_quota to keep track of disk space usage for my users. The documentation states that I should specify "uquota" as a mount option in /etc/fstab. Yet I cannot find the entry in fstab that corresponds to the Logical Volume that ends up being mounted on /home1. The system in question was installed on a single disk system. Later, after
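On a stock CentOS 8 install the LVM device behind /home1 can be located with findmnt, after which uquota goes into that entry's option list. A sketch; the device name below is a placeholder:

  # find the device actually mounted on /home1
  findmnt /home1
  # then add uquota to its options in /etc/fstab, e.g.:
  /dev/mapper/cl-home1  /home1  xfs  defaults,uquota  0 0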
2015 Mar 27
1
xfs_quotas, [SOLVED]
Y'all know I've been fighting this, on and off, for months. The last few days I've done a *lot* of googling, and finally got a clue from a reply in a thread I found, where someone noted that you CANNOT enable things like pquota on an XFS filesystem with mount -o remount; you *MUST* umount it, then mount it. In spite of mount showing the quota option for the filesystem, it never worked. I
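In other words, the working sequence is a full unmount/mount cycle; a sketch with /srv as a placeholder mount point:

  # mount -o remount,pquota /srv    <-- does NOT enable project quotas on XFS
  umount /srv
  mount -o pquota /srv
  # verify project quotas are now active
  xfs_quota -x -c 'report -p' /srv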
2010 Apr 13
2
XFS filesystem corrupted by defragmentation Was: Performance problems with XFS on CentOS 5.4
Before I'd try to defragment my whole filesystem (see attached mail for whole story) I figured "Let's try it on some file". So I did > xfs_bmap /raid/Temp/someDiskimage.iso [output shows 101 extents and 1 hole] Then I defragmented the file > xfs_fsr /raid/Temp/someDiskimage.iso extents before:101 after:3 DONE > xfs_bmap /raid/Temp/someDiskimage.iso [output shows 3
2007 Mar 05
1
Deletion of xattrs doesn't sync
Is there an option for deleting xattrs, the same as --delete for files? It seems deletion of an xattr doesn't sync; however, it is synced when the file's data changes. See the example below. Maybe this is a bug, or maybe this is OK? System: Gentoo, attr-2.4.32, rsync from today's CVS. -- Regards Stanislaw Gruszka stasiu@sg /mnt/hda5/export $ echo "data" > file stasiu@sg /mnt/hda5/export $
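For reference, carrying xattrs at all requires rsync's -X/--xattrs flag; a sketch of the invocation in question, with placeholder paths. Whether a deleted xattr propagates was exactly the question here; in later rsync releases -X is expected to also remove receiver-side xattrs that no longer exist on the sender:

  # -a for the usual recursive copy, -X to sync extended attributes
  rsync -aX --delete /mnt/hda5/export/ /mnt/backup/export/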
2023 May 04
1
'error=No space left on device' but, there is plenty of space all nodes
Hi Strahil and Gluster users, Yes, I had checked, but I checked again: only 1% inode usage, 99% free. Same on every node. Example: [root at nybaknode1 ]# df -i /lvbackups/brick Filesystem Inodes IUsed IFree IUse% Mounted on /dev/mapper/vgbackups-lvbackups 3108921344 93602 3108827742 1% /lvbackups [root at nybaknode1 ]# I neglected to clarify in
2023 May 02
1
'error=No space left on device' but, there is plenty of space all nodes
Hi Gluster users, We are seeing an 'error=No space left on device' issue and hoping someone might be able to advise. We have been running a 12-node glusterfs distributed vsftpd backup cluster for years (not new), and 2 weeks ago upgraded from v9 to v10.4. I do not know if the upgrade is related to this new issue. We are seeing a new 'error=No space left on device' error
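Useful first checks on each brick node for a phantom ENOSPC, as a sketch (brick path and volume name are placeholders):

  df -h /lvbackups/brick                  # block usage
  df -i /lvbackups/brick                  # inode usage
  gluster volume status vbackups detail   # per-brick free space as gluster sees it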
2023 May 04
1
'error=No space left on device' but, there is plenty of space all nodes
Hi, have you checked inode usage (df -i /lvbackups/brick)? Best Regards, Strahil Nikolov On Tuesday, May 2, 2023, 3:05 AM, brandon at thinkhuge.net wrote: Hi Gluster users, We are seeing 'error=No space left on device' issue and hoping someone might could advise? We are using a 12 node glusterfs v10.4 distributed vsftpd backup cluster for years (not new) and recently 2 weeks ago
2005 Aug 10
1
Why only a "" string as the heading for row.names with write.csv on a matrix?
Consider: > x <- matrix(1:6, 2,3) > rownames(x) <- c("ID1", "ID2") > colnames(x) <- c("Attr1", "Attr2", "Attr3") > x Attr1 Attr2 Attr3 ID1 1 3 5 ID2 2 4 6 > write.csv(x,file="x.csv") "","Attr1","Attr2","Attr3" "ID1",1,3,5
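The empty "" heading is write.csv's convention for the row-names column (read.csv recognizes a header that is one field short and restores the row names from the first column). If an explicit header is wanted, one workaround sketch is to move the row names into a real column first:

  Rscript -e '
  x <- matrix(1:6, 2, 3)
  rownames(x) <- c("ID1", "ID2")
  colnames(x) <- c("Attr1", "Attr2", "Attr3")
  # give the row-name column a real header instead of ""
  write.csv(data.frame(ID = rownames(x), x), file = "x.csv", row.names = FALSE)
  '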