Displaying 15 results from an estimated 15 matches for "xfs_db".
2010 Apr 13
2
XFS-filesystem corrupted by defragmentation Was: Performance problems with XFS on Centos 5.4
...]
Then I defragmented the file
> xfs_fsr /raid/Temp/someDiskimage.iso
extents before:101 after:3 DONE
> xfs_bmap /raid/Temp/someDiskimage.iso
[output shows 3 extents and 1 hole]
and now comes the bummer: I wanted to check the fragmentation of the
whole filesystem (just as a check):
> xfs_db -r /dev/mapper/VolGroup00-LogVol04
xfs_db: unexpected XFS SB magic number 0x00000000
xfs_db: read failed: Invalid argument
xfs_db: data size check failed
cache_node_purge: refcount was 1, not zero (node=0x2a25c20)
xfs_db: cannot read root inode (22)
THAT output was definitely not there when I did t...
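For reference, a read-only fragmentation report normally comes from
xfs_db's frag command rather than bare xfs_db; a minimal sketch,
assuming a placeholder device node:
> xfs_db -r -c frag /dev/sdX1
# prints a one-line "actual N, ideal N, fragmentation factor N%" summary
The "unexpected XFS SB magic number" errors above mean xfs_db found no
superblock where it expected one, so no frag report would follow either.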
2005 Nov 21
0
CentOS-announce Digest, Vol 9, Issue 13
...has released a new xfsprogs SRPM for the XFS project.
The following items have changed since the last release of this package:
xfsprogs-2.7.3 (29 September 2005)
- Fix xfs_repair handling of the morebits bit.
- Merge back several kernel changes related to attr2.
- Extended xfs_db expert mode commands
- Clean up some fsxattr uses to reduce number of syscalls,
now that IRIX also supports project identifiers via this
interface.
xfsprogs-2.7.2 (28 September 2005)
- Fix up xfs_repair segmentation fault due to wrong allocation
size....
2009 Dec 05
1
Building xfsprogs can't find valid UUID library
On CentOS 5.3 x86_64, I'm trying to build
xfsprogs-2.9.4-1.el5.centos.x86_64 with -ggdb so I can use it with gdb
and examine the data structures when using xfs_db. I've installed the
src rpm as a non-root user, and when I run rpmbuild -bc I get this
error during ./configure:
.
.
.
checking uuid.h usability... no
checking uuid.h presence... no
checking for uuid.h... no
checking sys/uuid.h usability... no
checking sys/uuid.h presence... no
checking for...
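On CentOS 5 the uuid development headers ship in e2fsprogs-devel rather
than a standalone libuuid package, so failing checks like these usually
mean that package is missing. A hedged sketch of the fix (the optflags
override shown is one common way to get -ggdb into an rpmbuild):
> yum install e2fsprogs-devel
> rpmbuild -bc --define 'optflags -O2 -ggdb' SPECS/xfsprogs.spec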
2005 Nov 20
0
CEBA-2005:1120-1 CentOS 4 x86_64 xfsprogs - bugfix update (CENTOSPLUS Only)
...has released a new xfsprogs SRPM for the XFS project.
The following items have changed since the last release of this package:
xfsprogs-2.7.3 (29 September 2005)
- Fix xfs_repair handling of the morebits bit.
- Merge back several kernel changes related to attr2.
- Extended xfs_db expert mode commands
- Clean up some fsxattr uses to reduce number of syscalls,
now that IRIX also supports project identifiers via this
interface.
xfsprogs-2.7.2 (28 September 2005)
- Fix up xfs_repair segmentation fault due to wrong allocation
size....
2005 Nov 20
0
CEBA-2005:1120-1 CentOS 4 i386 xfsprogs - bugfix update (CENTOSPLUS Only)
...has released a new xfsprogs SRPM for the XFS project.
The following items have changed since the last release of this package:
xfsprogs-2.7.3 (29 September 2005)
- Fix xfs_repair handling of the morebits bit.
- Merge back several kernel changes related to attr2.
- Extended xfs_db expert mode commands
- Clean up some fsxattr uses to reduce number of syscalls,
now that IRIX also supports project identifiers via this
interface.
xfsprogs-2.7.2 (28 September 2005)
- Fix up xfs_repair segmentation fault due to wrong allocation
size....
2012 Jul 09
1
[PATCH] NEW API: add new api xfs_info
Add xfs_info to show the geometry of the xfs filesystem.
Signed-off-by: Wanlong Gao <gaowanlong at cn.fujitsu.com>
---
Hi Rich,
This patch adds xfs_info and starts the
xfs support work.
I'd like to add xfs support generally:
xfs_growfs, xfs_io, xfs_db, xfs_repair, etc.
Any thoughts?
Thanks,
Wanlong Gao
daemon/Makefile.am | 1 +
daemon/xfs.c | 69 ++++++++++++++++++++++++++++++++++++++++++
generator/generator_actions.ml | 6 ++++
src/MAX_PROC_NR | 2 +-
5 files changed, 78 insertions(+), 2 deleti...
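Assuming the binding goes through the usual libguestfs generator
plumbing, the new call would be exercised from guestfish roughly like
this (hypothetical image and device names):
> guestfish --ro -a disk.img run : xfs_info /dev/sda1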
2024 Oct 13
1
XFS corruption reported by QEMU virtual machine with image hosted on gluster
...not mount and will report XFS corruption.
If you dig into XFS repair, you can find a UUID mismatch between the superblock and the log. The log always retains the UUID of the original filesystem (the one we tried to replace). Running xfs_repair doesn't truly repair; it just reports more corruption. Forcing xfs_db to remake the log doesn't help.
We can duplicate this even with a QEMU raw image of 50 megabytes. As far as we can tell, XFS is the only filesystem showing this behavior, or at least the only one reporting a problem.
If we take QEMU out of the picture and create partitions directly on the...
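For context, the stock xfsprogs answers to a superblock/log UUID
mismatch are xfs_repair -L, which force-zeros the log so it is rebuilt
with the superblock's UUID, and xfs_admin -U, which rewrites the UUID
outright. A minimal sketch against a placeholder device (zeroing the
log discards any unreplayed log updates):
> xfs_repair -L /dev/vdb1
> xfs_admin -U generate /dev/vdb1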
2024 Oct 14
1
XFS corruption reported by QEMU virtual machine with image hosted on gluster
...report XFS corruption.
> If you dig into XFS repair, you can find a UUID mismatch between the
> superblock and the log. The log always retains the UUID of the
> original filesystem (the one we tried to replace). Running xfs_repair
> doesn't truly repair; it just reports more corruption. Forcing xfs_db
> to remake the log doesn't help.
>
> We can duplicate this even with a QEMU raw image of 50 megabytes. As
> far as we can tell, XFS is the only filesystem showing this behavior,
> or at least the only one reporting a problem.
>
> If we take QEMU out of the picture a...
2024 Oct 14
1
XFS corruption reported by QEMU virtual machine with image hosted on gluster
...not mount and will report XFS corruption.
If you dig into XFS repair, you can find a UUID mismatch between the superblock and the log. The log always retains the UUID of the original filesystem (the one we tried to replace). Running xfs_repair doesn't truly repair; it just reports more corruption. Forcing xfs_db to remake the log doesn't help.
We can duplicate this even with a QEMU raw image of 50 megabytes. As far as we can tell, XFS is the only filesystem showing this behavior, or at least the only one reporting a problem.
If we take QEMU out of the picture and create partitions directly on the...
2015 Feb 28
9
Looking for a life-saving LVM Guru
Dear All,
I am in desperate need for LVM data rescue for my server.
I have an VG call vg_hosting consisting of 4 PVs each contained in a
separate hard drive (/dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1).
And this LV: lv_home was created to use all the space of the 4 PVs.
Right now, the third hard drive is damaged, and therefore the third PV
(/dev/sdc1) cannot be accessed anymore. I would like
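The usual first step with a missing PV is partial activation so the
surviving data can be imaged; a hedged sketch using the VG/LV names
from the post (on older lvm2 the activation flag is spelled --partial,
and the backup path is a placeholder):
> vgchange -ay --activationmode partial vg_hosting
> dd if=/dev/vg_hosting/lv_home of=/backup/lv_home.img bs=1M conv=noerror,sync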
2017 Oct 23
0
gfid entries in volume heal info that do not heal
I'm not so lucky. ALL of mine show 2 links and none have the attr data
that supplies the path to the original.
I have the inode from stat. Looking now to dig out the path/filename
from xfs_db on the specific inodes individually.
Is the hash of the filename or of <path>/filename, and if the latter,
relative to where? /, <path from top of brick>, ?
On Mon, 2017-10-23 at 18:54 +0000, Matt Waymack wrote:
> In my case I was able to delete the hard links in the .glusterfs
> folders of th...
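Mapping an inode number back to its paths does not strictly require
xfs_db: find's -inum test reaches the same answer, just more slowly,
and it reports every hard link. A sketch with a placeholder brick path
and inode number:
> find /bricks/brick1 -inum 132362332
The xfs_db route is its ncheck command (run after blockget -n), which
dumps inode-to-pathname mappings filesystem-wide.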
2017 Oct 24
3
gfid entries in volume heal info that do not heal
...On Tue, Oct 24, 2017 at 3:28 AM, Jim Kinney <jim.kinney at gmail.com> wrote:
> I'm not so lucky. ALL of mine show 2 links and none have the attr data
> that supplies the path to the original.
>
> I have the inode from stat. Looking now to dig out the path/filename from
> xfs_db on the specific inodes individually.
>
> Is the hash of the filename or of <path>/filename, and if the latter,
> relative to where? /, <path from top of brick>, ?
>
> On Mon, 2017-10-23 at 18:54 +0000, Matt Waymack wrote:
>
> In my case I was able to delete the hard links in the...
2017 Oct 24
0
gfid entries in volume heal info that do not heal
...gmail.com>
> wrote:
> >
> >
> >
> > I'm not so lucky. ALL of mine show 2 links and none have the attr
> > data that supplies the path to the original.
> >
> > I have the inode from stat. Looking now to dig out the
> > path/filename from xfs_db on the specific inodes individually.
> >
> > Is the hash of the filename or of <path>/filename, and if the
> > latter, relative to where? /, <path from top of brick>, ?
> >
> > On Mon, 2017-10-23 at 18:54 +0000, Matt Waymack wrote:
> > > In my case I was able...
2017 Nov 06
0
gfid entries in volume heal info that do not heal
...gmail.com>
> wrote:
> >
> >
> >
> > I'm not so lucky. ALL of mine show 2 links and none have the attr
> > data that supplies the path to the original.
> >
> > I have the inode from stat. Looking now to dig out the
> > path/filename from xfs_db on the specific inodes individually.
> >
> > Is the hash of the filename or of <path>/filename, and if the
> > latter, relative to where? /, <path from top of brick>, ?
> >
> > On Mon, 2017-10-23 at 18:54 +0000, Matt Waymack wrote:
> > > In my case I was able...
2017 Oct 23
2
gfid entries in volume heal info that do not heal
In my case I was able to delete the hard links in the .glusterfs folders of the bricks and it seems to have done the trick, thanks!
From: Karthik Subrahmanya [mailto:ksubrahm at redhat.com]
Sent: Monday, October 23, 2017 1:52 AM
To: Jim Kinney <jim.kinney at gmail.com>; Matt Waymack <mwaymack at nsgdv.com>
Cc: gluster-users <Gluster-users at gluster.org>
Subject: Re: