Displaying 7 results from an estimated 7 matches for "xfs_bmap".
2010 Apr 13
2
XFS-filesystem corrupted by defragmentation Was: Performance problems with XFS on Centos 5.4
Before defragmenting my whole filesystem (see the attached mail
for the whole story), I figured "Let's try it on some file first".
So I did
> xfs_bmap /raid/Temp/someDiskimage.iso
[output shows 101 extents and 1 hole]
Then I defragmented the file
> xfs_fsr /raid/Temp/someDiskimage.iso
extents before:101 after:3 DONE
> xfs_bmap /raid/Temp/someDiskimage.iso
[output shows 3 extents and 1 hole]
And now comes the bummer: I wanted to check the...
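(The snippet is cut off here; judging by the subject line, the check was presumably a data-integrity comparison. A minimal sketch of that kind of verification, assuming md5sum is used and reusing the path from the post, might look like this:)

# record a checksum before touching the file
md5sum /raid/Temp/someDiskimage.iso > /tmp/before.md5
# inspect the extent layout, then defragment just this one file
xfs_bmap -v /raid/Temp/someDiskimage.iso
xfs_fsr -v /raid/Temp/someDiskimage.iso
# the contents should still match; a mismatch would confirm corruption
md5sum -c /tmp/before.md5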
2013 Jul 03
1
Recommended filesystem for GlusterFS bricks.
Hi,
Which filesystem is recommended for the bricks in GlusterFS? XFS, EXT3, EXT4, etc.?
Thanks & Regards,
Bobby Jacob
Senior Technical Systems Engineer | eGroup
2013 Feb 05
1
Destination file a lot larger then source (real size)
I have a script that syncs my backups to an NFS mount every day.
The script works fine, without any errors, but there is a problem when
it comes to some large files.
Let's take my PST file (8.9 GB) as an example.
Source:
du -hs mypst.pst
8.9G mypst.pst
ls -alh mypst.pst
-rw-rw---- 1 me me 8.9G Jan 25 17:07 mypst.pst
That seems OK.
Let's do the same on the destination:
du -hs mypst.pst
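(The destination-side output is cut off above. One common cause of a copy occupying more space on disk than the source is a sparse source file that gets written out fully allocated on the destination; assuming that is what is happening here, a sketch of how to compare apparent size with allocated space and how to preserve holes during the sync, with an illustrative destination path:)

# apparent size (what ls reports) vs. blocks actually allocated
du -hs --apparent-size mypst.pst
du -hs mypst.pst
stat -c '%s bytes apparent, %b blocks allocated' mypst.pst
# ask rsync to recreate holes on the destination (path is illustrative)
rsync -avS mypst.pst /mnt/nfs-backup/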
2012 Jun 11
3
centos 6.2 xfs + nfs space allocation
CentOS 6.2 system with an XFS filesystem.
I'm sharing this filesystem over NFS.
When I create a 10-gigabyte test file from an NFS client system:
dd if=/dev/zero of=10Gtest bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 74.827 s, 140 MB/s
Output from 'ls -al ; du' during this test:
-rw-r--r-- 1 root root 429170688 Jun 8 10:13 10Gtest
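(A small sketch, assuming one wants to watch how the apparent size, the allocated space and the extent map evolve while the dd is still running; run it on the NFS server, where the XFS filesystem is local:)

watch -n 5 "ls -al 10Gtest; du -h 10Gtest; xfs_bmap 10Gtest | tail -n 3"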
2009 Sep 08
1
3Ware 9650SE and XFS problems under Centos 5.3
...kernel:
Sep 7 21:08:19 backup kernel: Call Trace:
Sep 7 21:08:19 backup kernel: [<ffffffff882e1c7e>] :xfs:xfs_free_ag_extent+0x19f/0x67f
Sep 7 21:08:19 backup kernel: [<ffffffff882e356b>] :xfs:xfs_free_extent+0xa9/0xc9
Sep 7 21:08:19 backup kernel: [<ffffffff882f02ad>] :xfs:xfs_bmap_finish+0xf0/0x169
Sep 7 21:08:19 backup kernel: [<ffffffff8830de46>] :xfs:xfs_itruncate_finish+0x172/0x2b3
Sep 7 21:08:19 backup kernel: [<ffffffff88326f22>] :xfs:xfs_inactive+0x22e/0x821
Sep 7 21:08:19 backup kernel: [<ffffffff8832dd66>] :xfs:xfs_validate_fields+0x24/0x4b
S...
2012 Feb 23
1
default cluster.stripe-block-size for striped volumes on 3.0.x vs 3.3 beta (128KB), performance change if I reduce to a smaller block size?
Hi,
I've been migrating data from an old striped 3.0.x gluster install to
a 3.3 beta install. I copied all the data to a regular XFS partition
(4K block size) from the old gluster striped volume, and it totaled
9.2TB. With the old setup I used the following option in a "volume
stripe" block in the configuration file on a client:
volume stripe
type cluster/stripe
option
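(The quoted volfile fragment is cut off after "option". For illustration only, a hedged sketch of what a legacy 3.0.x client-side stripe block with an explicit block-size option typically looked like; the subvolume names are invented, and the 128KB value is taken from the subject line, not from the post itself:)

volume stripe
  type cluster/stripe
  # some 3.0.x builds expect a pattern form such as *:128KB
  option block-size 128KB
  subvolumes brick1 brick2
end-volume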
2012 Mar 15
2
Usage Case: just not getting the performance I was hoping for
All,
For our project, we bought 8 new Supermicro servers. Each server has a
quad-core Intel CPU in a 2U chassis holding 8 x 7200 RPM SATA drives.
To start out, we populated only 2 x 2TB enterprise drives in each
server and added all 8 peers, with their total of 16 drives, as bricks to
our gluster pool as distributed replicated (2). The replica pairing was as
follows:
1.1 -> 2.1
1.2
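(The replica-pair listing is cut off here. As a rough sketch of how such an 8-server distributed-replicated (2) pool is typically created with the gluster CLI; the hostnames and brick paths below are invented for illustration and show only one brick per server:)

gluster volume create gvol replica 2 \
  server1:/export/brick1 server2:/export/brick1 \
  server3:/export/brick1 server4:/export/brick1 \
  server5:/export/brick1 server6:/export/brick1 \
  server7:/export/brick1 server8:/export/brick1
gluster volume start gvol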