similar to: XFS : Taking the plunge

Displaying 20 results from an estimated 1000 matches similar to: "XFS : Taking the plunge"

2010 Feb 11
2
xfs_repair doesn't fix sb versionnum missing attr
I run: xfs_check /dev/sdc10 And it reports: sb versionnum missing attr bit 10 Then I run: xfs_repair /dev/sdc10 And it reports output from 7 phases and "done". Again I run: xfs_check /dev/sdc10 And it reports: sb versionnum missing attr bit 10 Is this how it's supposed to work? Thanks for any help. Linus
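One way to dig further into a report like this is to look at the raw superblock version bits directly. A minimal sketch, assuming the same device as the post; `xfs_db -r` and `xfs_repair -n` are both read-only, so neither modifies the filesystem:

```shell
# Print the superblock version-number field that xfs_check is complaining
# about; -r opens the device read-only.
xfs_db -r -c 'sb 0' -c 'p versionnum' /dev/sdc10

# Re-run verification without making any changes (-n = no modify mode),
# to see whether xfs_repair even considers this an error.
xfs_repair -n /dev/sdc10
```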
2009 May 14
5
Preventing hour-long fsck on ext3-filesystem
Hi! I'm just in the process of setting up a new fileserver for our company. I'm installing CentOS 5.3 (64 bit) on it. One of the "problems" with it is that it has a 3.5TB filesystem for the user data, which I formatted during setup as ext3. Now my experience with our current fileserver is that a fsck on a 0.5TB ext3 filesystem needs approx half an hour to complete (and kicks in
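The usual way to stop ext3's periodic boot-time fsck is to clear the mount-count and time-interval triggers with tune2fs. A sketch, assuming `/dev/sdb1` stands in for the 3.5TB data filesystem (the post does not name the device):

```shell
# Disable the forced periodic check: -c 0 clears the max-mount-count
# trigger, -i 0 clears the time-interval trigger.
tune2fs -c 0 -i 0 /dev/sdb1

# Verify the new settings.
tune2fs -l /dev/sdb1 | grep -Ei 'maximum mount count|check interval'
```

The trade-off is that the filesystem is then only ever checked when you run fsck by hand, so it pays to schedule one deliberately during planned maintenance.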
2002 Nov 10
2
taking the 3.0 plunge
I'm just curious: It seems that more and more people are using 3.0 (or 3.0a20). Obviously on this list we normally hear the worst (i.e. xxx doesn't work, why?). What I would like to know about are 3.0 success stories. Stability? Platform? Compiler? Gotchas encountered? Here is what I am considering: a Mandrake 9.0 box with its default gcc 3.2, I am still using a windows 2K
2007 Dec 17
4
take plunge and yum update to 4.6
Usually I am one of the first ones to do it, but I have resisted this time... It is almost the Holidays and our CentOS 4 boxes have been online for like 2 years or whatever. Has anyone boldly taken the plunge on any high-use and/or Internet-facing CentOS servers and done a yum update or a yum -y update without any problems at all? :-) - rh
2003 Feb 18
3
The Big Plunge
Hola folks, After a few years of slowly phasing in various Linux and BSD platforms, the company I work for is willing to take a hard look at replacing its existing Windows NT domain controllers with a Linux/Samba combination. We only have about sixty people in our main office, but most of my experience is with smaller deployments. I'm not looking for step-by-step instructions, that's
2015 Mar 23
5
xfs fsck error metadata corruption
Hi, Every time I restart CentOS 7 I receive an error saying the metadata is corrupt, and then I need to go through the process of mounting and unmounting the disk uuid, then run xfs_repair {some uuid} or xfs_repair -L {some uuid}, which ultimately corrupts even more. I'm running on a RAID 1 with two identical drives; this has happened more than once and I had to reinstall. Any way I can prevent this when I
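For reference, the recovery sequence the post describes normally looks like the sketch below (`/dev/sdX` is a placeholder; the post uses UUIDs). The key point is that `-L` is a last resort, not a first step, because it discards the journal:

```shell
# The filesystem must be unmounted before repair.
umount /dev/sdX

# Normal repair: replays the log first if it can, then fixes metadata.
xfs_repair /dev/sdX

# Only if xfs_repair refuses to run because the log is dirty and cannot
# be replayed: -L zeroes the log and can lose the most recent
# transactions, which may explain repairs that "corrupt even more".
xfs_repair -L /dev/sdX
```

If corruption reappears on every boot, the underlying cause (RAID, cabling, caches lying about write completion) usually matters more than the repair procedure itself.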
2013 Jul 03
1
Recommended filesystem for GlusterFS bricks.
Hi, Which is the recommended filesystem to be used for the bricks in GlusterFS? XFS/EXT3/EXT4 etc.? Thanks & Regards, Bobby Jacob Senior Technical Systems Engineer | eGroup
2013 Oct 29
1
XFS, inode64, and remount
Hi all, I was recently poking more into the inode64 mount option for XFS filesystems. I seem to recall a comment that you could remount a filesystem with inode64, but then a colleague ran into issues where he did that but was still out of inodes. So, I did more research, and found this posting to the XFS list: http://oss.sgi.com/archives/xfs/2008-05/msg01409.html So for people checking the
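A remount with inode64 can be attempted live, but the experience in the linked posting is that on some kernels it does not take effect, so it is worth verifying. A sketch, with `/data` as a placeholder mount point:

```shell
# Try to switch an already-mounted XFS filesystem to 64-bit inode numbers.
mount -o remount,inode64 /data

# Confirm whether the option actually took effect -- on older kernels the
# remount can be silently ignored, matching the colleague's experience of
# still running out of inodes afterwards.
grep ' /data ' /proc/mounts
```

When the remount does not stick, the reliable path is an unmount and a fresh mount with inode64 (or setting it in /etc/fstab and remounting at the next window).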
2012 Mar 02
1
xfs, inode64, and NFS
we recently deployed some large XFS file systems with CentOS 6.2 used as NFS servers... I've had some reports of a problem similar to the one reported here... http://www.linuxquestions.org/questions/red-hat-31/xfs-inode64-nfs-export-no_subtree_check-and-stale-nfs-file-handle-message-855844/ these reports are somewhat vague (third-hand, indirectly reported via internal corporate channels from
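The linked thread revolves around export options, so an /etc/exports fragment is the usual place to start. A sketch under stated assumptions: the path, client range, and fsid value are placeholders, not taken from the post:

```
# /etc/exports fragment (placeholder path and client range).
# no_subtree_check avoids filehandle invalidation when a file moves
# between directories, one reported trigger of "Stale NFS file handle";
# fsid=1 pins a stable filesystem identifier into the handle instead of
# deriving it from device numbers that can change across reboots.
/export/data  192.168.0.0/24(rw,no_subtree_check,fsid=1)
```

After editing, `exportfs -ra` re-reads the table without restarting the NFS server.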
2012 Aug 21
1
[PATCH] xfs: add a new api xfs_repair
Add a new api xfs_repair for repairing an XFS filesystem. Signed-off-by: Wanlong Gao <gaowanlong at cn.fujitsu.com> ---
 daemon/xfs.c                   | 116 +++++++++++++++++++++++++++++++++++++++++
 generator/generator_actions.ml |  23 ++++++++
 gobject/Makefile.inc           |   6 ++-
 po/POTFILES                    |   1 +
 src/MAX_PROC_NR                |   2 +-
 5 files changed, 145
2023 May 02
1
'error=No space left on device' but, there is plenty of space all nodes
Hi Gluster users, We are seeing an 'error=No space left on device' issue and hoping someone might be able to advise. We have been using a 12-node glusterfs v10.4 distributed vsftpd backup cluster for years (not new), and 2 weeks ago upgraded from v9 to v10.4. I do not know if the upgrade is related to this new issue. We are seeing a new 'error=No space left on device' error
2018 Mar 15
3
xfs file system errors
How do I fix an xfs file system error? I searched and it says to run xfs_repair /dev/sda1 - that did not work. I got an error on boot and the machine dropped into service mode after entering the PW. I entered the above command and it said it couldn't load a library... So I rebooted and dropped into rescue mode. Again I entered the command above and it said the same thing, something about could not load library
2023 May 04
1
'error=No space left on device' but, there is plenty of space all nodes
Hi, Have you checked inode usage (df -i /lvbackups/brick)? Best Regards, Strahil Nikolov On Tuesday, May 2, 2023, 3:05 AM, brandon at thinkhuge.net wrote: Hi Gluster users, We are seeing an 'error=No space left on device' issue and hoping someone might be able to advise. We are using a 12-node glusterfs v10.4 distributed vsftpd backup cluster for years (not new), and recently 2 weeks ago
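The check suggested in the reply distinguishes inode exhaustion from block exhaustion, since ENOSPC is returned for both. A sketch; substitute the brick path from the reply (/lvbackups/brick) for `/` as needed:

```shell
# Inode usage: the IUse% column shows whether inodes are exhausted.
df -i /

# Block usage for comparison: only one of the two needs to hit 100%
# for writes to start failing with ENOSPC.
df -h /
```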
2016 Oct 21
3
NFS help
On Fri, Oct 21, 2016 at 11:21 AM, <m.roth at 5-cent.us> wrote: > Larry Martell wrote: >> We have 1 system running CentOS 7 that is the NFS server. There are 50 >> external machines that FTP files to this server fairly continuously. >> >> We have another system running CentOS 6 that mounts the partition the files >> are FTP-ed to using NFS. > <snip>
2016 Oct 24
3
NFS help
On Fri, Oct 21, 2016 at 11:42 AM, <m.roth at 5-cent.us> wrote: > Larry Martell wrote: >> On Fri, Oct 21, 2016 at 11:21 AM, <m.roth at 5-cent.us> wrote: >>> Larry Martell wrote: >>>> We have 1 system running CentOS 7 that is the NFS server. There are 50 >>>> external machines that FTP files to this server fairly continuously. >>>>
2014 Feb 21
1
transparent_huge_pages problem (again?)
Hi, I've been experiencing problems with 6.5 guests on a 6.4 host when running hadoop with transparent_huge_pages enabled. As soon as I disable that feature everything returns to normal. I'm posting here because this issue came up in the past: http://bugs.centos.org/view.php?id=5716 That bug was closed with "resolved in EL6.4", but now it seems to have returned. Apparently there
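The runtime toggle the post alludes to can be sketched as follows (root required; paths are the standard sysfs locations, and this only lasts until the next reboot unless also set on the kernel command line with `transparent_hugepage=never`):

```shell
# Disable transparent hugepages and their defrag daemon at runtime.
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

# Show the current setting -- the active value appears in brackets.
cat /sys/kernel/mm/transparent_hugepage/enabled
```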
2015 Aug 04
3
xfs question
John R Pierce wrote: > On 8/4/2015 7:14 AM, m.roth at 5-cent.us wrote: >> >> CentOS 6.6 (well, just updated with CR). I have some xfs filesystems >> on a RAID. They've been mounted with the option of defaults. Will it break >> the whole thing if I now change that to inode64, or was that something >> I needed to do when the fs was created, or is there some
2019 Aug 28
1
xfs
I used to be able to 'repair' an ext4 fs issue (when the boot process drops you into emergency mode)... but now with xfs it does not seem to let me do that. xfs_repair /dev/sda3 and xfs_repair -L /dev/sda3 both say: fatal error -- couldn't initialize XFS library. How can I repair XFS without burning a DVD and finding an external USB reader etc... Thanks, Jerry
2012 Oct 09
2
Mount options for NFS
We're experiencing problems with some legacy software when it comes to NFS access. Even though files are visible in a terminal and can be accessed with standard shell tools and vi, this software typically complains that the files are empty or not syntactically correct. The NFS filesystems in question are 8TB+ XFS filesystems mounted with
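A common first experiment for "files look empty over NFS" symptoms is to tighten attribute-cache coherence on the client. A sketch of an /etc/fstab line; the server name and paths are placeholders, not taken from the post:

```
# /etc/fstab fragment (placeholder host and paths).
# actimeo=0 (or the stronger noac) disables attribute caching so the
# legacy application sees up-to-date file sizes instead of transiently
# zero-length files; expect a cost on metadata-heavy workloads.
nfsserver:/export/data  /mnt/data  nfs  rw,hard,actimeo=0  0 0
```

If actimeo=0 fixes it, the application is likely stat()-ing files in the window before the client refreshes cached attributes.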
2008 Jan 22
2
forced fsck (again?)
hello everyone. I guess this has been asked before, but I haven't found it in the FAQ. I have the following issue... it is not uncommon nowadays to have desktops with filesystems in the order of 500GB/1TB. Now, my kubuntu (but other distros do the same) forces a fsck on ext3 every so often, no matter what. In the past it wasn't a big issue, but with sizes increasing so much, users are
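The forced check is driven by per-filesystem counters that tune2fs can inspect and stretch, rather than disable outright. A sketch, with `/dev/sda1` as a placeholder for the desktop's root filesystem:

```shell
# See how close the filesystem is to its next forced check.
tune2fs -l /dev/sda1 | grep -Ei 'mount count|last checked|check interval'

# Stretch the schedule instead of turning it off: check every 100 mounts
# or every 6 months, whichever comes first.
tune2fs -c 100 -i 6m /dev/sda1
```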