similar to: [XFS] moving internal journal to external hard drive

Displaying 20 results from an estimated 80000 matches similar to: "[XFS] moving internal journal to external hard drive"

2013 Jun 14
1
Max XFS journal size in CentOS 6 = 2GB?
Hi all, I read that XFS now has a max journal size of 2GB rather than 128MB. Is this correct? Thanks in advance, - aurf
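A quick way to check the current log and to ask for a larger one at mkfs time, as a sketch (the mount point and device are placeholders; 2GB is the documented ceiling, not a default):

    # show the current log size (in filesystem blocks) for a mounted fs
    xfs_info /data | grep log
    # request a larger internal log when creating a new filesystem
    mkfs.xfs -l size=512m /dev/sdb1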
2011 Jan 11
4
ext4 or XFS
Hi all, I have a 30TB hardware-based RAID array and am wondering what you all think of using ext4 over XFS. I've been a big XFS fan for years, as I'm an Irix transplant, but would like your opinions. This 30TB volume will be an NFS-exported asset for my users, housing home dirs and other frequently accessed files. - aurf
2011 Jan 18
3
disk quotas + centos 5.5 + xfs
Hi all, is anyone aware of quotas not working in 5.5? I'm using XFS as the file system. My fstab has the appropriate usrquota,grpquota, but when I try to run quotacheck -cug /foo I get: quotacheck: Can't find filesystem to check or filesystem not mounted with quota option. I already have a large amount of data in the mount. Do quotas get implemented on only empty filesystems and
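Worth noting that XFS does not use quotacheck at all: quota accounting is switched on by the mount options and queried with xfs_quota, and existing data is accounted for automatically at mount. A sketch, assuming the /foo mount from above:

    # fstab entry enabling user and group quota accounting on XFS
    /dev/sdb1  /foo  xfs  defaults,usrquota,grpquota  0 0
    # after a full umount/mount cycle, report usage
    xfs_quota -x -c 'report -h' /foo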
2013 Jun 21
1
LVM + XFS + external log + snapshots
Hi all, So I have an XFS file system within LVM which has an external log. My mount options in fstab are: /dev/vg_spock_data/lv_data /data xfs logdev=/dev/sdc1,nobarrier,logbufs=8,noatime,nodiratime 1 1 All is well, no issues, and very fast. Now I'd like to snapshot this bad boy and then run rsnapshot to create a few days of backups. A snapshot volume is created w/o issue: lvcreate -L250G -s
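One hedged caveat with this layout: the external log on /dev/sdc1 sits outside LVM, so the snapshot captures only the data LV. A sketch of mounting such a snapshot read-only without log replay (nouuid avoids the duplicate-UUID clash; the snapshot name here is made up):

    lvcreate -L250G -s -n lv_data_snap /dev/vg_spock_data/lv_data
    mount -o ro,nouuid,norecovery,logdev=/dev/sdc1 \
        /dev/vg_spock_data/lv_data_snap /mnt/snap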
2001 Dec 11
1
More external journal woes.
I have been playing with external journals some more and thought I should share some experiences. I am running 2.4.16 with the ext3 patches from Andrew Morton and e2fsprogs 1.25. I have an ext3 filesystem on an 8-drive RAID5 array and placed the journal on a partition of the mirrored pair that I boot off (all drives SCSI). I have tried pulling the power cable and seeing what happens. I finally
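For anyone reproducing this, the setup described amounts to roughly the following sketch, with hypothetical device names (the journal device's block size must match the filesystem's, and the filesystem must be unmounted):

    # journal device on a partition of the boot mirror
    mke2fs -b 4096 -O journal_dev /dev/sda5
    # detach the internal journal from the RAID5 fs, attach the external one
    tune2fs -O ^has_journal /dev/md0
    tune2fs -j -J device=/dev/sda5 /dev/md0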
2004 Sep 11
2
External journal on flash drive
Hi, I'd like to use a flash drive as a journal device, with the purpose of keeping the main disk drive spun down as long as possible. I have a couple of questions: 1) Does the journaling code spread write accesses to the journal device evenly, as I hope, or are there blocks that are particularly "hot"? I.e., do I have to worry about the flash device dying quickly because of
2012 Aug 30
0
CentOS 6 - Preferable snapshots via LVM, EXT4 or XFS
Hi all, I read somewhere that, due to UUID conflicts, EXT4 + LVM snapshots are still the way to go in CentOS 6. I do love XFS but was wondering about your thoughts and experiences. - aurf
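The UUID conflict in question only bites XFS: an ext4 snapshot mounts as-is, while an XFS snapshot needs nouuid. A sketch with assumed VG/LV names:

    lvcreate -L50G -s -n home_snap /dev/vg0/lv_home
    mount /dev/vg0/home_snap /mnt/snap            # ext4: mounts directly
    mount -o nouuid /dev/vg0/home_snap /mnt/snap  # XFS: skip the UUID check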
2003 Jul 27
0
data=journal with large external journals
I have a heavily loaded Apache 2 server which is experiencing what appear to be "write storms". The server is a dual Xeon box with hyperthreading, and appears to Linux as a four-CPU box. It has 4GB of physical RAM and is never hitting swap. It's serving a large number of static image files off of four 135GB SCSI drives with external journals on a fifth volume. The journal volume
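One knob sometimes turned against bursty flushing under data=journal is the commit interval: shortening it trades large periodic bursts for smaller, more frequent ones. A hedged sketch of the fstab line (device and mount point assumed; the external journal itself is recorded in the superblock, not in fstab):

    /dev/sdb1  /images  ext3  data=journal,commit=1,noatime  1 2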
2013 Jun 27
1
15 min pause during boot - Setting up logical volume management
Hi all, I rebooted a server with a 20TB XFS volume under LVM and waited about 15 minutes for it to boot. It stays at: Setting up logical volume management for 15 min, then proceeds to boot fine. During this time, I see the 14 disks of the 20TB volume flashing quickly as though being read. Nothing in my logs indicates bad behavior. I am running the latest 6.4 kernel. Anyone seen this before? - aurf
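If the time is going into LVM scanning every block device, the usual mitigation is a device filter in /etc/lvm/lvm.conf; a sketch with placeholder PV paths (rebuild the initramfs afterwards so the filter also applies during early boot):

    devices {
        # scan only the known PVs, reject everything else
        filter = [ "a|^/dev/sd[b-o]1$|", "r|.*|" ]
    }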
2002 Dec 21
2
external journal
Hello! I have some questions about external journals. On a 4-disk software RAID 5 I have two big devices (one 60 GB, one 180 GB). I would like to give each an external journal on a 2-disk software RAID 1 device and use data=journal. Each journal will get its own partition with mke2fs -O journal_dev /dev/mdwhatever. The 6 hard drives used are all IDE drives; they are connected through
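Spelled out for one of the two filesystems, a sketch (partition names hypothetical; the -b block size must match between journal device and filesystem):

    # journal partition on the RAID1 pair
    mke2fs -b 4096 -O journal_dev /dev/md2
    # the 60 GB filesystem, attached to that journal
    mke2fs -b 4096 -j -J device=/dev/md2 /dev/md0
    # mounted with full data journaling
    mount -o data=journal /dev/md0 /mnt/big60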
2011 Jan 11
2
parted usage
Hello again, Been an interesting day. I'm attempting to use parted to create a partition on a 28TB volume which consists of 16x2TB drives configured in a RAID 5 + spare, so the total unformatted size is 28TB to the OS. However, upon entering parted and making a gpt label, print reports back as follows: Model: Areca ARC-1680-VOL#000 (scsi) Disk /dev/sdc: 2199GB Sector Size
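The 2199GB figure is exactly 2TiB, which usually points below parted, at the controller's volume settings or the kernel driver, rather than at the GPT label itself. For reference, the plain GPT sequence looks like this sketch:

    parted /dev/sdc mklabel gpt
    parted /dev/sdc unit TB mkpart primary 0% 100%
    parted /dev/sdc unit TB print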
2002 Oct 02
2
kernel BUG at journal.c:1772!
Hello everyone. I'm running Red Hat 7.3 with kernel 2.4.18-10 and all errata patches installed. My system has been running for over three years without any problems. All I've done to it in that time is add a mirror set of two 80GB drives, a Promise IDE controller, and upgrade Red Hat through the 7.x series. Right now I have three drives. A 13GB system drive off the motherboard, and
2002 Oct 30
1
External Journal scenario - good idea?
Hello everyone, I've just recently joined the ext3-users list. I spent much of the weekend browsing the list archives and other tidbits I could find on the net regarding using an external journal and running in data=journal mode. From what I have seen of what other folks are doing, data=journal with an external journal may be able to help with our problem here. If I
2011 Apr 18
1
rhel nfs bug with 5.5 - nfsd: blocked for more than 120 sec
Hi all, I ran into this bug on my NFS server, which is serving an XFS fs: https://bugzilla.redhat.com/show_bug.cgi?id=616833 It was suggested to use bind mounts. The current fstab on my server is: /dev/sdc1 /SHARE xfs defaults,noatime,nodiratime,logbufs=8,uquota 1 2 I'm unsure how to integrate bind mounts into this scheme to see if I can avoid this bug until it is fixed. Any ideas? - aurf
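A sketch of the suggested bind-mount arrangement (the export path is made up): keep the XFS mount as-is and bind the subtree that nfsd actually exports:

    /dev/sdc1  /SHARE          xfs   defaults,noatime,nodiratime,logbufs=8,uquota  1 2
    /SHARE     /export/SHARE   none  bind                                          0 0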
2005 Mar 10
3
a few questions about ext3 journal
A few wild ideas/questions: 1) Is there a way to check the size of the journal of an ext3 filesystem? I mean the actually used size, not the total size of the journal. 2) Would it be difficult to implement a "freeze" of an ext3 filesystem - that is, blocking all I/O to the filesystem until it's "unfrozen" (XFS can do that), for two purposes: A/ allowing
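On 1), dumpe2fs exposes the journal's total size but not the used portion; on 2), the freeze primitive later became generic as fsfreeze. A sketch (device and mount point assumed):

    # total journal size and parameters
    dumpe2fs -h /dev/sda3 | grep -i journal
    # block writes, then release - the generic successor to xfs_freeze
    fsfreeze -f /mnt/data
    fsfreeze -u /mnt/data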
2017 Oct 10
1
ZFS with SSD ZIL vs XFS
I've had good results using SSD as LVM cache for gluster bricks (http://man7.org/linux/man-pages/man7/lvmcache.7.html). I still use XFS on bricks. On Tue, Oct 10, 2017 at 12:27 PM, Jeff Darcy <jeff at pl.atyp.us> wrote: > On Tue, Oct 10, 2017, at 11:19 AM, Gandalf Corvotempesta wrote: > > Anyone made some performance comparison between XFS and ZFS with ZIL > > on
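The lvmcache setup from that man page boils down to roughly this sketch (the VG, LV, and SSD device names are assumptions):

    # carve a cache pool out of the SSD
    lvcreate --type cache-pool -L 100G -n brickcache vg_bricks /dev/sdb
    # attach it to the existing brick LV
    lvconvert --type cache --cachepool vg_bricks/brickcache vg_bricks/brick1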
2017 Oct 10
0
ZFS with SSD ZIL vs XFS
On Tue, Oct 10, 2017, at 11:19 AM, Gandalf Corvotempesta wrote: > Anyone made some performance comparison between XFS and ZFS with ZIL > on SSD, in gluster environment ? > > I've tried to compare both on another SDS (LizardFS) and I haven't > seen any tangible performance improvement. > > Is gluster different ? Probably not. If there is, it would probably favor
2018 Jul 14
0
ssm vs. lvm: moving physical drives and volume group to another system
Maybe not a good assumption after all -- I can no longer boot using kernel 3.10.0-514 or 3.10.0-862. boot.log shows: Dependency failed for /mnt/data Dependency failed for Local File Systems Dependency failed for Mark the need to relabel after reboot. Dependency failed for Migrate local SELinux policy changes from the old store structure to the new structure. Dependency failed for Relabel all
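Two things generally keep a drive move like this bootable: a clean vgexport/vgimport, and a nofail mount so a missing VG degrades the boot instead of failing Local File Systems. A sketch (the VG name is made up):

    # old system: deactivate and export before pulling the drives
    umount /mnt/data && vgchange -an datavg && vgexport datavg
    # new system: import and activate after moving them
    vgimport datavg && vgchange -ay datavg
    # fstab: don't hang boot if the VG is absent
    /dev/datavg/data  /mnt/data  xfs  defaults,nofail  0 0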
2015 Aug 08
3
backing up email / saving maildir on external hard drives
Dear Christian, Thanks for your feedback. The HDD will not accept files larger than 4GB (as it's in FAT format). It's a new external HDD. Thinking of the best format (one that would work with Mac, Windows and Linux) seems like a challenge. What's your view on NTFS? And why not exFAT? Thanks Kevin On Saturday, August 8, 2015, Christian Kivalo <ml+dovecot at valo.at> wrote: > > > Am 08.
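If exFAT wins out, formatting is a one-liner once the userspace tools (exfat-utils or exfatprogs) are installed; a sketch with a made-up partition name:

    mkfs.exfat -n BACKUP /dev/sdb1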
2020 May 11
1
XFS problem
Hello, My server is running kernel 3.10.0-1062.12.1 on CentOS Linux release 7.7.1908. For some weeks now, the server has been restarting after XFS errors. Logs in /var/crash report this information: [...] [443804.295916] sd 0:0:0:0: [sda] FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [443804.295919] sd 0:0:0:0: [sda] CDB: Read(10) 28 00 04 53 e8 b0 00 00 28 00 [443804.295922]