
Displaying 20 results from an estimated 2000 matches similar to: "Poor performance (1/4 that of XFS) when appending to lots of files"

2010 Jan 21
1
/proc/mounts always shows "nobarrier" option for xfs, even when mounted with "barrier"
Ran into a confusing situation today. When I mount an xfs filesystem on a server running CentOS 5.4 x86_64 with kernel 2.6.18-164.9.1.el5, the barrier/nobarrier mount option as displayed in /proc/mounts is always shown as "nobarrier". Here's an example: [root at host ~]# mount -o nobarrier /dev/vg1/homexfs /mnt [root at host ~]# grep xfs /proc/mounts /dev/vg1/homexfs /mnt xfs
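On kernels of that era, whether barriers are actually active can be easier to tell from the kernel log than from /proc/mounts. A minimal sketch of a cross-check (device and mountpoint are placeholders taken from the example above):

    # mount with barriers explicitly requested
    mount -o barrier /dev/vg1/homexfs /mnt

    # what the mount table claims
    grep /mnt /proc/mounts

    # XFS typically logs a message if it has to turn barriers off
    dmesg | grep -i barrier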
2009 Jul 20
5
Offtopic: Which SAS / SATA HBA do you recommend?
Hi all, Sorry for the offtopic question. I hope though that others on this list or reading the archive find the answers useful too. It seems the Adaptec 1405 4-port SAS HBA I bought only works with RHEL and SuSE through a closed source driver, and thus is quite useless :-( I was stupid enough to think "Works with RHEL and SuSE" meant "Certified for RHEL and SuSE, but driver in
2009 Aug 05
3
RAID[56] with arbitrary numbers of "parity" stripes.
We discussed using the top bits of the chunk type field to store the number of redundant disks -- so instead of RAID5, RAID6, etc., we end up with a single 'RAID56' flag, and the amount of redundancy is stored elsewhere. This attempts it, but I hate it and don't really want to do it. The type field is designed as a bitmask, and _used_ as a bitmask in a number of
2013 Oct 09
1
XFS quotas not working at all (seemingly)
Hi All, I have a very strange problem that I'm unable to pinpoint at the moment. For some reason I am simply unable to get XFS quotas to report correctly on a freshly installed, fully patched CentOS 6 box. I have specified all the same options as on another machine which *is* reporting quota: LABEL=TEST /exports/TEST xfs inode64,nobarrier,delaylog,usrquota,grpquota 0 0 xfs_quota -xc
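For reference, a minimal check sequence on such a box might look like the following, reusing the fstab line quoted above. One caveat worth stating: on these kernels the usrquota/grpquota options generally only take effect at the initial mount, so a plain remount is usually not enough.

    # fstab entry enabling user and group quotas
    LABEL=TEST  /exports/TEST  xfs  inode64,nobarrier,delaylog,usrquota,grpquota  0 0

    # after a fresh mount, confirm quota accounting is actually on
    xfs_quota -xc 'state' /exports/TEST

    # then report usage
    xfs_quota -xc 'report -h' /exports/TEST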
2009 Aug 06
10
RAID[56] status
If we've abandoned the idea of putting the number of redundant blocks into the top bits of the type bitmask (and I hope we have), then we're fairly much there. Current code is at: http://git.infradead.org/users/dwmw2/btrfs-raid56.git and http://git.infradead.org/users/dwmw2/btrfs-progs-raid56.git We have recovery working, as well as both full-stripe writes
2009 Jan 24
2
[PATCH] btrfs: flushoncommit mount option
Hi Chris- Here's a simpler version of the patch that drops the unrelated sync_fs stuff. thanks- sage The 'flushoncommit' mount option forces any data dirtied by a write in a prior transaction to commit as part of the current commit. This makes the committed state a fully consistent view of the file system from the application's perspective (i.e., it
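For context, flushoncommit did end up as an ordinary btrfs mount option, so the behaviour the patch describes can be tried simply by mounting with it. A small sketch (device and mountpoint are placeholders):

    # mount a btrfs filesystem with flushoncommit enabled
    mount -t btrfs -o flushoncommit /dev/sdb1 /mnt

    # on newer kernels toggling it via remount may also work
    mount -o remount,flushoncommit /mnt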
2013 Jun 21
1
LVM + XFS + external log + snapshots
Hi all, So I have an XFS file system within LVM which has an external log. My mount options in fstab are: /dev/vg_spock_data/lv_data /data xfs logdev=/dev/sdc1,nobarrier,logbufs=8,noatime,nodiratime 1 1 All is well, no issues, and it is very fast. Now I'd like to snapshot this bad boy and then run rsnapshot to create a few days of backups. A snapshot volume is created w/o issue: lvcreate -L250G -s
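One thing to watch with an external log: the LVM snapshot only covers the data LV, not /dev/sdc1, so mounting the snapshot normally would try to replay a log that belongs to the live filesystem. A hedged sketch of a read-only mount that sidesteps this (snapshot and mountpoint names are made up; verify on a test box before trusting backups to it):

    # snapshot the data LV (the external log on /dev/sdc1 is NOT snapshotted)
    lvcreate -L250G -s -n lv_data_snap /dev/vg_spock_data/lv_data

    # mount read-only, skipping log recovery and the UUID check
    mkdir -p /mnt/snap
    mount -t xfs -o ro,norecovery,nouuid,logdev=/dev/sdc1 \
        /dev/vg_spock_data/lv_data_snap /mnt/snap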
2015 Sep 22
0
Centos 6.6, apparent xfs corruption
James Peltier wrote: > Do you have any XFS optimizations enabled in /etc/fstab such as logbsize, nobarrier, etc? None. > Is the filesystem full? What percentage of the file system is available? There are 2 xfs filesystems: /dev/mapper/vg_gries01-LogVol00 3144200 1000428 2143773 32% /opt/splunk /dev/mapper/vg_gries00-LogVol00 307068 267001 40067 87% /opt/splunk/hot You'll
2017 Nov 13
0
how to add mount options for root filesystem inside lxc container
Hi all, We use libvirt 3.0.0 + lxc; the disk for the container is described with a <filesystem/> tag, for example: <filesystem type='block' accessmode='passthrough'> <source dev='/dev/data/lvm-part1'/> <target dir='/'/> </filesystem> Now that we have started using SSD disks, we are wondering how to pass additional mount options for the container's root FS:
2016 Oct 24
0
NFS help
On 10/24/16 03:52, Larry Martell wrote: > On Fri, Oct 21, 2016 at 11:42 AM, <m.roth at 5-cent.us> wrote: >> Larry Martell wrote: >>> On Fri, Oct 21, 2016 at 11:21 AM, <m.roth at 5-cent.us> wrote: >>>> Larry Martell wrote: >>>>> We have 1 system running CentOS 7 that is the NFS server. There are 50 >>>>> external machines that
2016 Oct 27
2
NFS help
On Mon, Oct 24, 2016 at 7:51 AM, mark <m.roth at 5-cent.us> wrote: > On 10/24/16 03:52, Larry Martell wrote: >> >> On Fri, Oct 21, 2016 at 11:42 AM, <m.roth at 5-cent.us> wrote: >>> >>> Larry Martell wrote: >>>> >>>> On Fri, Oct 21, 2016 at 11:21 AM, <m.roth at 5-cent.us> wrote: >>>>> >>>>>
2016 Oct 24
3
NFS help
On Fri, Oct 21, 2016 at 11:42 AM, <m.roth at 5-cent.us> wrote: > Larry Martell wrote: >> On Fri, Oct 21, 2016 at 11:21 AM, <m.roth at 5-cent.us> wrote: >>> Larry Martell wrote: >>>> We have 1 system running CentOS 7 that is the NFS server. There are 50 >>>> external machines that FTP files to this server fairly continuously. >>>>
2016 Oct 24
2
NFS help
On 10/24/2016 04:51 AM, mark wrote: > Absolutely add nobarrier, and see what happens. Using "nobarrier" might increase overall write throughput, but it removes an important integrity feature, increasing the risk of filesystem corruption on power loss. I wouldn't recommend doing that unless your system is on a UPS, and you've tested and verified that it will perform an
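If you do want to measure what barriers cost before deciding, a low-risk way is to toggle the option temporarily via remount rather than editing fstab. A rough sketch (the mountpoint is a placeholder, and this assumes an XFS or ext4 filesystem of that vintage where barrier/nobarrier are valid options):

    # baseline: barriers on (the default)
    mount -o remount,barrier /data
    # ... run the write-heavy workload and record throughput ...

    # test run: barriers off
    mount -o remount,nobarrier /data
    # ... repeat the same workload ...

    # put it back when done
    mount -o remount,barrier /data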
2011 Jun 26
2
recovering from "zfs destroy -r"
Hi, Is there a simple way of rolling back to a specific TXG of a volume to recover from such a situation? Many thanks, Przem
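There is no supported undo for zfs destroy, but some ZFS builds of that era accept a read-only pool import rewound to an earlier transaction group. A heavily hedged sketch: the pool name and txg are placeholders, the -T rewind option is not available or documented in every release, and success depends on the old blocks not yet having been overwritten.

    # stop all writes to the pool first
    zpool export tank

    # attempt a read-only import rewound to a txg from before the destroy
    zpool import -o readonly=on -N -T 1234567 tank

    # if it imports, copy the data off with zfs send or plain file copies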
2016 Oct 24
0
NFS help
Gordon Messmer wrote: > On 10/24/2016 04:51 AM, mark wrote: >> Absolutely add nobarrier, and see what happens. > > Using "nobarrier" might increase overall write throughput, but it > removes an important integrity feature, increasing the risk of > filesystem corruption on power loss. I wouldn't recommend doing that > unless your system is on a UPS, and
2010 May 15
7
Unable to Destroy One Particular Snapshot
Howdy All, I've a bit of a strange problem here. I have a filesystem with one snapshot that simply refuses to be destroyed. The snapshots just prior to it and just after it were destroyed without problem. While running the zfs destroy command on this particular snapshot, the server becomes more-or-less hung. It's pingable but will not open a new shell (local or via ssh) however
2006 Jan 29
2
simulating a few thousand SIP clients?
Hi, I'm setting up a rig to handle quite a few SIP clients, so I need a way to simulate, say, 20k SIP ATAs. Does anyone know how? This should of course be as close as possible to 'reality', meaning n% calls per client and the usual REGISTER/OPTIONS traffic. Thanks. Best regards Roy Sigurd Karlsbakk roy@karlsbakk.net --- In space, loud sounds, like explosions, are even louder
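One common starting point for this kind of load is SIPp. A minimal sketch with a made-up target address and rates; note the built-in uac scenario only generates calls, so the REGISTER/OPTIONS traffic mentioned above would need a custom scenario XML on top of this:

    # ramp up to 20k simultaneous calls against the system under test,
    # 50 new calls per second, each call held for 30 seconds
    sipp -sn uac 10.0.0.1:5060 -r 50 -l 20000 -d 30000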
2012 Nov 14
1
GE LP Series?
Hi all, We have a 100kVA GE LP Series UPS. I can't find this series in the HCL, but other GE UPSes are listed. Would it be possible to somehow use NUT with this UPS? -- Best regards, Roy Sigurd Karlsbakk
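If the unit has (or can take) an SNMP management card, the generic snmp-ups driver is often the easiest path for large three-phase UPSes that are not explicitly listed in the HCL. A speculative ups.conf sketch; the section name, IP address, and the assumption that the card speaks a MIB the driver supports are all unverified:

    [gelp]
        driver = snmp-ups
        port = 192.168.1.50     # IP of the UPS management card (placeholder)
        mibs = auto             # let the driver probe for a supported MIB
        desc = "GE LP Series 100kVA"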
2010 Sep 25
4
dedup testing?
Hi all, Has anyone done any testing with dedup with OI? On OpenSolaris there is a nifty "feature" that allows the system to hang for hours or days if attempting to delete a dataset on a deduped pool. This is said to be fixed, but I haven't seen that myself, so I'm just wondering... I'll get a 10TB test box released for testing OI in a few weeks, but before
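For a quick dedup sanity test once the box is available, something like the following gives a feel for the dedup ratio and DDT size before trying any large deletes (pool and dataset names are placeholders):

    # create a throwaway dataset with dedup enabled
    zfs create -o dedup=on tank/deduptest

    # copy some representative data into it, then check the ratio
    zpool get dedupratio tank

    # dump dedup table statistics (DDT size is what makes deletes slow)
    zdb -DD tank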
2010 Dec 05
4
Zfs ignoring spares?
Hi all, I have installed a new server with 77 2TB drives in 11 7-drive RAIDz2 VDEVs, all on WD Black drives. Now, it seems two of these drives were bad: one of them had a bunch of errors, the other was very slow. After zfs offlining these and then zfs replacing them with online spares, the resilver ended and I thought it'd be OK. Apparently not. Although the resilver succeeds, the pool status
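For reference, the usual sequence for swapping a failed member out to a hot spare looks roughly like this (device names are invented; the final detach is what promotes the spare to a permanent member so the pool stops showing the old disk):

    # take the flaky disk offline
    zpool offline tank c4t2d0

    # resilver onto one of the configured spares
    zpool replace tank c4t2d0 c9t1d0

    # after the resilver completes, detach the old disk so the
    # spare becomes a permanent vdev member
    zpool detach tank c4t2d0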