similar to: RFC: grouped (f)sync

Displaying 20 results from an estimated 9000 matches similar to: "RFC: grouped (f)sync"

2014 Jan 24
2
IOPS required by Asterisk for Call Recording
Hi, What are the disk IOPS required for Asterisk call recording? I am trying to find out the number of disks required in a RAID array to record 500 calls. Is there a formula to calculate the IOPS required by Asterisk call recording? This will help me find the IOPS for different scales. If I assume that Asterisk writes data to disk every second for each call, I will need a disk array to support a minimum
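As a rough back-of-the-envelope under the question's own assumption of one write per call per second (not an Asterisk-documented figure), a minimal sketch; the per-disk IOPS figure is also an assumption:

    /* Rough IOPS estimate for call recording, assuming (hypothetically)
     * one disk write per active call per second; real numbers depend on
     * codec, file format, and page-cache batching. */
    #include <stdio.h>

    int main(void)
    {
        const int calls = 500;              /* concurrent calls to record */
        const int writes_per_call_sec = 1;  /* assumed write rate per call */
        const int iops_per_disk = 150;      /* assumed per-spindle IOPS */

        int required_iops = calls * writes_per_call_sec;
        int disks = (required_iops + iops_per_disk - 1) / iops_per_disk;

        printf("Required IOPS: %d, disks needed (ignoring RAID penalty): %d\n",
               required_iops, disks);
        return 0;
    }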
2013 Nov 15
7
[PATCH 1/2] xfstests: add generic/321 to test fsync() on directories V2
Btrfs had some issues with fsync()'ing directories and fsync()'ing after renames. These three new tests cover the three different issues we were seeing. This breaks out the dmflakey stuff into a common helper to be shared between generic/311 and generic/321. Thanks, Signed-off-by: Josef Bacik <jbacik@fusionio.com> --- V1->V2: rename test to generic/321 -removed an
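For context, the pattern these tests exercise - fsync()'ing a directory so that a rename is durable - looks roughly like the following sketch of the general POSIX idiom (not the xfstests code itself; paths are illustrative):

    /* Sketch: durable rename - fsync the file, rename it, then fsync
     * the containing directory so the new directory entry is on disk. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int durable_rename(const char *tmp, const char *final, const char *dir)
    {
        int fd = open(tmp, O_WRONLY);
        if (fd < 0 || fsync(fd) < 0)        /* flush the temp file's data */
            return -1;
        close(fd);

        if (rename(tmp, final) < 0)
            return -1;

        int dfd = open(dir, O_RDONLY | O_DIRECTORY);
        if (dfd < 0 || fsync(dfd) < 0)      /* flush the directory entry */
            return -1;
        close(dfd);
        return 0;
    }

    int main(void)
    {
        return durable_rename("data.tmp", "data", ".");
    }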
2010 Dec 06
1
SQLite and ext3 journalling mode
Hi, Are SQLite users who are worried about losing data that has been committed (fsynced) better off setting data=journal rather than data=ordered (or even data=writeback)? The context is trying to reduce the number of writes to a flash file-system without sacrificing data integrity in the event of a power failure or OS crash. Thanks, Dan Kennedy.
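For reference, the journalling mode in question is selected at mount time; via mount(2) it would look something like this sketch (device and mountpoint are placeholders):

    /* Sketch: mounting an ext3 filesystem with data=journal, the mode
     * the question asks about. Paths are illustrative. */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        /* Equivalent to: mount -t ext3 -o data=journal /dev/sda1 /mnt */
        if (mount("/dev/sda1", "/mnt", "ext3", 0, "data=journal") < 0) {
            perror("mount");
            return 1;
        }
        return 0;
    }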
2006 Feb 23
1
Ext3: Ordered : Fsync question
Does fsync() of a file on an ext3 fs mounted with the "ordered" option (the default) result in flushing the dirty data buffers in the fs that correspond to previous transactions? In other words, if I keep writing to file1 (lots of data), log something to file2, and keep fsyncing file2 after every write, does this mean file1's data would be committed by the fsyncs on file2? Please copy me on your replies
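A minimal sketch of the scenario being asked about (file names hypothetical): under data=ordered, the fsync on file2 commits the running journal transaction, and ordered mode writes out dirty data attached to that transaction first, which can include file1's pages.

    /* Sketch of the question's scenario: heavy writes to file1, small
     * logged writes to file2, fsync only on file2. */
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int f1 = open("file1", O_WRONLY | O_CREAT, 0644);
        int f2 = open("file2", O_WRONLY | O_CREAT | O_APPEND, 0644);
        char big[65536];
        memset(big, 'x', sizeof big);

        for (int i = 0; i < 100; i++) {
            write(f1, big, sizeof big);     /* lots of data, never fsynced */
            write(f2, "log entry\n", 10);   /* small log record */
            fsync(f2);                      /* may also push file1's data */
        }
        close(f1);
        close(f2);
        return 0;
    }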
2010 Sep 27
1
mail_fsync=never doesn't work?
Hello, I've tried to set the above in Dovecot 2.0.3, but according to ktrace (FreeBSD) it still fsync()s a lot (pop3 processes for example). Is this switch useful at all?
2017 Sep 18
0
Confusing lstat() performance
I did a quick test on one of my lab clusters with no tuning except for quota being enabled:

    [root@dell-per730-03 ~]# gluster v info
    Volume Name: vmstore
    Type: Replicate
    Volume ID: 0d2e4c49-334b-47c9-8e72-86a4c040a7bd
    Status: Started
    Snapshot Count: 0
    Number of Bricks: 1 x (2 + 1) = 3
    Transport-type: tcp
    Bricks:
    Brick1: 192.168.50.1:/rhgs/brick1/vmstore
    Brick2:
2007 Nov 04
3
Dovecot write activity (mostly 1.1.x)
I'm experiencing write activity that's somewhat different from my previous qmail/courier-imap/Maildir setup. This is more pronounced in v1.1.x than v1.0.x (I'm using Maildir). Write activity is about half that of read activity when measuring throughput, but when measuring operations it's about 5-7 times as high (measured with zpool iostat on ZFS). I think this might be due to the many small updates
2015 Nov 07
3
Re: mkfs.ext2 succeeds despite nbd write errors?
On Sat, Nov 7, 2015 at 5:03 AM, Richard W.M. Jones <rjones@redhat.com> wrote:
> How about 'strace mkfs.ext2 ..' and see if any system calls are
> returning errors. That would show you whether nbd-client is throwing
> errors away, or whether mkfs is getting the errors and ignoring them
> (seems pretty unlikely, but you never know).
>
> After that, it'd be down
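The underlying point - that I/O errors on a block device often only surface through the return values of write(), fsync(), and close(), which a careless tool can silently drop - can be shown with a small sketch (device path illustrative):

    /* Sketch: checking every return value that can carry an I/O error,
     * the errors strace would reveal if a tool ignored them. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/nbd0", O_WRONLY);
        if (fd < 0) { perror("open"); return 1; }

        char buf[4096] = {0};
        if (write(fd, buf, sizeof buf) != sizeof buf)
            perror("write");        /* immediate write error */
        if (fsync(fd) < 0)
            perror("fsync");        /* deferred write errors show here */
        if (close(fd) < 0)
            perror("close");        /* ...or even here */
        return 0;
    }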
2004 Sep 16
1
[PATCH] BUG on fsync/fdatasync with Ext3 data=journal
Hello, We found that fsync and fdatasync syscalls sometimes don't sync data in an ext3 file system under the following conditions.
1. Kernel version is 2.6.6 or later (including 2.6.8.1 and 2.6.9-rc2).
2. Ext3's journalling mode is "data=journal".
3. Create a file (whose size is 1 Mbyte) and execute umount/mount.
4. lseek to a random position within the file, write 8192 bytes
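The snippet cuts off after step 4, so the final sync call is an assumption based on the subject line; a rough userspace sketch of the reported repro might look like this (file name illustrative):

    /* Sketch of the reported repro: 1 MB file on ext3 data=journal,
     * remounted, then an 8192-byte write at a random offset followed
     * by fdatasync - which the report says sometimes fails to sync. */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("testfile", O_RDWR);  /* 1 MB file, created earlier */
        if (fd < 0) return 1;

        char buf[8192];
        memset(buf, 'a', sizeof buf);

        off_t pos = random() % (1024 * 1024 - sizeof buf);
        lseek(fd, pos, SEEK_SET);
        write(fd, buf, sizeof buf);
        fdatasync(fd);                      /* should guarantee durability */
        close(fd);
        return 0;
    }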
2007 Aug 30
15
ZFS, XFS, and EXT4 compared
I have a lot of people whispering "zfs" in my virtual ear these days, and at the same time I have an irrational attachment to xfs based entirely on its lack of the 32000 subdirectory limit. I'm not afraid of ext4's newness, since really a lot of that stuff has been in Lustre for years. So a-benchmarking I went. Results at the bottom:
2013 Dec 06
2
How reliable is XFS under Gluster?
Hello, I am at the point of picking a FS for new brick nodes. I was happy using ext4 until now, but I recently read about an issue introduced by a patch in ext4 that breaks the distributed translator. At the same time, it looks like the recommended FS for a brick is no longer ext4 but XFS, which apparently will also be the default FS in the upcoming RedHat 7. On the other hand, XFS is being
2007 Sep 26
1
strange fsync errors
Hi all, I've been using dovecot for a few months and it works great. But a few days ago some coworkers mentioned that they got error messages in their mail app. I searched the logfiles and found this: Sep 14 12:07:35 Mailserv dovecot: IMAP(eckhard-ma-domain-com): fsync(/home/eckhard-ma-domain-com/mails/.INBOX.0002-Druckangebote von Druckereien.0002-schmerk
2003 Apr 11
14
PATCH: Forcible delaying of UFS (soft)updates
Here's a patch against the 4.8-RELEASE kernel that allows disk writes on softupdates-enabled filesystems to be delayed for (theoretically) arbitrarily long periods of time. The motivation for such an updating policy is surprisingly not purely suicidal - it can allow disks on laptops to spin down immediately after I/O operations and stay idle for longer periods of time, thus saving a considerable amount
2018 Mar 05
0
SQLite3 on 3 node cluster FS?
On Mon, Mar 5, 2018 at 8:21 PM, Paul Anderson <pha@umich.edu> wrote:
> Hi,
>
> tl;dr summary of below: flock() works, but what does it take to make
> sync()/fsync() work in a 3 node GFS cluster?
>
> I am under the impression that POSIX flock, POSIX
> fcntl(F_SETLK/F_GETLK,...), and POSIX read/write/sync/fsync are all
> supported in cluster operations, such that in
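The POSIX calls being asked about, in the shape an SQLite-like workload would use them, look roughly like this sketch (file path hypothetical):

    /* Sketch: the flock()-protected write-then-fsync sequence whose
     * cluster-wide semantics the thread is asking about. */
    #include <fcntl.h>
    #include <sys/file.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/gluster/db/test.db", O_RDWR | O_CREAT, 0644);
        if (fd < 0) return 1;

        flock(fd, LOCK_EX);            /* exclusive advisory lock */
        write(fd, "record\n", 7);      /* modify the shared file */
        fsync(fd);                     /* must be durable and visible
                                          to the other nodes */
        flock(fd, LOCK_UN);
        close(fd);
        return 0;
    }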
2013 Dec 18
2
[PATCH] Btrfs: improve the performance fluctuating of the fsync
In order to improve the performance of fsync, we use the outstanding ordered extents to avoid looking up the checksum from the csum tree. But we didn't filter out the ordered extents whose csum is still being calculated; when we got those ordered extents, we had to wait for the csum calculation. It made the performance drop suddenly. (On my box, it dropped from 56MB/s to
2004 Feb 13
1
fsync in ext3: A question
Hi, I have a question on fsync() and ext3's journaling modes. Assume that I call fsync(fd) on a file. If that file is in 'data=journal' mode, would the fsync() return once the data gets safely into the journal ? On the other hand, if that file is in 'data=writeback' mode, would the fsync() return only when the data gets safely into its actual location ? Any help is
2006 Jul 31
20
ZFS vs. Apple XRaid
Hello all, After setting up a Solaris 10 machine with ZFS as the new NFS server, I'm stumped by some serious performance problems. Here are the (admittedly long) details (also noted at http://www.netmeister.org/blog/): The machine in question is a dual-amd64 box with 2GB RAM and two broadcom gigabit NICs. The OS is Solaris 10 6/06 and the filesystem consists of a single zpool stripe
2018 Mar 06
0
SQLite3 on 3 node cluster FS?
+Csaba.

On Tue, Mar 6, 2018 at 2:52 AM, Paul Anderson <pha@umich.edu> wrote:
> Raghavendra,
>
> Thanks very much for your reply.
>
> I fixed our data corruption problem by disabling the volume
> performance.write-behind flag as you suggested, and simultaneously
> disabling caching in my client side mount command.

Good to know it worked. Can you give us the
2018 Mar 05
6
SQLite3 on 3 node cluster FS?
Raghavendra, Thanks very much for your reply. I fixed our data corruption problem by disabling the volume performance.write-behind flag as you suggested, and simultaneously disabling caching in my client-side mount command. In very modest testing, the flock() case appears to work well - before, it would corrupt the db within a few transactions. Testing using the built-in sqlite3 locks is
2011 Nov 08
9
Performance-Tuning
Hi, I have > 11 TB of heavily used mail storage, saved as maildir on ext3 on an HP EVA. I always wanted to take some measurements of the various influences on performance (switching to ext4, switching to mdbox), but I never had enough time to do that. At the moment I *need* more speed; we have too much wait I/O on the system and I have already used all the other performance and tuning tricks (separated cache,