Displaying 20 results from an estimated 729 matches for "fsynced".
2013 Nov 15
7
[PATCH 1/2] xfstests: add generic/321 to test fsync() on directories V2
Btrfs had some issues with fsync()'ing directories and fsync()'ing after
renames. These three new tests cover the 3 different issues we were seeing.
This breaks out the dmflakey stuff into a common helper to be shared between
generic/311 and generic/321. Thanks,
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
---
V1->V2: rename test to generic/321
-removed an
2015 Nov 07
3
Re: mkfs.ext2 succeeds despite nbd write errors?
On Sat, Nov 7, 2015 at 5:03 AM, Richard W.M. Jones <rjones@redhat.com> wrote:
> How about 'strace mkfs.ext2 ..' and see if any system calls are
> returning errors. That would show you whether nbd-client is throwing
> errors away, or whether mkfs is getting the errors and ignoring them
> (seems pretty unlikely, but you never know).
>
> After that, it'd be down
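For context, here is a minimal sketch (not from the thread) of how a write error would be visible to userspace if it is propagated at all; the device path /dev/nbd0 stands in for whatever nbd-client exposes, and both the write() and fsync() return values are checked, since buffered writes often surface a failure only at fsync time.
```
/* Minimal sketch: check whether write errors on an nbd device actually
 * reach userspace.  /dev/nbd0 is an assumed path; adjust to match the
 * nbd-client setup being debugged. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    char buf[4096];
    memset(buf, 0xab, sizeof(buf));

    int fd = open("/dev/nbd0", O_WRONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* If errors are being thrown away below us, both calls will "succeed"
     * even though the backing store failed the write. */
    if (write(fd, buf, sizeof(buf)) < 0)
        fprintf(stderr, "write: %s\n", strerror(errno));
    if (fsync(fd) < 0)
        fprintf(stderr, "fsync: %s\n", strerror(errno));

    close(fd);
    return 0;
}
```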
2004 Sep 16
1
[PATCH] BUG on fsync/fdatasync with Ext3 data=journal
Hello,
We found that fsync and fdatasync syscalls sometimes don't sync
data in an ext3 file system under the following conditions.
1. Kernel version is 2.6.6 or later (including 2.6.8.1 and 2.6.9-rc2).
2. Ext3's journalling mode is "data=journal".
3. Create a file (whose size is 1Mbytes) and execute umount/mount.
4. lseek to a random position within the file, write 8192 bytes
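The excerpt is cut off before the sync call, but given the subject the sequence presumably ends in fsync() or fdatasync(). A rough sketch of steps 3-4 under that assumption (the 1 MB size comes from step 3; the file path and fill pattern are placeholders):
```
/* Rough reproduction sketch for steps 3-4 above.  Run against a file of
 * at least 1 MB on an ext3 filesystem mounted with data=journal; the
 * final fdatasync() is an assumption based on the subject line. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char **argv) {
    const char *path = argc > 1 ? argv[1] : "testfile";
    char buf[8192];
    memset(buf, 0x5a, sizeof(buf));

    int fd = open(path, O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* lseek to a random position within the 1 MB file, write 8192 bytes */
    off_t pos = random() % (1024 * 1024 - (off_t)sizeof(buf));
    if (lseek(fd, pos, SEEK_SET) < 0) { perror("lseek"); return 1; }
    if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) { perror("write"); return 1; }

    /* The report is that this sometimes fails to actually sync the data. */
    if (fdatasync(fd) < 0) { perror("fdatasync"); return 1; }

    close(fd);
    return 0;
}
```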
2007 Sep 26
1
strange fsync errors
Hi all,
I've been using dovecot for a few months and it works great.
But a few days ago some coworkers mentioned that they got
error messages in their mail app.
I searched in the logfiles and found this:
Sep 14 12:07:35 Mailserv dovecot: IMAP(eckhard-ma-domain-com):
fsync(/home/eckhard-ma-domain-com/mails/.INBOX.0002-Druckangebote von
Druckereien.0002-schmerk
2006 Feb 25
1
Linux performance bug: fsync() for files with zero links
Linux kernel (as of 2.6.15.4) has the following performance bug:
Syncing (fsync() or fdatasync()) files with zero links (deleted files) is not
a no-op, as it should be.
See details, a test C program, and the rationale in the URL below:
http://b2e.ex-code.com/index.php/soft/2006/02/24/linux_performance_bug_zero_links_fsync
In the article with the URL above it is also explained how to make much
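A minimal sketch of the scenario described in the report, assuming nothing beyond the summary above: write to a file whose last link has been removed, then fsync() it. The file name is a placeholder, and the claim is that the fsync() should be close to free here but is not.
```
/* Sketch: fsync() on a zero-link (deleted) file.  After unlink() the data
 * can never be reached again following a crash, so syncing it should be
 * a no-op; timing the fsync() call shows whether it still hits disk. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static char buf[1 << 20];   /* 1 MB, zero-initialized */

int main(void) {
    int fd = open("scratch.tmp", O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0) { perror("open"); return 1; }
    unlink("scratch.tmp");                       /* link count drops to zero */

    if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) { perror("write"); return 1; }

    if (fsync(fd) < 0) perror("fsync");          /* ideally near-instant */

    close(fd);
    return 0;
}
```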
2013 Oct 28
0
[PATCH] xfstests: add generic/320 to test fsync() on directories V2
Btrfs had some issues with fsync()'ing directories and fsync()'ing after
renames. These three new tests cover the 3 different issues we were seeing.
This breaks out the dmflakey stuff into a common helper to be shared between
generic/311 and generic/320. Thanks,
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
---
V1->V2: moved this out into its own test instead of
2004 Feb 13
1
fsync in ext3: A question
Hi,
I have a question on fsync() and ext3's journaling modes.
Assume that I call fsync(fd) on a file.
If that file is in 'data=journal' mode, would the fsync() return once the
data gets safely into the journal ?
On the other hand, if that file is in 'data=writeback' mode, would the
fsync() return only when the data gets safely into its actual location ?
Any help is
2007 Mar 21
1
Ext3 behavior on power failure
Hi all,
We are building a new system which is going to use ext3 FS. We would like to know more about the behavior of ext3 in the case of failure. But before I proceed, I would like to share more information about our future system.
* Our application always does an fsync on files
* When symbolic links (more specifically, fast symlinks) are created, the host directory is also fsync'ed.
* Our
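A small sketch of the second bullet above, assuming the directory fsync is done the usual way: open the host directory and call fsync() on its descriptor. The path names are placeholders.
```
/* Sketch: create a (fast) symlink, then fsync the directory holding it so
 * the new directory entry itself is durable.  "dir" and the link/target
 * names are placeholders. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    if (symlink("target-file", "dir/link-name") < 0) { perror("symlink"); return 1; }

    int dfd = open("dir", O_RDONLY | O_DIRECTORY);
    if (dfd < 0) { perror("open dir"); return 1; }
    if (fsync(dfd) < 0) { perror("fsync dir"); return 1; }  /* make the entry durable */
    close(dfd);
    return 0;
}
```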
2013 Dec 18
2
[PATCH] Btrfs: improve the fluctuating performance of fsync
In order to improve the performance of fsync, we use the outstanding
ordered extents to avoid looking up the checksum from the csum tree.
But we didn't filter out the ordered extents whose csum was still being
calculated; when we got those ordered extents, we had to wait for the
csum calculation. This made the performance drop suddenly. (On
my box, it dropped from 56MB/s to
2011 Jan 05
1
RFC: grouped (f)sync
...n see two problems:
1. there is no call for committing a lot of file descriptors in one
transaction, so instead of fsync() for each of the modified FDs, a
sync() would be needed. sync() writes all buffers to stable storage,
which is bad if you have a mixed workload where there is a lot of
non-fsynced data or other heavy fsync users. But modern file systems
like ZFS will write those back too, so there an fsync(fd) is, AFAIK,
mostly equivalent to a sync() of the pool on which fd is. sync() of course is
system wide, so if you have other file systems, those will be synced as
well. (this setting isn...
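As an aside not taken from the mail: on Linux, syncfs(2) gives roughly the per-filesystem middle ground being described here, flushing only the filesystem that contains a given descriptor rather than the whole system. A minimal sketch (Linux/glibc specific):
```
/* Sketch (Linux/glibc): fsync() commits one file, syncfs() flushes only the
 * filesystem containing fd, and sync() is system wide.  syncfs() is not part
 * of the original proposal above. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open(".", O_RDONLY);       /* any descriptor on the filesystem */
    if (fd < 0) { perror("open"); return 1; }

    if (fsync(fd) < 0)                  /* per-file */
        perror("fsync");
    if (syncfs(fd) < 0)                 /* per-filesystem */
        perror("syncfs");
    sync();                             /* every mounted filesystem */

    close(fd);
    return 0;
}
```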
2009 Mar 18
24
rename(2), atomicity, crashes and fsync()
Hi all,
Recently there's been discussion [1] in the Linux community about how
filesystems should deal with rename(2), particularly in the case of a crash.
ext4 was found to truncate files after a crash, that had been written with
open("foo.tmp"), write(), close() and then rename("foo.tmp", "foo"). This is
because ext4 uses delayed allocation and may not
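A sketch of the write-temp-then-rename pattern described above, with the fsync() before rename() that the thread debates; the file names follow the example in the excerpt.
```
/* Sketch: write foo.tmp, make the data durable, then atomically replace foo.
 * Without the fsync(), delayed allocation can leave foo empty/truncated if
 * the system crashes shortly after the rename. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    const char msg[] = "new contents\n";

    int fd = open("foo.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (write(fd, msg, sizeof(msg) - 1) < 0) { perror("write"); return 1; }

    if (fsync(fd) < 0) { perror("fsync"); return 1; }   /* data on disk before rename */
    close(fd);

    if (rename("foo.tmp", "foo") < 0) { perror("rename"); return 1; }
    return 0;
}
```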
2018 Feb 02
1
Does samba support fsync() a directory?
Hi!
Afair, fsync()ing a directory is subject to platform-specific and
implementation-defined behaviour, and may either
- work as you'd want and (probably) expect
- fail with an error
- fail silently
at least if your application targets more than one operating system/kernel -
and even on one and the same platform, different filesystems might exhibit
subtly different patterns of behaviour.
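A small sketch of the defensive handling this implies: attempt the directory fsync() and cope with either an error return or an apparent (possibly silent) success. Opening the current directory is just for illustration.
```
/* Sketch: fsync a directory and handle the platform-dependent outcomes
 * described above.  A silent no-op is indistinguishable from real success
 * at this level. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd = open(".", O_RDONLY | O_DIRECTORY);
    if (fd < 0) { perror("open"); return 1; }

    if (fsync(fd) == 0)
        printf("fsync on directory returned success\n");
    else
        printf("fsync on directory failed: %s\n", strerror(errno));  /* e.g. EINVAL */

    close(fd);
    return 0;
}
```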
2018 Mar 05
2
SQLite3 on 3 node cluster FS?
Hi,
tl;dr summary of below: flock() works, but what does it take to make
sync()/fsync() work in a 3 node GFS cluster?
I am under the impression that POSIX flock, POSIX
fcntl(F_SETLK/F_GETLK,...), and POSIX read/write/sync/fsync are all
supported in cluster operations, such that in theory, SQLite3 should
be able to atomically lock the file (or a subset of page), modify
pages, flush the pages to
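A sketch of the sequence described above (not SQLite's actual code): take a POSIX fcntl() byte-range lock, modify the file, fsync() it, then unlock. The database path is a placeholder.
```
/* Sketch: lock-modify-flush-unlock with POSIX record locks, the primitives
 * the mail says the cluster filesystem needs to honour.  "cluster.db" is a
 * placeholder path on the GFS mount. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd = open("cluster.db", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct flock fl;
    memset(&fl, 0, sizeof(fl));
    fl.l_type = F_WRLCK;                 /* exclusive write lock */
    fl.l_whence = SEEK_SET;
    fl.l_start = 0;
    fl.l_len = 0;                        /* 0 = whole file */
    if (fcntl(fd, F_SETLKW, &fl) < 0) { perror("fcntl(F_SETLKW)"); return 1; }

    const char page[] = "modified page\n";
    if (write(fd, page, sizeof(page) - 1) < 0) { perror("write"); return 1; }
    if (fsync(fd) < 0) { perror("fsync"); return 1; }   /* flush before unlocking */

    fl.l_type = F_UNLCK;
    fcntl(fd, F_SETLK, &fl);
    close(fd);
    return 0;
}
```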
2018 Mar 05
0
SQLite3 on 3 node cluster FS?
On Mon, Mar 5, 2018 at 8:21 PM, Paul Anderson <pha at umich.edu> wrote:
> Hi,
>
> tl;dr summary of below: flock() works, but what does it take to make
> sync()/fsync() work in a 3 node GFS cluster?
>
> I am under the impression that POSIX flock, POSIX
> fcntl(F_SETLK/F_GETLK,...), and POSIX read/write/sync/fsync are all
> supported in cluster operations, such that in
2018 Feb 02
4
Does samba support fsync() a directory?
Hi group:
I need some help!
I use samba 4.5.8
And I mount a samba directory from CentOS 7.
When I run such a program in the mounted directory:
```
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>   /* for fsync()/close() */
int main() {
    printf("open aaa\n");
    /* excerpt truncated here; a plausible completion for the directory case: */
    int fd = open("aaa", O_RDONLY | O_DIRECTORY);
    if (fd < 0) { printf("open failed: %s\n", strerror(errno)); return 1; }
    printf("fsync aaa\n");
    if (fsync(fd) < 0) printf("fsync failed: %s\n", strerror(errno));
    close(fd);
    return 0;
}
```
2018 Mar 05
6
SQLite3 on 3 node cluster FS?
Raghavendra,
Thanks very much for your reply.
I fixed our data corruption problem by disabling the volume
performance.write-behind flag as you suggested, and simultaneously
disabling caching in my client side mount command.
In very modest testing, the flock() case appears to me to work well -
before it would corrupt the db within a few transactions.
Testing using built in sqlite3 locks is
2010 Apr 11
1
Re: Poor interactive performance with I/O loads with fsync()ing
On Sun, 11 Apr 2010 18:03:00 +0300, Avi Kivity <avi@redhat.com> wrote:
> On 04/09/2010 05:56 PM, Ben Gamari wrote:
> > On Mon, 29 Mar 2010 00:08:58 +0200, Andi Kleen<andi@firstfloor.org> wrote:
> >
> >> Ben Gamari<bgamari.foss@gmail.com> writes:
> >> ext4/XFS/JFS/btrfs should be better in this regard
> >>
> >>
>
2018 Mar 06
0
SQLite3 on 3 node cluster FS?
+Csaba.
On Tue, Mar 6, 2018 at 2:52 AM, Paul Anderson <pha at umich.edu> wrote:
> Raghavendra,
>
> Thanks very much for your reply.
>
> I fixed our data corruption problem by disabling the volume
> performance.write-behind flag as you suggested, and simultaneously
> disabling caching in my client side mount command.
>
Good to know it worked. Can you give us the
2005 Nov 25
28
ZFS and memcntl(..., MC_SYNC, ...)
It wouldn't be proper to start my first post here without congratulations
and thanks to the ZFS team for such an impressive piece of work.
Anyway, on to my query. I've been trying out ZFS, with a particular focus on
reducing latency in a specific application. This application has a fair
amount of random writing going on in the background (which, of course, ZFS
will make
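For reference, a sketch of the operation named in the subject, written with the portable msync() form; memcntl(addr, len, MC_SYNC, ...) is the Solaris spelling of the same request. The file name and 1 MB mapping size are placeholders, and the file is assumed to exist and be non-empty.
```
/* Sketch: dirty a shared mapping, then force it to stable storage.  This
 * is the latency-sensitive call discussed in the thread. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    size_t len = 1 << 20;                       /* placeholder mapping size */
    int fd = open("datafile", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    p[0] = 1;                                   /* dirty the first page */

    if (msync(p, len, MS_SYNC) < 0)             /* synchronous writeback */
        perror("msync");

    munmap(p, len);
    close(fd);
    return 0;
}
```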
2016 Jan 18
0
[PATCH v2] resize, builder: fsync the output file before closing the libvirt connection.
Libvirt has a fixed 15 second timeout for qemu to exit. If qemu is
writing to a slow USB key, it can hang (in D state) for much longer
than this - many minutes usually.
To work around this, fsync the output file before closing the libvirt
connection so that qemu shouldn't have anything (much) to write. We
have to do this in a few places in the code.
This is a hack - it would be better to