search for: o_sync

Displaying 20 results from an estimated 84 matches for "o_sync".

2008 Jul 07
1
ZFS and Caching - write() syscall with O_SYNC
IHAC using ZFS in production, and he's opening up some files with the O_SYNC flag. This affects subsequent write()'s by providing synchronized I/O file integrity completion. That is, each write(2) will wait for both the file data and file status to be physically updated. Because of this, he's seeing some delays on the file write()'s. This is...
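
A minimal sketch of the pattern that post describes, assuming a hypothetical path and record format; with O_SYNC set on the descriptor, each write(2) returns only after both the data and the file status are on stable storage:

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical file; O_SYNC makes every subsequent write synchronous. */
    int fd = open("/tank/app/data.log", O_WRONLY | O_CREAT | O_SYNC, 0644);
    if (fd < 0)
        return 1;

    const char rec[] = "record\n";
    /* Blocks until both the file data and the inode update are durable. */
    if (write(fd, rec, sizeof rec - 1) != (ssize_t)(sizeof rec - 1)) {
        close(fd);
        return 1;
    }
    return close(fd);
}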
2007 Mar 21
1
EXT2 vs. EXT3: mount w/sync or fdatasync
My application always needs to sync file data after writing. I don't want anything hanging around in the kernel buffers. I am wondering what is the best method to accomplish this.
1. Do I use EXT2 and use fdatasync() or fsync()?
2. Do I use EXT2 and mount with the "sync" option?
3. Do I use EXT2 and use the O_DIRECT flag on open()?
4. Do I use EXT3 in full journaled mode,
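
A hedged sketch of option 1 above (the helper name is hypothetical): a normal buffered write followed by fdatasync(), which flushes the file data plus only the metadata needed to retrieve it, and is typically cheaper than a full fsync():

#include <unistd.h>

/* Write a buffer, then force the data itself to disk. */
int write_durable(int fd, const void *buf, size_t len)
{
    if (write(fd, buf, len) != (ssize_t)len)
        return -1;
    return fdatasync(fd);  /* returns once the data has reached the disk */
}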
2020 Aug 07
2
Re: [PATCH nbdkit] file: Implement cache=none and fadvise=normal|random|sequential.
...ww.redhat.com/archives/libguestfs/2020-August/msg00078.html > > No, we had a bug where copying an image from glance caused sanlock timeouts > because of the unpredictable page cache flushes. > > We tried to use fadvise but it did not help. The only way to avoid such issues > is with O_SYNC or O_DIRECT. O_SYNC is much slower but this is the path > we took for now in this flow. I'm interested in more background about this, because while it is true that O_DIRECT and POSIX_FADV_DONTNEED are not exactly equivalent, I think I've shown here that DONTNEED can be used to avoid pol...
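
A hedged sketch of the DONTNEED approach being debated, with a hypothetical helper: flush the written range first (the kernel skips dirty pages on eviction), then ask it to drop the range from the page cache so flush behaviour stays predictable:

#include <fcntl.h>
#include <unistd.h>

/* Flush a written range, then evict it from the page cache. */
int flush_and_drop(int fd, off_t offset, off_t length)
{
    if (fdatasync(fd) != 0)  /* POSIX_FADV_DONTNEED skips dirty pages */
        return -1;
    return posix_fadvise(fd, offset, length, POSIX_FADV_DONTNEED);
}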
2006 Apr 21
2
ext3 data=ordered - good enough for oracle?
Given that the default journaling mode of ext3 (i.e. ordered) does not guarantee write ordering after a crash, is this journaling mode safe enough to use for a database such as Oracle? If so, how are out-of-sync writes dealt with? Kind regards, Herta
2010 Oct 06
14
Bursty writes - why?
I have a 24 x 1TB system being used as an NFS file server. Seagate SAS disks connected via an LSI 9211-8i SAS controller, disk layout 2 x 11 disk RAIDZ2 + 2 spares. I am using 2 x DDR Drive X1s as the ZIL. When we write anything to it, the writes are always very bursty like this:
xpool 488K 20.0T 0 0 0 0
xpool 488K 20.0T 0 0 0 0
xpool
2016 Jan 06
0
[klibc:master] MIPS: Update archfcntl.h
...AuthorDate: Wed, 6 Jan 2016 00:43:25 +0000
Committer: H. Peter Anvin <hpa at linux.intel.com>
CommitDate: Tue, 5 Jan 2016 17:45:36 -0800
[klibc] MIPS: Update archfcntl.h
Update usr/include/arch/mips/archfcntl.h from kernel headers:
- Add definitions of O_PATH, O_TMPFILE
- Update value of O_SYNC to include __O_SYNC
- Add definitions of F_{SET,GET}OWN_EX, F_GETOWNER_UIDS, F_OFD_{GETLK,SETLK,SETLKW}, F_OWNER_{TID,PID,PGRP}
Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
Signed-off-by: H. Peter Anvin <hpa at linux.intel.com>
---
usr/include/arch/mips/klibc/archfcntl.h |...
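
As background on the "O_SYNC to include __O_SYNC" item: on Linux the user-visible O_SYNC is composed of an internal __O_SYNC bit plus O_DSYNC, so binaries built against the old bare value still get at least data-sync behaviour. The numeric values below are illustrative only, not the real MIPS ones:

/* Illustrative values; see the actual archfcntl.h for the MIPS numbers. */
#define O_DSYNC   0x0010                 /* data integrity sync only */
#define __O_SYNC  0x4000                 /* internal full-sync bit */
#define O_SYNC    (__O_SYNC | O_DSYNC)   /* file integrity sync */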
2003 Nov 22
1
performance gain in data journalling mode
Hi, if I understand correctly, full data journalling mode gives better performance for applications that do a lot of O_SYNC updates. Could you please explain how this is possible? Doesn't full data journalling do twice as many writes as metadata journalling? han
2002 Jun 03
1
64 K write access grouped in a single disk access ?
Hi Stephen, I would like to know the behavior of ext3 for a write() request used with O_SYNC for 64 K, in terms of disk access:
- is there a chance of only one disk access (instead of 16 x 4 K, one per mapped page), or at most two in journaled mode?
- if this is possible, how can I force such a grouping of data into a single disk output?
NB: I disable cached...
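
A minimal sketch of the access pattern being asked about, with a hypothetical path: handing all 64 K to the kernel in a single write(2) on an O_SYNC descriptor at least gives the filesystem the chance to issue one contiguous request rather than sixteen page-sized ones:

#include <fcntl.h>
#include <unistd.h>

enum { CHUNK = 64 * 1024 };

/* One synchronous 64 K write in a single syscall. */
int write_chunk(const char *path, const char buf[CHUNK])
{
    int fd = open(path, O_WRONLY | O_CREAT | O_SYNC, 0644);
    if (fd < 0)
        return -1;
    ssize_t n = write(fd, buf, CHUNK);
    close(fd);
    return n == CHUNK ? 0 : -1;
}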
2002 Apr 30
2
writing processes are blocking in log_wait_common with data=ordered
I have a system with many processes writing to a common data file using pwrite. These processes are each writing to existing blocks in the file, not changing the file size, and the file has no holes. When the processes get going, they seem to bottleneck at log_wait_common (according to ps alnx). That is, one process is uninterruptible in log_wait_common, the rest are uninterruptible in down.
2020 Aug 07
2
Re: [PATCH nbdkit] file: Implement cache=none and fadvise=normal|random|sequential.
On Fri, Aug 07, 2020 at 04:43:12PM +0300, Nir Soffer wrote: > On Fri, Aug 7, 2020, 16:16 Richard W.M. Jones <rjones@redhat.com> wrote: > > I'm not sure if or even how we could ever do a robust O_DIRECT > > > > We can let the plugin and filter deal with that. The simplest solution is to > drop it on the user and require aligned requests. I mean this is very error
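
A hedged sketch of what "require aligned requests" could mean in practice: O_DIRECT I/O on Linux generally wants the buffer address, file offset and length all aligned to the device's logical block size. The 4096 below is an assumption, not a value from the thread:

#include <stdbool.h>
#include <stdint.h>

#define DIRECT_ALIGN 4096  /* assumed logical block size */

/* True if a request satisfies typical O_DIRECT alignment rules. */
static bool request_aligned(const void *buf, uint64_t offset, uint64_t length)
{
    return (uintptr_t)buf % DIRECT_ALIGN == 0 &&
           offset % DIRECT_ALIGN == 0 &&
           length % DIRECT_ALIGN == 0;
}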
2001 Jul 26
5
ext3-2.4-0.9.4
...h has the same benefits of not corrupting data after crash+recovery. However for applications which require synchronous operation such as mail spools and synchronously exported NFS servers, this can be a performance win. I have seen dbench figures in this mode (where the files were opened O_SYNC) running at ten times the throughput of ext2. Not that this is the expected benefit for other applications! Looking at the above issues, one may initially think that the post-recovery data corruption is a serious issue with writeback mode, and that there are big advantages to using journalle...
2020 Aug 10
1
Re: [PATCH nbdkit] file: Implement cache=none and fadvise=normal|random|sequential.
...l > > > > > > No, we had a bug where copying an image from glance caused sanlock timeouts > > > because of the unpredictable page cache flushes. > > > > > > We tried to use fadvise but it did not help. The only way to avoid such issues > > > is with O_SYNC or O_DIRECT. O_SYNC is much slower but this is the path > > > we took for now in this flow. > > > > I'm interested in more background about this, because while it is true > > that O_DIRECT and POSIX_FADV_DONTNEED are not exactly equivalent, I > > think I've...
2013 Mar 18
2
How to evaluate the glusterfs performance with small file workload?
Hi guys, I have run into some trouble trying to evaluate glusterfs performance with a small-file workload. 1: What kind of benchmark should I use to test small file operations? As we all know, we can use the iozone tool to test large file operations, but because of memory caching, if we test small file operations with iozone, the results will not be correct.
2010 Nov 18
9
WarpDrive SLP-300
http://www.lsi.com/channel/about_channel/whatsnew/warpdrive_slp300/index.html Good stuff for ZFS. Fred
2015 Nov 13
4
[PATCH 1/4] extlinux: simplification
...int fd, rv;
+
+	rv = asprintf(&file, "%s%sldlinux.c32",
+		      path, path[0] && path[strlen(path) - 1] == '/' ? "" : "/");
+	if (rv < 0 || !file) {
+		perror(program);
+		return 1;
+	}
+
+	fd = open(file, O_WRONLY | O_TRUNC | O_CREAT | O_SYNC,
+		  S_IRUSR | S_IRGRP | S_IROTH);
+	if (fd < 0) {
+		perror(file);
+		free(file);
+		return 1;
+	}
+
+	rv = xpwrite(fd, (const char _force *)syslinux_ldlinuxc32,
+		     syslinux_ldlinuxc32_len, 0);
+	if (rv != (int)syslinux_ldlinuxc32_len) {
+		fprintf(stderr, "%s: write failure o...
2008 Oct 07
2
1.1.4 and trouble over NFS
Hello, I have some trouble with the current setup (it's a testing environment): 2 servers with Dovecot 1.1.4 from source (OS Debian testing, 2.6.26) (names: "exim" and "exim2"), 1 NFS server (OS Debian testing, 2.6.26). I use NFS v4, indexes shared over NFS. The relevant part of the Dovecot configuration:
dotlock_use_excl = yes
mail_nfs_storage = yes
mail_nfs_index = yes
2020 Aug 07
0
Re: [PATCH nbdkit] file: Implement cache=none and fadvise=normal|random|sequential.
...estfs/2020-August/msg00078.html > > > > No, we had a bug where copying an image from glance caused sanlock timeouts > > because of the unpredictable page cache flushes. > > > > We tried to use fadvise but it did not help. The only way to avoid such issues > > is with O_SYNC or O_DIRECT. O_SYNC is much slower but this is the path > > we took for now in this flow. > > I'm interested in more background about this, because while it is true > that O_DIRECT and POSIX_FADV_DONTNEED are not exactly equivalent, I > think I've shown here that DONTNEED...
2020 Aug 07
0
Re: [PATCH nbdkit] file: Implement cache=none and fadvise=normal|random|sequential.
...> These ones? > https://www.redhat.com/archives/libguestfs/2020-August/msg00078.html No, we had a bug where copying an image from glance caused sanlock timeouts because of the unpredictable page cache flushes. We tried to use fadvise but it did not help. The only way to avoid such issues is with O_SYNC or O_DIRECT. O_SYNC is much slower but this is the path we took for now in this flow. > > Rich. > > -- > Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones > Read my programming and virtualization blog: http://rwmj.wordpress.com > virt-builder quickl...
2007 Jul 13
1
What are my smbd's doing ? (was Re: secrets.tdb locking fun!)
...'read' its way through the entire file, then writes 81 (ish) bytes out to the end of the file. These bytes look a bit like a machine trust account name. Here's an extract from the 'truss' output:
write(29, "\0\0\018 g k m t x 2 j $".., 81) = 81
fdsync(29, O_RDONLY|O_SYNC) = 0
Repeated 'ls -l' of the file in /var/tmp reveals that it both shrinks and grows over time. > Try running 'smbd -b' to look at the build paths in your binary (I'm > assuming it's one you've built in-house, but it may be worthwhile > check...
2004 Mar 06
1
Desktop Filesystem Benchmarks in 2.6.3
...ta updates. After a crash you are supposed to get a consistent filesystem which looks like the state sometime shortly before the crash, NOT what the in-memory image looked like the instant before the crash. Since XFS does not write data out to disk immediately unless you tell it to with fsync or an O_SYNC open (the same is true of other filesystems), you are looking at an inode which was flushed out to disk, but for which the data was never flushed to disk. You will find that the inode is not taking any disk space: all it has is a size, and there are no disk blocks allocated for it yet. This same...