Displaying 20 results from an estimated 22 matches for "o_dsync".
2006 Oct 31
0
6238533 UFS O_DSYNC Logging Performance
Author: swilcox
Repository: /hg/zfs-crypto/gate
Revision: 063e12129ee21a72ae625b3e10b3169e8906fe77
Log message:
6238533 UFS O_DSYNC Logging Performance
Files:
update: usr/src/uts/common/fs/ufs/ufs_vnops.c
2010 Feb 03
0
[PATCH] ocfs2: Add parenthesis to wrap the check for O_DIRECT.
...c b/fs/ocfs2/file.c
index 06ccf6a..b2ca980 100644
--- a/fs/ocfs2/file.c
+++ b/fs/ocfs2/file.c
@@ -2013,8 +2013,8 @@ out_dio:
/* buffered aio wouldn't have proper lock coverage today */
BUG_ON(ret == -EIOCBQUEUED && !(file->f_flags & O_DIRECT));
- if ((file->f_flags & O_DSYNC && !direct_io) || IS_SYNC(inode) ||
- (file->f_flags & O_DIRECT && has_refcount)) {
+ if (((file->f_flags & O_DSYNC) && !direct_io) || IS_SYNC(inode) ||
+ ((file->f_flags & O_DIRECT) && has_refcount)) {
ret = filemap_fdatawrite_range(f...
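A note on the change above: in C, bitwise & binds more tightly than logical &&, so the pre- and post-patch conditions parse identically; the added parentheses only make the grouping explicit. A minimal, purely illustrative sketch (the function name is hypothetical, not from the patch):

/* Illustration only: both forms parse the same way because '&' has higher
 * precedence than '&&' in C; the ocfs2 patch adds parentheses for
 * readability, not to change behaviour. */
#include <assert.h>
#include <fcntl.h>

static int needs_flush(int f_flags, int direct_io)
{
    int pre  = (f_flags & O_DSYNC && !direct_io);    /* pre-patch form  */
    int post = ((f_flags & O_DSYNC) && !direct_io);  /* post-patch form */
    assert(pre == post);
    return post;
}

int main(void)
{
    return !needs_flush(O_DSYNC, 0);   /* exits 0: the flag is honoured */
}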
2007 Aug 02
3
ZFS, ZIL, vq_max_pending and OSCON
The slides from my ZFS presentation at OSCON (as well as some
additional information) are available at http://www.meangrape.com/2007/08/oscon-zfs/
Jay Edwards
jay at meangrape.com
http://www.meangrape.com
2010 Jan 12
11
How do separate ZFS filesystems affect performance?
I'm working with a Cyrus IMAP server running on a T2000 box under
Solaris 10 10/09 with current patches. Mailboxes reside on six ZFS
filesystems, each containing about 200 gigabytes of data. These are
part of a single zpool built on four iSCSI devices from our NetApp
filer.
One of these ZFS filesystems contains a number of global and per-user
databases in addition to one sixth of the
2009 Apr 09
8
ZIL SSD performance testing... -IOzone works great, others not so great
Hi folks,
I would appreciate it if someone can help me understand some weird
results I'm seeing when trying to do performance testing with an
SSD-offloaded ZIL.
I'm attempting to improve my infrastructure's burstable write capacity
(ZFS-based WebDAV servers), and naturally I'm looking at implementing
SSD-based ZIL devices.
I have a test machine with the
2006 Aug 21
12
SCSI synchronize cache cmd
Hi,
I work on a support team for the Sun StorEdge 6920 and have a
question about the use of the SCSI sync cache command in Solaris
and ZFS. We have a bug in our 6920 software that exposes us to a
memory leak when we receive the SCSI sync cache command:
6456312 - SCSI Synchronize Cache Command is flawed
It will take some time for this bug fix to roll out to the field, so we
need to understand
2014 Sep 26
0
[RFC PATCH 7/7] drm/prime: Support explicit fence on export
..._ioctl(struct drm_device *dev, void *data,
diff --git a/include/uapi/drm/drm.h b/include/uapi/drm/drm.h
index b0b8556..a11b893 100644
--- a/include/uapi/drm/drm.h
+++ b/include/uapi/drm/drm.h
@@ -661,13 +661,20 @@ struct drm_set_client_cap {
};
#define DRM_CLOEXEC O_CLOEXEC
+#define DRM_SYNC_FD O_DSYNC
struct drm_prime_handle {
__u32 handle;
/** Flags.. only applicable for handle->fd */
__u32 flags;
- /** Returned dmabuf file descriptor */
+ /**
+ * DRM_IOCTL_PRIME_FD_TO_HANDLE:
+ * in: dma-buf fd
+ * DRM_IOCTL_PRIME_HANDLE_TO_FD:
+ * in: sync fence fd if DRM_SYNC_FD flag is...
2009 Mar 30
1
fsflush writes very slow
I'm troubleshooting an I/O performance problem with one of our applications that does a lot of writing, generally blocks just over 32K, sequentially writing large files. It's a Solaris 10 x86 system with UFS disk. We're often only seeing disk write throughput of around 6-8 MB/s, even when there is minimal read activity. Running iosnoop shows that most of the physical
2018 Feb 26
2
v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
...93 100644
> --- a/fs/direct-io.c
> +++ b/fs/direct-io.c
> @@ -1274,8 +1274,7 @@ do_blockdev_direct_IO(struct kiocb *iocb, struct inode *inode,
> */
> if (dio->is_async && iov_iter_rw(iter) == WRITE) {
> retval = 0;
> - if ((iocb->ki_filp->f_flags & O_DSYNC) ||
> - IS_SYNC(iocb->ki_filp->f_mapping->host))
> + if (iocb->ki_flags & IOCB_DSYNC)
> retval = dio_set_defer_completion(dio);
> else if (!dio->inode->i_sb->s_dio_done_wq) {
> /*
> --
> 2.13.6
>
2014 Sep 29
1
[RFC PATCH 7/7] drm/prime: Support explicit fence on export
...t; diff --git a/include/uapi/drm/drm.h b/include/uapi/drm/drm.h
> index b0b8556..a11b893 100644
> --- a/include/uapi/drm/drm.h
> +++ b/include/uapi/drm/drm.h
> @@ -661,13 +661,20 @@ struct drm_set_client_cap {
> };
>
> #define DRM_CLOEXEC O_CLOEXEC
> +#define DRM_SYNC_FD O_DSYNC
> struct drm_prime_handle {
> __u32 handle;
>
> /** Flags.. only applicable for handle->fd */
> __u32 flags;
>
> - /** Returned dmabuf file descriptor */
> + /**
> + * DRM_IOCTL_PRIME_FD_TO_HANDLE:
> + * in: dma-buf fd
> + * DRM_IOCTL_PRIME_HANDLE...
2005 Nov 25
28
ZFS and memcntl(..., MC_SYNC, ...)
...requests (like MC_INVALIDATE)". It sounds like this means
what I'm trying to do isn't implemented yet - is this correct, or have I
found a bug? Presumably it will be implemented in the future?
[As a side-note, you're probably wondering why we don't just use O_DSYNC
when opening the file, and then just write(2) to it. The reason for this is
because it's very slow on UFS for large buffers - effectively linear with
the number of pages crossed, or about 6ms per 8KBytes on an otherwise idle
SCSI disk on SPARC. The good news is that this is very fast on ZF...
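For context, a minimal sketch (illustrative only, not taken from the thread; the path is a placeholder) of the O_DSYNC-on-open approach the poster rules out: each write(2) on a descriptor opened with O_DSYNC returns only after the data reaches stable storage, which is the per-page cost described above.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[8192];
    memset(buf, 'x', sizeof(buf));

    /* O_DSYNC makes every write on this descriptor synchronous. */
    int fd = open("/tmp/dsync-demo", O_WRONLY | O_CREAT | O_DSYNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* This write does not return until the 8 KB block is on disk,
     * which is the roughly 6 ms per 8 KB the poster reports on UFS. */
    if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
        perror("write");

    close(fd);
    return 0;
}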
2006 Feb 24
17
Re: [nfs-discuss] bug 6344186
Joseph Little wrote:
> I'd love to "vote" to have this addressed, but apparently votes for
> bugs are not available to outsiders.
>
> What's limiting Stanford EE's move to using ZFS entirely for our
> snapshotting filesystems and multi-tier storage is the inability to
> access .zfs directories and snapshots in particular on NFSv3 clients.
2018 Feb 23
2
v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
Hi all,
While fuzzing arm64/v4.16-rc2 with syzkaller, I simultaneously hit a
number of splats in the block layer:
* inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-R} usage in
jbd2_trans_will_send_data_barrier
* BUG: sleeping function called from invalid context at mm/mempool.c:320
* WARNING: CPU: 0 PID: 0 at block/blk.h:297 generic_make_request_checks+0x670/0x750
... I've included the
2008 Jun 23
9
Oracle and ZFS
Hi All ;
One of our customers suffered from the FS being corrupted after an unattended
shutdown due to a power problem.
They want to switch to ZFS.
From what I have read, ZFS will most probably not be corrupted by the same
event. But I am not sure how Oracle will be affected by a sudden power
outage when placed on top of ZFS?
Any comments ?
PS: I am aware of UPS's and
2006 Dec 21
12
Difference between ZFS and UFS with one LUN from a SAN
All,
I understand that ZFS gives you more error correction when using two LUNs from a SAN. But does it provide fewer features than UFS does on one LUN from a SAN (i.e., is it less stable)?
Thanks,
Shawn
2009 Apr 20
6
simulating directio on zfs?
I had to let this go and get on with testing DB2 on Solaris. I had to
abandon ZFS on local disks in x64 Solaris 10 5/08.
The situation was that:
* DB2 buffer pools occupied up to 90% of 32GB RAM on each host
* DB2 cached the entire database in its buffer pools
    o having the file system repeat this was not helpful
* running high-load DB2 tests for 2 weeks showed 100%
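For reference, a minimal sketch (illustrative only; Solaris-specific API, and the helper name is made up here) of the UFS directio() advisory the poster is trying to simulate on ZFS, which asks the filesystem not to cache data the database already caches in its own buffer pools:

#include <fcntl.h>
#include <sys/types.h>
#include <sys/fcntl.h>   /* Solaris: directio(), DIRECTIO_ON */
#include <unistd.h>

int open_uncached(const char *path)
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return -1;

    /* UFS honours this advisory and bypasses the page cache; ZFS at the
     * time of this thread has no direct I/O mode, hence the question. */
    if (directio(fd, DIRECTIO_ON) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}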
2014 Sep 26
14
[RFC] Explicit synchronization for Nouveau
Hi guys,
I'd like to start a new thread about explicit fence synchronization. This time
with a Nouveau twist. :-)
First, let me define what I understand by implicit/explicit sync:
Implicit synchronization
* Fences are attached to buffers
* Kernel manages fences automatically based on buffer read/write access
Explicit synchronization
* Fences are passed around independently
* Kernel takes
2009 Mar 18
24
rename(2), atomicity, crashes and fsync()
Hi all,
Recently there's been discussion [1] in the Linux community about how
filesystems should deal with rename(2), particularly in the case of a crash.
ext4 was found to truncate, after a crash, files that had been written with
open("foo.tmp"), write(), close() and then rename("foo.tmp", "foo"). This is
because ext4 uses delayed allocation and may not
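To make the pattern under discussion concrete, here is a minimal sketch (illustrative only; the helper name is made up) of the write-to-temp-then-rename sequence, with an fsync() before the rename (the step the thread's subject line refers to):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int atomic_replace(const char *path, const char *tmp,
                          const void *data, size_t len)
{
    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    if (write(fd, data, len) != (ssize_t)len) {
        close(fd);
        return -1;
    }

    /* Without this fsync(), a filesystem using delayed allocation may
     * commit the rename before the file's data blocks, leaving an empty
     * or truncated "foo" after a crash (the ext4 behaviour described
     * above). */
    if (fsync(fd) < 0) {
        close(fd);
        return -1;
    }
    close(fd);

    /* rename(2) atomically replaces the old file with the new one. */
    return rename(tmp, path);
}

int main(void)
{
    const char msg[] = "new contents\n";
    return atomic_replace("foo", "foo.tmp", msg, strlen(msg)) != 0;
}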