Displaying 20 results from an estimated 1200 matches similar to: "ZFS and Caching - write() syscall with O_SYNC"
2008 Jun 13
1
Sun xVM Server Roadmap
Dear Experts,
IHAC who is interested in Sun's xVM Server. They would like to know when
xVM Server will be available on Solaris. Thanks.
Regards,
Ray
2020 Aug 07
2
Re: [PATCH nbdkit] file: Implement cache=none and fadvise=normal|random|sequential.
On Fri, Aug 07, 2020 at 05:29:24PM +0300, Nir Soffer wrote:
> On Fri, Aug 7, 2020 at 5:07 PM Richard W.M. Jones <rjones@redhat.com> wrote:
> > These ones?
> > https://www.redhat.com/archives/libguestfs/2020-August/msg00078.html
>
> No, we had a bug when copying image from glance caused sanlock timeouts
> because of the unpredictable page cache flushes.
>
> We
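The cache=none mode under discussion targets exactly this problem: keeping copied data from piling up dirty in the page cache and then being flushed all at once. A rough sketch of the underlying pattern (not the actual nbdkit code; the function name and error handling are illustrative) is to flush each chunk and then advise the kernel to drop it:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Write one chunk, make it durable, then tell the kernel we will not
 * reuse those pages, so a later bulk flush cannot stall other I/O. */
static int
write_and_drop (int fd, const char *buf, size_t count, off_t offset)
{
  if (pwrite (fd, buf, count, offset) != (ssize_t) count) {
    perror ("pwrite");
    return -1;
  }
  /* Flush first: POSIX_FADV_DONTNEED has no effect on dirty pages. */
  if (fdatasync (fd) == -1) {
    perror ("fdatasync");
    return -1;
  }
  if (posix_fadvise (fd, offset, count, POSIX_FADV_DONTNEED) != 0) {
    fprintf (stderr, "posix_fadvise failed\n");
    return -1;
  }
  return 0;
}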
2010 Jun 07
2
NOTICE: spa_import_rootpool: error 5
IHAC who has an x4500 (x86 box) with a ZFS root filesystem. They installed
patches today, the latest Solaris 10 x86 recommended patch cluster, and the
patching seemed to complete successfully. Then when they tried to reboot the
box, the machine would not boot. They get the following error:
NOTICE: spa_import_rootpool: error 5
Cannot mount root on
/pci at
2006 Aug 21
2
ZFS questions with mirrors
IHAC that is asking the following; any thoughts would be appreciated.
Take two drives, zpool to make a mirror.
Remove a drive - and the server HANGS. Power off and reboot the server,
and everything comes up cleanly.
Take the same two drives (still Solaris 10). Install Veritas Volume
Manager (4.1). Mirror the two drives. Remove a drive - everything is
still running. Replace the drive, everything
2003 Nov 22
1
performance gain in data journalling mode
hi,
If I understand correctly, full data journalling mode gives better
performance for applications that do a lot of updates with O_SYNC. Could
you please explain how this is possible? Doesn't full data journalling
do twice as many writes as metadata-only journalling?
han
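The usual explanation is that with data=journal the data blocks are written into the journal together with the metadata, so an O_SYNC write or fsync() completes once the transaction hits the (sequential) journal, and the blocks are copied to their final locations later. For many small random synchronous updates, trading seeks for sequential journal writes can win even though every block is written twice. A rough sketch of that kind of workload (hypothetical file name and sizes):

#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main (void)
{
  int fd = open ("updates.dat", O_RDWR | O_CREAT, 0644);
  if (fd < 0) return 1;

  char block[512];
  memset (block, 'x', sizeof block);

  /* Many small random in-place updates, each forced to disk before
   * the next one starts. */
  for (int i = 0; i < 1000; i++) {
    off_t off = (off_t) (rand () % 2048) * (off_t) sizeof block;
    if (pwrite (fd, block, sizeof block, off) != (ssize_t) sizeof block)
      return 1;
    if (fsync (fd) == -1)
      return 1;
  }

  close (fd);
  return 0;
}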
2002 Jun 03
1
64 K write access grouped in a single disk access ?
Hi Stephen,
I would like to know the behavior of ext3 for a write() request issued
with O_SYNC for 64 K, in terms of disk access method:
- is there a chance to have only one disk access (instead of 16 x 4 K
corresponding to each mapped page), or at maximum two in journaled mode?
- if this is possible, then how can I force such a grouping of data into a
single disk output?
NB: I disable cached writes
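A sketch of the two access patterns being compared (hypothetical file name): one 64 K buffer handed to a single O_SYNC write(), versus sixteen separate 4 K writes. Whether the single call ends up as one merged disk request or several is up to the filesystem and block layer, which is exactly the question above:

#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main (void)
{
  int fd = open ("testfile", O_WRONLY | O_CREAT | O_SYNC, 0644);
  if (fd < 0) return 1;

  char *buf = malloc (64 * 1024);
  if (!buf) return 1;
  memset (buf, 0, 64 * 1024);

  /* Variant A: the whole 64 K in one write(); with O_SYNC the call
   * does not return until data and required metadata are on disk. */
  if (write (fd, buf, 64 * 1024) != 64 * 1024) return 1;

  /* Variant B: the same data as sixteen 4 K writes, each one a
   * separate synchronous round trip. */
  for (int i = 0; i < 16; i++)
    if (write (fd, buf + i * 4096, 4096) != 4096) return 1;

  free (buf);
  close (fd);
  return 0;
}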
2020 Aug 10
1
Re: [PATCH nbdkit] file: Implement cache=none and fadvise=normal|random|sequential.
On Sat, Aug 08, 2020 at 02:14:22AM +0300, Nir Soffer wrote:
> On Fri, Aug 7, 2020 at 5:36 PM Richard W.M. Jones <rjones@redhat.com> wrote:
> >
> > On Fri, Aug 07, 2020 at 05:29:24PM +0300, Nir Soffer wrote:
> > > On Fri, Aug 7, 2020 at 5:07 PM Richard W.M. Jones <rjones@redhat.com> wrote:
> > > > These ones?
> > > >
2001 Jul 26
5
ext3-2.4-0.9.4
An update to the ext3 filesystem for 2.4 kernels is available at
http://www.uow.edu.au/~andrewm/linux/ext3/
The diffs are against linux-2.4.7 and linux-2.4.6-ac5.
The changelog is there. One rarely-occurring but oopsable bug
was fixed and several quite significant performance enhancements
have been made. These are in addition to the performance fixes
which went into 0.9.3.
Ted has put out a
2007 Jul 13
1
What are my smbd's doing ? (was Re: secrets.tdb locking fun!)
James R Grinter wrote:-
>
>On Fri, Jul 13, 2007 at 09:39:37AM +0100, Mac wrote:
>> On one previous occasion, the whole thing seemed to grind to a virtual
>> halt, and we suspected (but couldn't prove) that a locking battle over
>> (something like) secrets.tdb was to blame.
>
>(I recognise that symptom, see below)
>
>> Something that stands out is a huge
2007 Mar 21
1
EXT2 vs. EXT3: mount w/sync or fdatasync
My application always needs to sync file data after writing. I don't want anything hanging around in the kernel buffers. I am wondering what is the best method to accomplish this.
1. Do I use EXT2 and use fdatasync() or fsync()?
2. Do I use EXT2 and mount with the "sync" option?
3. Do I use EXT2 and use the O_DIRECT flag on open()?
4. Do I use EXT3 in full journaled mode,
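Option 1 is typically the lightest-weight of these: fdatasync() forces the file data plus only the metadata needed to read it back (such as the size after an extending write), whereas fsync() also forces timestamps and other inode metadata. A minimal sketch, with a hypothetical file name:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int
main (void)
{
  int fd = open ("records.dat", O_WRONLY | O_CREAT | O_APPEND, 0644);
  if (fd < 0) return 1;

  const char rec[] = "one record\n";
  if (write (fd, rec, strlen (rec)) != (ssize_t) strlen (rec)) return 1;

  /* fdatasync: data plus only the metadata needed to read it back.
   * Use fsync() here instead if the timestamps must also be durable. */
  if (fdatasync (fd) == -1) return 1;

  close (fd);
  return 0;
}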
2004 Mar 06
1
Desktop Filesystem Benchmarks in 2.6.3
I don't think that XFS is a desktop filesystem at all.
This is from XFS FAQ:
quote
------------
Q: Why do I see binary NULLS in some files after recovery when I
unplugged the power?
If it hurts don't do that!
* NOTE: XFS 1.1 and kernels >= 2.4.18 have the asynchronous delete path,
which means that you will see a lot fewer of these problems. If you still
have not updated to the 1.1
2007 Dec 21
1
Odd behavior of NFS of ZFS versus UFS
I have a test cluster running HA-NFS that shares both ufs and zfs based file systems. However, the behavior that I am seeing is a little perplexing.
The Setup: I have Sun Cluster 3.2 on a pair of SunBlade 1000s connecting to two T3B partner groups through a QLogic switch. All four bricks of the T3B are configured as RAID-5 with a hot spare. One brick from each pair is mirrored with VxVM
2006 Apr 21
2
ext3 data=ordered - good enough for oracle?
Given that the default journaling mode of ext3 (i.e. ordered) does not
guarantee write ordering after a crash, is this journaling mode safe
enough to use for a database such as Oracle? If so, how are out-of-sync
writes dealt with?
Kind regards,
Herta
2019 Jul 19
3
Samba async performance - bottleneck or bug?
Hi David,
Thanks for your reply.
> Hmm, so this "async" (sync=disabled?) ZFS tunable means that it
> completely ignores O_SYNC and O_DIRECT and runs the entire workload in
> RAM? I know nothing about ZFS, but that sounds like a mighty dangerous
> setting for production deployments.
Yes, you are correct - sync writes will flush to RAM, just like async, will stay in RAM for
2010 Oct 06
14
Bursty writes - why?
I have a 24 x 1TB system being used as an NFS file server. Seagate SAS disks connected via an LSI 9211-8i SAS controller, disk layout 2 x 11 disk RAIDZ2 + 2 spares. I am using 2 x DDR Drive X1s as the ZIL. When we write anything to it, the writes are always very bursty like this:
xpool   488K  20.0T      0      0      0      0
xpool   488K  20.0T      0      0      0      0
xpool
2016 Jan 06
0
[klibc:master] MIPS: Update archfcntl.h
Commit-ID: 3fefc6a404a970a911417d0345618a7e9abfef70
Gitweb: http://git.kernel.org/?p=libs/klibc/klibc.git;a=commit;h=3fefc6a404a970a911417d0345618a7e9abfef70
Author: Ben Hutchings <ben at decadent.org.uk>
AuthorDate: Wed, 6 Jan 2016 00:43:25 +0000
Committer: H. Peter Anvin <hpa at linux.intel.com>
CommitDate: Tue, 5 Jan 2016 17:45:36 -0800
[klibc] MIPS: Update archfcntl.h
2017 Jun 02
2
Slow write times to gluster disk
Are you sure using conv=sync is what you want? I normally use conv=fdatasync; I'll look up the difference between the two and see if it affects your test.
-b
----- Original Message -----
> From: "Pat Haley" <phaley at mit.edu>
> To: "Pranith Kumar Karampuri" <pkarampu at redhat.com>
> Cc: "Ravishankar N" <ravishankar at redhat.com>,
2002 Apr 30
2
writing processes are blocking in log_wait_common with data=ordered
I have a system with many processes writing to a common data file using
pwrite. These processes are each writing to existing blocks in the file,
not changing the file size, and the file has no holes.
When the processes get going, they seem to bottleneck at log_wait_common
(according to ps alnx). That is, one process is uninterruptible in
log_wait_common, the rest are uninterruptible in down.
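A stripped-down sketch of the workload described above (hypothetical file name and sizes): several processes doing pwrite() into their own existing, non-overlapping blocks of one shared file. Under data=ordered, a committing transaction must write out the dirty data pages attached to it before the metadata (for example the mtime updates each write makes), so writers can queue up behind the journal commit, which would match the log_wait_common picture.

#include <fcntl.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

#define NPROC 8
#define BLK   4096

int
main (void)
{
  /* The file already exists, is fully allocated and has no holes. */
  int fd = open ("shared.dat", O_WRONLY);
  if (fd < 0) return 1;

  for (int p = 0; p < NPROC; p++) {
    if (fork () == 0) {
      char block[BLK];
      memset (block, p, sizeof block);
      /* Each child keeps overwriting its own existing blocks. */
      for (int i = 0; i < 1000; i++) {
        off_t off = (off_t) (p * 16 + i % 16) * BLK;
        if (pwrite (fd, block, BLK, off) != BLK) _exit (1);
      }
      _exit (0);
    }
  }

  for (int p = 0; p < NPROC; p++)
    wait (NULL);
  close (fd);
  return 0;
}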
2020 Aug 07
2
Re: [PATCH nbdkit] file: Implement cache=none and fadvise=normal|random|sequential.
On Fri, Aug 07, 2020 at 04:43:12PM +0300, Nir Soffer wrote:
> On Fri, Aug 7, 2020, 16:16 Richard W.M. Jones <rjones@redhat.com> wrote:
> > I'm not sure if or even how we could ever do a robust O_DIRECT
> >
>
> We can let the plugin and filter deal with that. The simplest solution is to
> drop it on the user and require aligned requests.
I mean this is very error
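"Require aligned requests" here means the usual O_DIRECT rules: buffer address, file offset and transfer length must all be multiples of the device's logical block size (commonly 512 bytes or 4 KiB). A minimal sketch of what a caller then has to do (hypothetical file name, assuming 4 KiB alignment):

#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define ALIGN 4096              /* assumed logical block size */

int
main (void)
{
  int fd = open ("disk.img", O_WRONLY | O_CREAT | O_DIRECT, 0644);
  if (fd < 0) return 1;

  /* The buffer address must be aligned... */
  void *buf;
  if (posix_memalign (&buf, ALIGN, ALIGN) != 0) return 1;
  memset (buf, 0, ALIGN);

  /* ...and so must the file offset and the length, or the kernel
   * rejects the request with EINVAL. */
  if (pwrite (fd, buf, ALIGN, 0) != ALIGN) return 1;

  free (buf);
  close (fd);
  return 0;
}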
2017 Jun 12
0
Slow write times to gluster disk
Hi Guys,
I was wondering what our next steps should be to solve the slow write times.
Recently I was debugging a large code and writing a lot of output at
every time step. When I tried writing to our gluster disks, it was
taking over a day to do a single time step whereas if I had the same
program (same hardware, network) write to our nfs disk the time per
time-step was about 45 minutes.