Displaying 20 results from an estimated 6000 matches similar to: "[PATCH] Don't use cache=off if device is on tmpfs"
2020 Mar 27
2
Create VM w/ cache=none on tmpfs
Hi,
I've seen that in the past, libvirt couldn't start VMs whose disk
image was stored on a file system that doesn't support direct I/O
when the 'cache=none' configuration was used [0].
On the KubeVirt project, we have some storage tests on a particular
provider that does just that - it tries to create / start a VM whose disk
is on tmpfs and whose definition features
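For reference, a cache=none disk stanza in a libvirt domain definition looks roughly like this (image path and target device are illustrative):

  <disk type='file' device='disk'>
    <!-- cache='none' makes QEMU open the image with O_DIRECT,
         which tmpfs traditionally does not support -->
    <driver name='qemu' type='qcow2' cache='none'/>
    <source file='/var/lib/libvirt/images/vm.qcow2'/>
    <target dev='vda' bus='virtio'/>
  </disk>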
2020 Mar 27
0
Re: Create VM w/ cache=none on tmpfs
On Fri, Mar 27, 2020 at 12:31:07PM +0100, Miguel Duarte de Mora Barroso wrote:
> Hi,
>
> I've seen that in the past, libvirt couldn't start VMs whose disk
> image was stored on a file system that doesn't support direct I/O
> when the 'cache=none' configuration was used [0].
>
> On the KubeVirt project, we have some storage tests on a particular
> provider
2020 Aug 07
2
Re: [PATCH nbdkit] file: Implement cache=none and fadvise=normal|random|sequential.
On Fri, Aug 07, 2020 at 04:43:12PM +0300, Nir Soffer wrote:
> On Fri, Aug 7, 2020, 16:16 Richard W.M. Jones <rjones@redhat.com> wrote:
> > I'm not sure if or even how we could ever do a robust O_DIRECT
> >
>
> We can let the plugin and filter deal with that. The simplest solution is to
> drop it on the user and require aligned requests.
I mean this is very error
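For context, a minimal sketch of the flush-then-advise pattern under discussion; the function name and error handling are illustrative, not the actual nbdkit code:

  #include <fcntl.h>
  #include <sys/types.h>
  #include <unistd.h>

  /* Emulate a cache=none-like mode without O_DIRECT: write through
   * the page cache, flush, then ask the kernel to evict the pages. */
  int
  write_and_drop_cache (int fd, const void *buf, size_t count, off_t offset)
  {
    if (pwrite (fd, buf, count, offset) == -1)
      return -1;
    if (fdatasync (fd) == -1)   /* dirty pages cannot be evicted */
      return -1;
    posix_fadvise (fd, offset, count, POSIX_FADV_DONTNEED);
    return 0;
  }

This sidesteps the O_DIRECT alignment problem entirely, at the cost of an extra flush per write.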
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Thu, Mar 07, 2019 at 02:38:38PM -0500, Andrea Arcangeli wrote:
> On Thu, Mar 07, 2019 at 02:09:10PM -0500, Jerome Glisse wrote:
> > I thought this patch was only for anonymous memory ie not file back ?
>
> Yes, the other common usages are on hugetlbfs/tmpfs that also don't
> need to implement writeback and are obviously safe too.
>
> > If so then set dirty is
2020 Aug 07
3
Re: [PATCH nbdkit] file: Implement cache=none and fadvise=normal|random|sequential.
On Fri, Aug 07, 2020 at 07:53:13AM -0500, Eric Blake wrote:
> >$ free -m; time ./nbdkit file /var/tmp/random fadvise=sequential cache=none --run 'qemu-img convert -n -p -m 16 -W $nbd "json:{\"file.driver\":\"null-co\",\"file.size\":\"1E\"}"' ; free -m ; cachestats /var/tmp/random
>
> Hmm - the -W actually says that qemu-img is
2010 Oct 09
2
[PATCH 1/2] Ocfs2: Add a mount option "coherency=*" for O_DIRECT writes.
Currently, the default behavior for O_DIRECT writes allows
concurrent writing among nodes with no cluster coherency guaranteed
(no EX locks are taken); this hurts buffered reads on other nodes,
which read stale data from cache.
The new mount option introduces a choice between two different
behaviors for O_DIRECT writes:
* coherency=full, as the default value, will disallow
concurrent
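Mount usage would look like this; coherency=full is named in the patch, while the relaxed value shown (coherency=buffered) is an assumption based on the OCFS2 documentation that later shipped:

  # default: full cluster coherency, EX lock taken for O_DIRECT writes
  mount -t ocfs2 -o coherency=full /dev/sdb1 /u01
  # assumed relaxed mode: concurrent O_DIRECT writers, no EX lock
  mount -t ocfs2 -o coherency=buffered /dev/sdb1 /u01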
2004 Sep 03
2
From OCFS to tape via tar (and back again)
We're using RMAN to back up our 9.2 RAC database to an OCFS v1 volume.
We have an existing shell script that we use for copying files from disk
to tape via tar, one file at a time. (Don't ask why. It's a legacy
script. Long story.) We're tweaking this script to use --o_direct when
tarring the file to tape and that seems to be working fine:
# tape device is /dev/nst0
$ tar
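The snippet is truncated here; a hypothetical reconstruction of such a tar invocation (the --o_direct flag comes from Oracle's patched tools, not stock GNU tar, and the filename is a placeholder) might be:

  $ tar --o_direct -cvf /dev/nst0 backup_piece_1.bkp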
2009 Mar 11
1
Enterprise Application with O_DIRECT access
Hello everyone,
I am learning and evaluating a glusterfs for film/video editing facilities.
Some major film/video editing realtime applications are using the
O_DIRECT file access for video/audio data files.
The GLFS client, via the FUSE mechanism, disallows opening files
with the O_DIRECT flag.
I wrote a little sample program that reads a file with the O_DIRECT
flag, and tried opening files on GLFS volumes.
It
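A minimal example of this kind of O_DIRECT read test, assuming 4 KiB alignment and an illustrative mount path:

  #define _GNU_SOURCE          /* for O_DIRECT on Linux */
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int
  main (void)
  {
    const size_t len = 4096;   /* O_DIRECT needs aligned buffer/length */
    void *buf;
    int fd;

    if (posix_memalign (&buf, 4096, len) != 0)
      return 1;

    fd = open ("/mnt/glfs/testfile", O_RDONLY | O_DIRECT);
    if (fd == -1) {
      perror ("open(O_DIRECT)"); /* FUSE mounts may fail here */
      return 1;
    }
    if (read (fd, buf, len) == -1)
      perror ("read");
    close (fd);
    free (buf);
    return 0;
  }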
2006 Oct 15
3
open(2) O_DIRECT on smbmount gives EINVAL
Does samba 3.0.23c not support the use of O_DIRECT? When I try to open an
smbmount'd file using O_DIRECT, I get EINVAL. I am able to use O_DIRECT with no
problems on a block device and nfs mounts, so I know the kernel supports it.
samba: 3.0.23c
kernel: 2.6.9-42.0.3.EL (32-bit)
I am using the below code for my test. smb fails on open(2).
#include <fcntl.h>
#include
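The quoted test is cut off above; a hypothetical minimal probe in the same spirit (the path is a placeholder) would be:

  #define _GNU_SOURCE          /* for O_DIRECT on Linux */
  #include <errno.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  int
  main (void)
  {
    int fd = open ("/mnt/smb/testfile", O_RDONLY | O_DIRECT);
    if (fd == -1) {
      /* EINVAL here means the filesystem rejected O_DIRECT */
      fprintf (stderr, "open: %s (errno %d)\n", strerror (errno), errno);
      return 1;
    }
    puts ("O_DIRECT open succeeded");
    close (fd);
    return 0;
  }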
2020 Aug 07
2
Re: [PATCH nbdkit] file: Implement cache=none and fadvise=normal|random|sequential.
On Fri, Aug 07, 2020 at 05:29:24PM +0300, Nir Soffer wrote:
> On Fri, Aug 7, 2020 at 5:07 PM Richard W.M. Jones <rjones@redhat.com> wrote:
> > These ones?
> > https://www.redhat.com/archives/libguestfs/2020-August/msg00078.html
>
> No, we had a bug where copying an image from Glance caused sanlock timeouts
> because of unpredictable page cache flushes.
>
> We
2005 Oct 19
2
rsync and o_direct
Hi
We currently use rsync for various jobs at our company. We are now
looking at using it to create an offsite synchronised copy of an Oracle 10g
RAC archive logs area. The source area is on Oracle OCFS filesystem.
The OCFS filesystem requires all reads/writes to be performed with the
O_DIRECT option, thus bypassing cache. Oracle provide an updated
coreutils package which includes the
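As an aside, stock coreutils dd can already copy with direct I/O on both ends, which may help for staging files off the OCFS volume before rsync (paths illustrative):

  dd if=/u02/arch/log_123.arc of=/stage/log_123.arc bs=1M \
     iflag=direct oflag=direct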
2010 Dec 01
2
tmpfs says "No space left on device"
I have a server where we use tmpfs as a cache for temporary files used
by a web application. But occasionally this tmpfs thinks it is full
when it isn't.
[root@flask-yellow tmpfs]# touch file
touch: cannot touch `file': No space left on device
[root@flask-yellow tmpfs]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
none 393216 19296
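A common cause of this symptom is inode exhaustion rather than block exhaustion - tmpfs has a separate nr_inodes limit. A quick check, and a remount with a larger limit (value illustrative):

  df -i /path/to/tmpfs                            # IUse% at 100% despite free blocks?
  mount -o remount,nr_inodes=100k /path/to/tmpfs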
2004 Mar 10
1
copy error + control file corruption in ocfs 1.10
Wim,
Below are two problems I found in testing the newly released ocfs version.
Just for your information.
Gr,
Robert.
- Copying of multiple files gives errors
[oracle@prac01 test]$ cp --o_direct -R ./a2/* ./backup/a2/.
cp: writing `./backup/a2/./ccdata.dbf.bck': Invalid argument
cp: writing `./backup/a2/./ccindex.dbf': Invalid argument
cp: writing `./backup/a2/./control02.ctl':
2019 Apr 20
3
Do devtmpfs and tmpfs use underlying hard disk storage or physical memory (RAM)?
Hi,
I am running the below command on CentOS Linux release 7.6.1810 (Core)
# df -hT --total
Filesystem Type Size Used Avail Use% Mounted on
/dev/xvda1 xfs 150G 8.0G 143G 6% /
devtmpfs devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs tmpfs 7.8G 817M 7.0G 11% /run
tmpfs tmpfs 7.8G 0
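To answer the question directly: devtmpfs and tmpfs live in RAM (tmpfs pages can also spill to swap), not on the underlying disk. A quick demonstration, with an illustrative size:

  free -m                                    # note the "shared" column
  dd if=/dev/zero of=/dev/shm/fill bs=1M count=512
  free -m                                    # shared/used grow by ~512 MB
  rm /dev/shm/fill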
2019 Oct 22
0
C8 regression / tmp on tmpfs
On 10/22/19 7:04 AM, Leon Fauster via CentOS wrote:
> Am 22.10.19 um 04:52 schrieb Orion Poplawski:
>> On 10/21/19 3:42 PM, Leon Fauster via CentOS wrote:
>>> Does someone have a working tmp on tmpfs via
>>>
>>> systemctl enable tmp.mount
>>>
>>> under CentOS8/RHEL8? This seems to work straight in EL7 ...
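For reference, a minimal tmp.mount sketch, assuming the stock unit layout (compare with the file shipped under /usr/lib/systemd/system/ on your release); the [Install] section is what systemctl enable needs:

  [Unit]
  Description=Temporary Directory (/tmp)
  ConditionPathIsSymbolicLink=!/tmp

  [Mount]
  What=tmpfs
  Where=/tmp
  Type=tmpfs
  Options=mode=1777,strictatime,nosuid,nodev

  [Install]
  WantedBy=local-fs.target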
2020 Aug 08
1
Re: [PATCH nbdkit] plugins: file: More standard cache mode names
On Sun, Aug 9, 2020 at 12:28 AM Richard W.M. Jones <rjones@redhat.com> wrote:
>
> On Sat, Aug 08, 2020 at 01:24:02AM +0300, Nir Soffer wrote:
> > The new cache=none mode is misleading since it does not avoid usage of
> > the page cache. When using shared storage, we may get stale data from
> > the page cache. When writing, we flush after every write which is
> >
2011 Mar 11
1
run-init in tmpfs
Dear Sirs,
I have a question concerning the run-init utility.
I'm trying to boot a full Linux system from RAM.
Therefore I provide a kernel and initrd from a tftp server.
The full rootfs is provided through an NFS server and is currently a
cpio archive. That archive shall be copied to the local client and
mounted in a tmpfs partition. After that, I want to replace the oldroot with
the root
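The sequence described would look roughly like this inside the initramfs (klibc run-init; paths and sizes illustrative):

  mount -t tmpfs -o size=2g tmpfs /newroot
  cd /newroot
  cpio -idm < /rootfs.cpio              # unpack the fetched rootfs
  exec run-init /newroot /sbin/init     # drop the initramfs, switch root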
2017 Jul 25
0
[Questions] About small files performance
Dear all
Recently, I did some work to test small-file performance for the gnfsv3
transport. The following is my scenario.
#####environment#####
==2 cluster nodes(nodeA/nodeB)==
each is equipped with 2x E5-2650 CPUs, 128G memory, and 2x 10 Gb NICs
nodeA: 10.254.3.77 10.128.3.77
nodeB: 10.254.3.78 10.128.3.78
==2 stress nodes(clientA/clientB)==
each is equipped with 2x E5-2650 CPUs, 128G memory, and 2x 10 Gb
2020 Jul 26
0
tmpfs / selinux issue
Hi Leon,
have you tried mounting with 'httpd_sys_rw_content_t' instead of 'httpd_var_run_t'?
Best Regards,
Strahil Nikolov
On 25 July 2020 14:20:19 GMT+03:00, Leon Fauster via CentOS <centos@centos.org> wrote:
>Hi all,
>
>I have some AVCs in the logs and wonder how to resolve them: under
>EL8 (SELinux enforcing) I have /var/lib/php/session mounted as
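One way to get a specific label onto a tmpfs mount is the context= mount option, shown here with the type suggested above (the full label string beyond the type name is an assumption):

  mount -t tmpfs -o context="system_u:object_r:httpd_sys_rw_content_t:s0" \
        tmpfs /var/lib/php/session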