Displaying 20 results from an estimated 11000 matches similar to: "[PROBLEM] setting data=writeback in /etc/fstab for /"
2001 Jul 27
2
Strange remount behaviour with ext3-2.4-0.9.4
Following the announcement on lkml, I have started using ext3 on one of my
servers. Since the server in question is a fairly security-sensitive box, my
/usr partition is mounted read only except when I remount rw to install
packages.
I converted this partition to run ext3 with the mount options
"nodev,ro,data=writeback,defaults" figuring that when I need to install new
packages etc,
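For illustration, a minimal /etc/fstab sketch of such a setup might look like the following (the device name and mount point are placeholders, not taken from the original post):
  # /usr on ext3, normally read-only, writeback journaling
  /dev/sda3   /usr   ext3   nodev,ro,data=writeback   0 2
Remounting for package installs would then be a plain "mount -o remount,rw /usr" followed by "mount -o remount,ro /usr" afterwards.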
2003 Mar 31
2
data=writeback option on root partition - RH 8.0
Well, I had the first use of the recovery mode on the RH 8.0 install CD
today on a dev system. I changed the mount option for the root
partition to data=writeback to see what all the claimed speed increases
are about, and the system failed to mount the partition r/w, so the
resulting boot failed.
After searching usenet, it seems other RH folks have had this problem,
possibly because it is
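One commonly suggested workaround for root filesystems is to store writeback as the default data mode in the superblock with tune2fs, rather than passing data=writeback through /etc/fstab, so the boot-time remount does not have to switch journaling modes (the device name is a placeholder):
  tune2fs -o journal_data_writeback /dev/hda2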
2012 Jul 04
1
[PATCH] virtio-blk: allow toggling host cache between writeback and writethrough
On Tue, Jul 03, 2012 at 03:19:37PM +0200, Paolo Bonzini wrote:
> This patch adds support for the new VIRTIO_BLK_F_CONFIG_WCE feature,
> which exposes the cache mode in the configuration space and lets the
> driver modify it. The cache mode is exposed via sysfs.
>
> Even if the host does not support the new feature, the cache mode is
> visible (thanks to the existing
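For context, a rough sketch of how a guest can inspect and toggle the exposed cache mode through sysfs; the exact path and the accepted strings depend on the kernel version and are assumptions here:
  cat /sys/block/vda/cache_type            # reports e.g. "write back" or "write through"
  echo "write through" > /sys/block/vda/cache_type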
2009 May 20
1
cannot mount ext3 boot partition as r/w since 2.6.30
Hi all,
I am testing a new kernel on a MIPS machine (64-bit kernel, 32-bit
userland) and I found a problem when mounting the root file system. It
is an ext3 file system that is correctly mounted as read only. While
booting, the system remounts the file system as read/write and keeps
starting all daemons.
Moving from 2.6.26 to 2.6.30 kernel, I get this error while remounting
the file system
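The remount performed by the boot scripts is essentially the following; comparing its result and the kernel log between 2.6.26 and 2.6.30 is a reasonable first step:
  mount -n -o remount,rw /
  dmesg | grep -i ext3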
2017 Feb 17
2
Unsafe migration with copy-storage-all (non shared storage) and writeback cache
Hi list,
I would like to understand if, and why, the --unsafe flag is needed when
using --copy-storage-all to migrate guests which use writeback
cache mode.
Background: I want to live migrate guests with writeback cache from host
A to host B and these hosts only have local storage (ie: no shared
storage at all).
From my understanding, --unsafe should only be required when migrating
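For reference, the kind of invocation being discussed looks roughly like this (domain name and destination URI are placeholders):
  virsh migrate --live --copy-storage-all --unsafe guest01 qemu+ssh://hostB/system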
2005 May 04
1
ext3 writeback mode for root partition
I'm trying to enable writeback mode for the root partition. However, adding
the normal option "rootflags=data=writeback" to my grub.conf kernel line
doesn't result in writeback mode (according to `dmesg`).
Does anyone know of a way to enable this?
-Doug
--
Douglas E. Warner <dwarner at ctinetworks.com> Network Engineer
CTI Networks, Inc.
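For reference, the setup being described is roughly the following; the kernel image and partition are placeholders, and the dmesg check shows which data mode ext3 actually picked:
  # grub.conf kernel line
  kernel /vmlinuz-2.6.11 ro root=/dev/hda2 rootflags=data=writeback
  # verify the mode after boot
  dmesg | grep -i "data mode"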
2017 Feb 17
2
Libvirt behavior when mixing io=native and cache=writeback
Hi all,
I am writing about libvirt's inconsistent behavior when mixing io=native and
cache=writeback. This post can be regarded as an extension, or
clarification request, of BZ 1086704
(https://bugzilla.redhat.com/show_bug.cgi?id=1086704)
On a fully upgraded CentOS6 x86-64 machine, starting a guest with
io=native and cache=writeback is permitted: no errors are raised and the
VM (qemu, really)
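For context, the disk definition in question looks roughly like this in the domain XML (file path and target device are placeholders):
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2' cache='writeback' io='native'/>
    <source file='/var/lib/libvirt/images/guest01.qcow2'/>
    <target dev='vda' bus='virtio'/>
  </disk>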
2003 Mar 11
2
writeback on /tmp and laptop?
Hello,
I read some threads (some of them not very recent, and perhaps not up to date) and I have not found a clear answer to these:
Is it advisable to have /tmp set up with the data=writeback option in general?
Is it still true if we are considering a laptop?
Do I also have to consider the write cache of the disks in this case, or are they totally unrelated?
How to check/set the state of the write cache for
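A rough sketch covering both questions (device names are placeholders): the fstab entry for /tmp, and hdparm for the drive's own write cache:
  # /etc/fstab entry for /tmp with writeback journaling
  /dev/hda5   /tmp   ext3   defaults,data=writeback   0 2
  # query / disable / enable the drive write cache
  hdparm -W  /dev/hda
  hdparm -W0 /dev/hda
  hdparm -W1 /dev/hda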
2011 Aug 26
1
Reg: Workaround to use pivot_root while using "rootfs" for "/" ?
Hi All,
I am trying to start an lxc container using libvirt and I am facing an
issue due to pivot_root.
The return code from the system call pivot_root is EINVAL (Invalid
arguments).
I isolated the issue to this specific condition check within the system
call.
This is a code snippet from fs/namespace.c::pivot_root
error = -EINVAL;
if (root.mnt->mnt_root != root.dentry)   /* the current root must be the root of its mount */
2005 Aug 26
1
lvm initrd -> initramfs
I converted my lvm root initrd to an initramfs by putting glibc, lvm,
pivot_root, my linuxrc, etc. in my initramfs source file. I use ash
compiled against klibc to run my linuxrc
Unfortunately -
pivot_root . initrd
- complains -
pivot_root: Invalid argument
I suspect this may be because you can't pivot_root using a cpio
initramfs root?
If so, what should I do instead? Should I
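A rough sketch of the switch_root-style sequence that is typically used instead of pivot_root when the running root is an initramfs (paths are placeholders; run-init from klibc and switch_root from busybox perform these same steps and also free the initramfs contents):
  cd /new-root
  mount --move . /
  exec chroot . /sbin/init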
2004 Dec 15
1
only pivot_root supported? [signed]
Hi,
I hope this is the right list for initramfs questions.
First I noticed: with initrd I can use real-root-device and pivot_root
mechanisms. With initramfs only pivot_root works. My init (or linuxrc)
scripts end like this:
mount -t xfs -n -o ro /dev/mapper/root /new-root
umount -n /sys || true
umount -n /dev || true
umount -n /proc || true
cd /new-root
pivot_root . initrd
exec chroot .
2007 Sep 29
3
Silly question - Anything faster than rm?
Maybe this is a silly question, but I have a few million files I need
to delete and I can't just reformat the volume.
Right now the fastest thing I can think of is
nice -20 rm -Rf /folder-i-want-to-delete
is there a better or faster way to do this?
Thanks,
Jamie
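A couple of commonly suggested alternatives for very large trees, as a sketch only (the path follows the example above; relative speed depends heavily on the filesystem):
  find /folder-i-want-to-delete -delete
  # or the rsync trick: sync an empty directory over the target
  mkdir /tmp/empty
  rsync -a --delete /tmp/empty/ /folder-i-want-to-delete/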
2000 Nov 02
3
kernel oops
Hi,
This is probably way far unsupported, but here goes:
I had been running ext3 0.0.3b for a while and found that I wanted to
convert back to ext2 so I could still play with 2.4test kernels.
Here is what I did:
1) boot the machine with kernel args init=/bin/bash
2) run debugfs on the root partition to remove the has_journal
ext2 feature flag.
3) remount the root filesystem rw as ext2
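For reference, the usual way to drop the journal without editing feature flags by hand in debugfs is tune2fs, run on an unmounted (or read-only) filesystem; the device name is a placeholder:
  tune2fs -O ^has_journal /dev/hda1
  e2fsck -f /dev/hda1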
2004 Feb 13
1
fsync in ext3: A question
Hi,
I have a question on fsync() and ext3's journaling modes.
Assume that I call fsync(fd) on a file.
If that file is in 'data=journal' mode, would the fsync() return once the
data gets safely into the journal?
On the other hand, if that file is in 'data=writeback' mode, would the
fsync() return only when the data gets safely into its actual location?
Any help is
2005 Jan 05
1
[PATCH] kinit/kinit.c
A patch for a few more hiccups and trivialities in kinit.c:
* The check_path() calls check for "/root" and "/old_root" - I believe
that should be "/root" and "/root/old_root".
* chdir("/") is recommended after pivot_root()
* init_argv[0] isn't set properly to the basename pointed to by char *s
- this fix also eliminates six lines of
2011 Sep 06
2
Reg: Difference between chroot & pivot_root
Hi,
What is the difference between chroot & pivot_root?
The differences don't seem obvious from the man pages, apart from the
caveats mentioned below.
1) Inherited open file descriptors have to be explicitly closed.
2) Does not change the CWD of the process, which can be overcome by doing a
chdir before & after the chroot call.
Any information on this would be useful.
Thanks,
2002 Apr 08
2
ext3 on a logical volume - snapshot using how?
Hi everybody!
I'm using LVM and ext3 and am very satisfied with it.
But now I have the following problem:
I have a filesystem on an LV which is in use -
files are created, modified and so on.
To do a backup I'd like to create a snapshot
of this volume which gets backed up and then dropped.
# lvcreate -s -L 20M -n snapshot /dev/vg/data
lvcreate -- WARNING: the snapshot will be
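The usual cycle around such a snapshot looks roughly like this (names follow the lvcreate example above; the mount point and archive path are placeholders):
  lvcreate -s -L 20M -n snapshot /dev/vg/data
  mount -o ro /dev/vg/snapshot /mnt/snap
  tar czf /backup/data.tar.gz -C /mnt/snap .
  umount /mnt/snap
  lvremove -f /dev/vg/snapshot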
2007 Jul 01
5
Mount and fstab problems with large devices?
I'm trying to get a new file server managed by puppet from day 1, at
least as much as possible. At the moment, though, there are two issues I'm
running into:
1. fstab should have entries for my comically-large RAID, but doesn't.
2. each puppet run appears to remount the RAID, even when no rules in
the manifest change.
I suspect the issue may be in parsing
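For context, such an fstab entry would normally be managed by a mount resource roughly like this (all names and values are placeholders); a mismatch between these attributes and what the running system reports is a common cause of the remount-on-every-run behaviour:
  mount { '/srv/raid':
    ensure  => mounted,
    device  => '/dev/md0',
    fstype  => 'xfs',
    options => 'defaults',
    atboot  => true,
  }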
2011 Mar 11
3
What could cause slowdown between OCFS2 1.2.9 and 1.4.4
We upgraded our production database cluster (6 node) from EL4 Update 5 to EL5 Update 5, including upgrading OCFS2 from 1.2.9 to 1.4.4.
We are now noticing a slowdown of batch jobs in Oracle, while hot backup runs faster. One thing we saw is that the journal mode changed from writeback to ordered, as we don't specify a journal mode during mount. Oracle sees this as a slowdown in the form of higher IO latency,
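If the old behaviour is wanted back, the journal mode can be requested explicitly at mount time; a hypothetical fstab line (device and mount point are placeholders, assuming the OCFS2 1.4 mount honours data=writeback):
  /dev/sdb1   /u01   ocfs2   _netdev,data=writeback   0 0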