Displaying 20 results from an estimated 4000 matches similar to: "FSID and NFS"
2012 Jan 11
5
Warning: bad fsid on block 20971520
Hi,
the $subj warning sometimes appears in syslog, in my case when
xfstests/209 is run in a loop. The minimal reproducer is a looped mkfs+mount.
The message comes from disk-io.c btree_readpage_end_io_hook():
581 if (check_tree_block_fsid(root, eb)) {
582 printk_ratelimited(KERN_INFO "btrfs bad fsid on block %llu\n",
583 (unsigned long
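A minimal sketch of the looped mkfs+mount reproducer described above; the scratch device /dev/sdX, the mount point and the iteration count are placeholders:

  # repeat mkfs+mount on a scratch device until the warning shows up in the log
  for i in $(seq 1 100); do
      mkfs.btrfs /dev/sdX        # newer btrfs-progs may also need -f here
      mount /dev/sdX /mnt/scratch
      umount /mnt/scratch
  done
  dmesg | grep -i 'bad fsid'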
2019 Mar 28
2
NFSv4: Using fsid=0 but *not* exporting the root filesystem
Hi,
I would like to use the NFSv4 ability to create a "root" filesystem with
fsid=0, so that I don't have to refer to the whole path of the exported
filesystem when I mount it. However, I do *not* want this root
filesystem to be mountable by any host. Is that possible, and how?
E.g.:
Filesystem:
/exports/data1
/exports/data2
/exports/data3
/etc/exports:
/exports
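For reference, a hedged sketch of the /etc/exports layout being discussed; the client range and options are assumptions, and an fsid=0 entry like this is still visible to the listed clients, which is exactly the open question:

  # illustrative /etc/exports: fsid=0 pseudo-root plus per-directory exports
  /exports          192.168.0.0/24(ro,fsid=0,crossmnt,sync)
  /exports/data1    192.168.0.0/24(rw,sync,no_subtree_check)
  /exports/data2    192.168.0.0/24(rw,sync,no_subtree_check)
  /exports/data3    192.168.0.0/24(rw,sync,no_subtree_check)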
2010 Jun 17
4
[PATCH] Improve support for exporting btrfs subvolumes.
If you export two subvolumes of a btrfs filesystem, they will both be
given the same uuid so lookups will be confused.
blkid cannot differentiate the two, so we must use the fsid from
statfs64 to identify the filesystem.
We cannot tell if blkid or statfs is best without knowing internal
details of the filesystem in question, so we need to encode specific
knowledge of btrfs in mountd. This is
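A quick way to see the ambiguity the patch addresses, sketched with assumed device, subvolume and mount point names:

  # blkid reports one UUID for the whole btrfs device...
  blkid /dev/sdb1
  # ...while the statfs-level fsid (the "ID:" field in stat -f output)
  # can differ per mounted subvolume
  mount -o subvol=subvol1 /dev/sdb1 /mnt/one
  mount -o subvol=subvol2 /dev/sdb1 /mnt/two
  stat -f /mnt/one
  stat -f /mnt/two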
2015 Jun 10
2
[PATCH] New API: btrfs_replace_start
Signed-off-by: Pino Tsao <caoj.fnst@cn.fujitsu.com>
---
daemon/btrfs.c | 40 +++++++++++++++++++++++++++++++++++++++
generator/actions.ml | 19 +++++++++++++++++++
tests/btrfs/test-btrfs-devices.sh | 8 ++++++++
3 files changed, 67 insertions(+)
diff --git a/daemon/btrfs.c b/daemon/btrfs.c
index 39392f7..acc300d 100644
--- a/daemon/btrfs.c
+++
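For context, a hedged sketch of the underlying btrfs-progs operation that an API named btrfs_replace_start would presumably drive; the device names and mount point are placeholders:

  # start an online device replace and watch its progress
  btrfs replace start /dev/sdb /dev/sdc /mnt/data
  btrfs replace status /mnt/data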
2015 Jun 12
2
Re: [PATCH] New API: btrfs_replace_start
On 2015-06-12 17:12, Pino Toscano wrote:
> On Friday 12 June 2015 10:58:34 Pino Tsao wrote:
>> Hi,
>>
>> On 2015-06-11 17:43, Pino Toscano wrote:
>>> Hi,
>>>
>>> On Wednesday 10 June 2015 17:54:18 Pino Tsao wrote:
>>>> Signed-off-by: Pino Tsao <caoj.fnst@cn.fujitsu.com>
>>>> ---
>>>> daemon/btrfs.c
2015 Jun 12
2
Re: [PATCH] New API: btrfs_replace_start
Hi,
On 2015-06-11 17:43, Pino Toscano wrote:
> Hi,
>
> On Wednesday 10 June 2015 17:54:18 Pino Tsao wrote:
>> Signed-off-by: Pino Tsao <caoj.fnst@cn.fujitsu.com>
>> ---
>> daemon/btrfs.c | 40 +++++++++++++++++++++++++++++++++++++++
>> generator/actions.ml | 19 +++++++++++++++++++
>> tests/btrfs/test-btrfs-devices.sh |
2010 Jun 02
2
NFS exporting btrfs subvolumes.
NFS needs a unique identifier for a filesystem to be able to export it.
This can be set by the admin (fsid= in /etc/exports) but that is a hassle
and it is best to set it automatically.
nfs-utils currently uses the UUID returned by libblkid if that works,
or the fsid returned by statfs64 if libblkid finds nothing and
fsid is non-zero. Otherwise it uses device major/minor.
This
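A sketch of the manual fallback mentioned above, i.e. giving each subvolume an explicit fsid= in /etc/exports so mountd does not have to derive one; the paths, client spec and fsid numbers are assumptions:

  # illustrative /etc/exports entries with per-subvolume fsid values
  /srv/btrfs/subvol1   *(rw,sync,fsid=101,no_subtree_check)
  /srv/btrfs/subvol2   *(rw,sync,fsid=102,no_subtree_check)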
2006 Aug 04
3
OCFS2 and ASM Question
Ok guys & gals here is the scenario:
1.) Host RHEL 4 U3 2.6.9-34.0.2.EL
2.) OCFS2 latest version
3.) Successfully formatted & mounted OCFS2 filesystems on 2 nodes
/dev/sdb1 /u02/oradata/usdev/voting
/dev/sdc1 /u02/oradata/usdev/data01
/dev/sdd1 /u02/oradata/usdev/data02
/dev/sde1 /u02/oradata/usdev/data03
4.) Downloaded & installed ASMLib 2.0 on both nodes
5.) Ran
2017 Sep 28
1
upgrade to 3.12.1 from 3.10: df returns wrong numbers
Hi,
When I upgraded my cluster, df started returning some odd numbers for my
legacy volumes.
For volumes newly created after the upgrade, df works just fine.
I have been researching since Monday and have not found any reference to
this symptom.
"vm-images" is the old legacy volume, "test" is the new one.
[root@st-srv-03 ~]# (df -h|grep bricks;ssh st-srv-02 'df -h|grep
2013 Oct 05
10
Linux Arch: kernel BUG at fs/btrfs/inode.c:873!
Hi,
I have a home server on Arch Linux (kernel 3.11.2) that uses
multi-device btrfs on the root filesystem.
Until recently it worked completely fine, but yesterday I rebooted it
and the machine did not come back up.
I booted from a USB stick (kernel 3.10) and tried to mount the filesystem.
Here is the oops I see:
[ 41.676217] device fsid 25e6a6fa-fe1f-4be5-a638-eeac948f8c21 devid
8 transid 164237 /dev/sda
[
2015 Feb 28
9
Looking for a life-save LVM Guru
Dear All,
I am in desperate need of LVM data rescue for my server.
I have a VG called vg_hosting consisting of 4 PVs, each contained on a
separate hard drive (/dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1).
The LV lv_home was created to use all the space of the 4 PVs.
Right now the third hard drive is damaged, and therefore the third PV
(/dev/sdc1) cannot be accessed anymore. I would like
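A hedged sketch of a first assessment with one PV gone; this only makes the surviving extents accessible and is not a recovery guarantee (the VG name comes from the post, the rest is standard LVM tooling):

  # see which PV is missing and what state the VG and LV are in
  pvs
  vgs vg_hosting
  lvs -a vg_hosting
  # try to activate the LV with the missing PV tolerated
  vgchange -ay --partial vg_hosting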
2008 Nov 26
8
disk space issues...any help is greatly appreciated
Hi all,
Please pardon my newbie-ness on this issue... I have a / partition which
is full (quite suddenly, actually) and I'm not sure how to fix it.
I've searched for unneeded logs, etc. in /var/log and /tmp to no avail.
The system is CentOS 5.2 and is not connected to the internet; it serves as
a local LAN server running stock stuff... sendmail, dovecot,
apache... nothing strange or
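A minimal sketch for tracking down what is actually eating the root filesystem; none of this is specific to the poster's box:

  # stay on the / filesystem (-x) and list the largest top-level directories in MB
  du -xm --max-depth=1 / | sort -n | tail -15
  # space can also be held by deleted files that a daemon still has open
  lsof +L1 | head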
2013 May 13
7
Remove a materially failed device from a Btrfs "single-raid" using partitions
Hello,
I am on Ubuntu Server 13.04 with Linux 3.8.
I've created a "single-raid" using /dev/sd{a,b,c,d}{1,3}. One of my hard
drives has failed; I mean it's physically dead.
:~$ sudo btrfs filesystem show
Label: none uuid: 40886f51-8c9b-4be1-8721-83bf5653d2a0
Total devices 5 FS bytes used 226.90GB
devid 4 size 37.27GB used 31.01GB path /dev/sdd1
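A hedged sketch of the usual sequence for a dead device; with the "single" data profile anything stored only on the dead disk cannot be reconstructed, so this is damage limitation rather than recovery (the mount point is a placeholder):

  # mount the surviving devices read-write in degraded mode
  mount -o degraded /dev/sdd1 /mnt/btrfs
  # then drop the absent device from the filesystem
  btrfs device delete missing /mnt/btrfs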
2010 Apr 29
1
nfs4 help needed
Fedora 13 uses nfs4, and there is a problem opening files that require
OpenOffice when they are accessed over an nfs3 mount, so it's time to change.
I found a couple of tutorials and got it *almost* working correctly. This is
where I need help.
Logwatch tells me
/nfs4exports/Data1 and /Data1 have same filehandle for
*,192.168.0.0/24,192.168.0.0/255.255.255.0, using first
The tutorial I was
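Tutorials of that era typically build the NFSv4 pseudo-root with bind mounts; a hedged sketch using the paths from the log message above, with the client range assumed:

  # /etc/fstab: expose the real data under the pseudo-root via a bind mount
  /Data1   /nfs4exports/Data1   none   bind   0 0

  # /etc/exports: export the fsid=0 pseudo-root and the bind-mounted copy,
  # not /Data1 itself, so the same filesystem is not exported under two paths
  /nfs4exports         192.168.0.0/24(ro,fsid=0,sync)
  /nfs4exports/Data1   192.168.0.0/24(rw,sync,no_subtree_check)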
2015 Feb 28
1
Looking for a life-save LVM Guru
Dear James,
Thank you for being quick to help.
Yes, I could see all of them:
# vgs
# lvs
# pvs
Regards,
Khem
On Sat, February 28, 2015 7:37 am, James A. Peltier wrote:
>
>
> ----- Original Message -----
> | Dear All,
> |
> | I am in desperate need for LVM data rescue for my server.
> | I have an VG call vg_hosting consisting of 4 PVs each contained in a
> | separate
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote:
> On 29/01/19 15:03, mark wrote:
>
>> I've no idea what happened, but the box I was working on last week has
>> a *second* bad drive. Actually, I'm starting to wonder about that
>> particular hot-swap bay.
>>
>> Anyway, mdadm --detail shows /dev/sdb1 as removed. I've added /dev/sdi1...
>> but see both /dev/sdh1 and
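A hedged sketch of the usual replace-a-member sequence; the array name /dev/md0 is an assumption, the partitions are the ones named in the post:

  # confirm the failed slot, remove it, then add the replacement
  mdadm --detail /dev/md0
  mdadm --manage /dev/md0 --remove /dev/sdb1
  mdadm --manage /dev/md0 --add /dev/sdi1
  # watch the rebuild
  cat /proc/mdstat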
2008 May 09
1
disk partitioning - I'm missing something simple, I think
Hi all,
Excuse the question as I'm sure those more experienced will find it simple.
I have a CentOS 5.1 box with six physical drives, two of which are used for
nightly rsync
backups. Contents of /etc/mtab, /etc/fstab, df and a brief narrative
follow:
======================================================
# cat ./fstab
/dev/VolGroup00/LogVol00 / ext3 defaults
2010 Nov 09
2
time for "balance"
Hello, linux-btrfs,
I've been working with btrfs for a few days now, using
btrfs-progs-20101101 and kernel 2.6.35.8 (both self-compiled).
First step:
mkfs.btrfs /dev/sdd1
mount /dev/sdd1 /srv/MM
for a 2 TByte partition, worked well.
Copying about 1.5 TByte of data to this partition worked well.
Second step:
btrfs device add /dev/sdc1 /srv/MM
btrfs filesystem balance
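With current btrfs-progs the same steps look roughly like this, and the balance can be monitored while it runs (mount point taken from the post):

  btrfs device add /dev/sdc1 /srv/MM
  # rewriting every chunk across both devices can take many hours for ~1.5 TByte
  btrfs balance start /srv/MM
  btrfs balance status /srv/MM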
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote:
> On 29/01/19 18:47, mark wrote:
>> Alessandro Baggi wrote:
>>> On 29/01/19 15:03, mark wrote:
>>>
>>>> I've no idea what happened, but the box I was working on last week
>>>> has a *second* bad drive. Actually, I'm starting to wonder about
>>>> that particular hot-swap bay.
>>>>
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
My initial setup was composed of 2 similar nodes: stor1data and stor2data.
A month ago I expanded both volumes with a new node: stor3data (2 bricks
per volume).
Of course, after adding the new peer with its bricks I ran the 'balance
force' operation. This task finished successfully (you can see info below)
and the number of files on the 3 nodes was very similar.
For volumedisk1 I
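For reference, a hedged sketch of the expand-and-rebalance sequence being described; the brick path on stor3data is an assumption:

  gluster peer probe stor3data
  gluster volume add-brick volumedisk1 stor3data:/bricks/disk1/brick
  gluster volume rebalance volumedisk1 start force
  gluster volume rebalance volumedisk1 status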