search for: dev_static

Displaying 13 results from an estimated 13 matches for "dev_static".

2017 Nov 07
0
Gluster clients can't see directories that exist or are created within a mounted volume, but can enter them.
...andran <nbalacha at redhat.com> wrote: > Hi, > > Please provide the gluster volume info. Do you see any errors in the client mount log file (/var/log/glusterfs/var-lib-mountedgluster.log)? root at int-gluster-01:/var/log/glusterfs # grep 'dev_static' *.log|grep -v cmd_history glusterd.log:[2017-11-05 22:37:06.934787] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0x22e5a) [0x7f5047169e5a] -->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0x2cdc8) [0x7f5047173dc8] -->...
2017 Nov 08
2
Gluster clients can't see directories that exist or are created within a mounted volume, but can enter them.
...lacha at redhat.com> > wrote: > >> Hi, >> >> Please provide the gluster volume info. Do you see any errors in the >> client mount log file (/var/log/glusterfs/var-lib-mountedgluster.log)? >> > > > root at int-gluster-01:/var/log/glusterfs # grep 'dev_static' *.log|grep -v > cmd_history > > glusterd.log:[2017-11-05 22:37:06.934787] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] > (-->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0x22e5a) > [0x7f5047169e5a] -->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0x2cdc8...
2017 Nov 06
2
Gluster clients can't see directories that exist or are created within a mounted volume, but can enter them.
Do the users have permission to see/interact with the directories, in addition to the files? On Mon, Nov 6, 2017 at 1:55 PM, Nithya Balachandran <nbalacha at redhat.com> wrote: > Hi, > > Please provide the gluster volume info. Do you see any errors in the > client mount log file (/var/log/glusterfs/var-lib-mountedgluster.log)? > > > Thanks, > Nithya > > On 6
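
Both requests reduce to a couple of commands; a minimal sketch, with the volume name taken from the clarification later in this thread and the log path from the quoted question:

    # On one of the server nodes: dump the volume configuration
    gluster volume info dev_static
    # On the client: look for errors in the mount log (the glusterfs
    # client names the log file after the mount point, slashes -> dashes)
    tail -n 100 /var/log/glusterfs/var-lib-mountedgluster.log
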
2017 Nov 08
0
Gluster clients can't see directories that exist or are created within a mounted volume, but can enter them.
...on the system on which you are running the mount process. > > Please provide the volume config details as well (gluster volume info) from one of the server nodes. > Oh I'm sorry, I totally misread that - didn't realise it was on the client. Clarification for below logs: - 'dev_static' is the gluster volume. - 'int-kube-01' is the gluster client. - '10.51.70.151' is the first node in a three node (2 replica, 1 arbiter) gluster cluster. - '/var/lib/kubelet/...../iss3dev-static' is a directory on the client that should be mounting '10.51.70.151:/dev...
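
For reference, a minimal sketch of how such a volume is attached on the client; the mount point below is hypothetical, since the real kubelet path is truncated in the post:

    # Mount the gluster volume from the first node
    # (/mnt/dev_static is a placeholder, not the poster's kubelet path)
    mount -t glusterfs 10.51.70.151:/dev_static /mnt/dev_static
    # Verify what the client actually mounted
    grep dev_static /proc/mounts
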
2017 Nov 12
1
Gluster clients can't see directories that exist or are created within a mounted volume, but can enter them.
> Clarification for below logs: > > - 'dev_static' is the gluster volume. > - 'int-kube-01' is the gluster client. > - '10.51.70.151' is the first node in a three node (2 replica, 1 arbiter) gluster cluster. > - '/var/lib/kubelet/...../iss3dev-static' is a directory on the client that should be mounting '10...
2012 May 25
6
[PATCH v5 0/3] Btrfs: add IO error device stats
Changes v1-v2: - Remove restriction that BTRFS_IOC_GET_DEVICE_STATS is a privileged operation - Cast u64 to unsigned long long for printf() Changes v2-v3: - Rebased on Chris' current master Changes v3-v4: - Add padding at end of ioctl structure Changes v4-v5: - The statistic members in the ioctl are now organized as an array of 64 bit values. Symbolic names for the array indexes
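
Once an interface like this lands, the counters become visible from userspace; a minimal sketch using the btrfs-progs subcommand that consumes the ioctl (counter names as printed by later btrfs-progs releases, which may differ from this patch revision):

    # Print the per-device IO error counters for a mounted filesystem
    btrfs device stats /mnt
    # Typical output, one line per slot in the 64-bit value array:
    #   [/dev/sda1].write_io_errs   0
    #   [/dev/sda1].read_io_errs    0
    #   [/dev/sda1].flush_io_errs   0
    #   [/dev/sda1].corruption_errs 0
    #   [/dev/sda1].generation_errs 0
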
2010 Oct 26
5
Mobile Phones and Asterisk
Hi, Can dev_state also be used to track a mobile phone's status via SIP? I tried it on several phones (Nokia, Samsung) but it returns NOANSWER, even though I can hear a beep-beep-beep tone indicating that the phone is currently busy. regards, RYAN ICASIANO
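
Device state for a SIP peer is normally exposed through a dialplan hint; a minimal sketch of inspecting it from the shell, with the extension and peer name purely hypothetical:

    # extensions.conf would carry something like:
    #   exten => 6001,hint,SIP/nokia     ; peer name is hypothetical
    # The tracked state can then be inspected from the Asterisk CLI:
    asterisk -rx "core show hints"
    # With chan_sip, a peer only reports InUse/Busy if Asterisk can count
    # its calls (e.g. a call-limit is set); otherwise an Unavailable-style
    # state is expected even while the phone is actually busy.
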
2008 Jan 08
2
disable call waiting by default
I've connected some analog phones to some FXS modules on an analog card. I want to disable the call waiting tone by default. I know that dialing *70 before a call disables call waiting until the next call, but isn't there a setting or a dialplan command to set this up automatically? Thanks -- /*************/ nik600 https://sourceforge.net/projects/ccmanager
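
For analog FXS ports the usual place for this is the channel driver configuration rather than the dialplan; a hedged sketch for a Zaptel-era setup (option name as found in zapata.conf; whether it applies to this particular card is an assumption):

    # zapata.conf, in the section covering the FXS channels:
    #   callwaiting=no
    # Apply the configuration change without restarting Asterisk:
    asterisk -rx "reload"
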
2007 Dec 12
0
OpenSUSE 10.3 HVM installer ATA difficulties
I've been trying to install OpenSUSE 10.3 as an HVM guest without much success. I've been using the mini network installation image 'openSUSE-10.3-GM-i386-mini.iso'. First I had to binary-edit the iso to turn off gfxboot, which breaks with current xen-unstable. (In theory you can turn off gfxboot by holding down shift but it is almost impossible to get the timing
2012 Jul 09
6
3.5.0-rc6: btrfs and LVM snapshots -> wrong devicename in /proc/mounts
Hi, using btrfs with LVM snapshots seems to confuse /proc/mounts: after mounting a snapshot of an original filesystem, the device name of the original filesystem is overwritten with that of the snapshot in /proc/mounts. Steps to reproduce: arnd@kallisto:/mnt$ sudo mount /dev/vg0/original /mnt/original [ 107.041432] device fsid 5c3e8ca2-da56-4ade-9fef-103a6a8a70c2 devid 1 transid 4
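
A sketch of the rest of the reproduction, with the snapshot name, size, and mount point assumed since the quoted text is cut off after the first step:

    sudo mount /dev/vg0/original /mnt/original
    # Snapshot the origin LV and mount it elsewhere
    # (name "snap" and size 1G are assumptions, not from the report)
    sudo lvcreate --snapshot --name snap --size 1G /dev/vg0/original
    sudo mkdir -p /mnt/snap
    sudo mount /dev/vg0/snap /mnt/snap
    # Reported bug: both mounts may now list the snapshot's device name,
    # since btrfs identifies the two devices by their identical fsid
    grep vg0 /proc/mounts
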
2003 Aug 20
1
(Fwd) Lost data on FreeBSD tape (fwd)
This appears to be a pthreads problem, not SCSI. Anyone care to look at it? ---------- Forwarded message ---------- Date: Wed, 20 Aug 2003 11:24:42 -0400 From: Dan Langille <dan@langille.org> To: freebsd-scsi@freebsd.org Cc: Kern Sibbald <kern@sibbald.com> Subject: (Fwd) Lost data on FreeBSD tape I've been working with Kern Sibbald, author of Bacula (http://www.bacula.org/) to
2012 Jul 31
4
BTRFS crash on mount with 3.4.4
My kernel crashed for some other reason, and now I can't mount my btrfs filesystem. I don't care about the data, it's backed up. I'll compile a 3.5 kernel, but is there any info you'd like off that filesystem to see why btrfs is crashing on mount? Marc [ 313.152857] device label btrfs_pool1 devid 1 transid 20769 /dev/mapper/disk1 [ 313.171318]
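
If the filesystem is to be recreated anyway, the usual artifacts to save first are a read-only check run and a metadata dump; a sketch using btrfs-progs tool names of that era (exact names vary between versions):

    # Read-only consistency check (does not modify the device)
    btrfsck /dev/mapper/disk1
    # Dump the metadata trees for the developers to inspect
    btrfs-debug-tree /dev/mapper/disk1 > /tmp/disk1-tree.txt
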
2020 Oct 17
10
[RFC] treewide: cleanup unreachable breaks
From: Tom Rix <trix at redhat.com> This is an upcoming change to clean up a new warning treewide. I am wondering whether the change should be one mega patch (see below), a normal patch per file (about 100 patches), or somewhere half way by collecting early acks. clang has a number of useful, new warnings see
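
The pattern the cleanup targets, and how to surface it, look roughly like this (the flag is from clang's -Wunreachable-code family; treating its exact name as an assumption for the compiler version in question):

    # The flagged C pattern is a break that can never execute:
    #   switch (cmd) {
    #   case FOO:
    #           return do_foo();
    #           break;        /* unreachable: follows a return */
    #   }
    # The treewide cleanup simply deletes such breaks. To see the warning:
    clang -Wunreachable-code-break -c file.c
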