Displaying 20 results from an estimated 218 matches for "_netdev".
2008 Apr 21
0
iSCSI + CentOS 5.1 + _netdev problem
...10 retries
in the parameter, but the partition is not automounted.
Start scripts order appears OK:
# ls -1 /etc/rc3.d/S*
/etc/rc3.d/S07iscsid
/etc/rc3.d/S10network
/etc/rc3.d/S12syslog
/etc/rc3.d/S13iscsi
/etc/rc3.d/S25netfs
/etc/rc3.d/S55sshd
/etc/rc3.d/S90crond
/etc/rc3.d/S99local
But "_netdev" marked iSCSI device is not mounted, this is the fstab
file:
LABEL=/vz /vz ext3
_netdev,rw,noatime,nodiratime,commit=60,data=writeback 0 0
I have tried using LABELs, UUIDs, device names in fstab file.. and
the problem is the same.
If I put a "mount /vz" into "/etc/rc3.d...
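For reference, on CentOS 5 the _netdev entries in fstab are normally picked up by the netfs init script (S25netfs above) rather than by the early "local" mount pass; a minimal way to verify that, assuming the stock init scripts, is:
chkconfig --list netfs        # netfs is what mounts _netdev entries after the network is up
chkconfig netfs on            # make sure it runs in runlevels 3 and 5
mount -a -O _netdev           # manually mount every fstab entry carrying the _netdev option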
2005 Jun 24
1
mounting ocfs2 partitions on boot
Hi list,
today I installed a two-node Oracle 10g RAC system on SLES9 SP2 RC2 which
shares (via a SAN) a database volume and a quorum (ocr and voting). These
volumes are formatted with ocfs2 - so far no problem.
At the moment I'm stuck with mounting these volumes at boot time. The
user guide of ocfs2-tools mentions adding the corresponding lines to fstab -
but at the time when fstab
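For comparison, the approach usually documented for ocfs2-tools is an fstab line carrying _netdev plus the o2cb and ocfs2 init services enabled; a minimal sketch, with /dev/sdb1 and /u01 as hypothetical device and mount point:
# hypothetical device and mount point
/dev/sdb1  /u01  ocfs2  _netdev,defaults  0 0
chkconfig o2cb on      # the cluster stack must be up before the mount
chkconfig ocfs2 on     # the ocfs2 service mounts the ocfs2 _netdev entries at boot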
2008 Mar 11
2
Problems mounting lustre thru an ib2ip gateway
...m thru an ib2ip gateway.
The MDS's have infiniband connections. The client nodes are tcp/ip
connections. I am able to route between the client nodes and the MDS's.
I have the following in /etc/fstab:
abe-mds1@o2ib0,abe-mds2@o2ib0:/home/client /abehome lustre
_netdev,flock 0 0
I get the following when trying to mount:
[root@t3honest5 lustre]# mount -v /abehome
verbose: 1
arg[0] = /sbin/mount.lustre
arg[1] = abe-mds1@o2ib0,abe-mds2@o2ib0:/home/client
arg[2] = /abehome
arg[3] = -v
arg[4] = -o
arg[5] = rw,_netdev,flock
mds nid 0: 141.142.69.7@o...
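When the client sits on a tcp LNET network and the MDS on o2ib, the client also needs an LNET route through the ib2ip gateway; a hedged sketch of the usual checks and configuration, with the NIDs below as placeholders only:
lctl list_nids                          # show the local NIDs on the client
lctl ping 10.0.0.1@o2ib0                # placeholder: replace with the actual MDS NID
# hypothetical modprobe.conf line declaring the gateway as an LNET router
options lnet networks=tcp0 routes="o2ib0 192.168.1.1@tcp0"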
2010 Apr 23
1
client mount fails on boot under debian lenny...
...to mount the filesystem before the backend 10ge
interface comes up, so it gets a "No route to host" and immediately
aborts.
I know I can dump a "mount -a" into /etc/rc.local, but I'm hoping there's a
more elegant way to handle this scenario. The fstab entry already contains
the options noatime,_netdev.
Thanks.
Mohan
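One common workaround on Debian-style systems (only a sketch, with eth1 and the addresses standing in for the real 10ge backend interface) is to mount the _netdev entries from a post-up hook in /etc/network/interfaces, so they are attempted only once the interface is actually up:
auto eth1
iface eth1 inet static
    address 10.0.0.2
    netmask 255.255.255.0
    post-up mount -a -O _netdev || true    # mount fstab entries flagged _netdev once the link is up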
2009 Oct 27
1
/etc/rc.local and /etc/fstab
...r* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
touch /var/lock/subsys/local
modprobe hangcheck-timer hangcheck_tick=1 hangcheck_margin=10
hangcheck_reboot=1
mount -t ocfs2 -o datavolume,nointr,_netdev,noatime /dev/mapper/mpath0
/u02
mount -t ocfs2 -o datavolume,nointr,_netdev /dev/mapper/mpath1 /u03
Thanks!
-Reid
Reid McKinley
2007 Aug 22
5
Slow concurrent actions on the same LVM logical volume
Hi 2 all !
I have problems with concurrent filesystem actions on an ocfs2
filesystem which is mounted by 2 nodes. OS=RH5ES and OCFS2=1.2.6.
For example: if I have an LV called testlv which is mounted on /mnt on both
servers, and I do a "dd if=/dev/zero of=/mnt/test.a bs=1024
count=1000000" on server 1 while running a "du -hs /mnt/test.a" at the
same time, it takes about 5 seconds for du -hs to execute:
270M
2008 Feb 14
9
how do you mount mountconf (i.e. 1.6) lustre on your servers?
...kle in that you cannot
mount the devices until the network is up, and yet most distributions
do /etc/fstab (i.e. "local") mounting before the network, unless some
mechanism is used to filter out entries that need network connectivity
first. The traditional way of doing this has been to add _netdev to
those entries, so that they are delayed until the network is up.
Cheers,
b.
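As a concrete illustration of that convention, a server-side fstab line for a mountconf (1.6) target might look like the following; the device and mount point are hypothetical:
# hypothetical MDT device and mount point; _netdev delays the mount until the network is up
/dev/sdb1  /mnt/mdt  lustre  _netdev  0 0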
2005 Jun 28
3
OCFS2 volumes do no mount automatically on RHEL4 also
On RHEL4 as well, the o2cb service starts after the _netdev devices are
mounted, so the mount fails. Should the o2cb service start earlier to fix this?
Zafar
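A quick way to confirm the ordering on a SysV init system is to compare the S-numbers of the relevant scripts; a minimal check, assuming the stock script names:
ls -1 /etc/rc3.d/ | grep -Ei 'o2cb|ocfs2|netfs|network'
chkconfig --list o2cb
# o2cb (and ocfs2) need a lower S-number than the script that performs the _netdev mounts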
2009 May 27
2
Problem with OCFS2 on RHEL5.0 while installing CRS 10.2.01
...r /usr ext3 defaults 1 2
LABEL=/boot /boot ext3 defaults 1 2
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
LABEL=SWAP-sda5 swap swap defaults 0 0
LABEL=SWAP-sdb1 swap swap defaults 0 0
/dev/sdc1 /CRS_DISK ocfs2 _netdev,defaults 0 0
/dev/sdd1 /ORA_DATA ocfs2 _netdev,defaults 0 0
[root@eregtest1 client]# cat /etc/mtab
/dev/sda6 / ext3 rw 0 0
proc /proc proc rw 0 0
sysfs /sys sysfs rw 0 0
devpts /dev/pts devpts rw,gid=5,mode=620 0 0
/dev/sda9 /software ext3 rw 0 0
/dev/sda8 /opt ext3 rw 0 0
/dev/sda7 /tmp ext3...
2018 Apr 16
2
Getting glusterfs to expand volume size to brick size
..._pylon_block1 /mnt/pylon_block1 ext4
defaults 0 2
/dev/disk/by-id/scsi-0Linode_Volume_pylon_block2 /mnt/pylon_block2 ext4
defaults 0 2
/dev/disk/by-id/scsi-0Linode_Volume_pylon_block3 /mnt/pylon_block3 ext4
defaults 0 2
localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data1 glusterfs
defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data2 glusterfs
defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data3 glusterfs
defaults,_netdev,fopen-keep-cache,direct-io-mode=enable...
2018 Apr 17
5
Getting glusterfs to expand volume size to brick size
...v/disk/by-id/scsi-0Linode_Volume_pylon_block2 /mnt/pylon_block2 ext4
>> defaults 0 2
>> /dev/disk/by-id/scsi-0Linode_Volume_pylon_block3 /mnt/pylon_block3 ext4
>> defaults 0 2
>>
>> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data1 glusterfs
>> defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
>> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data2 glusterfs
>> defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
>> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data3 glusterfs
>> defaults,_netdev,fop...
2015 Oct 10
4
filesystem mounting fails at boot
_netdev
The filesystem resides on a device that requires network
access (used to prevent the system from attempting to mount these
filesystems until the network has been enabled on the system).
This device is not a network device (this is a SAN, not a NAS). To the OS it
looks like a normal SCSI...
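If the real issue is that the device is simply not present yet at mount time (rather than needing the network), one hedged alternative on a systemd-based release is to let the mount wait for the device and tolerate its absence; the device name below is hypothetical:
# hypothetical SAN LUN; nofail keeps boot going, the timeout waits for the device to appear
/dev/mapper/san_lun1  /data  ext3  defaults,nofail,x-systemd.device-timeout=90s  0 2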
2011 Mar 11
1
RedHat 5.5 and automounting with fstab
...quite work.
I have two servers, 192.168.104.151 and 192.168.104.152. Both are running
GlusterD and both provide one of the bricks for the volume. Both also mount
the volume locally.
The working /etc/fstab entry on 192.168.104.151 is:
192.168.104.151:/gluster-data /var/gluster/data glusterfs
auto,_netdev,rw,allow_other,default_permissions,max_read=131072 0 0
Likewise, the working /etc/fstab entry on 192.168.104.152 is:
192.168.104.152:/gluster-data /var/gluster/data glusterfs
auto,_netdev,rw,allow_other,default_permissions,max_read=131072 0 0
If you don't have the "auto,_netdev" o...
2009 Mar 25
0
CentOS won't shutdown ... or do anything else
...e devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/sda2 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/emcpowerd on /EMC/SATA/AX4-5i/LUN0 type ext3
(rw,_netdev,noatime)
/EMC/SATA/AX4-5i/LUN0/path/var/spool/postfix on /var/spool/postfix
type ext3 (rw,bind,_netdev,noatime)
/dev/emcpowerk on /EMC/SATA/AX4-5i/LUN4 type ext3
(rw,_netdev,noatime)
/EMC/SATA/AX4-5i/LUN4 on <path> type ext3 (rw,bind,_netdev,noatime)
/dev/emcpowerg on /EMC/SATA/AX4-5i/LUN5...
2018 Apr 18
1
Replicated volume read request are served by remote brick
...erfs/bricks/storage/mountpoint
49153 0 Y 5301
Brick worker1:/glusterfs/bricks/storage/mountpoint
49153 0 Y 3002
The volume is mounted like this:
On worker1 node /etc/fstab
worker1:/storage /data/storage/ glusterfs defaults,_netdev
0 0
On master node /etc/fstab
master:/storage /data/storage/ glusterfs defaults,_netdev
0 0
When I add read load (many small files) on the volume mounted on the master
node, CPU usage looks like this:
On master node: glusterfs ~ 50%
On master node: glusterfsd ~ 25%
On worker1 node...
2018 Apr 17
0
Getting glusterfs to expand volume size to brick size
...t; defaults 0 2
> /dev/disk/by-id/scsi-0Linode_Volume_pylon_block2 /mnt/pylon_block2 ext4
> defaults 0 2
> /dev/disk/by-id/scsi-0Linode_Volume_pylon_block3 /mnt/pylon_block3 ext4
> defaults 0 2
>
> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data1 glusterfs
> defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data2 glusterfs
> defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data3 glusterfs
> defaults,_netdev,fopen-keep-cache,di...
2018 Apr 17
0
Getting glusterfs to expand volume size to brick size
...e_Volume_pylon_block2 /mnt/pylon_block2 ext4
>>> defaults 0 2
>>> /dev/disk/by-id/scsi-0Linode_Volume_pylon_block3 /mnt/pylon_block3 ext4
>>> defaults 0 2
>>>
>>> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data1 glusterfs
>>> defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
>>> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data2 glusterfs
>>> defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
>>> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data3 glusterfs
>>> defa...
2008 Nov 19
2
noauto option ignored in CentOS 5.1?
I have worked quite a bit with CentOS 4.x with
SAN, multipathing, LVM etc. The way I mount my
file systems is with a script that is called during
startup: it runs fsck, imports the physical volumes
and volume groups, activates the logical volumes, creates
the mount point if needed, then mounts the volume. I mainly
made it for software iSCSI due to the iscsi stack loading
after the system mount
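A minimal sketch of such a startup script, with the volume group, logical volume, and mount point names purely hypothetical:
#!/bin/sh
# bring SAN-backed LVM volumes online after the iscsi stack has loaded, then mount them
vgscan                                    # discover physical volumes and volume groups
vgchange -ay sanvg                        # activate the logical volumes in the (hypothetical) sanvg group
fsck -a /dev/sanvg/datalv                 # check the filesystem before mounting
mkdir -p /data                            # create the mount point if needed
mount -o noatime /dev/sanvg/datalv /data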
2020 Sep 28
1
Centos8: Glusterd does not start correctly when I start up or reboot all servers together
...create gfsvol1 replica 2 virt1:/gfsvol1/brick1
virt2:/gfsvol1/brick1 force
gluster volume start gfsvol1
gluster volume info gfsvol1
gluster volume status gfsvol1
gluster volume heal gfsvol1
# add to /etc/fstab
vi /etc/fstab
virt1:/gfsvol2 /virt-gfs glusterfs defaults,noatime,_netdev 0
0
mkdir /virt-gfs
mount -a
All works fine, but when I start or restart all the servers together, the
glusterd service starts, yet if I try to mount the volume, gluster is not
working.
At this point, if I restart the glusterd service and run "mount -a",
everything works fine.
Seems to be a boot network prob...
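On a systemd-based release such as CentOS 8, one hedged way to express that dependency is directly in the fstab options, so the mount waits for the local glusterd service and is only attempted on first access; the extra options here are a sketch, not a verified fix:
virt1:/gfsvol2 /virt-gfs glusterfs defaults,noatime,_netdev,x-systemd.requires=glusterd.service,x-systemd.automount 0 0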