Displaying 15 results from an estimated 15 matches for "nod1".
2010 Jul 09
1
installing packages over ssh without X forwarding
...Rtmpk1XxTl/downloaded_packages’
Updating HTML index of packages in '.Library'
Warning message:
In install.packages("cairoDevice", dep = T) :
installation of package 'cairoDevice' had non-zero exit status
When I connect to the remote computer using ssh -X root@nod1 to enable X11
forwarding, the installation works without problems. This would however
require manually connecting to each administered computer and doing the
installation. cssh, which I use now to install packages on multiple
computers, does not enable X11 forwarding. I have also tested installation
using R...
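A hedged sketch of a possible workaround, not taken from the thread: the install can be scripted over a plain ssh session with a CRAN mirror set explicitly, so R never needs an X11 display for a mirror chooser. The mirror URL here is an assumption, and this does not rule out a missing GTK build toolchain as the real cause of the non-zero exit status.
# Sketch: non-interactive install over plain ssh, no X11 forwarding needed
ssh root@nod1 "Rscript -e 'install.packages(\"cairoDevice\", repos = \"https://cran.r-project.org\", dependencies = TRUE)'"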
2017 Jul 30
2
Possible stale .glusterfs/indices/xattrop file?
...e files which need healing using the "heal <volume> info" command and it still shows that very same GFID on node2 to be healed. So nothing changed here.
The file /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397 is only on node2 and not on my nod1 nor on my arbiternode. This file seems to be a regular file and not a symlink. Here is the output of the stat command on it from my node2:
File: ‘/data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397’
Size: 0 Blocks: 1 IO Block: 512 regular empty file
Device: 25h/37d...
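For context, entries under .glusterfs/indices/xattrop are normally hard links named after the GFID of a file with pending changes, so the link count and any sibling paths are worth checking. A minimal sketch run on node2, reusing the paths quoted above:
# A link count of 1 would support the stale-entry theory
stat -c '%h %n' /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397
# Find any other path on the brick sharing the same inode
find /data/myvolume/brick -samefile /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397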
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
...sing the "heal <volume>
> info" command and it still shows that very same GFID on node2 to be
> healed. So nothing changed here.
>
> The file
> /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397
> is only on node2 and not on my nod1 nor on my arbiternode. This file
> seems to be a regular file and not a symlink. Here is the output of
> the stat command on it from my node2:
>
> File:
> ‘/data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397’
> Size: 0 Blocks: 1...
2010 Oct 20
1
OCFS2 + iscsi: another node is heartbeating in our slot (over scst)
Hi,
I'm building a cluster containing two nodes with a separate common storage
server.
On the storage server I have a volume with an ocfs2 filesystem, and I am
sharing this volume via an iscsi target.
When a node is connected to the target, I can mount the volume locally on
the node and use it.
Unfortunately, on the storage server ocfs2 logged to dmesg:
Oct 19 22:21:02 storage kernel: [ 1510.424144]
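"Another node is heartbeating in our slot" usually means two machines are writing to the same heartbeat slot, for example because the storage server itself still has the volume mounted while exporting it, or because two nodes share a node number. A minimal sketch of two checks, with the device name and config path as assumptions:
# Full detect scans the heartbeat region and lists the nodes using the volume
mounted.ocfs2 -f /dev/sdX
# Look for duplicate node numbers in the cluster layout
grep -E 'node:|number|name' /etc/ocfs2/cluster.conf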
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
...hich need healing using the "heal <volume> info" command and it still shows that very same GFID on node2 to be healed. So nothing changed here.
>> The file /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397 is only on node2 and not on my nod1 nor on my arbiternode. This file seems to be a regular file and not a symlink. Here is the output of the stat command on it from my node2:
>> File: ‘/data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397’
>> Size: 0 Blocks: 1 IO Block: 512 regular empty fil...
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
...info" command and it still shows that very same GFID
>>> on node2 to be healed. So nothing changed here.
>>>
>>> The file
>>> /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397
>>> is only on node2 and not on my nod1 nor on my arbiternode. This file
>>> seems to be a regular file and not a symlink. Here is the output of
>>> the stat command on it from my node2:
>>>
>>> File:
>>> ‘/data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c39...
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
...d to be healing using the "heal <volume> info" command and it still shows that very same GFID on node2 to be healed. So nothing changed here.
>>>> The file /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397 is only on node2 and not on my nod1 nor on my arbiternode. This file seems to be a regular file and not a symlink. Here is the output of the stat command on it from my node2:
>>>> File: ‘/data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397’
>>>> Size: 0 Blocks: 1 IO Block: 512 r...
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
...hat very same
>>>>> GFID on node2 to be healed. So nothing changed here.
>>>>>
>>>>> The file
>>>>> /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397
>>>>> is only on node2 and not on my nod1 nor on my arbiternode. This
>>>>> file seems to be a regular file and not a symlink. Here is the
>>>>> output of the stat command on it from my node2:
>>>>>
>>>>> File:
>>>>> ‘/data/myvolume/brick/.glusterfs/indices/xat...
2017 Jul 30
0
Possible stale .glusterfs/indices/xattrop file?
On 07/29/2017 04:36 PM, mabi wrote:
> Hi,
>
> Sorry for mailing again but as mentioned in my previous mail, I have
> added an arbiter node to my replica 2 volume and it seems to have gone
> fine except for the fact that there is one single file which needs
> healing and does not get healed, as you can see from the output of
> a "heal info":
>
> Brick
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
...healing using the "heal <volume> info" command and it still shows that very same GFID on node2 to be healed. So nothing changed here.
>>>>>> The file /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397 is only on node2 and not on my nod1 nor on my arbiternode. This file seems to be a regular file and not a symlink. Here is the output of the stat command on it from my node2:
>>>>>> File: ‘/data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397’
>>>>>> Size: 0 Blocks: 1...
2017 Jul 29
2
Possible stale .glusterfs/indices/xattrop file?
Hi,
Sorry for mailing again but as mentioned in my previous mail, I have added an arbiter node to my replica 2 volume and it seems to have gone fine except for the fact that there is one single file which needs healing and does not get healed, as you can see from the output of a "heal info":
Brick node1.domain.tld:/data/myvolume/brick
Status: Connected
Number of entries: 0
Brick
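For reference, heal status and a retriggered crawl are normally driven from the gluster CLI; the volume name myvolume is inferred from the brick paths in this thread:
# Show entries still pending heal
gluster volume heal myvolume info
# Ask the self-heal daemons to crawl the whole volume and heal what they can
gluster volume heal myvolume full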
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
...ame GFID on node2 to be healed. So nothing changed here.
>>>>>>>
>>>>>>> The file
>>>>>>> /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397
>>>>>>> is only on node2 and not on my nod1 nor on my arbiternode. This
>>>>>>> file seems to be a regular file and not a symlink. Here is the
>>>>>>> output of the stat command on it from my node2:
>>>>>>>
>>>>>>> File:
>>>>>>> ‘/da...
2010 Aug 09
2
HOWTO samba4 centos5.5 named dnsupdate drbd simple failover
...:gfs2"
UUID: 885E2E87-90CE-B916-8A73-D66336CD98C0
Now start gfs on node2: service gfs start.
Now create the directory windows in / on both nodes: mkdir /windows.
Then mount the gfs filesystem on both nodes at /windows:
mount -t gfs2 /dev/drbd0 /windows
Now let's do some testing. On nod1:
cd /windows
touch test.txt
[root@node1 windows]# ls
test.txt
On node2 you should now see in /windows:
[root@node2 windows]# ls
test.txt
On node2: vim test.txt, press i, type "this is a test from node2", then :wq.
On node1: cat /windows/test.txt
[root@node1 windows]# cat /windows/test.txt
this is a test fr...
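Condensed into a runnable transcript of the same test (nothing new added; the prompts above imply a root shell on each node):
# On node1: create a file on the shared GFS2 mount
cd /windows
touch test.txt
ls                      # -> test.txt
# On node2: the file is already visible; append a line to it
ls /windows             # -> test.txt
echo "this is a test from node2" >> /windows/test.txt
# Back on node1: read the change made on node2
cat /windows/test.txt   # -> this is a test from node2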
2010 Aug 16
1
WG: HOWTO samba4 centos5.5 named dnsupdate drbd simple failover
...:gfs2"
UUID: 885E2E87-90CE-B916-8A73-D66336CD98C0
Now start gfs on node2: service gfs start.
Now create the directory windows in / on both nodes: mkdir /windows.
Then mount the gfs filesystem on both nodes at /windows:
mount -t gfs2 /dev/drbd0 /windows
Now let's do some testing. On nod1:
cd /windows
touch test.txt
[root@node1 windows]# ls
test.txt
On node2 you should now see in /windows:
[root@node2 windows]# ls
test.txt
On node2: vim test.txt, press i, type "this is a test from node2", then :wq.
On node1: cat /windows/test.txt
[root@node1 windows]# cat /windows/test.txt
this is a test fr...
2010 Oct 05
0
WG: HOWTO samba4 centos5.5 named dnsupdate drbd simple failover
...:gfs2"
UUID: 885E2E87-90CE-B916-8A73-D66336CD98C0
Now start gfs on node2: service gfs start.
Now create the directory windows in / on both nodes: mkdir /windows.
Then mount the gfs filesystem on both nodes at /windows:
mount -t gfs2 /dev/drbd0 /windows
Now let's do some testing. On nod1:
cd /windows
touch test.txt
[root@node1 windows]# ls
test.txt
On node2 you should now see in /windows:
[root@node2 windows]# ls
test.txt
On node2: vim test.txt, press i, type "this is a test from node2", then :wq.
On node1: cat /windows/test.txt
[root@node1 windows]# cat /windows/test.txt
this is a test fr...