Hi,
I installed Lustre FS on one of my servers. The file system installed
successfully, but I am not able to run the scripts or the test scripts
located inside the tests directory. No error is displayed; after executing
part of the script, it simply hangs.
Here is the output when I ran lconf --reformat local.xml:
loading module: libcfs srcdir None devdir libcfs
loading module: lnet srcdir None devdir lnet
loading module: ksocklnd srcdir None devdir klnds/socklnd
loading module: lvfs srcdir None devdir lvfs
loading module: obdclass srcdir None devdir obdclass
loading module: ptlrpc srcdir None devdir ptlrpc
loading module: ost srcdir None devdir ost
loading module: ldiskfs srcdir None devdir ldiskfs
loading module: fsfilt_ldiskfs srcdir None devdir lvfs
loading module: obdfilter srcdir None devdir obdfilter
loading module: mdc srcdir None devdir mdc
loading module: osc srcdir None devdir osc
loading module: lov srcdir None devdir lov
loading module: mds srcdir None devdir mds
loading module: llite srcdir None devdir llite
NETWORK: NET_dev4_tcp .dev_tcp_UUID tcp dev4
OSD: ost1-test ost1-test_UUID obdfilter /tmp/ost1-test 100000 ldiskfs no 0 0
OST mount options: errors=remount-ro
OSD: ost2-test ost2-test_UUID obdfilter /tmp/ost2-test 100000 ldiskfs no 0 0
OST mount options: errors=remount-ro
MDSDEV: mds-test mds-test_UUID /tmp/mds-test ldiskfs yes
recording clients for filesystem: FS_fsname_UUID
Recording log mds-test on mds-test
LOV: lov_mds-test b0d6f_lov_mds-test_1b4f231e55 mds-test_UUID 0 1048576 0 0
[u'ost1-test_UUID', u'ost2-test_UUID'] mds-test
OSC: OSC_dev_ost1-test_mds-test b0d6f_lov_mds-test_1b4f231e55 ost1-test_UUID
OSC: OSC_dev_ost2-test_mds-test b0d6f_lov_mds-test_1b4f231e55 ost2-test_UUID
End recording log mds-test on mds-test
MDSDEV: mds-test mds-test_UUID /tmp/mds-test ldiskfs 50000 yes
MDS mount options: errors=remount-ro
When I press Ctrl+C, I get:
Traceback (most recent call last):
  File "/usr/sbin/lconf", line 2838, in ?
    main()
  File "/usr/sbin/lconf", line 2831, in main
    doHost(lustreDB, node_list)
  File "/usr/sbin/lconf", line 2274, in doHost
    for_each_profile(node_db, prof_list, doSetup)
  File "/usr/sbin/lconf", line 2054, in for_each_profile
    operation(services)
  File "/usr/sbin/lconf", line 2074, in doSetup
    n.prepare()
  File "/usr/sbin/lconf", line 1324, in prepare
    setup ="%s %s %s %s %s" %(blkdev, self.fstype, self.name,
  File "/usr/sbin/lconf", line 397, in newdev
    self.setup(name, setup)
  File "/usr/sbin/lconf", line 376, in setup
    self.run(cmds)
  File "/usr/sbin/lconf", line 278, in run
    ready = select.select([outfd,errfd],[],[]) # Wait for input
KeyboardInterrupt
The script I use to generate local.xml is as follows:
#!/bin/sh
# local.sh
# Create node
rm -f local.xml
lmc -m local.xml --add node --node dev4
lmc -m local.xml --add net --node dev4 --nid dev4 --nettype tcp
# Configure MDS
lmc -m local.xml --format --add mds --node dev4 --mds mds-test --fstype ext3 \
    --dev /tmp/mds-test --size 50000
# Configure OSTs
lmc -m local.xml --add lov --lov lov-test --mds mds-test --stripe_sz 1048576 \
    --stripe_cnt 0 --stripe_pattern 0
lmc -m local.xml --add ost --node dev4 --lov lov-test --ost ost1-test \
    --fstype ext3 --dev /tmp/ost1-test --size 100000
lmc -m local.xml --add ost --node dev4 --lov lov-test --ost ost2-test \
    --fstype ext3 --dev /tmp/ost2-test --size 100000
# Configure client
lmc -m local.xml --add mtpt --node dev4 --path /mnt/lustre --mds mds-test \
    --lov lov-test
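
For reference, here is a minimal sketch of how the script above would typically
be run; running it from the directory where local.xml should live is an
assumption, and lconf --reformat local.xml is the command already shown above.

# Hypothetical usage of local.sh -- the working directory is an assumption.
sh local.sh                    # regenerate local.xml via the lmc commands
lconf --reformat local.xml     # format the backing files and start the services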
If anyone has any idea, please help.
My drive is about 105 GB.
Thanks,
Prajith
-- 
Lals
Hi,

What does dmesg say?

-- 
Cheers,
     Wang Yibin
Looks like your lnet is not properly configured.
Why is 'lo' being used as the netif?
What does 'lctl list_nids' show?
Is there a line in /etc/modprobe.conf like 'options lnet networks=tcp accept=all'?

-- 
Cheers,
     Wang Yibin
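
To make the checks above concrete, here is a minimal sketch; the interface
name eth0 and the exact networks= value are illustrative assumptions, not
settings taken from the poster's machine.

# Sketch of the LNET checks suggested above -- eth0 and the networks=
# value are assumptions for illustration.
lctl list_nids                 # healthy TCP setup shows <real-ip>@tcp, not 127.0.0.1@tcp
grep lnet /etc/modprobe.conf   # expect something like:
                               #   options lnet networks=tcp0(eth0) accept=all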
Hi,
This is the dmesg output:
LustreError: Unexpected error -11 connecting to 127.0.0.1@tcp at host
127.0.0.1 on port 988
LustreError: previously skipped 4 similar messages
LustreError: 4752:0:(client.c:951:ptlrpc_expire_one_request()) @@@ timeout
(sent at 1151562883, 0s ago) req@c30d4000 x4365/t0 o8->ost1-test_UUID@4vm1_UUID:6
lens 240/272 ref 1 fl Rpc:/0/0 rc 0/0
LustreError: 4752:0:(client.c:951:ptlrpc_expire_one_request()) previously
skipped 47 similar messages
audit(1151562958.904:525): avc:  denied  { rawip_send } for  pid=4485
comm="acceptor_988" saddr=127.0.0.1 src=988 daddr=127.0.0.1 dest=1023
netif=lo scontext=system_u:object_r:unlabeled_t
tcontext=system_u:object_r:netif_lo_t tclass=netif
audit(1151563083.889:526): avc:  denied  { rawip_send } for  pid=4485
comm="acceptor_988" saddr=127.0.0.1 src=988 daddr=127.0.0.1 dest=1023
netif=lo scontext=system_u:object_r:unlabeled_t
tcontext=system_u:object_r:netif_lo_t tclass=netif
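
A quick way to narrow down why the requests go to 127.0.0.1 (and whether the
SELinux avc denials above interfere) could be checks along these lines;
treating these two as the suspects is only a working assumption at this point
in the thread.

# Diagnostic sketch -- hostname/IP mapping and SELinux are assumed suspects.
hostname                       # node name, e.g. dev4
getent hosts "$(hostname)"     # should resolve to the real NIC address, not 127.0.0.1
lctl list_nids                 # should show <real-ip>@tcp rather than 127.0.0.1@tcp
getenforce                     # Enforcing/Permissive, relevant to the avc denials above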
Thanks,
Prajith
On 6/29/06, Wang Yibin <wangyb@clusterfs.com> wrote:
> Hi,
>
> What does dmesg say?
>
> Prajith Lal wrote:
> > Hi,
> > I installed Lustre FS on one of my servers. The file system is
> > installed successfully but I cannot able to run scripts  or the test
> > scripts which are located inside the tests directory. There is no
> > error displayed. After excecuting a part of the script its going hang
> > Here is the output when I ran the local.xml with lconf ---reformat
> > local.xml
> >
> > loading module: libcfs srcdir None devdir libcfs
> > loading module: lnet srcdir None devdir lnet
> > loading module: ksocklnd srcdir None devdir klnds/socklnd
> > loading module: lvfs srcdir None devdir lvfs
> > loading module: obdclass srcdir None devdir obdclass
> > loading module: ptlrpc srcdir None devdir ptlrpc
> > loading module: ost srcdir None devdir ost
> > loading module: ldiskfs srcdir None devdir ldiskfs
> > loading module: fsfilt_ldiskfs srcdir None devdir lvfs
> > loading module: obdfilter srcdir None devdir obdfilter
> > loading module: mdc srcdir None devdir mdc
> > loading module: osc srcdir None devdir osc
> > loading module: lov srcdir None devdir lov
> > loading module: mds srcdir None devdir mds
> > loading module: llite srcdir None devdir llite
> > NETWORK: NET_dev4_tcp .dev_tcp_UUID tcp dev4
> > OSD: ost1-test ost1-test_UUID obdfilter /tmp/ost1-test 100000 ldiskfs
> > no 0 0
> > OST mount options: errors=remount-ro
> > OSD: ost2-test ost2-test_UUID obdfilter /tmp/ost2-test 100000 ldiskfs
> > no 0 0
> > OST mount options: errors=remount-ro
> > MDSDEV: mds-test mds-test_UUID /tmp/mds-test ldiskfs yes
> > recording clients for filesystem: FS_fsname_UUID
> > Recording log mds-test on mds-test
> > LOV: lov_mds-test b0d6f_lov_mds-test_1b4f231e55 mds-test_UUID 0
> > 1048576 0 0 [u''ost1-test_UUID'',
u''ost2-test_UUID''] mds-test
> > OSC: OSC_dev_ost1-test_mds-test b0d6f_lov_mds-test_1b4f231e55
> > ost1-test_UUID
> > OSC: OSC_dev_ost2-test_mds-test b0d6f_lov_mds-test_1b4f231e55
> > ost2-test_UUID
> > End recording log mds-test on mds-test
> > MDSDEV: mds-test mds-test_UUID /tmp/mds-test ldiskfs 50000 yes
> > MDS mount options: errors=remount-ro
> >
> > when I press Ctrl+C
> > Traceback (most recent call last):
> >   File "/usr/sbin/lconf", line 2838, in ?
> >     main()
> >   File "/usr/sbin/lconf", line 2831, in main
> >     doHost(lustreDB, node_list)
> >   File "/usr/sbin/lconf", line 2274, in doHost
> >     for_each_profile(node_db, prof_list, doSetup)
> >   File "/usr/sbin/lconf", line 2054, in for_each_profile
> >     operation(services)
> >   File "/usr/sbin/lconf", line 2074, in doSetup
> >     n.prepare()
> >   File "/usr/sbin/lconf", line 1324, in prepare
> >     setup ="%s %s %s %s %s" %(blkdev, self.fstype, self.name
> > <http://self.name>,
> >   File "/usr/sbin/lconf", line 397, in newdev
> >     self.setup(name, setup)
> >   File "/usr/sbin/lconf", line 376, in setup
> >     self.run(cmds)
> >   File "/usr/sbin/lconf", line 278, in run
> >     ready = select.select([outfd,errfd],[],[]) # Wait for input
> > KeyboardInterrupt
> >
> > my local.xml file is as follows
> > #!/bin/sh
> >
> > # local.sh
> >
> > # Create node
> > rm -f local.xml
> > lmc -m local.xml --add node --node dev4
> > lmc -m local.xml --add net --node dev4 --nid dev4 --nettype tcp
> >
> > # Configure MDS
> > lmc -m local.xml --format --add mds --node dev4 --mds mds-test
> > --fstype ext3 --dev /tmp/mds-test --size 50000
> >
> > # Configure OSTs
> > lmc -m local.xml --add lov --lov lov-test --mds mds-test --stripe_sz
> > 1048576 --stripe_cnt 0 --stripe_pattern 0
> > lmc -m local.xml --add ost --node dev4 --lov lov-test --ost ost1-test
> > --fstype ext3 --dev /tmp/ost1-test --size 100000
> > lmc -m local.xml --add ost --node dev4 --lov lov-test --ost ost2-test
> > --fstype ext3 --dev /tmp/ost2-test --size 100000
> >
> > # Configure client
> > lmc -m local.xml --add mtpt --node dev4 --path /mnt/lustre --mds
> > mds-test --lov lov-test
> >
> > if any idea please resolve
> >
> > my drive is about 105GB
> >
> > Thanks,
> > Prajith
> >
> > --
> > Lals
> >
------------------------------------------------------------------------
> >
> > _______________________________________________
> > Lustre-discuss mailing list
> > Lustre-discuss@clusterfs.com
> > https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
> >
>
>
> --
> Cheers,
>      Wang Yibin
>
>
-- 
Lals
On Jun 29, 2006 10:12 +0530, Prajith Lal wrote:
> LustreError: Unexpected error -11 connecting to 127.0.0.1@tcp at host
> 127.0.0.1 on port 988

Your /etc/hosts file is likely mapping your hostname to "127.0.0.1".
This is a common RH problem.

Cheers, Andreas
-- 
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.
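
For illustration, a sketch of the kind of /etc/hosts change being described;
the address 192.168.1.50 and the hostname dev4 are assumed values, not the
poster's actual configuration.

# /etc/hosts sketch -- 192.168.1.50 is an assumed address.
#
# Problematic form (hostname folded into the loopback line):
#   127.0.0.1   dev4 localhost.localdomain localhost
#
# Corrected form: loopback stays localhost-only, hostname maps to the real NIC.
127.0.0.1      localhost.localdomain localhost
192.168.1.50   dev4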
Hi,
It is actually a virtual machine. There is a physical server, and a virtual
server with a Linux OS is configured on it using VMware; I am working on that
virtual machine.

Thanks,
Prajith

On 6/29/06, Wang Yibin <wangyb@clusterfs.com> wrote:
> Looks like your lnet is not properly configured.
> Why is 'lo' being used as the netif?
> What does 'lctl list_nids' show?
> Is there a line in /etc/modprobe.conf like 'options lnet networks=tcp
> accept=all'?

-- 
Lals
Hi,
Yes! That was the problem. I changed the mapping to the system's IP and it
worked smoothly. Thank you very much, Andreas, thanks a lot.

Thanks,
Prajith

On 6/29/06, Andreas Dilger <adilger@clusterfs.com> wrote:
> On Jun 29, 2006 10:12 +0530, Prajith Lal wrote:
> > LustreError: Unexpected error -11 connecting to 127.0.0.1@tcp at host
> > 127.0.0.1 on port 988
>
> Your /etc/hosts file is likely mapping your hostname to "127.0.0.1".
> This is a common RH problem.
>
> Cheers, Andreas

-- 
Lals
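
As a closing sketch, one way the fix could be verified before re-running the
test scripts; the 192.168.1.50 address is the same illustrative assumption as
above.

# Verification sketch -- the address shown is illustrative.
getent hosts "$(hostname)"     # should now print the real IP, not 127.0.0.1
lctl list_nids                 # should report e.g. 192.168.1.50@tcp
lconf --reformat local.xml     # should proceed past the MDS setup instead of hanging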