Displaying 20 results from an estimated 29 matches for "49157".
2012 Dec 05
1
libvirt migrate error
Hi all,
When I try to migrate `vm0` from one host (192.168.1.102) to another
(192.168.1.200), whose hostname is `ubuntu1204`, the command is:
`sudo virsh migrate --live vm0 qemu+tcp://192.168.1.200/system`
The error is: unable to connect to server at 'ubuntu1204:49157':
Connection refused.
However, connecting to 192.168.1.200 with virsh over TCP works fine.
I don't know what port 49157 is. How can I fix it?
version: libvirt 0.9.8
system: ubuntu1204
Thanks in advance for any reply.
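Port 49157 is not a fixed service: for peer-to-peer TCP migration, libvirt has QEMU on the destination listen on an ephemeral port from the 49152-49215 range, so a destination firewall that only permits libvirtd's own port (16509/tcp) will refuse the migration stream even though `virsh -c qemu+tcp://...` works. A likely fix, as a sketch assuming an iptables-based firewall on 192.168.1.200:

# open libvirt's default migration port range on the destination host
sudo iptables -I INPUT -p tcp --dport 49152:49215 -j ACCEPT

(Newer libvirt releases also expose migration_port_min/migration_port_max in /etc/libvirt/qemu.conf for pinning the range to a narrower firewall rule.)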
2010 Feb 26
1
Migration error
...rom one libvirt 0.7.6-1 (qemu-kvm 0.11.1+dfsg-1) to another libvirt
0.7.6-2 (qemu-kvm 0.11.1+dfsg-1) connected with SSH. I have followed the
prerequisites (same shared storage, same path, same network conf ...), but
when I migrate I get the following error: operation failed:
migration to 'tcp:x.x.x.x:49157' failed: migration failed
DETAIL:
Unable to migrate guest:
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/migrate.py", line 457, in _async_migrate
    vm.migrate(dstconn, migrate_uri, rate, live, secure)
  File "/usr/share/virt-manager/virtMana...
2017 Jun 27
2
Gluster volume not mounted
...gine
gluster volume status data
Status of volume: data
Gluster process                                   TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.170.141:/gluster_bricks/data/data     49157      0         Y     11967
Brick 192.168.170.143:/gluster_bricks/data/data     49157      0         Y     2901
Brick 192.168.170.147:/gluster_bricks/data/data     49158      0         Y     2626
Self-heal Daemon on localhost...
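Even with all bricks shown Online, a client mount can still fail if the client cannot reach the brick ports themselves: clients talk to glusterd on 24007/tcp to fetch the volfile, then connect directly to each brick's port (49157/49158 here). A reachability sketch from the failing client, assuming `nc` is installed:

nc -zv 192.168.170.141 24007   # glusterd management port
nc -zv 192.168.170.141 49157   # brick ports from the status output
nc -zv 192.168.170.147 49158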
2018 May 30
2
RDMA inline threshold?
...ocess is locked up from the logs.
Status of volume: rhev_vms_primary
Gluster process                                                      TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick spidey.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary          0       49157      Y     15666
Brick deadpool.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary        0       49156      Y     2542
Brick groot.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary           0       49156      Y     2180
Self-heal Daemon on localhost                                           N/A       N/A       N...
2014 Apr 16
1
Possible SYN flooding
Anyone seen this problem?
server
Apr 16 14:34:28 nas1 kernel: [7506182.154332] TCP: TCP: Possible SYN flooding on port 49156. Sending cookies. Check SNMP counters.
Apr 16 14:34:31 nas1 kernel: [7506185.142589] TCP: TCP: Possible SYN flooding on port 49157. Sending cookies. Check SNMP counters.
Apr 16 14:34:53 nas1 kernel: [7506207.126193] TCP: TCP: Possible SYN flooding on port 49159. Sending cookies. Check SNMP counters.
client
Apr 16 14:34:21 charlie5 GlusterFS[6718]: [2014-04-16 06:34:21.710137] C [client-handshake.c:127:rpc_client_ping_timer...
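SYN cookies kicking in on the brick ports (49156-49159 above) means a listen queue overflowed; on a Gluster server this is often a burst of legitimate client reconnects rather than an actual attack. If the load is genuine, raising the SYN backlog is the usual mitigation; a sketch of the relevant sysctls:

sysctl net.ipv4.tcp_max_syn_backlog net.core.somaxconn   # inspect current values
sysctl -w net.ipv4.tcp_max_syn_backlog=4096              # raise for this boot
# persist by adding the setting to /etc/sysctl.conf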
2018 May 30
0
RDMA inline threshold?
...rocess TCP Port RDMA Port Online Pid
> ------------------------------------------------------------------------------
> Brick spidey.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary 0 49157 Y 15666
> Brick deadpool.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary 0 49156 Y 2542
> Brick groot.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary 0 49156 Y 2180
> Self-heal Daemon on localhost...
2017 Jun 28
0
Gluster volume not mounted
...; Status of volume: data
> Gluster process                                   TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick 192.168.170.141:/gluster_bricks/data/data     49157      0         Y     11967
> Brick 192.168.170.143:/gluster_bricks/data/data     49157      0         Y     2901
> Brick 192.168.170.147:/gluster_bricks/data/data     49158      0         Y     2626
> Self...
2008 Aug 27
1
Rsync Error Code 23?
...ib/v9" ->
"id13/v9"
failed: File exists
Number of files: 511604
Number of files transferred: 75
Total file size: 107684107564 bytes
Total transferred file size: 13358336 bytes
Literal data: 2097627 bytes
Matched data: 11260709 bytes
File list size: 11714766
Total bytes written: 49157
Total bytes read: 13837325
wrote 49157 bytes read 13837325 bytes 55657.24 bytes/sec
total size is 107684107564 speedup is 7754.60
rsync error: some files could not be transferred (code 23) at main.c(1048)
END Tue Aug 26 11:14:21 PDT 2008
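rsync exit code 23 means "partial transfer due to error": the run finished, but at least one item (the "File exists" link failure above) was not transferred. Where such partial failures are tolerable, a wrapper can downgrade 23 (and 24, files that vanished mid-run) to warnings; a sketch with hypothetical src/dst paths:

rsync -a /src/ /dst/
rc=$?
# 23 = partial transfer due to error, 24 = partial transfer due to vanished source files
if [ $rc -ne 0 ] && [ $rc -ne 23 ] && [ $rc -ne 24 ]; then
    echo "rsync failed with exit code $rc" >&2
    exit $rc
fi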
2018 Feb 02
1
How to trigger a resync of a newly replaced empty brick in replicate config ?
...just a single brick to resync!
> gluster v status home
volume status home
Status of volume: home
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick server1:/data/glusterfs/home/brick1 49157 0 Y 5003
Brick server1:/data/glusterfs/home/brick2 49153 0 Y 5023
Brick server1:/data/glusterfs/home/brick3 49154 0 Y 5004
Brick server1:/data/glusterfs/home/brick4 49155 0 Y 5011
Brick server3:/data/glusterfs/home/b...
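For a freshly replaced, empty brick in a replicate volume, the usual way to force a resync from the intact replicas is a full self-heal; a sketch using the volume name from the status output above:

gluster volume heal home full   # walk the whole volume, not just the dirty index
gluster volume heal home info   # watch outstanding entries drain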
2010 Sep 17
1
multipath troubleshoot
...XX....... 7/20
5:0:0:32772 sdt 65:48 1 [active][ready] XXX....... 7/20
5:0:0:49156 sdu 65:64 1 [active][ready] XXX....... 7/20
5:0:0:5 sdv 65:80 0 [undef] [faulty] [orphan]
5:0:0:16389 sdw 65:96 0 [undef] [faulty] [orphan]
5:0:0:32773 sdx 65:112 0 [undef] [faulty] [orphan]
5:0:0:49157 sdy 65:128 0 [undef] [faulty] [orphan]
multipathd>
Thanks in advance,
Paras.
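The [undef][faulty][orphan] paths (sdv, sdw, sdx, sdy) are SCSI devices that dropped out from under multipathd. Assuming the LUNs are reachable again, one recovery sketch is to rescan the SCSI host the paths hang off (host5, per the 5:0:0:* addresses) and then reinstate them at the interactive prompt shown above:

echo "- - -" > /sys/class/scsi_host/host5/scan
# then, at the multipathd> prompt
show paths
reinstate path sdv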
2017 May 31
1
Snapshot auto-delete unmount problem
...pshot-utils.c:55:glusterd_snapobject_delete] 0-management:
Failed destroying lockof snap Snap_GMT-2017.05.31-09.20.04
[2017-05-31 09:22:02.444038] I [MSGID: 106144]
[glusterd-pmap.c:377:pmap_registry_remove] 0-pmap: removing brick
/run/gluster/snaps/4f980da64dec424ba0b48d6d36c4c54e/brick1/b on port 49157
Can anyone help ?
Thanks
*Gary Lloyd*
________________________________________________
I.T. Systems:Keele University
Finance & IT Directorate
Keele:Staffs:IC1 Building:ST5 5NB:UK
________________________________________________
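The pmap_registry_remove message shows the snapshot brick on port 49157 being deregistered cleanly; it is the lock/unmount teardown that fails. When auto-delete wedges like this, one workaround sketch (the mount point is inferred from the brick path in the log) is to unmount the leftover snapshot brick by hand and retry the delete by name:

umount /run/gluster/snaps/4f980da64dec424ba0b48d6d36c4c54e/brick1
gluster snapshot delete Snap_GMT-2017.05.31-09.20.04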
2018 May 30
0
RDMA inline threshold?
Stefan,
Sounds like a brick process is not running. I have noticed some strangeness
in my lab when using RDMA: I often have to forcibly restart the brick
processes, as in every single time I do a major operation (adding a new
volume, removing a volume, stopping a volume, etc.).
gluster volume status <vol>
Do any of the self-heal daemons show N/A? If that's the case, try forcing
a restart on
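The truncated advice above most likely ends in the force-start form, which respawns only the bricks that are down and leaves running ones untouched; a sketch against the volume from the original report:

gluster volume start rhev_vms_primary force
gluster volume status rhev_vms_primary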
2018 May 29
2
RDMA inline threshold?
Dear all,
I faced a problem with a glusterfs volume (pure distributed, _not_ dispersed) over RDMA transport. One user had a directory with a large number of files (50,000), and just doing an "ls" in this directory yields a "Transport endpoint not connected" error. The effect is that "ls" shows only some of the files, not all of them.
The respective log file shows this
2006 Apr 08
2
dovecot-dspam-plugin not launching dspam
...tatisticalSedation
AllowOverride enableBNR
AllowOverride enableWhitelist
AllowOverride signatureLocation
AllowOverride showFactors
AllowOverride optIn optOut
AllowOverride whitelistThreshold
HashRecMax 98317
HashAutoExtend on
HashMaxExtents 0
HashExtentSize 49157
HashMaxSeek 100
HashConnectionCache 10
Notifications off
PurgeSignatures 14
PurgeNeutral 90
PurgeUnused 90
PurgeHapaxes 30
PurgeHits1S 15
PurgeHits1I 15
LocalMX 127.0.0.1
SystemLog on
UserLog on
Opt out
ProcessorBias on
Include /etc/dspam/dspam.d/
-> /etc/d...
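Here 49157 is a hash table size, not a port: HashExtentSize (49157) and HashRecMax (98317) are both prime, and they are adjacent entries in the well-known roughly-doubling prime sequence (..., 12289, 24593, 49157, 98317, ...) commonly used to size hash tables and keep collisions down, which suggests that is where dspam's defaults come from. Easy to confirm with coreutils:

$ factor 49157 98317
49157: 49157
98317: 98317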
2018 Apr 12
2
Turn off replication
...TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick gluster01ib:/gdata/brick1/scratch    49152    49153    Y    1743
>> Brick gluster02ib:/gdata/brick1/scratch    49156    49157    Y    1732
>> Brick gluster01ib:/gdata/brick2/scratch    49154    49155    Y    1738
>> Brick gluster02ib:/gdata/brick2/scratch    49158    49159    Y    1733
>> Self-heal Daemon on localhost              N/A      N/A      Y    1728
>>...
2017 Jul 13
0
Snapshot auto-delete unmount problem
...0-management:
>> Failed destroying lockof snap Snap_GMT-2017.05.31-09.20.04
>> [2017-05-31 09:22:02.444038] I [MSGID: 106144]
>> [glusterd-pmap.c:377:pmap_registry_remove] 0-pmap: removing brick
>> /run/gluster/snaps/4f980da64dec424ba0b48d6d36c4c54e/brick1/b on port 49157
>>
>>
>>
>> Can anyone help ?
>>
>> Thanks
>>
>>
>> *Gary Lloyd*
>> ________________________________________________
>> I.T. Systems:Keele University
>> Finance & IT Directorate
>> Keele:Staffs:IC1 Building:ST5 5NB:UK...
2018 Apr 25
0
Turn off replication
...r01 ~]#
[root at gluster02 ~]# gluster volume status scratch
Status of volume: scratch
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gluster01ib:/gdata/brick1/scratch 49156 49157 Y 1819
Brick gluster01ib:/gdata/brick2/scratch 49158 49159 Y 1827
Brick gluster02ib:/gdata/brick1/scratch N/A N/A N N/A
Brick gluster02ib:/gdata/brick2/scratch N/A N/A N N/A
Task Status of Volume scratch
-----------...
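Bricks showing N/A / offline for gluster02ib mean its brick processes are gone even though the node still reports into the cluster. A recovery sketch on the affected node:

systemctl restart glusterd        # on gluster02ib; respawns bricks of started volumes
gluster volume status scratch     # confirm ports and PIDs reappear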
2018 Apr 25
2
Turn off replication
...gluster02 ~]# gluster volume status scratch
> Status of volume: scratch
> Gluster process TCP Port RDMA Port Online Pid
> ------------------------------------------------------------------------------
> Brick gluster01ib:/gdata/brick1/scratch 49156 49157 Y 1819
> Brick gluster01ib:/gdata/brick2/scratch 49158 49159 Y 1827
> Brick gluster02ib:/gdata/brick1/scratch N/A N/A N N/A
> Brick gluster02ib:/gdata/brick2/scratch N/A N/A N N/A
>
> Task Status of V...
2010 Sep 09
2
Invalid or corrupt kernel image
...und
13:03:37 atftpd[315]: Server thread exiting
13:03:37 atftpd[316]: Serving pxelinux.cfg/C0A801 to 192.168.1.67:49156
13:03:37 atftpd[316]: File /opt/tftpboot/pxelinux.cfg/C0A801 not found
13:03:37 atftpd[316]: Server thread exiting
13:03:37 atftpd[317]: Serving pxelinux.cfg/C0A80 to 192.168.1.67:49157
13:03:37 atftpd[317]: File /opt/tftpboot/pxelinux.cfg/C0A80 not found
13:03:37 atftpd[317]: Server thread exiting
13:03:37 atftpd[318]: Serving pxelinux.cfg/C0A8 to 192.168.1.67:49158
13:03:37 atftpd[318]: File /opt/tftpboot/pxelinux.cfg/C0A8 not found
13:03:37 atftpd[318]: Server thread exiting
13...
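The "not found" lines are expected PXELINUX behavior rather than the fault itself: the client requests config files named after its IP address in uppercase hex (192.168.1.67 = C0A80143), dropping one hex digit per attempt (C0A8014, C0A801, C0A80, ...) until it falls back to pxelinux.cfg/default. The hex name for a host-specific config file can be computed with printf; a sketch:

printf '%02X%02X%02X%02X\n' 192 168 1 67   # prints C0A80143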
2018 Apr 27
0
Turn off replication
...gluster02 ~]# gluster volume status scratch
> Status of volume: scratch
> Gluster process TCP Port RDMA Port Online Pid
> ------------------------------------------------------------------------------
> Brick gluster01ib:/gdata/brick1/scratch    49156    49157    Y    1819
> Brick gluster01ib:/gdata/brick2/scratch    49158    49159    Y    1827
> Brick gluster02ib:/gdata/brick1/scratch N/A N/A N N/A
> Brick gluster02ib:/gdata/brick2/scratch N/A N/A N N/A
>
> Task Status of Volume scra...