Displaying 20 results from an estimated 264 matches for "49153".
2019 Sep 02
3
Problems with Internal DNS Samba 4
...ghttpd
tcp        0      0 192.168.1.20:53         0.0.0.0:*               OUÇA       1930/named
tcp        0      0 127.0.0.1:53            0.0.0.0:*               OUÇA       1930/named
tcp        0      0 127.0.0.1:953           0.0.0.0:*               OUÇA       1930/named
tcp        0      0 0.0.0.0:49153           0.0.0.0:*               OUÇA       662/samba: task[dce
tcp6       0      0 :::81                   :::*                    OUÇA       534/lighttpd
tcp6       0      0 :::49153                :::*                    OUÇA       662/samba: task[dce
udp        0      0 192.168.1.20:53         0....
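As a quick aside on the netstat output above: the internal-DNS conflict comes down to which process owns port 53 on each address. A minimal sketch (plain Python, assuming it is run on the DC itself; the addresses are the ones shown above) to confirm from a script what is answering on 53:

import socket

# Minimal check, run on the DC itself: does anything accept TCP
# connections on port 53 of these addresses? (Addresses taken from
# the netstat output above; adjust for your own interfaces.)
def tcp_port_open(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for addr in ("127.0.0.1", "192.168.1.20"):
    state = "open" if tcp_port_open(addr, 53) else "closed"
    print(f"{addr}:53 is {state}")

Which service holds port 53 (named versus Samba's internal DNS) is exactly what the thread is trying to establish.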
2020 Oct 12
3
unable to migrate: virPortAllocatorSetUsed:299 : internal error: Failed to reserve port 49153
...trying to live migrate "error:
internal error: Failed to reserve port" error is received and
migration does not succeed:
virsh # migrate cartridge qemu+tls://ratchet.lan/system --live
--persistent --undefinesource --copy-storage-all --verbose
error: internal error: Failed to reserve port 49153
virsh #
On the target host with debug logging enabled, nothing of interest
beyond the error itself is found in the logs
...
2020-10-12 02:11:33.852+0000: 6871: debug :
qemuMonitorJSONIOProcessLine:220 : Line [{"return": {}, "id":
"libvirt-373"}]
2020-10-12 02:11:33.852+0000: 6871: in...
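Before digging into libvirt itself, it can help to confirm from the destination host whether 49153 is genuinely occupied. A rough check (not libvirt's own logic; plain Python, port number taken from the error above):

import socket

# Rough check, not libvirt's own logic: can the port from the error
# message still be bound on the destination host?
def port_bindable(port, host="0.0.0.0"):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return True
    except OSError:
        return False
    finally:
        s.close()

print("port 49153 bindable:", port_bindable(49153))

If the bind succeeds, the port is free and the failure points at libvirt's internal port bookkeeping rather than at another process, which is where the follow-up replies below end up.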
2019 Sep 03
1
Sporadic duplicate requests with lpxelinux.0
...a machine in this state it is booting fine.
As we rely heavily on the HTTP capability of lpxelinux.0, testing with pxelinux.0 is not trivial :(
Has anybody seen this before, or does anyone have additional ideas?
# Number Time Source SrcPort Destination DSTPort Protocol Length Info
16062 0.003813687 131.169.168.108 49153 131.169.81.129 69 TFTP 121 Read Request, File: pxelinux.cfg/008093db-74fd-e711-8000-e0d55eccd74f, Transfer type: octet, tsize=0, blksize=1408
16064 0.080917021 131.169.168.108 49153 131.169.81.129 69 TFTP 121 Read Request, File: pxelinux.cfg/008093db-74fd-e711-8000-e0d55eccd74f, Transfer type: octe...
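One way to quantify how often the duplicates occur is to group the capture summary lines by source port and requested file. A small sketch (plain Python; the column positions are an assumption about the export format, and the sample data simply mirrors the two lines above):

from collections import Counter

# Illustrative capture summary lines mirroring the two shown above;
# column order (frame, time, src, srcport, dst, dstport, ...) is an
# assumption about the export format.
lines = [
    "16062 0.003813687 131.169.168.108 49153 131.169.81.129 69 TFTP 121 "
    "Read Request, File: pxelinux.cfg/008093db-74fd-e711-8000-e0d55eccd74f, "
    "Transfer type: octet, tsize=0, blksize=1408",
    "16064 0.080917021 131.169.168.108 49153 131.169.81.129 69 TFTP 121 "
    "Read Request, File: pxelinux.cfg/008093db-74fd-e711-8000-e0d55eccd74f, "
    "Transfer type: octet, tsize=0, blksize=1408",
]

counts = Counter()
for line in lines:
    src_port = line.split()[3]
    filename = line.split("File: ")[1].split(",")[0]
    counts[(src_port, filename)] += 1

for (port, filename), n in counts.items():
    if n > 1:
        print(f"duplicate RRQ from source port {port} for {filename}: {n} requests")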
2013 Jun 08
2
memdisk and iso
...on being correctly established and then
> disconnected, with all the negotiation involved. I'm wondering if it is
> iPXE that is playing games with us here...
Packets 301-306, 309-312 show two successful attempts to read
linux-install/pxelinux.cfg/C0A80058 with the same UDP source port
(49153).
Packets 307,308,313-316 show two unsuccessful attempts to read
linux-install/pxelinux.cfg/vesamenu.c32 reusing the same UDP source
port (49153) with an ICMP reply stating the port is closed.
Packets 317-357 show a successful single attempt to read
linux-install/pxelinux.cfg/vesamenu.c32 reusing...
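The "port is closed" ICMP reply described above is visible from ordinary sockets too. A minimal sketch of how it surfaces (plain Python on Linux; assumes nothing is listening on 127.0.0.1:49153, so the kernel reports the ICMP port-unreachable on the connected UDP socket):

import socket

# A *connected* UDP socket reports an ICMP port-unreachable as a
# refused connection on a later send/recv.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.settimeout(1.0)
s.connect(("127.0.0.1", 49153))   # fixes the destination, sends nothing yet
try:
    s.send(b"probe")
    s.recv(512)                   # the ICMP error is usually raised here
except ConnectionRefusedError:
    print("closed: ICMP port unreachable received")
except socket.timeout:
    print("no reply (ICMP may be filtered)")
finally:
    s.close()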
2019 Sep 02
2
Problems with Internal DNS Samba 4
Hi,
>is Bind9 running ?
Yes
netstat -lntup | grep 53
tcp        0      0 127.0.0.1:953           0.0.0.0:*               OUÇA       13296/named
tcp        0      0 0.0.0.0:49153           0.0.0.0:*               OUÇA       15105/samba: task[d
tcp6       0      0 :::49153                :::*                    OUÇA       15105/samba: task[d
/etc/init.d/bind9 status
● bind9.service - BIND Domain Name Server
Loaded: loaded (/lib/systemd/system/bind9.service; enabled; vendor...
2020 Oct 26
1
Re: unable to migrate: virPortAllocatorSetUsed:299 : internal error: Failed to reserve port 49153
...error: Failed to reserve port" error is received and
>> migration does not succeed:
>>
>> virsh # migrate cartridge qemu+tls://ratchet.lan/system --live
>> --persistent --undefinesource --copy-storage-all --verbose
>> error: internal error: Failed to reserve port 49153
>>
>> virsh #
>>
>
> Sorry for not replying earlier. But this is a clear libvirt bug and I
> think it's a regression introduced by the following commit:
>
> https://gitlab.com/libvirt/libvirt/-/commit/e74d627bb3b
>
> The problem is, if you have two or...
2020 Oct 26
0
Re: unable to migrate: virPortAllocatorSetUsed:299 : internal error: Failed to reserve port 49153
...rror:
> internal error: Failed to reserve port" error is received and
> migration does not succeed:
>
> virsh # migrate cartridge qemu+tls://ratchet.lan/system --live
> --persistent --undefinesource --copy-storage-all --verbose
> error: internal error: Failed to reserve port 49153
>
> virsh #
>
Sorry for not replying earlier. But this is a clear libvirt bug and I
think it's a regression introduced by the following commit:
https://gitlab.com/libvirt/libvirt/-/commit/e74d627bb3b
The problem is, if you have two or more disks that need to be copied
over to th...
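For readers following along, the shape of the bug can be illustrated with a toy allocator. This is only an illustration of the "reserve the same port twice" failure mode, not libvirt's actual code:

# Toy illustration, not libvirt's implementation: an allocator in the
# spirit of virPortAllocatorSetUsed, where marking the same port used
# twice (e.g. once per disk being mirrored) fails the second time.
class PortAllocator:
    def __init__(self, start=49152, end=49215):
        self.start, self.end = start, end
        self.used = set()

    def set_used(self, port):
        if port in self.used:
            raise RuntimeError(f"Failed to reserve port {port}")
        self.used.add(port)

alloc = PortAllocator()
alloc.set_used(49153)              # first disk reserves the port
try:
    alloc.set_used(49153)          # second disk tries the same port
except RuntimeError as err:
    print("internal error:", err)  # mirrors the error seen in virsh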
2013 Jun 08
1
memdisk and iso
...connected, with all the negotiation involved. I'm wondering if it
>>is
>>> iPXE that is playing games with us here...
>>
>>Packets 301-306, 309-312 show two successful attempts to read
>>linux-install/pxelinux.cfg/C0A80058 with the same UDP source port
>>(49153).
>>
>>Packets 307,308,313-316 show two unsuccessful attempts to read
>>linux-install/pxelinux.cfg/vesamenu.c32 reusing the same UDP source
>>port (49153) with an ICMP reply stating the port is closed.
>>
>>Packets 317-357 show a successful single attempt to read...
2018 Feb 02
1
How to trigger a resync of a newly replaced empty brick in replicate config?
...Status of volume: home
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick server1:/data/glusterfs/home/brick1 49157 0 Y 5003
Brick server1:/data/glusterfs/home/brick2 49153 0 Y 5023
Brick server1:/data/glusterfs/home/brick3 49154 0 Y 5004
Brick server1:/data/glusterfs/home/brick4 49155 0 Y 5011
Brick server3:/data/glusterfs/home/brick1 49152 0 Y 5422
Brick server4:/data/glusterfs/home/b...
2018 Feb 23
2
Problem migration 3.7.6 to 3.13.2
...f gluster but
when I run the command
df -h
the reported space is less than the total.
Configuration:
2 peers
Brick serda2:/glusterfs/p2/b2               49152     0          Y       1560
Brick serda1:/glusterfs/p2/b2               49152     0          Y       1462
Brick serda1:/glusterfs/p1/b1               49153     0          Y       1476
Brick serda2:/glusterfs/p1/b1               49153     0          Y       1566
Self-heal Daemon on localhost               N/A       N/A        Y       1469
Self-heal Daemon on serda1                  N/A       N/A        Y       1286
Thanks
2002 Nov 19
0
winbindd+ win24
.......... N.T.1.4.
[010] 00 00 53 00 45 00 52 00 57 00 45 00 52 00 32 00 ..S.E.R. W.E.R.2.
[020] 30 00 30 00 30 00 00 00 0.0.0...
write_socket(5,92)
write_socket(5,92) wrote 92
got smb length of 125
size=125
smb_com=0x73
smb_rcls=0
smb_reh=0
smb_err=0
smb_flg=136
smb_flg2=49153
smb_tid=0
smb_pid=29260
smb_uid=10241
smb_mid=1
smt_wct=3
smb_vwv[0]=255 (0xFF)
smb_vwv[1]=125 (0x7D)
smb_vwv[2]=0 (0x0)
smb_bcc=84
[000] 7C 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 20 |W.i.n.d .o.w.s.
[010] 00 35 00 2E 00 30 00 00 00 57 00 69 00 6E 00 64 .5...0.. .W.i.n.d
[020] 00 6F 00 77 0...
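A note on the smb_flg2 value in the debug dump: 49153 here is not a port at all but the SMB1 FLAGS2 field, 0xC001. Decoding the bits (names paraphrased from the SMB1/MS-CIFS FLAGS2 definitions; only a subset of bits is listed) shows Unicode strings, 32-bit NT status codes and long names:

# smb_flg2 above is the SMB1 FLAGS2 field: 49153 == 0xC001.
FLAGS2_BITS = {
    0x0001: "long names allowed",
    0x0004: "security signatures",
    0x0800: "extended security negotiation",
    0x1000: "DFS pathnames",
    0x4000: "32-bit NT status codes",
    0x8000: "Unicode strings",
}

value = 49153
print(hex(value))                       # 0xc001
for bit, name in sorted(FLAGS2_BITS.items()):
    if value & bit:
        print(f"  0x{bit:04x} {name}")
# -> long names allowed, 32-bit NT status codes, Unicode strings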
2015 Feb 09
2
IAX port
Hi!
Sometimes IAX peers are not reachable and with "iax2 set debug on" I get something like this
Tx-Frame Retry[ No] -- OSeqno: 000 ISeqno: 001 Type: IAX Subclass: PONG
Timestamp: 00014ms SCall: 00001 DCall: 01200 79.233.155.174:49153
Rx-Frame Retry[ No] -- OSeqno: 001 ISeqno: 001 Type: IAX Subclass: ACK
Timestamp: 00014ms SCall: 01200 DCall: 00001 79.233.155.174:49153
I am not sure what causes port 4569 to be replaced by an arbitrary port, which could be the
reason for my problem. Does someone know whether this is a...
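One plausible explanation for the 49153 seen above: the peer's UDP source port is being rewritten by a NAT along the path, or the client simply sent from an unbound socket and the OS picked an ephemeral source port. A tiny sketch of the second effect (plain Python; no IAX traffic is sent):

import socket

# An unbound UDP socket is assigned an ephemeral source port by the OS
# (commonly 49152-65535 on Windows/BSD, 32768-60999 on stock Linux),
# which is what the far end, or a NAT rewriting the packet, sees
# instead of 4569.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("0.0.0.0", 0))   # port 0 = let the OS pick an ephemeral port
print("local source port chosen by the OS:", s.getsockname()[1])
s.close()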
2019 Sep 02
0
Problems with Internal DNS Samba 4
.../09/2019 13:19, Marcio Demetrio Bacci wrote:
> Hi,
>
>
>
> >is Bind9 running ?
> Yes
> netstat -lntup | grep 53
> tcp        0      0 127.0.0.1:953           0.0.0.0:*               OUÇA       13296/named
> tcp        0      0 0.0.0.0:49153           0.0.0.0:*               OUÇA       15105/samba: task[d
> tcp6       0      0 :::49153                :::*                    OUÇA       15105/samba: task[d
That will be a NO then.
On my DC:
netstat -lntup | grep 53
tcp        0      0 192.168.0.6:53          0.0.0....
2018 Mar 04
1
tiering
...ot Bricks:
Brick labgfs81:/gfs/p1-tier/mount 49156 0 Y 4217
Brick labgfs51:/gfs/p1-tier/mount N/A N/A N N/A
Brick labgfs11:/gfs/p1-tier/mount 49152 0 Y 643
Cold Bricks:
Brick labgfs11:/gfs/p1/mount 49153 0 Y 312
Brick labgfs51:/gfs/p1/mount 49153 0 Y 295
Brick labgfs81:/gfs/p1/mount 49153 0 Y 307
I cannot find a command to replace the SSD, so I am instead trying to detach the tier, but:
# gluster volume tier labgreenbin de...
2018 Sep 11
0
shared folder in the Samba domain can't be accessed by trusting domain users
...uman_readable)
> Auth: [SamLogon,network] user [TESTHV]\[mtest] at [Mon, 10 Sep 2018
> 18:18:57.227901 NZST] with [NTLMv2] status [NT_STATUS_NO_SUCH_USER]
> workstation [TESTHV-DC1] remote host [ipv4:192.168.179.229:50070] mapped
> to [TESTHV]\[mtest]. local host [ipv4:192.168.179.226:49153] NETLOGON
> computer [VM000459] trust account [VM000459$]
> [2018/09/10 18:18:57.228057, 2]
> ../lib/audit_logging/audit_logging.c:141(audit_log_json)
> JSON Authentication: {"timestamp": "2018-09-10T18:18:57.227924+1200",
> "type": "Authenticat...
2002 Jun 30
0
Winbind in Windows 2000 Domain
...z..... R.I.C.H.
[010] 49 00 4E 00 53 00 00 00 50 00 48 00 4F 00 45 00 I.N.S... P.H.O.E.
[020] 4E 00 49 00 58 00 00 00 N.I.X...
write_socket(14,92)
write_socket(14,92) wrote 92
got smb length of 131
size=131
smb_com=0x73
smb_rcls=0
smb_reh=0
smb_err=0
smb_flg=136
smb_flg2=49153
smb_tid=0
smb_pid=12371
smb_uid=28674
smb_mid=1
smt_wct=3
smb_vwv[0]=255 (0xFF)
smb_vwv[1]=131 (0x83)
smb_vwv[2]=0 (0x0)
smb_bcc=90
[000] 00 57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 20 .W.i.n.d .o.w.s.
[010] 00 35 00 2E 00 30 00 00 00 57 00 69 00 6E 00 64 .5...0.. .W.i.n.d
[020] 00 6F 00 77 00...
2018 Jan 10
2
Creating cluster replica on 2 nodes 2 bricks each.
...Thanks
Jose
[root at gluster01 ~]# gluster volume status
Status of volume: scratch
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gluster01ib:/gdata/brick1/scratch 49152 49153 Y 3140
Brick gluster02ib:/gdata/brick1/scratch 49153 49154 Y 2634
Self-heal Daemon on localhost N/A N/A Y 3132
Self-heal Daemon on gluster02ib N/A N/A Y 2626
Task Status of Volume scratch
-----------...
2002 Dec 13
0
smbpasswd join strace - failed session setup = 21
...= 13
write(1, "smb_rcls=109\n", 13smb_rcls=109
) = 13
write(1, "smb_reh=0\n", 10smb_reh=0
) = 10
write(1, "smb_err=49152\n", 14smb_err=49152
) = 14
write(1, "smb_flg=136\n", 12smb_flg=136
) = 12
write(1, "smb_flg2=49153\n", 15smb_flg2=49153
) = 15
write(1, "smb_tid=0\n", 10smb_tid=0
) = 10
write(1, "smb_pid=10404\n", 14smb_pid=10404
) = 14
write(1, "smb_uid=0\n", 10smb_uid=0
) = 10
write(1, "smb_mid=1\n", 10smb_mid=1
)...
2018 Mar 06
1
geo replication
...a "master volinfo unavailable" in the master logfile.
Any ideas?
Master:
Status of volume: testtomcat
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gfstest07:/gfs/testtomcat/mount 49153 0 Y 326
Brick gfstest05:/gfs/testtomcat/mount 49153 0 Y 326
Brick gfstest01:/gfs/testtomcat/mount 49153 0 Y 335
Self-heal Daemon on localhost N/A N/A Y 1134
Self-heal Daemon on gfstest07...
2018 Feb 23
0
Problem migration 3.7.6 to 3.13.2
...; the reported space is less than the total.
>
> Configuration:
> 2 peers
>
> Brick serda2:/glusterfs/p2/b2               49152     0          Y       1560
> Brick serda1:/glusterfs/p2/b2               49152     0          Y       1462
> Brick serda1:/glusterfs/p1/b1               49153     0          Y       1476
> Brick serda2:/glusterfs/p1/b1               49153     0          Y       1566
> Self-heal Daemon on localhost               N/A       N/A        Y       1469
> Self-heal Daemon on serda1                  N/A       N/A        Y       1286
>
> Thanks
>...