Displaying 7 results from an estimated 7 matches for "19120".
2009 Oct 22
1
Intersection and Sum between list and matrix
...[[2]]
[1] 0 1 2 18 22
> mm
1 4 7 10 12 14 19 22
1 00000 18128 18576 20048 19408 21472 16528 20432
4 20080 00000 20576 23520 19776 19504 21312 22384
7 21072 25456 00000 18448 19152 22144 18368 19280
10 18624 22880 16256 00000 17856 16032 17008 19120
12 20208 15712 17008 23264 00000 23168 19872 24000
14 26560 19024 20704 19520 20048 00000 16736 21056
19 17600 22640 20704 17200 17312 17728 00000 18912
22 18128 19024 21120 17296 20208 19904 21120 00000
> example 1
1 4 7 10 12 14 19 22
1 00000...
2013 Sep 26
1
Lot of connections IMAP
Hi to all, I have Dovecot 2.2.5. When I run doveadm who I see a lot of
IMAP connections for a single user, like the example below
xxxxx.yyyyyy at mail.cgilfe.it 9 imap (20572 20614 19120 20653 19136 20655
19138 20661 20471) (192.168.x.xxx)
Why so many IMAP connections?
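Each number in parentheses is the PID of one imap process, so the count before the protocol name is the number of live sessions for that user (mail clients routinely hold several IMAP connections open at once). A minimal sketch that parses doveadm-who-style output and counts sessions per user — the line format is assumed from the quoted output above, including the archive's "at" obfuscation of the @ sign:

```python
import re

# Parse lines like:
#   user at example.com 9 imap (20572 20614 ...) (192.168.1.10)
# Format assumed from the doveadm who output quoted above.
LINE_RE = re.compile(
    r"^(?P<user>\S+(?:\s+at\s+\S+)?)\s+\d+\s+(?P<proto>\w+)\s+"
    r"\((?P<pids>[^)]*)\)\s+\((?P<ips>[^)]*)\)"
)

def sessions_per_user(output: str) -> dict:
    """Return {username: number of session PIDs} from doveadm-who-style text."""
    counts = {}
    for line in output.splitlines():
        m = LINE_RE.match(line.strip())
        if m:
            counts[m.group("user")] = len(m.group("pids").split())
    return counts

sample = ("xxxxx.yyyyyy at mail.cgilfe.it 9 imap "
          "(20572 20614 19120 20653 19136 20655 19138 20661 20471) "
          "(192.168.1.100)")
print(sessions_per_user(sample))
```

Run against real doveadm who output (with @ instead of " at "), the same regex applies since the optional obfuscation group simply never matches.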
--
*Davide Marchi*
*T*eorema *F*errara *Srl*
Via Spronello, 7 - Ferrara - 44121
Tel. *0532783161* Fax. *0532783368*
E-mail: *davide.marchi at mail.cgilfe.it*
Skype: *davide.marchi73*
Web: *http://www.cgilfe.it*
*CONFIDEN...
2014 Nov 07
4
[Bug 2308] New: Forwarded Unix domain sockets not removed on logout
...nnect the ssh session, the path
/run/user/1000/keyring-wpPOO8/gpg-fwd is not deleted. lsof doesn't show
any processes with the file open. When I re-execute the same ssh
command above, the domain socket forwarding fails, with the following
showing up in sshd's log:
Nov 6 23:25:12 dart sshd[19120]: error: bind: Address already in use
Nov 6 23:25:12 dart sshd[19120]: error: unix_listener: cannot bind to
path: /run/user/1000/keyring-wpPOO8/gpg-fwd
If I rm the domain socket manually on the server, then forwarding with
that remote name works again, once, until I delete it again, etc.
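A possible workaround (my suggestion, not part of the bug report): OpenSSH 6.8 and later support the sshd_config option StreamLocalBindUnlink, which tells sshd to remove an existing Unix-domain socket at the forwarding path before binding, avoiding the manual rm:

```
# sshd_config on the server (requires OpenSSH >= 6.8)
# Unlink a stale Unix-domain socket before binding the forwarded path.
StreamLocalBindUnlink yes
```

This only papers over the cleanup bug described above; the stale socket is still left behind on logout.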
--
You...
2006 Apr 19
0
Mouse grab scrolling?
Is there support for scrolling with a mouse like in Google Maps?
--
Posted via http://www.ruby-forum.com/.
2019 Nov 14
1
get_share_mode_lock: get_static_share_mode_data failed: NT_STATUS_NO_MEMORY
...560
Header offset/logical size: 81920/42496000
Number of records: 806
Incompatible hash: yes
Active/supported feature flags: 0x00000001/0x00000001
Robust mutexes locking: yes
Smallest/average/largest keys: 24/24/24
Smallest/average/largest data: 304/503/2080
Smallest/average/largest padding: 4/1528/19120
Number of dead records: 11146
Smallest/average/largest dead records: 416/2688/21816
Number of free records: 4049
Smallest/average/largest free records: 12/2579/6959332
Number of hash chains: 10007
Smallest/average/largest hash chains: 0/1/5
Number of uncoalesced records: 3473
Smallest/average/large...
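The record statistics above suggest heavy fragmentation: a quick back-of-the-envelope check, using only the figures quoted in the output, shows the dead and free space dwarfing the live record data:

```python
# Figures copied from the tdb statistics quoted above.
dead_records, dead_avg = 11146, 2688   # count, average size in bytes
free_records, free_avg = 4049, 2579
live_records, data_avg = 806, 503

dead_bytes = dead_records * dead_avg
free_bytes = free_records * free_avg
live_bytes = live_records * data_avg

print(f"dead ~{dead_bytes / 2**20:.1f} MiB, "
      f"free ~{free_bytes / 2**20:.1f} MiB, "
      f"live data ~{live_bytes / 2**10:.1f} KiB")
```

Averages hide the distribution, so this is only an estimate, but tens of MiB of dead/free space against a few hundred KiB of live data is consistent with the allocation failures reported in this thread.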
2019 Nov 14
6
get_share_mode_lock: get_static_share_mode_data failed: NT_STATUS_NO_MEMORY
Upgraded to Samba 4.11.2 and I've now also started seeing the message:
get_share_mode_lock: get_static_share_mode_data failed: NT_STATUS_NO_MEMORY
A lot. I modified the source in source3/locking/share_mode_lock.c a bit to print out the values of the service path, smb_fname and old_write_time when it fails, and it seems they are all NULL...
[2019/11/14 14:24:23.358441, 0]
2010 Apr 19
20
Lustre Client - Memory Issue
Hi Guys,
My users are reporting some issues with memory on our lustre 1.8.1 clients.
It looks like when they submit a single job at a time, the run time is about
4.5 minutes. However, when they ran multiple jobs (10 or fewer) on a client
with 192GB of memory on a single node, the run time for each job exceeded
3-4X the single-job run time. They also noticed that
the swap space