similar to: openssh tunnel log info

Displaying 20 results from an estimated 8000 matches similar to: "openssh tunnel log info"

2017 Apr 20
2
lvm cache + qemu-kvm stops working after about 20GB of writes
Hello everyone, Has anybody had the chance to test out this setup and reproduce the problem? I assumed it would be something that's used often these days and that a solution would benefit a lot of users. If I can be of any assistance, please contact me. -- Kind regards, Richard Landsman http://rimote.nl T: +31 (0)50 - 763 04 07 (Mon-Fri 9:00 to 18:00) 24/7 in case of outages: +31 (0)6 - 4388
1998 Jul 15
0
Re: RedHat 5.X Security Book
I think it depends on what you are using the book for. I myself have been trying for a long time to find a document that describes basic RedHat and Linux security, what to look for, inherent dangers, etc. So I was overjoyed when I found this book. No, I am not depending on it as a sole source of information, but the basics that it covers simply do not get repeatedly posted to the lists you
2002 Sep 06
2
Huge amount of used inode handlers reported by sar -v (inode-sz)
Any help with this problem would be very much appreciated (even a pointer like "it's not 7.3 or ext3, look somewhere else"). I've seen a similar post to ext3-users, but since that one received no reply and I'm not convinced it's an ext3 problem (it only appears on our 7.3 hosts), I'm CCing the valhalla list. We have the same problem on ALL our Redhat 7.3 machines
2007 Jul 04
1
[LLVMdev] a strange emit of llvm-g++
I tested a simple function, shown as follows, with llvm-g++:
-------------------------------------------------------------
void f_loop(long* c, long sz) {
  long i;
  for (i = 0; i < sz; i++) {
    long offset = i * sz;
    long* out = c + offset;
    out[i] = 0;
  }
}
-------------------------------------------------------------
The LLVM assembly was emitted as follows:
2002 Jun 20
0
Huge amount of used inode handlers
Hi all, Any ideas why this is happening, or how to debug it? After a random amount of time (usually one to three days of uptime), the inode-sz reported by sar jumps way above its normal level, and less than 24 hours after this happens the server will crash. The server used to work fine, but started doing this around the beginning of June. --- 08:20:00 AM dentunusd file-sz %file-sz inode-sz super-sz %super-sz
2018 May 10
4
kernel spew from nouveau/ swiotlb
Greetings,

When box is earning its keep, nouveau/swiotlb grumble.. a LOT. The below is from master.today.

[12594.640959] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes)
[12594.693000] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes)
[12594.713787] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes)
[12594.743413] nouveau 0000:01:00.0: swiotlb buffer
2017 Apr 10
0
lvm cache + qemu-kvm stops working after about 20GB of writes
Adding Paolo and Miroslav.

On Sat, Apr 8, 2017 at 4:49 PM, Richard Landsman - Rimote <richard at rimote.nl> wrote:
> Hello,
>
> I would really appreciate some help/guidance with this problem. First of
> all sorry for the long message. I would file a bug, but do not know if it
> is my fault, dm-cache, qemu or (probably) a combination of both. And I can
> imagine some of
2007 Jun 29
1
[LLVMdev] LLVM assembly without basic block
Hello, guys. I just wonder if there is any way to spit out LLVM assembly without any basic block division. E.g., if I emit LLVM assembly for the following simple code:
------------------------------------------------------------
void f_loop(long* c, long sz) {
  long i;
  for (i = 0; i < sz; i++) {
    long offset = i * sz;
    long* out = c + offset;
    out[i] = 0;
  }
}
2017 Sep 13
0
[PATCH 04/10] drivers:mpt: return -ENOMEM on allocation failure.
Signed-off-by: Allen Pais <allen.lkml at gmail.com>
---
 drivers/message/fusion/mptbase.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/message/fusion/mptbase.c b/drivers/message/fusion/mptbase.c
index 84eab28..7920b2b 100644
--- a/drivers/message/fusion/mptbase.c
+++ b/drivers/message/fusion/mptbase.c
@@ -4328,15 +4328,15 @@
2018 May 10
0
kernel spew from nouveau/ swiotlb
> Greetings,
>
> When box is earning its keep, nouveau/swiotlb grumble.. a LOT. The
> below is from master.today.
>
> [12594.640959] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152
> bytes)
> [12594.693000] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152
> bytes)
> [12594.713787] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152
>
2018 May 10
0
kernel spew from nouveau/ swiotlb
On Thu, 2018-05-10 at 11:10 +0200, Mike Galbraith wrote:
> Greetings,
>
> When box is earning its keep, nouveau/swiotlb grumble.. a LOT. The
> below is from master.today.
>
> [12594.640959] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes)
> [12594.693000] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes)
> [12594.713787] nouveau
2003 Nov 06
1
Hierarchical glm
Hi all, I'm not sure how to correctly analyse the following data with glm, and hope for some advice from this list, ideally showing how to specify the model in R and perform the tests, as well as suggestions for literature. The data structure is like this:
- 20 plant populations were investigated (random factor pop), which belong to different habitat types (factor ht)
- Within
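A minimal sketch of how such a hierarchical model might be specified in R, assuming the lme4 package and a data frame named dat with a response column y plus the pop and ht factors described above (y, dat and the Poisson family are placeholders, not taken from the excerpt):

# Hypothetical data frame 'dat' with columns y (response), ht (habitat type)
# and pop (population); adjust the family to match the actual response.
library(lme4)

# Fixed effect for habitat type, random intercept for populations nested
# within habitat type.
fit <- glmer(y ~ ht + (1 | ht:pop), data = dat, family = poisson)
summary(fit)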
2016 Jun 01
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I did some additional testing - I stopped Kafka on the host and kicked off a disk check, and it ran at the expected speed overnight. I started Kafka this morning, and the RAID check's speed immediately dropped down to ~2000K/sec. I then enabled the write-back cache on the drives (hdparm -W1 /dev/sd*). The RAID check is now running between 100000K/sec and 200000K/sec, and has been for several
2018 Jun 05
0
[PATCH v2 1/2] compiler-gcc.h: add gnu_inline to all inline declarations
On Tue, 2018-06-05 at 10:23 -0700, Joe Perches wrote:
> Perhaps these are simpler as
>
> #define __inline__ inline
> #define __inline inline

Currently, there are these uses of inline variants in the kernel:

$ git grep -w inline | wc -l
68410
$ git grep -w __inline__ | wc -l
503
$ git grep -w __inline | wc -l
57

So it seems it's also reasonable to sed all uses of __inline to
2018 May 11
2
kernel spew from nouveau/ swiotlb
On Thu, 2018-05-10 at 12:28 +0200, Mike Galbraith wrote:
> On Thu, 2018-05-10 at 11:10 +0200, Mike Galbraith wrote:
> > Greetings,
> >
> > When box is earning its keep, nouveau/swiotlb grumble.. a LOT. The
> > below is from master.today.
> >
> > [12594.640959] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes)
> > [12594.693000] nouveau
2008 Mar 26
2
Range across a List
Hi R,

I have a list:

> class(pp2)
[1] "list"
> length(pp2)
[1] 1244

It is in the below format:

     RIC  Trade.Date  Close.Price  Currency.Code  Convertion.Rate  New.Price
 ABCD.SZ  2008/02/29        15.30            CNY           0.1408   2.154240
 ABCD.SZ  2008/01/31        15.27            CNY           0.1392   2.125584
 ABCD.SZ  2007/12/31        14.88            CNY           0.1371   2.040048
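A minimal sketch of taking a range across such a list in R, assuming each element of pp2 is a data frame with a numeric New.Price column (which is what the excerpt suggests, but is not confirmed):

# Per-element min/max of New.Price, one row per list element.
per.element <- t(sapply(pp2, function(x) range(x$New.Price, na.rm = TRUE)))
colnames(per.element) <- c("min", "max")

# Overall range across the whole list.
range(per.element)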
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one. Here is an iostat example from a host within the same cluster, but without the RAID check running:

[root at r2k1 ~] # iostat -xdmc 1 10
Linux 3.10.0-327.13.1.el7.x86_64 (r2k1)  05/27/16  _x86_64_  (32 CPU)
2009 Aug 24
2
Creating a simple line graph
Hey everyone, Sorry for yet another simple question, but hopefully it makes whoever comes up with the answer feel good about helping others. I would like to simply plot the following two sets of data in a line graph. One set is the observed points and the other is the predicted. I have looked through the documentation (which makes any graphing very complicated to me) but I haven't
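A minimal sketch of such a line graph in base R, using placeholder vectors for the observed and predicted values (the names x, observed and predicted are illustrative, not from the excerpt):

# Placeholder data; replace with the real observed and predicted series.
x <- 1:10
observed <- c(2.1, 2.5, 3.0, 3.8, 4.1, 4.9, 5.2, 6.0, 6.3, 7.1)
predicted <- c(2.0, 2.6, 3.1, 3.7, 4.3, 4.8, 5.4, 5.9, 6.5, 7.0)

# Draw the observed series as a line, then overlay the predicted series.
plot(x, observed, type = "l", ylab = "value", main = "Observed vs. predicted")
lines(x, predicted, lty = 2, col = "red")
legend("topleft", legend = c("observed", "predicted"),
       lty = c(1, 2), col = c("black", "red"))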
2005 Sep 22
0
High CPU Time and Load Average on our Samba Server
Hello list, how could this happen? From time to time the server doesn't respond and shows a high load average. We found a suspicious smbd process:

top - 13:43:07 up 1 day, 2:27, 5 users, load average: 32.49, 58.41, 37.95
Tasks: 1196 total, 5 running, 1190 sleeping, 0 stopped, 1 zombie
Cpu0 : 14.7% us, 3.8% sy, 0.0% ni, 79.8% id, 1.3% wa, 0.0% hi, 0.3% si
Cpu1 : 1.3% us, 84.6%
2008 Oct 05
1
io writes very slow when using vmware server
We are struggling with a strange problem. When we have some VMware clients running (mostly MS Windows clients), the IO-write performance on the host becomes very bad. The guest OSes do not do anything; just having them started, sitting at the login prompt, is enough to trigger the problem. The host has plenty of RAM (4G), and all clients fit easily into that space. The disk system is a