similar to: (Fwd) VIRUS (Worm.SomeFool) IN MAIL TO YOU (from <rsync-bounce

Displaying 20 results from an estimated 100 matches similar to: "(Fwd) VIRUS (Worm.SomeFool) IN MAIL TO YOU (from <rsync-bounce"

2004 Dec 05
0
VIRUS (Worm.SomeFool.Gen-2) IN MAIL FROM YOU
VIRUS ALERT Our content checker found virus: Worm.SomeFool.Gen-2 in email presumably from you (<logcheck-devel at lists.alioth.debian.org>), to the following recipient: -> barbier at linuxfr.org Please check your system for viruses, or ask your system administrator to do so. Delivery of the email was stopped! For your reference, here are headers from your email:
2003 Dec 04
0
rsync for nt - new drive letter
Hello! I have been running rsync on some machines and rsync for NT on others. Recently, I discovered that on one of the NT machines we are getting this error: @ERROR: chdir failed rsync: connection unexpectedly closed (33 bytes read so far) rsync error: error in rsync protocol data stream (code 12) at io.c(165) The module pathway has been moved to a new drive letter on the NT box due to a
2005 Feb 05
0
AVVISO DI VIRUS / VIRUS WARNING : Worm.SomeFool.P
[Translated from Italian:] The message reproduced below, sent from your e-mail address, contains a VIRUS and therefore was not delivered. The computer it was sent from is probably infected. CHECK IT AS SOON AS POSSIBLE WITH AN ANTIVIRUS PROGRAM! A message containing a virus was sent from your e-mail address. It is very likely this machine (or any other you use for e-mail) is infected! CHECK IT AS SOON
2004 May 11
2
cwrsync strange path in error message
Hi, I am running cwrsync. 2004/05/11 [93] rsync error: some files could not be transferred (code 23) at /home/lapo/packaging/tmp/rsync-2.5.7/main.c(383) I am uncertain as to what this path statement is about in the above error message. /home/lapo/packaging/tmp/rsync-2.5.7/main.c(383) Thanks, Pat -- Patricia Palumbo DuBois & King, Inc. ppalumbo@dubois-king.com 802-728-4113 | ext 322
2007 Mar 06
0
virus found in sent message "Re: Valeu!!"
[Translated from Portuguese:] Attention: openssh-unix-dev at mindrot.org A virus was found in an e-mail message that was just sent by you. This e-mail scanner intercepted it and stopped the message from reaching its destination. The virus was reported as: Worm.Somefool.AR Please update your antivirus or contact your technical support as soon as possible, because you have a virus on your computer.
2004 Jun 14
0
failure notice
Hi. This is the qmail-send program at qmail-in.click21.com.br. I'm afraid I wasn't able to deliver your message to the following addresses. This is a permanent error; I've given up. Sorry it didn't work out. <leandroallan@ibest.com.br>: Sorry, I wasn't able to establish an SMTP connection. (#4.4.1) I'm not going to try again; this message has been in the queue too
2004 Jun 02
0
Virus intercepted
A message you sent to <psycorps at tvtel.pt> contained Worm.SomeFool.Gen-1 and has not been delivered.
1999 Mar 29
0
Re: ADM Worm. Worm for Linux x86 found in wild. (fwd)
Hi, some more info on the previous admw0rm alert. Fwd'd from BugTraq Greetings, Jan-Philip Velders ---------- Forwarded message ---------- Date: Fri, 26 Mar 1999 21:17:40 +0100 From: Mixter <mixter@HOME.POPMAIL.COM> To: BUGTRAQ@NETSPACE.ORG Subject: Re: ADM Worm. Worm for Linux x86 found in wild. The "ADM w0rm" is public and can be found at:
1999 Mar 26
2
Re: [Security - intern] *ALERT*: ADM Worm. Worm for Linux x86 found in wild.
On Fri, 26 Mar 1999, Thomas Biege wrote: > Date: Fri, 26 Mar 1999 09:34:10 +0100 (MET) > From: Thomas Biege <thomas@suse.de> > To: Jan-Philip Velders <jpv@jvelders.tn.tudelft.nl> > Cc: linux-security@redhat.com > Subject: Re: [Security - intern] [linux-security] *ALERT*: ADM Worm. Worm for Linux x86 found in wild. > The worm just exploits old security holes, so
1999 Mar 26
3
*ALERT*: ADM Worm. Worm for Linux x86 found in wild.
-=> To moderator: I don't know whether it's wise to release the FTP location. I would recommend everyone to just look over their daemons, and run something like nessus against themselves... Greetings, Jan-Philip Velders ---------- Forwarded message ---------- Date: Thu, 25 Mar 1999 16:26:59 -0700 From: "Ben Cantrick (Macky Stingray)" <mackys@MACKY.RONIN.NET> To:
2017 Jul 07
1
GlusterFS WORM hardlink
GlusterFS WORM hard links will not be created. OS is CentOS7
2017 Jul 10
1
GlusterFS WORM mode can't create hard link, please help
A "read-only file system" error is returned and hard links cannot be created in GlusterFS WORM mode. Is it impossible? OS is CentOS7
2018 Feb 27
1
Scheduled AutoCommit Function for WORM Feature
Hello Gluster Community, while reading that article: https://github.com/gluster/glusterfs-specs/blob/master/under_review/worm-compliance.md there seems to be an interesting feature planned for the WORM Xlator: *Scheduled Auto-commit*: Scan Triggered Using timeouts for untouched files. The next scheduled namespace scan will cause the transition. CTR DB via libgfdb can be used to find files that
2007 Dec 19
0
VIRUS (Worm.Mydoom.M): IN AN E-MAIL SENT BY YOU
[Translated from Italian:] VIRUS ALERT The scanning system detected a problem in an email presumably sent by you -> (<openssh-unix-dev at mindrot.org>), to the following recipient: -> agtd09000r at istruzione.it Delivery of the message could not be completed. For your reference, the details of the e-mail sent: ------------------------- BEGIN HEADERS -----------------------------
2006 Oct 23
1
Worm distribution :-)
You are talking about random point patterns, since the glow-worms appear as ``stars'' (= points). See the package ``spatial'' (which comes with R) and try simulating a pattern using Strauss(). Or install the package ``spatstat'' from CRAN --- in this package there is a variety of ways to simulate ``regular'' random point patterns --- rMaternI, rMaternII, rSSI,
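The post above suggests simulating "regular" point patterns with spatstat's generators such as rSSI. As a rough illustration of what Simple Sequential Inhibition does, here is a minimal Python sketch (not spatstat's API; the function name and parameters are illustrative): candidate points are drawn uniformly in the unit square and rejected whenever they fall within the inhibition radius of an already-accepted point.

```python
import math
import random

def rssi(r, n, xmax=1.0, ymax=1.0, max_tries=10000):
    """Simple Sequential Inhibition sketch: accept uniformly drawn
    points one at a time, rejecting any candidate closer than r to
    an already-accepted point. Stops after max_tries proposals."""
    pts = []
    tries = 0
    while len(pts) < n and tries < max_tries:
        tries += 1
        p = (random.uniform(0, xmax), random.uniform(0, ymax))
        if all(math.dist(p, q) >= r for q in pts):
            pts.append(p)
    return pts

points = rssi(0.05, 100)
```

The result looks "regular" (like the glow-worm stars) because the inhibition radius forbids close pairs, unlike a plain Poisson pattern.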
2018 Mar 12
0
Expected performance for WORM scenario
Hi, Can you send us the following details: 1. gluster volume info 2. What client you are using to run this? Thanks, Nithya On 12 March 2018 at 18:16, Andreas Ericsson <andreas.ericsson at findity.com> wrote: > Heya fellas. > > I've been struggling quite a lot to get glusterfs to perform even > halfdecently with a write-intensive workload. Testnumbers are from gluster >
2018 Mar 12
0
Expected performance for WORM scenario
Hi, Gluster will never perform well for small files. I believe there is nothing you can do with this. Ondrej From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Andreas Ericsson Sent: Monday, March 12, 2018 1:47 PM To: Gluster-users at gluster.org Subject: [Gluster-users] Expected performance for WORM scenario Heya fellas. I've been
2018 Mar 14
0
Expected performance for WORM scenario
That seems unlikely. I pre-create the directory layout and then write to directories I know exist. I don't quite understand how any settings at all can reduce performance to 1/5000 of what I get when writing straight to ramdisk though, and especially when running on a single node instead of in a cluster. Has anyone else set this up and managed to get better write performance? On 13 March
2018 Mar 13
0
Expected performance for WORM scenario
Well, it might be close to the _synchronous_ nfs, but it is still well behind the asynchronous nfs performance. Simple script (a bit extreme, I know, but it helps to draw the picture):
#!/bin/csh
set HOSTNAME=`/bin/hostname`
set j=1
while ($j <= 7000)
  echo ahoj > test.$HOSTNAME.$j
  @ j++
end
rm -rf test.$HOSTNAME.*
Takes 9 seconds to execute on the NFS share, but 90 seconds on
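The csh loop above can be expressed as a small, portable helper for timing small-file creation on any mount under test; this is a sketch of the same measurement (the function name is illustrative, not a gluster tool):

```python
import os
import socket
import time

def small_file_benchmark(directory, count=7000):
    """Create `count` tiny files in `directory`, time the creation,
    then clean up. Mirrors the csh loop: echo ahoj > test.$HOST.$j"""
    host = socket.gethostname()
    paths = [os.path.join(directory, f"test.{host}.{j}") for j in range(count)]
    start = time.time()
    for path in paths:
        with open(path, "w") as f:
            f.write("ahoj\n")
    elapsed = time.time() - start
    for path in paths:  # cleanup is excluded from the timing
        os.remove(path)
    return elapsed
```

Point `directory` at the NFS or gluster mount to reproduce the 9-second vs. 90-second comparison described above.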
2018 Mar 12
4
Expected performance for WORM scenario
Heya fellas. I've been struggling quite a lot to get glusterfs to perform even half-decently with a write-intensive workload. Test numbers are from gluster 3.10.7. We store a bunch of small files in a doubly-tiered sha1 hash fanout directory structure. The directories themselves aren't overly full. Most of the data we write to gluster is "write once, read probably never", so 99%
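The "doubly-tiered sha1 hash fanout" layout mentioned above can be sketched as follows. The exact tier widths (two 2-hex-digit levels) are an assumption for illustration; the post does not specify them.

```python
import hashlib
import os

def fanout_path(root, payload):
    """Map a blob to root/aa/bb/<full sha1>, using the first two
    pairs of hex digits as directory tiers (assumed layout)."""
    digest = hashlib.sha1(payload).hexdigest()
    return os.path.join(root, digest[:2], digest[2:4], digest)

# sha1(b"hello") starts with "aaf4", so this blob lands under aa/f4/
print(fanout_path("/data", b"hello"))
```

Spreading files across two tiers keeps any single directory from growing huge, which matters because distributed filesystems often pay a high per-lookup cost on very full directories.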