similar to: Samba 2.2.8a: Deleting all files

Displaying 20 results from an estimated 4000 matches similar to: "Samba 2.2.8a: Deleting all files"

2006 Dec 14
1
Does send_file act differently with Mongrel than with Webrick?
I tried the mime thing but it did not help. I didn't think it was that, because I can get the mp3 served OK by hitting the URL in a browser. I just noticed that I am getting an error in Mongrel when WinAmp makes the request for the mp3 file: BAD CLIENT (127.0.0.1): Invalid HTTP format, parsing fails. I don't think Mongrel has a problem with the URL itself because if a browser
2007 Feb 15
1
Anyway to Force a start even if the .pid file exists?
I'm using monit to restart mongrel processes if they go away. With mongrel 0.3 I could issue a start command and the process would start up even if the .pid file existed (though a warning would appear). Now, with mongrel 1.0.1, the warning says I must clear the .pid file, and the process is not started. Is there any way to force it to start without having to clear the .pid? Regards
2006 Dec 14
1
Does send_file act differently with Mongrel than with Webrick?
Mongrel 0.3.13.3, Windows XP, Rails 1.1.6. I have a Rails app that serves up mp3 files via the send_file command. We create a playlist (M3U) with URLs to the mp3s. These play just fine using Windows Media Player. However, using WinAmp we get an error "error syncing to stream" when it tries to request the mp3. After playing around I found that if I am running Rails under Webrick WinAmp
2006 Sep 26
4
Mongrel Processes Dying
We are seeing mongrel processes dying in our mongrel cluster. What is the best way to troubleshoot this? We have Ruby 1.8.4, Rails 1.1.0 (upgrading soon), and MySQL 4.1 running on Red Hat Enterprise Linux ES release 4, with Apache 2.2, mongrel-0.3.13.3, and mongrel_cluster-0.2.0. I saw the following messages in the mongrel.log, but I am not sure if they are related to the processes dying. It would be nice if
2006 Sep 26
3
Clustering - Avoiding "dead" processes?
I have Mongrel Cluster set up with Apache and mod_proxy_balancer. I've seen (from time to time) mongrel instances become non-responsive. Is there any way to configure the balancer so that it "knows" which processes are no longer good and stops trying to use them?
2007 Feb 15
2
Multiple Processes Spawned from mongrel_rails start ?
Hello, I have mongrel 1.0.1, rails 1.2.2, and ruby 1.8.5 running on CentOS 4.4. When I execute mongrel_rails start -d, I see that 3 processes are spawned. See below:
[root@ccc aaa]# mongrel_rails start -d
[root@ccc aaa]# ps -def | grep mong
root 2743 1 9 07:14 ? 00:00:01 /usr/bin/ruby /usr/bin/mongrel_rails start -d
root 2744 2743 0 07:14 ? 00:00:00
2006 Mar 16
3
Drag Drop problem with Div using overflow:auto
There seems to be a problem with dragging an object outside a div if using overflow:auto on the div style. My problem is the same as the one described in the following thread but the responder's "fix" did not work for me. Is there a fix for this problem? http://wrath.rubyonrails.org/pipermail/rails-spinoffs/2006-February/002599.html Regards
2003 Jan 01
1
Simulating rdist?
rsync is great for syncing 2 directory trees, but I want to maintain a master source tree on one machine and copy that to multiple machines, i.e. basically what rdist does. The only way I can see of doing this with rsync is to have multiple cron jobs:
0 * * * * rsync ... machine1:...
0 * * * * rsync ... machine2:...
...and so on. Is there a more elegant/compact way of doing this? Thanks
2003 Feb 11
1
--delete ignored?
rsync 2.5.6. If I turn on the --backup (and --backup-dir and --suffix) options and rsync a tree like
rsync --backup ... -av --delete dir1/ user@machine:dir1
I don't see the 'deleting ...' message in the output, but if I delete a file in the source tree, it DOES get deleted from the target tree. If I remove the '--backup' option, I do get the 'deleting ...' message.
2011 Jul 24
2
Deleting rows and store the deleted rows in new data frame
Dear all, I am using grep but I did not understand the problem, as I am doing something wrong. Please help me. I am using this code:
sf=data.frame(sapply(df[],function(x) grep('\\.&\\,', df[,9])))
The thing is, I have a data frame (df) like this:
10 135349467 g G 4 0 0 5 ,,,.,
10 135349468 t T 2 0 0 5 ,,c.,
10 135349469 g G 7 0 0 5 ,,a.,
10 135349470 c C 8 0 0 5 ,,,.,
10 135349471
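A minimal base-R sketch of one way to do this, assuming the goal is to pull rows whose ninth column matches a pattern into a new data frame and keep the rest (the pattern and column index here are illustrative, not the poster's exact intent):

# hits is TRUE for rows whose 9th column contains a literal '.'
hits <- grepl("\\.", df[, 9])
deleted <- df[hits, ]    # matched rows, stored in a new data frame
kept    <- df[!hits, ]   # original data frame with those rows removed

grepl() returns a logical vector the same length as its input, which indexes rows directly; grep() returns positions, so df[grep(...), ] works as well.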
2005 Aug 31
1
Painfully slow under windows
I did some testing on the speed and found tinc under Windows is painfully slow; in particular, it is painfully slow to transfer anything from the Windows machine. Here is my setup: I have two Windows XP machines on a 100 Mbit local network, and I use FTP to test speed. Without tinc, the speed is about 5 MB/sec, but through the VPN interface the speed is only about 10-20 KB/sec. For the next test, I installed VMware 5.0 on
2011 Aug 22
3
automatic file input
Dear all, I have 100 files which are used as input, and I have to enter the names of my files again and again. The names of the files are 1.out, 2.out, ..., 100.out. I want to know if there is anything like in Perl, where I can use something like this:
for($f = 1; $f <= 100; $f++) { $file = $f.".out";
I have tried this in R but it does not work. Can somebody please help me.
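The equivalent loop is straightforward in base R; a minimal sketch, assuming the files sit in the working directory and are readable with read.table (the processing step is a placeholder):

for (f in 1:100) {
  filename <- paste0(f, ".out")   # builds "1.out", "2.out", ..., "100.out"
  dat <- read.table(filename)     # read each file in turn
  # ... process dat here ...
}

sprintf("%d.out", f) or file.path() work just as well for building the names.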
2011 Jul 01
13
For help in R coding
Dear all, I am doing a project on variant calling using R. I am working on a pileup file. There are 10 columns in my data frame, and I want to count the number of A, C, G and T in each row for column 9. An example of column 9 is given below:
.a,g,,
.t,t,,
.,c,c,
.,a,,,
.,t,t,t
.c,,g,^!.
.g,ggg.^!,
.$,,,,,.,
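One compact base-R approach is to delete everything except the base of interest and count what is left; a sketch assuming the strings sit in column 9 of a data frame df and that lowercase and uppercase calls should be counted together:

x <- toupper(df[, 9])
counts <- data.frame(
  A = nchar(gsub("[^A]", "", x)),  # drop all non-A characters, count the rest
  C = nchar(gsub("[^C]", "", x)),
  G = nchar(gsub("[^G]", "", x)),
  T = nchar(gsub("[^T]", "", x))
)

cbind(df, counts) then attaches the per-row tallies to the original frame.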
2005 Aug 31
1
why does the server needs to have the client's host file
First I have to say TINC is great; it is the only one that I found fits my needs. I use it to create a virtual network for parallel computing using computers around labs and campus. I have a question though. Suppose we have a server A and client B setting. It is necessary for the client B to have the host file of the server A, so that B knows where to find A. But why would A need to have the
2010 Feb 16
1
Migrate from an NFS storage to GlusterFS
Hi - I already have an NFS server in production which shares Web data for a 4-node Apache cluster. I'd like to switch to GlusterFS. Do I have to copy the files from the NFS storage to a GlusterFS one, or would it work if I just install GlusterFS on that server, configuring a GlusterFS volume on the existing storage directory (assuming, of course, the NFS server is shut down and not used
2011 Jul 14
5
Adding rows based on column value
Dear all, I have one problem and did not find any solution. (I have also attached the problem in a text file, because sometimes column spacing is not good in mail.) I have a file (file.txt) attached to this mail. I am reading it using this code to make a data frame (file):
file=read.table("file.txt",fill=T,colClasses = "character",header=T)
file looks like this:
Chr Pos
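The snippet is cut off before the actual question, but if "adding rows based on column value" means repeating each row according to a count held in one of the columns, a minimal base-R sketch (the column name n is hypothetical) is:

file$n <- as.numeric(file$n)                          # hypothetical count column
expanded <- file[rep(seq_len(nrow(file)), file$n), ]  # repeat each row n times

rep() on the row indices is the idiomatic way to expand rows without an explicit loop.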
2009 Feb 25
4
DID's in a specific rate center
I need 100 DIDs in a specific rate center (916-854-xxxx). How do I go about finding who owns the rate center? And whether the DIDs are available in this rate center? Thanks Vikas
2009 Aug 11
3
Fuse problem
Hello all, I'm running a 64-bit CentOS 5 setup and am trying to mount a gluster filesystem (which is exported out of the same box).
glusterfs --debug --volfile=/root/gluster/webspace2.vol /home/webspace_glust/
gives me:
<snip>
[2009-08-11 16:26:37] D [client-protocol.c:5963:init] glust1b_36: defaulting ping-timeout to 10
[2009-08-11 16:26:37] D [transport.c:141:transport_load]
2004 Sep 28
3
add-on packages
I want to add the RMySQL and RODBC packages to my R installation on a Red Hat Linux box. The command install.packages gives the following output. What could be wrong?
********************
install.packages(RMySQL)
trying URL `http://cran.r-project.org/src/contrib/PACKAGES'
Content type `text/plain; charset=iso-8859-1' length 202145 bytes
opened URL
.......... .......... .......... ..........
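One detail worth checking in that transcript: install.packages expects package names as quoted character strings, and install.packages(RMySQL) tries to evaluate RMySQL as a variable, which eventually raises an "object not found" error. A sketch of the usual form, assuming a reachable CRAN mirror:

install.packages(c("RMySQL", "RODBC"))  # names must be character strings
library(RMySQL)                         # load one to verify the install

On Linux these packages build from source, so the MySQL and ODBC client development headers must also be present for the builds to succeed.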
2011 Aug 07
3
Printing data frame with million rows
Dear all, I was working on number of files and at the end I got a data frame with approx. million rows.To prin this data frame in output, I used capture.output(print.data.frame(end,row.names=F), file = "summary", append = FALSE) where end is the name of my data frame and summary is the name of my output file. but when I checked the output there were only 10000 rows and at the last it