similar to: using file in hdfs for data mining algorithms in r

Displaying 20 results from an estimated 600 matches similar to: "using file in hdfs for data mining algorithms in r"

2011 Dec 09
1
Samba 3.0, fuse-hdfs and write problems
Hi folks, I am currently researching a connection over FUSE and Samba for a Hadoop cluster. It's my private playground, but I have some issues I can't figure out. I use RHEL 5.7; package list: rpm -qa|grep samba samba-3.0.33-3.29.el5_7.4.x86_64 samba-common-3.0.33-3.29.el5_7.4.x86_64 Two servers: one provides Samba shares and one I use as a client. Samba-Server (HOST2): FuseFS:
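For context, a minimal Samba share on top of a FUSE mount usually looks something like this (a sketch only; the share name and the mount point /mnt/hdfs are illustrative assumptions, not taken from the post):

    # hypothetical smb.conf excerpt; /mnt/hdfs is an assumed fuse-hdfs mount point
    [hdfs]
        path = /mnt/hdfs
        writable = yes
        browseable = yes
        # the FUSE mount itself usually needs -o allow_other, since smbd
        # accesses the mount as the connecting user, not as the mounter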
2013 Oct 09
1
How to write R data frame to HDFS using rhdfs?
Hello, I am trying to write the default "OrchardSprays" R data frame into HDFS using the "rhdfs" package. I want to write this data frame directly into HDFS without first storing it in any file in the local file system. Which rhdfs command should I use? Can someone help me? I am very new to R and rhdfs. Regards, Gaurav
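One possible sketch of a direct write, assuming the rhdfs package is installed, HADOOP_CMD is set, and using a made-up HDFS path:

    library(rhdfs)
    hdfs.init()
    # render the data frame as CSV text in memory -- no local file involved
    csv <- paste(capture.output(write.csv(OrchardSprays, row.names = FALSE)),
                 collapse = "\n")
    out <- hdfs.file("/user/gaurav/OrchardSprays.csv", "w")  # path is an assumption
    hdfs.write(charToRaw(csv), out)
    hdfs.close(out)

hdfs.write() serializes arbitrary R objects but writes raw vectors as-is, which is why the CSV text is converted with charToRaw() first.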
2016 Apr 15
1
help on moving data from local to HDFS using RODBC in R
Hi, I have a requirement to move data from a local Linux path (e.g. /home/user/sample.txt) to Hadoop HDFS using RODBC in R. I know that we can move the data using rhive commands like rhive.put and rhive.get, but I am looking for similar commands using RODBC as well. I would appreciate your input. Regards, Divakar Phoenix, USA
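RODBC itself only speaks SQL to an ODBC driver, so a raw file copy is not in its vocabulary; a hedged workaround sketch, assuming a Hive ODBC DSN named "HiveDSN" has been configured, is to let Hive ingest the file:

    library(RODBC)
    ch <- odbcConnect("HiveDSN")   # DSN name is an assumption
    sqlQuery(ch, "CREATE TABLE IF NOT EXISTS sample (line STRING)")
    # LOAD DATA LOCAL INPATH runs on the Hive side and pulls the file into HDFS
    sqlQuery(ch, "LOAD DATA LOCAL INPATH '/home/user/sample.txt' INTO TABLE sample")
    odbcClose(ch)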
2013 Oct 09
0
How to write R data frame to HDFS using rhdfs?
Hello, I am trying to write the default "OrchardSprays" R data frame into HDFS using the "rhdfs" package. I want to write this data frame directly into HDFS without first storing it in any file in the local file system. Which rhdfs command should I use? Can someone help me? I am very new to R and rhdfs. Regards, Gaurav
2016 Apr 22
1
Storage cluster advice, anybody?
Hi Valeri On Fri, Apr 22, 2016 at 10:24 PM, Digimer <lists at alteeve.ca> wrote: > On 22/04/16 03:18 PM, Valeri Galtsev wrote: >> Dear Experts, >> >> I would like to ask everybody: what would you advise using as a storage >> cluster, or as a distributed filesystem? >> >> I did my own research into what I can do, but I hit a snag with my >>
2005 Sep 13
1
possible bug in model.matrix
Is this a bug, or have I misunderstood the proper use of lm? Thanks, Whit. Code: x <- rnorm(50) y <- matrix(as.logical(round(runif(100),0)),ncol=2) NROW(x)==NROW(y) lm(x~y) > x <- rnorm(50) > y <- matrix(as.logical(round(runif(100),0)),ncol=2) > NROW(x)==NROW(y) [1] TRUE > lm(x~y) Error in "[[<-.data.frame"(`*tmp*`, nn, value = c(2, 1, 2, 1, 1, 1, 2, :
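A workaround sketch (not from the thread): coerce the logical matrix to numeric before handing it to lm(), which sidesteps the data.frame assignment error:

    x  <- rnorm(50)
    y  <- matrix(as.logical(round(runif(100), 0)), ncol = 2)
    yn <- y * 1     # numeric 0/1 copy of the logical matrix
    lm(x ~ yn)      # fits cleanly; lm() handles numeric matrix predictors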
2009 Nov 09
1
running icecast on top of hadoop
Hi, I have MP3 files stored inside Hadoop's HDFS. Is there any way of running Icecast on top of those MP3 files? Is there any help on how to do it? Thanks, jb
2009 Jan 23
1
svnserve with SASL on CentOS 5.2
Hello List. I'm cross-posting this from svn-users, as I'm not sure whether this is a CentOS-specific issue. Perhaps someone here has an idea of what's going on? ----------------------------- I have a fresh install of CentOS 5.2 x32, svnserve version 1.5.5 (r34862); here is my svnserve.conf file: [general] anon-access = none auth-access = write realm = isf [sasl] use-sasl = true
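One thing worth checking in this situation (an educated guess, not confirmed by the poster): svnserve's SASL support also needs a svn.conf in the Cyrus SASL plugin directory, along the lines of:

    # /etc/sasl2/svn.conf -- location varies by distribution
    pwcheck_method: auxprop
    auxprop_plugin: sasldb
    sasldb_path: /etc/my_sasldb   # path is illustrative
    mech_list: DIGEST-MD5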
2010 Oct 29
1
Why won't rsync sync this file?
For some reason, rsync did not copy two specific files. Then I tried to specifically tell it to synchronize one of them and got this result. *I* don't see any reason for it to choose not to copy this file, but maybe one of you out there will see the obvious reason I am missing. dprweb> /usr/local/bin/rsync -vvv --stats -Pzrtpl --delete --password-file=/export/home/webuser/.appprod
2010 Apr 19
2
from solaris to linux: /usr/local/bin/rsync: No such file or directory
Hi, I want to use rsync to get files from a Linux box to a Solaris server. root at solaris:/root # which rsync /usr/local/bin/rsync root at solaris:/var/log/r5backup # rsync --version rsync version 3.0.7 protocol version 30 Copyright (C) 1996-2009 by Andrew Tridgell, Wayne Davison, and others. Web site: http://rsync.samba.org/ Capabilities: 64-bit files, 64-bit inums, 32-bit timestamps,
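A common fix for this symptom (a guess at the cause, since the thread is truncated): the remote side has no rsync at /usr/local/bin, so name the remote binary explicitly with --rsync-path:

    rsync -av --rsync-path=/usr/bin/rsync user@linuxbox:/src/ /dest/   # paths are placeholders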
2011 May 18
1
Need expert help with model.matrix
Dear experts: Is it possible to create a new function based on stats:::model.matrix.default so that an alternative factor coding is used when the function is called instead of the default factor coding? Basically, I'd like to reproduce the results in 'mat' below, without having to explicitly specify my desired factor coding (identity matrices) in the 'contrasts.arg'. dd
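One sketch that gets there without hand-built matrices (the data frame and factor names below are made up): contrasts(f, contrasts = FALSE) already returns an identity matrix for a factor, so it can populate contrasts.arg programmatically:

    dd   <- data.frame(g = gl(3, 2, labels = c("a", "b", "c")),
                       h = gl(2, 3, labels = c("lo", "hi")))
    facs <- names(Filter(is.factor, dd))
    mat  <- model.matrix(~ g + h, dd,
                         contrasts.arg = lapply(dd[facs], contrasts, contrasts = FALSE))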
2013 Apr 12
2
model frame and formula mismatch in model.matrix()
Hello everyone, I am trying to fit the following model. All X. variables are continuous, while the conditions are categorical. model <- lm(X2
2010 Nov 11
1
rsync stops at system call select()
Hi, all: I need to back up about 50 files, whose size won't exceed 5 MB, every 10-15 minutes to four remote machines. The backup command is written in a shell script and is executed by the scheduling program via the system() function. The scheduling program is implemented in C++. The command is as follows: rsync -az /home/admin/service/* admin at
2011 Oct 19
1
gluster map/reduce performance..
Hi, all, I am trying to check the Map/Reduce performance of the Gluster file system. The mapper-side speed is quite good; it is sometimes faster than Hadoop's map job. But the reduce-side job is much slower than Hadoop's. I analyzed the result and found that the primary reason for the slow speed is bad performance in the merging stage. Would you have any suggestions for this issue? FYI, check the blog
2007 Feb 08
1
Problem with factor state when subset()ing a data.frame
Hi folks, I am running into a problem when calling subset() on a large data.frame. One of the columns contains strings which are used as factors. R seems to automatically factor the column when the data.frame is constructed, and the factor levels appear not to get updated when I create a subset of the table. A minimal testcase to demonstrate the problem follows: sample <- data.frame(c("A",
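The usual fix (phrased against current R; the toy data here are reconstructed, since the original testcase is cut off) is to drop unused levels after subsetting:

    sample <- data.frame(grp = factor(c("A", "B", "C")), val = 1:3)
    sub <- subset(sample, grp != "C")
    levels(sub$grp)                   # still "A" "B" "C" -- levels survive subsetting
    sub$grp <- droplevels(sub$grp)    # or equivalently: factor(sub$grp)
    levels(sub$grp)                   # now just "A" "B"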
2010 Nov 07
1
rsync fails to retrieve file (if local file is incorrect)
I'm using rsync on an embedded PowerPC platform with a flash filesystem. Because of a power cycle, a local file got corrupted. This file is /flashfs/isd1; it has the correct size but the wrong MD5SUM (cd5...). Using rsync to retrieve the right file (from a remote machine) fails! If I delete the file first, it works (see below). How is this possible? --- N. van Bolhuis. # md5sum /flashfs/isd1
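A plausible explanation (not confirmed in the truncated thread): rsync's quick check compares only size and modification time, and here the size still matches, so the corrupt file is skipped; forcing checksums makes rsync look at the content:

    rsync -av --checksum remotehost:/path/isd1 /flashfs/isd1   # remote path is a placeholder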
2012 Jan 07
2
Linux Container and Tapdev
Hi: Recently I have been studying Linux containers (LXC), a lightweight form of OS-level virtualization. With my previous knowledge of tapdisk, my assumption is that I could use a VHD for each container to separate the storage of all containers, and perhaps later, with the VHD stored in distributed storage, a container could even be migrated. Build filesystem on tapdev
2005 Jan 07
8
Problem with bridging/routing on three interfaces and DNAT
Hello all, I have a problem with external access to a Postfix mail server running on my firewall as a mail gateway. My setup with Shorewall 2.2.0 RC4 is as follows: eth0 is zone isf - this is an intranet to other companies eth1 is zone loc - local network eth2 is zone net - internet, fixed IP address eth0 and eth1 are bridged shorewall version 2.2.0-RC4 ip addr show 1: lo: <LOOPBACK,UP> mtu
2011 Oct 24
2
How to use "virsh migrat" with p2p option?
Hi libvirt support, Can you please give me an example of how to use "virsh migrate --live" with the --p2p option, including both the source host and the target host? I tried to get some info from your website, but found nothing about migrate. [root at vmoactive02 qemu]# virsh help migrate NAME migrate - migrate domain to another host SYNOPSIS migrate [--live] [--p2p] [--direct]
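A typical peer-to-peer invocation looks like this (a sketch; the domain name and target URI are placeholders):

    virsh migrate --live --p2p mydomain qemu+ssh://target-host/system

With --p2p the source libvirt daemon drives the migration and connects to the target itself, instead of the client controlling both ends.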
2010 Mar 04
8
Huge difference in reporting disk usage via du and zfs list. Fragmentation?
Do we have enormous fragmentation here on our X4500 with Solaris 10, ZFS version 10? What, apart from zfs send/receive, can be done to free the fragmented space? One ZFS filesystem was used for some months to store large disk images (each 50 GB), which were copied there with rsync. This filesystem reports 6.39 TB used with zfs list but only 2 TB used with du. The other ZFS filesystem was used for similar
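Before blaming fragmentation, one common cause of a du-versus-zfs-list gap is space held by snapshots, which du cannot see; listing them is a quick check (the dataset name is a placeholder):

    zfs list -t snapshot -r tank/images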