Displaying 20 results from an estimated 1000 matches similar to: "Samba 3.0, fuse-hdfs and write problems"
2012 Feb 06
1
using file in hdfs for data mining algorithms in r
Hi all, I am new to R and am trying to run data mining algorithms using the
map-reduce framework. I have a few basic questions:
1. Can I pass a file in HDFS to kmeans()? I tried:
> file1 = hdfs.file("testdata/synthetic_control.data")
> isf = hdfs.read(file1, 5242880, 0)
> l = kmeans(isf, 2, 10)
It is not working. Please help.
2. How to access the file in HDFS and give as input to
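A minimal sketch of the usual fix, assuming the rhdfs package: hdfs.read() returns a raw vector, not a matrix, so the bytes must be decoded and parsed into a numeric matrix before kmeans() can use them. The rhdfs calls are commented out because they need a live cluster; the sample text stands in for the file contents:

```r
# rhdfs calls (need a running Hadoop cluster and the rhdfs package):
# library(rhdfs); hdfs.init()
# f   <- hdfs.file("testdata/synthetic_control.data", "r")
# raw <- hdfs.read(f, 5242880, 0)   # raw bytes, not numbers
# txt <- rawToChar(raw)
txt  <- "1.0 2.0 3.0\n4.0 5.0 6.0\n7.0 8.0 9.0\n"  # stand-in for the file
rows <- strsplit(trimws(readLines(textConnection(txt))), "\\s+")
m    <- do.call(rbind, lapply(rows, as.numeric))    # numeric matrix
cl   <- kmeans(m, centers = 2, iter.max = 10)       # now kmeans() works
```

The parsing step is the point: whitespace-delimited lines become rows of a numeric matrix, which is what kmeans() expects.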
2016 Apr 15
1
help on moving data from local to HDFS using RODBC in R
Hi,
I have a requirement to move data from a local Linux path (e.g.
/home/user/sample.txt) to Hadoop HDFS using RODBC in R.
I know we can move the data using rhive commands like *rhive.put*
and *rhive.get*, but I am looking for similar commands using RODBC as well.
I would appreciate your inputs.
Regards,
Divakar
Phoenix, USA
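For what it's worth, RODBC speaks ODBC (SQL over a driver), so it has no file-transfer verb; a common workaround, sketched here with assumed paths, is to shell out to the hdfs CLI from R:

```r
# Paths are illustrative; system() is commented out because it needs a
# Hadoop client installed on the machine.
src <- "/home/user/sample.txt"
dst <- "/user/divakar/sample.txt"
cmd <- sprintf("hadoop fs -put %s %s", src, dst)
# system(cmd)
```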
2013 Oct 09
1
How to write R data frame to HDFS using rhdfs?
Hello,
I am trying to write the default "OrchardSprays" R data frame into HDFS
using the "rhdfs" package. I want to write the data frame directly into
HDFS without first storing it in any file on the local file system.
Which rhdfs command should I use? Can someone help me? I am very new to R
and rhdfs.
Regards,
Gaurav
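One approach, assuming the rhdfs API (hdfs.file / hdfs.write / hdfs.close), is to serialize the data frame straight into an HDFS file. The cluster-dependent calls are commented out; the local serialize()/unserialize() round trip shows the idea:

```r
# Assumed rhdfs usage (needs a running cluster; path is illustrative):
# library(rhdfs); hdfs.init()
# con <- hdfs.file("/user/gaurav/OrchardSprays.robj", "w")
# hdfs.write(OrchardSprays, con)   # non-raw objects get serialized
# hdfs.close(con)
# The same round trip, done locally:
bytes <- serialize(OrchardSprays, connection = NULL)  # data frame -> raw
back  <- unserialize(bytes)                           # raw -> data frame
```

Reading it back from HDFS would be the mirror image: hdfs.read() the bytes, then unserialize().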
2016 Apr 22
1
Storage cluster advice, anybody?
Hi Valeri
On Fri, Apr 22, 2016 at 10:24 PM, Digimer <lists at alteeve.ca> wrote:
> On 22/04/16 03:18 PM, Valeri Galtsev wrote:
>> Dear Experts,
>>
>> I would like to ask everybody: what would you advise us to use as a storage
>> cluster, or as a distributed filesystem?
>>
>> I did my own research into what I can do, but I hit a snag with my
>>
2013 Oct 09
0
How to write R data frame to HDFS using rhdfs?
2009 Nov 09
1
running icecast on top of hadoop
Hi,
I have MP3 files stored inside Hadoop's HDFS.
Is there any way of running Icecast on top of those MP3 files?
Is there any help on how to do it?
Thanks,
jb
2011 Oct 19
1
gluster map/reduce performance..
Hi, all,
I am trying to check the Map/Reduce performance of the Gluster file system.
Mapper-side speed is quite good, sometimes faster than Hadoop's map job,
but the reduce-side job is much slower than Hadoop's.
I analyzed the result and found that the primary cause of the slow speed is poor performance in the merging stage.
Would you have any suggestions for this issue?
FYI check the blog
2009 Nov 06
4
Hadoop Cluster on Xen
Hi all,
Has anyone created a Xen cluster to run a Hadoop VM cluster?
I would be interested in how it performs.
Thanks
Lance
2010 Jun 16
6
clustered file system of choice
Hi all,
I am just trying to consider my options for storing a large mass of
data (tens of terabytes of files), and one idea is to build a
clustered FS of some kind. Has anybody had any experience with that?
Any recommendations?
Thanks in advance for any and all advice.
Boris.
2011 Nov 28
1
Very strange permission problem: samba on zfs-fuse
Hi all,
Centos 5.7
samba-common-3.0.33-3.29.el5_7.4
samba-3.0.33-3.29.el5_7.4
zfs-fuse-0.6.9_p1-6.20100709git.el5.1
smb.conf
[depot]
path = /data/depot
public = no
writable = yes
directory mask = 2775
create mask = 0664
vfs objects = recycle
recycle:repository = .deleted/%U
recycle:keeptree = Yes
recycle:touch = Yes
recycle:versions = Yes
recycle:maxsixe =
2013 May 09
0
How to run TestDFSIO on hadoop running lustre?
I generally run TestDFSIO for Hadoop running on HDFS as shown:
hadoop jar hadoop-*test*.jar TestDFSIO -write -nrFiles 10 -fileSize 1000
How do I run it on a Lustre filesystem?
2009 Mar 24
0
A question about rJava and the classpath
Hello,
Since this is an R package, I'm sending the email here. I'm loading all the
jar files as:
library(rJava)
hadoop <- Sys.getenv("HADOOP")
allfiles <- c(list.files(hadoop, pattern = "jar$", full.names = T),
              list.files(paste(hadoop, "lib", sep = .Platform$file.sep, collapse = ""),
                         pattern = "jar$", full.names = T))
allfiles <-
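For reference, a sketch of the same jar collection with file.path() in place of the paste() construction, and how such a list is typically handed to the JVM; the rJava calls are commented out since they need rJava and a JVM installed:

```r
hadoop <- Sys.getenv("HADOOP")   # "" if the variable is unset
jars <- c(
  list.files(hadoop, pattern = "jar$", full.names = TRUE),
  list.files(file.path(hadoop, "lib"), pattern = "jar$", full.names = TRUE)
)
# library(rJava)
# .jinit(classpath = jars)       # start the JVM with those jars on the classpath
```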
2011 Jun 13
0
Hadoop Hive output read into R
All,
I am using a pretty crude method to get data out of HDFS via Hive and into R, and I was curious about alternatives the group has explored.
Basically, I run a system command that executes a Hive statement and writes the returned data to a delimited file. Then I read that file into an object and continue.
For example:
hive.script <- "select * from orders where date =
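The pipeline described above, sketched with an illustrative query and a temp file (the query, table, and column names are assumptions, and the hive call is commented out since it needs the Hive CLI):

```r
hive.script <- "select * from orders where dt = '2011-06-01'"  # illustrative query
out.file    <- tempfile(fileext = ".tsv")
cmd <- sprintf('hive -e "%s" > %s', hive.script, out.file)
# system(cmd)                                    # needs the hive CLI on PATH
# orders <- read.delim(out.file, header = FALSE) # read the delimited result back
```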
2012 Jan 07
2
Linux Container and Tapdev
Hi:
Recently I have been studying Linux containers (lxc), a lightweight OS-level
virtualization technology.
With my previous knowledge of tapdisk, my assumption is that I could
use a VHD per container to separate the storage of all the containers, and if,
some day, the VHD is stored in distributed storage, a container could be migrated.
Build filesystem on tapdev
2013 Sep 23
0
Unable to execute Java MapReduce (Hadoop) code from R using rJava
Hi All,
I have written Java MapReduce code that runs on Hadoop. My intention is
to create an R package which will call the Java code and execute the job.
Hence, I have written a corresponding R function. But when I call this function
from the R terminal, the Hadoop job does not run. It just prints a few lines
of warning messages and does nothing further. Here is the execution
scenario:
*>
2012 Oct 23
0
Rserve-PHP client warning
I connected to Rserve successfully and can run some functions like
rnorm(), print(), etc.
I am connecting R with Hive/Hadoop and have installed the "RHive"
library, which takes care of the connection,
but when I connect it throws a warning like:
"Warning: type 7 is currently not implemented in the PHP client."
code:-->
2011 Nov 15
2
I can browse but can't modify or create files
Hello,
I am having trouble getting my Samba share to work properly. We are running CentOS and the Samba version is
samba.x86_64 3.0.33-3.29.el5_7.4
The problem I am having is that I can see all the shares and browse the directories but I cannot create or modify files. Whenever I try to create or modify a file I get an "Access Denied" pop-up window. This is
2013 Jan 14
0
Revolutions blog roundup: December 2012
I write about R every weekday at the Revolutions blog:
http://blog.revolutionanalytics.com
and every month I post a summary of articles from the previous month
of particular interest to readers of r-help.
In case you missed them, here are some articles related to R from the
month of December:
January 24: the webinar "Using R with Hadoop" will be presented by
Jeffrey Breen:
2013 Mar 11
4
Understanding lustre setup ..
Hello,
I have been reading
http://wiki.lustre.org/images/1/1b/Hadoop_wp_v0.4.2.pdf for setting up
Hadoop over lustre.
Generally in a Hadoop setup, we have one namenode and some number of datanodes.
If I want to set up the same with Lustre as the backend, the document
mentions that:
".............Our experiments run on cluster with 8 nodes in total,
one is mds/namenode, the rest are
2016 Apr 22
4
Storage cluster advice, anybody?
Dear Experts,
I would like to ask everybody: what would you advise us to use as a storage
cluster, or as a distributed filesystem?
I did my own research into what I can do, but I hit a snag with my
seemingly best choice, so I finally decided to stay away from it, and to ask
clever people what they would use.
My requirements are:
1. I would like to have one big (say, comparable to petabyte)