Displaying 20 results from an estimated 1000 matches similar to: "favorite cheap VPS services"
2015 Jan 16
1
favorite cheap VPS services
> On 01/15/2015 06:24 PM, Tim Dunphy wrote:
> >
> > So I was wondering.. what are some really cheap VPS services that you like
> > to use for one-off projects like this and why? I'm looking for as dirt cheap
> > as possible.
Depends what you mean by 'cheap'.
In my experience good, fast, less than USD 100 annually, from Germany
(Hetzner, they have good
2015 Jan 16
0
favorite cheap VPS services
On 01/15/2015 06:24 PM, Tim Dunphy wrote:
> Hey all,
>
> I'm trying to learn how to use some of the big data stores. Specifically I
> want to learn how to use CassandraDB and Hadoop. Originally I'd had the
> idea of trying to set up a Cassandra ring on the Amazon AWS free tier.
> However it seems that neither will run on a t2.micro instance.
>
> So I was wondering..
2015 Apr 02
1
mounted NFS does not show in df -h
Hey guys,
This is kind of odd, so I wanted to do a sanity check.
I mounted an NFS share like so:
[root@web1:~]# mount -t nfs nfs1.jokefire.com:/home /mnt/home
Seemed to go ok. Then I took a look at the output of df -h and didn't see
it!
[root@web1:~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda         40G   24G   14G  64% /
devtmpfs
2013 May 20
1
Glusterfs-Hadoop
Hi,
Where can I find glusterfs-hadoop-0.20.2-0.1.x86_64.rpm?
The following link is from the Gluster FS Admin Guide, but it doesn't exist:
http://download.gluster.com/pub/gluster/glusterfs/qa-releases/3.3-beta-2/glusterfs-hadoop-0.20.2-0.1.x86_64.rpm
Thanks!
2009 May 07
4
problem with conditionals
I'm new to puppet. I'm trying to use some real case examples to better
understand how Puppet works.
Here's my case:
exec { "usermod -d /home/hadoop -s /bin/bash hadoop":
  unless => "test `grep ^hadoop /etc/passwd | awk -F: '{print $6}'` == '/home/hadoop'"
}
The idea is the usermod would only
2013 Mar 11
4
Understanding lustre setup ..
Hello,
I have been reading
http://wiki.lustre.org/images/1/1b/Hadoop_wp_v0.4.2.pdf for setting up
Hadoop over Lustre.
In a typical Hadoop setup, we have one NameNode and some number of DataNodes.
If I want to set up the same thing with Lustre as the backend, the document
mentions that:
".............Our experiments run on cluster with 8 nodes in total,
one is mds/namenode, the rest are
2013 Oct 09
2
Error while running MR using rmr2
Hi,
I have been trying to run a simple MR program using rmr2 on a single-node Hadoop
cluster. Here is the environment for the setup
Ubuntu 12.04 (32 bit)
R (Ubuntu comes with 2.14.1, so updated to 3.0.2)
Installed the latest rmr2 and rhdfs from
https://github.com/RevolutionAnalytics/RHadoop/wiki/Downloads and
the corresponding dependencies
Hadoop 1.2.1
Now I am trying to run a simple MR
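For context, a minimal rmr2 job of the kind described above would look roughly like the sketch below; it is illustrative only and assumes HADOOP_CMD and HADOOP_STREAMING point at the local Hadoop 1.2.1 installation.
library(rmr2)

# Put a small vector into HDFS
small.ints <- to.dfs(1:1000)

# Map-only job: emit (v, v^2) pairs
out <- mapreduce(input = small.ints,
                 map = function(k, v) keyval(v, v^2))

# Pull the result back into R
res <- from.dfs(out)
head(data.frame(key = keys(res), value = values(res)))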
2011 Oct 19
1
gluster map/reduce performance..
Hi, all,
I am trying to check the Map/Reduce performance of the Gluster file system.
The mapper-side speed is quite good; it is sometimes faster than Hadoop's map job.
But on the reduce side the job is much slower than Hadoop's.
I analyzed the result and found that the primary reason for the slow speed is poor performance in the merging stage.
Do you have any suggestions for this issue?
FYI check the blog
2011 Jan 04
5
Allowing puppet to drop privileges for a manifest
Greetings,
Our environment consists of about 600 Redhat Enterprise Linux 3, 4, 5,
and soon 6 servers. We use cfengine 2 currently, but plan on
migrating to puppet. Right now, we have our root-owned cfengine
client running every 15 minutes from cron contacting a single cfservd
server. Additionally, our employees start their own cfengine and
puppet instances on on some servers running under
2013 Nov 20
2
How come that module is not executed in Windows?
I have the following in my Vagrantfile on a Windows system.
config.vm.provision :puppet do |puppet|
  puppet.manifests_path = "manifests"
  puppet.manifest_file  = "base-hadoop.pp"
  puppet.module_path    = "modules"
end
When I run vagrant provision, I do see the manifest and module folders being
mounted, and when I ssh into the VM I can find the files in the following path
2019 Nov 21
2
How to make xapian run in hadoop
Hi all,
We use Xapian as the backend of our system. The data that needs to be indexed keeps growing, and local mode is hard to maintain, so we plan to move the index builder to Hadoop. We are trying to make Xapian run in Hadoop, and we have now hit a problem: there are many seek operations when Xapian writes the index files, but the seek() method in the Hadoop C API only supports reads, and we are blocked by
2010 Oct 08
2
New user - Issue using Generic::Mkuser in the ghoneycutt/generic module.
I'm trying to automatically create users as a requirement for ssh keys
to work. Here is my issue. I am getting this error from the agent. The
SSH part works fine, but it will not create the user due to a
dependency issue. I do not know how to debug this.
err: Could not run Puppet configuration client: Could not find
dependency Generic::Mkuser[hadoop] for Ssh::Authorized_keys[hadoop]
at
2015 Dec 11
2
SVM hadoop
Hello Mª Luz,
Let me share my view briefly:
First of all, I need to be clear about exactly what I want to do in parallel;
I can think of 3 scenarios:
(1) Apply a model, in this case an SVM, to very large data, and that is why
I need hadoop/spark
(2) Fit many SVM models on small data sets (for example, one per
user), and that is why I need hadoop/spark to parallelize these processes
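As an illustration of scenario (2), fitting many small SVMs in parallel on a single machine (without hadoop/spark) might look like the sketch below; the data frame 'dat', its 'user' column and the label 'y' are hypothetical names used only for illustration.
# Sketch only: one SVM per user, fitted in parallel with base R's 'parallel'
# package. 'dat', 'user' and 'y' are hypothetical; on Windows use parLapply
# instead of mclapply, since mclapply relies on forking.
library(e1071)
library(parallel)

fits <- mclapply(split(dat, dat$user),
                 function(d) svm(y ~ ., data = d, kernel = "radial"),
                 mc.cores = 4)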
2015 Dec 10
3
SVM hadoop
Dear all,
One day I read something at the following link, but I never used it.
http://blog.revolutionanalytics.com/2015/06/using-hadoop-with-r-it-depends.html
Javier Rubén Marcuzzi
From: Carlos J. Gil Bellosta
Sent: Wednesday, December 9, 2015, 14:33
To: MªLuz Morales
CC: r-help-es
Subject: Re: [R-es] SVM hadoop
No, they will not run in parallel if you use the SVMs from packages like e1071.
No
2015 Dec 10
2
SVM hadoop
Hello,
You can put RStudio on Amazon, add "caret", and off you go....
I don't know whether what Amazon can offer will be enough for your
problem... I think it will... ;-)....
Or do it directly here, where they already have this whole setup in place:
http://www.teraproc.com/front-page-posts/r-on-demand/
Thanks,
Carlos.
On December 10, 2015 at 14:43, MªLuz Morales <mlzmrls
2008 Aug 21
2
Large data sets with R (binding to hadoop available?)
Dear R community,
I find R fantastic and use R whenever I can for my data analytic
needs. Certain data sets, however, are so large that other tools
seem to be needed to pre-process data such that it can be brought
into R for further analysis.
Questions I have for the many expert contributors on this list are:
1. How do others handle situations of large data sets (gigabytes,
terabytes)
2009 Jul 31
1
Using R with Hadoop/Hive for Big Data
Hive <http://hadoop.apache.org/hive/> is a data warehouse infrastructure
built on top of Hadoop that provides tools to enable easy data
summarization, ad hoc querying and analysis of large datasets stored in
Hadoop files. It provides a mechanism to put structure on this data and it
also provides a simple query language called QL which is based on SQL and
which enables users familiar with
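One common way to issue QL queries from R is over Hive's JDBC interface with the RJDBC package. The sketch below is illustrative only; the driver class, host, port, jar location and table name are assumptions that depend on the local Hive installation.
# Sketch only: querying Hive from R via RJDBC. The driver class, URL, jar path
# and table name below are assumptions for a HiveServer2-style setup.
library(RJDBC)

drv <- JDBC(driverClass = "org.apache.hive.jdbc.HiveDriver",
            classPath = list.files("/path/to/hive/jdbc/lib",   # hypothetical path
                                    pattern = "\\.jar$", full.names = TRUE))
conn <- dbConnect(drv, "jdbc:hive2://localhost:10000/default", "hiveuser", "")

res <- dbGetQuery(conn, "SELECT col, COUNT(*) AS n FROM some_table GROUP BY col")
dbDisconnect(conn)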
2009 Nov 06
4
Hadoop Cluster on Xen
Hi all,
Has anyone created a Xen cluster to run a Hadoop VM cluster?
I would be interested in how it performs.
Thanks
Lance
2015 Dec 09
2
SVM hadoop
Good morning,
does anyone know whether there is a way to implement a support vector
machine (SVM) with R-hadoop??
My interest is in doing big data processing with SVM. I know that in R there are
the packages {RtextTools} and {e1071} that let you fit SVMs. But I am not
sure whether the algorithm is parallelizable, that is, whether it can run in
parallel on the R-hadoop platform.
Many
2016 Jun 15
5
Hadoop
Hi there,
I was wondering whether any of you use hadoop Spark in your day-to-day work and
whether you could recommend a good course to get started. I went to the Madrid
meetup talk on Rspark a few months ago and it was good; now I was wondering
whether it is possible to go deeper.
I would also like recommendations for any material you can suggest: Coursera
courses you have taken, books you have read,