similar to: problem with conditionals

Displaying 20 results from an estimated 30000 matches similar to: "problem with conditionals"

2010 Jul 16
4
Installing puppet with kickstart -- Cannot find local fact /proc/cpuinfo
Hi, I have been trying to get puppet working with kickstart. I am trying to install Hadoop on the nodes. Installing puppet from kickstart works, and when the machine restarts, certificates are pulled down, the hadoop user is created, and files are extracted. I want all the user creation etc. to be done before the machine reboots, so that I can set up init.d scripts to start hadoop. So I
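A common approach for this, as a hedged sketch not taken from the thread (the server name is illustrative, and it assumes certificates are autosigned or pre-seeded), is to run the agent once from the kickstart %post section so the catalog is applied before the first reboot:

    %post
    # Run the agent once during installation so users, files and init.d
    # scripts are in place before the machine reboots for the first time.
    /usr/sbin/puppetd --onetime --no-daemonize --verbose --server puppet.example.com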
2009 Jul 18
15
large file download, timeout?
Hi. I'm a beginner, but I have a basic puppet setup working. I am doing a manual tarball installation, and it seems to hang and eventually time out just downloading the file: file { "/opt/hadoop-0.20.0.tar.gz": source => "puppet:///hadoop020/hadoop-0.20.0.tar.gz" } I have another module that does the same things and works, my only guess
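For reference, a minimal sketch of the resource in question; the hadoop020 part of the source URL is assumed to correspond to a mount defined in fileserver.conf on the master (the path below is illustrative):

    # fileserver.conf on the master (illustrative path):
    #   [hadoop020]
    #   path /srv/puppet/hadoop020
    #   allow *

    file { '/opt/hadoop-0.20.0.tar.gz':
      source => 'puppet:///hadoop020/hadoop-0.20.0.tar.gz',
    }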
2010 Oct 08
2
New user - Issue using Generic::Mkuser in the ghoneycutt/generic module.
I'm trying to automatically create users as a requirement for ssh keys to work. Here is my issue: I am getting this error from the agent. The SSH part works fine, but it will not create the user due to a dependency issue, and I do not know how to debug it. err: Could not run Puppet configuration client: Could not find dependency Generic::Mkuser[hadoop] for Ssh::Authorized_keys[hadoop] at
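That error usually means the resource named in require was never actually declared anywhere in the catalog. A hedged sketch of the shape of the fix (the mkuser parameter below is hypothetical, not taken from ghoneycutt/generic):

    # Declare the defined resource that the dependency points at...
    generic::mkuser { 'hadoop':
      uid => '10001',   # hypothetical parameter; check the module's real interface
    }

    # ...so that this require can be resolved at compile time.
    ssh::authorized_keys { 'hadoop':
      require => Generic::Mkuser['hadoop'],
    }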
2013 Nov 20
2
How come that module is not executed in Windows?
I have the following in the Vagrantfile on a Windows system. config.vm.provision :puppet do |puppet| puppet.manifests_path = "manifests" puppet.manifest_file = "base-hadoop.pp" puppet.module_path = "modules" end When I run vagrant provision, I do see that the manifest and module folders are mounted, and after sshing into the VM I can find files in the following path
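One thing worth checking, as a hedged sketch (it assumes the module lives at modules/hadoop): mounting modules/ is not enough on its own; base-hadoop.pp has to declare a class from the module for it to be applied.

    # Hypothetical base-hadoop.pp
    node default {
      include hadoop   # assumes modules/hadoop/manifests/init.pp defines class hadoop
    }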
2011 Jan 04
5
Allowing puppet to drop privileges for a manifest
Greetings, Our environment consists of about 600 Red Hat Enterprise Linux 3, 4, 5, and soon 6 servers. We use cfengine 2 currently, but plan on migrating to puppet. Right now, we have our root-owned cfengine client running every 15 minutes from cron, contacting a single cfservd server. Additionally, our employees start their own cfengine and puppet instances on some servers running under
2009 Aug 03
3
Could not call fileserver.describe: #<Errno::ECONNRESET: Connection reset by peer>
Hello there, I'm having this error: Could not call fileserver.describe: #<Errno::ECONNRESET: Connection reset by peer> From what I can tell, eventually the master decides it has had enough and freezes. I've got about 25 hosts checking in to it, but restarting the master daemon appears to be only a partial solution, with the clients occasionally not being able to then
2012 Jun 04
1
Need help resolving bad Puppet module entries for STIG
I discovered that a number of our STIG Puppet modules are failing. I think it's because the code is wrong, because when I make a quick change to the actual code being used, it then works as intended. STIG Puppet Code Repository:
2009 Jun 12
7
Obtaining puppet and facter for RHEL5/Centos5
What's the correct yum repo to use for installing Puppet & Facter on RHEL5 and CentOS5? I used to get them from the dlutter-rhel5 repo, but this seems to be massively out of date now - the latest version of puppet-server in there is 0.24.5-1.el5 and facter 1.5.4-1.el5. In EPEL I see puppet-server 0.24.8-1.el5.1 and facter 1.5.4-1.el5, which is better, but isn't 1.5.4 the version
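As an aside, a hedged sketch of managing the repository itself with Puppet's built-in yumrepo type; the mirrorlist URL is illustrative and should be checked against the current EPEL documentation:

    yumrepo { 'epel':
      descr      => 'Extra Packages for Enterprise Linux 5',
      mirrorlist => 'http://mirrors.fedoraproject.org/mirrorlist?repo=epel-5&arch=$basearch',
      enabled    => '1',
      gpgcheck   => '1',
    }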
2008 Aug 21
2
Large data sets with R (binding to hadoop available?)
Dear R community, I find R fantastic and use R whenever I can for my data analytic needs. Certain data sets, however, are so large that other tools seem to be needed to pre-process data such that it can be brought into R for further analysis. Questions I have for the many expert contributors on this list are: 1. How do others handle situations of large data sets (gigabytes, terabytes)
2007 Dec 11
4
EL5.1 client problems
Hi all, I attempted to add an EL5.1 client to our puppet server (EL5), and after signing the client cert, got the error "Certificates were not trusted: hostname not match with the server certificate". I found the mailing list discussion and the relevant page: http://www.reductivelabs.com/trac/puppet/wiki/RubySSL-2007-006 As far as I can tell, my puppetmaster's cert CN matches
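The client-side half of the usual fix, as a hedged sketch (the hostname is illustrative): the name the agent connects to has to match the CN, or a DNS alt name, in the master's certificate, e.g. in the client's puppet.conf:

    [main]
        # must match the CN or a DNS alt name in the puppetmaster's certificate
        server = puppet.example.com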
2013 Apr 11
3
Anyone managed to integrate Ambari/Hortonworks with an existing puppet installation?
Hi All, We're investigating the Hortonworks Hadoop Data Platform. It uses the Apache Ambari installer, and we are running into problems, as the installation notes (http://hortonworks.com/hdp110-hmc-quick-start-guide/) for the application say (and I kid you not): *Remove or disable any existing Puppet agent configurations* It seems that its management centre runs as a puppet master and
2013 Oct 09
2
Error while running MR using rmr2
Hi, I have been trying to run a simple MR program using rmr2 on a single-node Hadoop cluster. Here is the environment for the setup: Ubuntu 12.04 (32 bit); R (Ubuntu comes with 2.14.1, so updated to 3.0.2); the latest rmr2 and rhdfs from here <https://github.com/RevolutionAnalytics/RHadoop/wiki/Downloads> and the corresponding dependencies; Hadoop 1.2.1. Now I am trying to run a simple MR
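For reference, a minimal rmr2 job usually looks something like the hedged sketch below; the HADOOP_CMD and HADOOP_STREAMING paths are illustrative and must point at the local Hadoop 1.2.1 installation:

    # Environment paths are illustrative, not taken from the original post.
    Sys.setenv(HADOOP_CMD = "/usr/local/hadoop/bin/hadoop")
    Sys.setenv(HADOOP_STREAMING = "/usr/local/hadoop/contrib/streaming/hadoop-streaming-1.2.1.jar")
    library(rmr2)
    ints <- to.dfs(1:1000)                                    # write input to HDFS
    out  <- mapreduce(input = ints,
                      map   = function(k, v) keyval(v, v^2))  # square each value
    res  <- from.dfs(out)                                     # read results back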
2019 Nov 21
2
How to make xapian run in hadoop
Hi all, We use xapian as the backend of our system. The data that needs to be indexed keeps growing, and the local mode is hard to maintain, so we plan to move the index builder to hadoop. We are trying to make xapian run in hadoop, and have hit the problem that xapian performs many seek operations when it writes the index files, but the seek() method in the hadoop C API only supports reads, and we are blocked by
2015 Dec 11
2
SVM hadoop
Hi Mª Luz, Let me give you a bit of my view: First of all, be clear about exactly what you want to do in parallel; I can think of 3 scenarios: (1) Apply a model, in this case an SVM, to data so large that you need hadoop/spark for it (2) Fit many SVM models on small data sets (for example, one per user), and use hadoop/spark to parallelize those jobs
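For scenario (2), if each per-user data set fits comfortably on one machine, a plain parallel loop may already be enough before reaching for hadoop/spark; a hedged sketch with toy data:

    # Hedged sketch of scenario (2): one small SVM per user, fitted in parallel.
    library(e1071)
    library(parallel)
    set.seed(1)
    toy   <- data.frame(user = rep(1:4, each = 50), x1 = rnorm(200), x2 = rnorm(200))
    toy$y <- factor(ifelse(toy$x1 + toy$x2 > 0, "a", "b"))
    fits  <- mclapply(split(toy, toy$user),
                      function(d) svm(y ~ x1 + x2, data = d),
                      mc.cores = 2)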
2009 Jul 31
1
Using R with Hadoop/Hive for Big Data
Hive <http://hadoop.apache.org/hive/> is a data warehouse infrastructure built on top of Hadoop that provides tools for easy data summarization, ad hoc querying, and analysis of large datasets stored in Hadoop files. It provides a mechanism to put structure on this data, and it also provides a simple query language called QL, which is based on SQL and enables users familiar with
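A hedged illustration of what QL looks like; the table and column names below are invented for the example:

    -- Hypothetical external table over files already sitting in HDFS.
    CREATE EXTERNAL TABLE weblogs (host STRING, bytes INT)
      ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
      LOCATION '/data/weblogs';

    -- Familiar SQL-style aggregation, executed by Hive as MapReduce jobs.
    SELECT host, SUM(bytes) AS total_bytes
    FROM weblogs
    GROUP BY host;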
2013 Mar 11
4
Understanding lustre setup ..
Hello, I have been reading http://wiki.lustre.org/images/1/1b/Hadoop_wp_v0.4.2.pdf about setting up Hadoop over Lustre. Generally, in a hadoop setup we have 1 Namenode and some number of Datanodes. If I want to set up the same with Lustre as the backend, the document says: ".............Our experiments run on cluster with 8 nodes in total, one is mds/namenode, the rest are
2012 Dec 13
4
Strange signing problem in AWS - stumped
Any light someone can shed would sure be appreciated. I start with 1 cert -- the master's -- where I am running this: jblaine@ip-10-191-115-140:~$ sudo puppet cert list --all + "ip-10-191-115-140.ec2.internal" (74:8B:7B:EF:41:E6:F9:98:93:15:42:6A:4C:2F:28:CC) (alt names: "DNS:ip-10-191-115-140.ec2.internal", "DNS:puppet", "DNS:puppet.ec2.internal")
2011 Nov 28
1
Very strange permission problem: samba on zfs-fuse
Hi all, Centos 5.7 samba-common-3.0.33-3.29.el5_7.4 samba-3.0.33-3.29.el5_7.4 zfs-fuse-0.6.9_p1-6.20100709git.el5.1 smb.conf [depot] path = /data/depot public = no writable = yes directory mask = 2775 create mask = 0664 vfs objects = recycle recycle:repository = .deleted/%U recycle:keeptree = Yes recycle:touch = Yes recycle:versions = Yes recycle:maxsixe =
2015 Dec 10
2
SVM hadoop
Hi, You can spin up RStudio on Amazon, install "caret" and off you go.... I don't know whether what Amazon can offer will be enough for your problem... I think it will... ;-).... Or do it directly here, where they already have this whole setup in place: http://www.teraproc.com/front-page-posts/r-on-demand/ Thanks, Carlos. On 10 December 2015 at 14:43, MªLuz Morales <mlzmrls
2013 May 20
1
Glusterfs-Hadoop
Hi, Where can I find glusterfs-hadoop-0.20.2-0.1.x86_64.rpm? The following link is from the Gluster FS Admin Guide, but it doesn't exist: http://download.gluster.com/pub/gluster/glusterfs/qa-releases/3.3-beta-2/glusterfs-hadoop-0.20.2-0.1.x86_64.rpm Thanks!