Displaying 20 results from an estimated 20000 matches similar to: "Gosa (was: user management)"
2009 Nov 04
2
user management solution needed
Hey folks,
What is the best way to manage users across multiple CentOS boxes?
Ideally what I'd like to be able to do is have central control over
who has access to which box from a minute-to-minute basis. e.g. User
X needs access to Box A for 30 minutes - clickity, clickity and they
have access for that long after which their access is automatically
turned off.
thanks,
-Alan
--
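A rough sketch of the time-limited part with stock tools, assuming sshd with
pam_access enabled on each box; the user name, group name, and timings below
are made up for illustration:
  # on Box A: grant access via group membership, schedule automatic removal
  usermod -a -G boxa-login userx
  echo "gpasswd -d userx boxa-login" | at now + 30 minutes
  # /etc/security/access.conf (with pam_access enabled in /etc/pam.d/sshd)
  + : root boxa-login : ALL
  - : ALL : ALL
Managing the groups in a directory (NIS/LDAP) rather than per box would give
the central "clickity, clickity" control the poster is after.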
2009 Oct 22
3
what else is missing in 5.4?
[root@alan centos]# du -sh 5.*
19G 5.3
14G 5.4
--
"Don't eat anything you've ever seen advertised on TV"
- Michael Pollan, author of "In Defense of Food"
2011 Dec 01
2
JNLP app problems
Hey folks,
I'm trying to use a 5.3 box to run some JNLP apps, but all I get is a
view of XML.
I tried some googling and didn't come up with much, other than one
thread saying I may need both 32- and 64-bit Java to run JNLP.
But it's not clear to me how to do that.
thanks,
-Alan
--
"Don't eat anything you've ever seen advertised on TV"
- Michael Pollan,
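If the browser only shows the raw XML, the usual cause is that .jnlp files
are not being handed to Java Web Start. A rough workaround, assuming a
Sun/Oracle JRE is installed (the path below depends on which JRE package
you actually have):
  # download the .jnlp file and launch it by hand with javaws
  /usr/java/latest/bin/javaws /tmp/app.jnlp
A single JRE matching the browser's architecture is normally enough; as far
as I know, needing both 32- and 64-bit Java only comes up when mixing a
32-bit browser plugin with 64-bit applications.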
2009 Nov 12
1
please suggest reading on Linux memory management
Hey folks,
This is sort of a follow up to my email yesterday about memory leaks.
I'd found some really good reading material in my hour or so of
googling prior to sending that email. Wondering if anyone can
recommend good reading on the topic - including raw facts like this
jackpot I seem to have hit upon (
http://www.kernel.org/doc/gorman/html/understand/ ), as well as
articles on
2009 Nov 03
8
recommend benchmarking SW
Hey folks,
We've got some new hardware and are trying to figure out what best to
do with it: run CentOS right on the bare metal, virtualize, or some
combination of the two. Mainly we're looking at:
- CentOS on bare metal
- CentOS on ESXi 4.0 with local disk
- CentOS on ESXi with 1 VM running Openfiler to serve disk to other VMs
And we want to benchmark these 3 scenarios.
So far all we
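For a quick, repeatable first pass before heavier tools, something like the
following run identically in all three setups gives a rough sequential I/O
baseline (mount point and sizes here are made up):
  # sequential write then read, bypassing the page cache
  dd if=/dev/zero of=/mnt/test/ddfile bs=1M count=4096 oflag=direct
  dd if=/mnt/test/ddfile of=/dev/null bs=1M iflag=direct
  rm -f /mnt/test/ddfile
bonnie++ or iozone would fill in the random-I/O side of the picture.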
2013 Sep 16
7
Rsync rules for Shorewall
Hi folks,
I'm having an issue with rsync between my firewall and an internal
box. It seems to be a shorewall issue (or, more accurately, an
issue with my shorewall config) because if I disable shorewall my
rsync works fine.
And I just can't find it documented anywhere what I need to do.
I have rules like this:
root@userver:/etc/shorewall# grep -i Rsync rules
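For reference, a minimal pair of /etc/shorewall/rules entries that is
usually enough for the rsync daemon; the zone names fw and loc are assumed
here, and rsync over ssh would only need the usual 22/tcp rule instead:
  #ACTION   SOURCE   DEST   PROTO   DEST PORT(S)
  ACCEPT    loc      fw     tcp     873
  ACCEPT    fw       loc    tcp     873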
2013 Jul 04
6
Trouble creating DomU with 2 NICs
Hey folks,
I created a DomU, installed Linux, and then realized I'd only given it
1 NIC, so brought it down to edit the cfg file to give it another NIC.
Originally I just had:
vif = ['']
And so I guess the defaults worked for the 1 NIC. So I changed it to:
vif =
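The preview cuts off there, so the poster's actual value isn't shown; for
illustration, a two-NIC vif line in an xm-style config is just a list with
one entry per interface, assuming bridges named xenbr0 and xenbr1 exist:
  vif = [ 'bridge=xenbr0', 'bridge=xenbr1' ]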
2011 Nov 25
3
CentOS fileserver migrating to ZFS appliance
Hey folks,
I've got a CentOS / RHEL (5.x) environment and am in the process of
migrating the 5.3 file server over to an Oracle/Sun 7120 appliance.
I want to keep my main 5.3 server as our NIS server but am moving NFS
and Samba functions over to the appliance.
NFS was a no brainer as one can imagine. Samba seems a bit trickier
because of the authentication requirements in the ZFS server.
2012 Jan 06
2
monitoring space in directories
Hey folks,
Is there a Linux tool that will monitor a disk and tell me which
directories are growing over time?
I could cobble something together myself of course, but if there is already
a good off-the-shelf solution, why bother?
Even if it only checks once per day that would be fine. Graphs would be
pretty too :-)
cheers,
-Alan
--
"Don't eat anything you've ever seen advertised on
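In the absence of a ready-made tool, a small cron job that records du output
daily gets most of the way there; the paths below are made up:
  #!/bin/sh
  # /etc/cron.daily/du-report -- snapshot per-directory usage for later diffing
  mkdir -p /var/log/du
  du -x --max-depth=2 /data > /var/log/du/$(date +%F).txt
A diff between two dated files then shows which directories grew; for the
"pretty graphs" part, the same numbers could be fed into Cacti or Munin.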
2011 Dec 08
2
ZFS magic (was: Backup Redux)
> My non-tape solution of choice is definitely rsync => box with ZFS,
> snapshot however often you'd like. => forever incrementals.
>
> For more redundancy and performance, add more ZFS boxes, do
> replication between them.
>
>
Not sure whether ZFS now makes this OT - if so, sorry for not putting "OT:"
in the subject.
Anyway, I have a ZFS storage unit
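The snapshot-plus-replication part sketched out, with made-up pool and
dataset names:
  # after each rsync run, snapshot; replicate incrementally to a second ZFS box
  zfs snapshot tank/backup@2011-12-08
  zfs send -i tank/backup@2011-12-07 tank/backup@2011-12-08 | \
      ssh otherbox zfs recv tank/backup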
2011 Dec 19
5
forcing yum to download but not install
Hey folks,
Is there any way to fake a "yum update" just to get yum to force a download
of all the files it needs, without actually installing them?
I finally have a RPM cache/proxy working and I just want to populate it.
The server I want to actually update cannot be updated until tomorrow but
I'd like to do a fake update just to force the RPMs into my cache so they
will all be
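On CentOS 5 the downloadonly plugin should cover this; something like the
following, hedging on the exact plugin package name for the release in use:
  yum install yum-downloadonly
  yum update --downloadonly    # fetches everything (through the proxy), installs nothing
The RPMs pass through the cache/proxy on the way down, which is the point
here; the local copies land under /var/cache/yum/ on the client.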
2009 Dec 15
2
Using (was: Announcing) Gluster Storage Platform
OK, you have my attention. But after reading around on the website for 5 or
10 minutes, I'm still not sure what this is.
What is it?
Why would I use it?
What would I use instead of it? (Who are its competitors)
When would I not use it?
I see this "The software is a powerful and flexible solution that
simplifies the task of managing unstructured file data", but I have no
idea what
2011 Dec 19
1
Squid to Cache RPMs from yum (was: forcing yum ...)
>
> The default config won't cache large files. And yum will try to use
> different mirrors every time.
>
>
Aha. I thought I had it set for no file limit, but I guess using different
mirrors is what is confounding me.
So squid will cache a specific file from a specific site, I guess? And
even if it tries to get the exact same file elsewhere, it will re-download
it afresh?
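Two things seem to matter here: raising squid's object-size cap, and pinning
the clients to one mirror so the URL (which is squid's cache key) stays
stable. A sketch with made-up values and a made-up mirror host; the
refresh_pattern details vary a bit by squid version:
  # squid.conf
  maximum_object_size 512 MB
  refresh_pattern -i \.rpm$ 129600 100% 129600
  # /etc/yum.repos.d/CentOS-Base.repo on the clients: fixed baseurl, mirrorlist off
  baseurl=http://mirror.example.com/centos/5/os/x86_64/
  #mirrorlist=http://mirrorlist.centos.org/?release=5&arch=x86_64&repo=os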
2012 May 23
5
biggest disk partition on 5.8?
Hey folks,
I have a Sun J4400 SAS1 disk array with 24 x 1T drives in it, connected
to a Sunfire x2250 running 5.8 (64-bit).
I used 'arcconf' to create a big RAID60 out of them (see below).
But then I mount it and it is way too small.
This should be about 20TB:
[root@solexa1 StorMan]# df -h /dev/sdb1
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 186G 60M
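186G out of ~20TB looks like the classic msdos-label wraparound: an msdos
partition table tops out at 2TiB, and 20TB modulo 2TiB works out to a bit
under 200GB. Relabelling the disk GPT with parted should expose the full
size (device name assumed; double-check parted's size syntax on 5.x):
  parted /dev/sdb mklabel gpt
  parted /dev/sdb mkpart primary 0% 100%
  mkfs.ext3 /dev/sdb1
Note that ext3 on 5.x has its own filesystem size ceiling, so an ~18TiB
filesystem may need checking against that limit or splitting up.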
2011 Nov 30
3
checking package versions in various releases
Hey folks,
I am sure there must be an easy way to do this.
I am currently running 5.3 and "yum info db4" tells me that they have
version 4.3.29.
Is that telling me that this is the version in 5.3? Or that this is
the latest version in the 5.x stream?
If the former, then how do I find out what release of the db4 software
(Sleepycat Berkeley DB) is in 5.7?
I don't want to
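As far as I know, "yum info" reports what the repos the box points at
currently offer, i.e. the latest in the 5.x stream, not what shipped with
5.3. One way to check a specific point release without touching the box,
assuming yum-utils with a repoquery new enough to have --repofrompath, and
assuming the vault layout below (the current point release lives on the
normal mirrors rather than the vault):
  repoquery --repofrompath=c57,http://vault.centos.org/5.7/os/x86_64/ \
            --repoid=c57 db4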
2009 Aug 28
2
Need httpd / apache RPM > 2.2.3 for 5.3
Hey folks,
It looks to me like the httpd on CentOS is stuck at 2.2.2 - what's up
with that? Even after a yum upgrade.
I need 2.2.10 or greater, and would prefer to get it via yum or at the
very least an RPM, if at all possible. But I cannot even find an RPM
out there. For some reason both EPEL and Dag Wieers do not even seem
to have an httpd RPM for RHEL5
Any idea where to look?
Why are we
2009 Sep 21
2
sed (or other) magic to get RPM base names ?
Hey folks,
Once upon a time I saw some sed magic to take the output of "rpm -qa"
and strip away all the version info to give just the RPM base names.
And of course I forgot to note it :-/ And have not been able to
replicate it myself.
e.g. from this :
avahi-0.6.16-1.el5
avahi-glib-0.6.16-1.el5
produce this :
avahi
avahi-glib
thanks,
-Alan
--
"Don't eat anything you've
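Two ways that work here; rpm itself can skip the sed entirely:
  rpm -qa --queryformat '%{NAME}\n' | sort
  # or strip the trailing -version-release from plain -qa output
  rpm -qa | sed 's/-[^-]*-[^-]*$//'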
2011 Nov 26
2
OT: ZFS appliance Oracle / Sun 7120
On Fri, Nov 25, 2011 at 8:11 PM, Fajar Priyanto <fajarpri at arinet.org> wrote:
> Hi Alan, sorry for the OT.
> I'm very much interested on the 7120.
> How much space do you have on it and what is the price?
I don't know the price - I've only been here a few weeks.
I'll have to check when I'm back at work for details on it - don't
have my VPN login yet.
2010 Jan 12
2
more kickstart - saving %pre decisions for %post
Hey again folks,
How can I save answers to questions in %pre, for use in %post?
I'm assuming (though have not yet tried) that variables won't live that long.
Could I save them off to /tmp in a file, and retrieve them?
e.g.
%pre
echo "VARNAME=$VARNAME" >> /tmp/varfile
%post
grep ^VARNAME= /tmp/varfile
Or some such ...
--
"Don't eat anything you've ever seen
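That is roughly the standard trick; the one wrinkle is that %post normally
runs chrooted into the installed system, where the installer's /tmp isn't
visible. A sketch using --nochroot (variable name and paths made up):
  %pre
  VARNAME=somevalue                       # whatever gets decided in %pre
  echo "VARNAME=$VARNAME" > /tmp/varfile
  %post --nochroot
  # still in the installer environment, so /tmp/varfile is readable here;
  # copy it into the installed system if a chrooted %post or first boot needs it
  . /tmp/varfile
  cp /tmp/varfile /mnt/sysimage/root/varfile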
2012 Feb 01
4
gtar compression achieved
Hey folks,
I looked at the man page and don't see any way to do this; maybe it's a
function of the compression program used, I dunno.
Is there any way to get gtar to report on the compression it achieved?
I can't just check file sizes because I'm writing data to tape.
The basic problem is that I know how much data is there to begin with but I
don't know how much room it took
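gtar itself doesn't report a ratio as far as I can tell, but counting the
compressed bytes on the way to the tape gets the same answer (tape device
and path below are made up):
  # dd's closing summary gives the compressed byte count written to tape;
  # compare it against `du -sb /data` for the ratio
  tar -czf - /data | dd obs=64k of=/dev/st0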