similar to: Differences from upstream RHEL

Displaying 20 results from an estimated 10000 matches similar to: "Differences from upstream RHEL"

2015 Nov 21
5
CPU Limit in Centos
A few years ago, I vaguely recall some issue with RHEL needing a special license or something like that, if you had more than a certain number of CPUs or a certain amount of RAM. Does CentOS work fine for 2 CPUs, 16 cores, 32 threads, and 256 GB of RAM? CentOS 6 specifically.
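A quick way to confirm what the installed OS actually sees (a sketch; lscpu and /proc/meminfo are standard on CentOS 6, the grep patterns are only illustrative):
    lscpu | grep -E '^CPU\(s\)|Socket|Core|Thread'   # sockets, cores per socket, threads per core
    grep MemTotal /proc/meminfo                      # installed RAM in kB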
2015 Nov 10
0
Differences from upstream RHEL
On 11/11/2015 09:03 AM, Edward Ned Harvey (centos) wrote: > At work, we use some commercial software, that names RHEL6 as a > supported OS, but not Centos6. I would like to know the difference > between Centos and RHEL, in order to claim (or not) that we can > support our users on Centos instead of RHEL. > > I see the release notes, that say "Packages modified by
2015 Nov 10
0
Differences from upstream RHEL
On 11/10/2015 02:03 PM, Edward Ned Harvey (centos) wrote: > At work, we use some commercial software, that names RHEL6 as a supported OS, but not Centos6. I would like to know the difference between Centos and RHEL, in order to claim (or not) that we can support our users on Centos instead of RHEL. > > I see the release notes, that say "Packages modified by CentOS," but
2015 Nov 11
2
Differences from upstream RHEL
On 11/11/15 15:17, Edward Ned Harvey (centos) wrote: >> From: centos-bounces at centos.org >> [mailto:centos-bounces at centos.org] On Behalf Of Devin Reade >> >> The above answer is right-on. From a technical perspective, you >> can probably expect the 3rd party software to work exactly the >> same on RHEL and
2015 Nov 10
2
Differences from upstream RHEL
--On Tuesday, November 10, 2015 12:53:20 PM -0800 Gordon Messmer <gordon.messmer at gmail.com> wrote: > That depends on what you mean by "support." > > It's almost certainly possible to run the binaries on CentOS, but if you > need any technical support from the vendor of that application, they > might not provide it. Your first step should be to talk to them
2015 Nov 10
0
Differences from upstream RHEL
On 11/10/2015 12:03 PM, Edward Ned Harvey (centos) wrote: > At work, we use some commercial software, that names RHEL6 as a supported OS, but not Centos6. I would like to know the difference between Centos and RHEL, in order to claim (or not) that we can support our users on Centos instead of RHEL. That depends on what you mean by "support." It's almost certainly possible to run
2015 Nov 13
0
Differences from upstream RHEL
In my experience, software compiled for RHEL "just works" with CentOS, and I don't remember any case where it didn't. I have, however, heard whisperings on the grapevine that RH may want to try to make future versions of CentOS slightly incompatible with RHEL, but these are probably just whisperings. If your software vendor will not support CentOS as RHEL then they probably need a good
2009 Dec 04
30
ZFS send | verify | receive
If there were a "zfs send" datastream saved someplace, is there a way to verify the integrity of that datastream without doing a "zfs receive" and occupying all that disk space? I am aware that "zfs send" is not a backup solution, due to vulnerability to even a single bit error, and lack of granularity, and other reasons. However ... There is an attraction to "zfs send" as an augmentation to the
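One commonly suggested sanity check on Solaris/illumos systems, sketched with placeholder dataset and file names: save the stream and parse it with zstreamdump, which walks the stream's headers and records without receiving it (this is not a substitute for an actual test receive):
    zfs send tank/home@backup > /backup/home.zsend
    zstreamdump -v < /backup/home.zsend | head    # stream header and per-record summary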
2012 Dec 21
4
zfs receive options (was S11 vs illumos zfs compatiblity)
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss- > bounces at opensolaris.org] On Behalf Of bob netherton > > You can, with recv, override any property in the sending stream that can > be > set from the command line (ie, a writable). > > # zfs send repo/support@cpu-0412 | zfs recv -o version=4 repo/test > cannot receive: cannot override received
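For contrast with the failing command above, a minimal sketch assuming a zfs recv that supports -o: compression is an ordinary writable property, so overriding it at receive time is the supported case, unlike version:
    zfs send repo/support@cpu-0412 | zfs recv -o compression=on repo/test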
2011 Jan 19
2
Is there a difference between RHEL 6 and 5.6?
I have seen over the past few months subjects on RHEL 6 and RHEL 5.6. Are these two different builds for CentOS to chase, or one and the same?
2010 Jun 17
9
Monitoring filessytem access
When somebody is hammering on the system, I want to be able to detect who's doing it, and hopefully even what they're doing. I can't seem to find any way to do that. Any suggestions? Everything I can find ... iostat, nfsstat, etc ... AFAIK, just shows me performance statistics and so forth. I'm looking for something more granular. Either *who* the
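On Solaris-family systems one granular option is DTrace; a minimal sketch (standard syscall provider, nothing specific to the poster's setup) that counts read/write syscalls by process name and user ID:
    dtrace -n 'syscall::read:entry,syscall::write:entry { @[execname, uid] = count(); }'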
2010 Apr 28
1
vmdk support by libguestfs in RHEL and upstream
Hi all, As we are planning testing for libguestfs, I have a question about vmdk support in libguestfs. In Fedora 12, we can successfully manage a vmdk image via guestfish (add -> run -> mkfs -> mount -> ...). But in RHEL6, because qemu-kvm cannot boot with a vmdk image, libguestfs cannot manage vmdk either. So is it true that vmdk is only supported by upstream libguestfs but not
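For reference, the guestfish workflow the poster describes looks roughly like this (a sketch; disk.vmdk, the ext3 filesystem type, and /dev/sda are placeholders):
    guestfish -a disk.vmdk <<'EOF'
    run
    mkfs ext3 /dev/sda
    mount /dev/sda /
    EOF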
2013 Feb 15
28
zfs-discuss mailing list & opensolaris EOL
So, I hear, in a couple weeks' time, opensolaris.org is shutting down. What does that mean for this mailing list? Should we all be moving over to something at illumos or something? I'm going to encourage somebody in an official capacity at opensolaris to respond... I'm going to discourage unofficial responses, like, illumos enthusiasts etc simply trying to get people
2010 Oct 13
40
Running on Dell hardware?
I have a Dell R710 which has been flaky for some time. It crashes about once per week. I have literally replaced every piece of hardware in it, and reinstalled Sol 10u9 fresh and clean. I am wondering if other people out there are using Dell hardware, with what degree of success, and in what configuration? The failure seems to be related to the perc 6i. For some period around the time
2010 Dec 17
6
copy complete zpool via zfs send/recv
Hi, I want to move all the ZFS fs from one pool to another, but I don't want to "gain" an extra level in the folder structure on the target pool. On the source zpool I used zfs snapshot -r tank@moveTank on the root fs and I got a new snapshot in all sub fs, as expected. Now, I want to use zfs send -R tank@moveTank | zfs recv targetTank/... which would place all zfs fs
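The usual way to avoid the extra level, sketched with the poster's names: zfs recv -d strips the leading pool-name element from the sent paths, so the filesystems land directly under the target pool (-F allows receiving over the existing target root):
    zfs snapshot -r tank@moveTank
    zfs send -R tank@moveTank | zfs recv -d -F targetTank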
2010 Oct 17
10
RaidzN blocksize ... or blocksize in general ... and resilver
The default blocksize is 128K. If you are using mirrors, then each block on disk will be 128K whenever possible. But if you're using raidzN with a capacity of M disks (M disks useful capacity + N disks redundancy) then the block size on each individual disk will be 128K / M. Right? This is one of the reasons the raidzN resilver code is inefficient. Since you end up waiting for the
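A worked example of that arithmetic, using a hypothetical 6-disk raidz2: M = 4 data disks plus N = 2 parity disks, so a 128K block is written as 128K / 4 = 32K of data per data disk (plus parity), which is why resilver ends up issuing many small reads per reconstructed block.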
2011 Jan 29
27
ZFS and TRIM
My google-fu is coming up short on this one... I didn't see that it had been discussed in a while ... What is the status of ZFS support for TRIM? For the pool in general... and... Specifically for the slog and/or cache???
2010 Apr 27
7
Mapping inode numbers to file names
Let's suppose you rename a file or directory. /tank/widgets/a/rel2049_773.13-4/somefile.txt Becomes /tank/widgets/b/foogoo_release_1.9/README Let's suppose you are now working on widget B, and you want to look at the past zfs snapshot of README, but you don't remember where it came from. That is, you don't know the previous name or location where that
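One trick this allows, assuming the rename stayed within a single dataset (inode numbers survive renames there): note the inode of the current file, then search the dataset's hidden .zfs snapshot tree for the same inode. The snapshot name and inode number below are made up:
    ls -i /tank/widgets/b/foogoo_release_1.9/README          # e.g. prints 12345
    find /tank/widgets/.zfs/snapshot/somesnap -inum 12345    # old path of the same file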
2018 Jan 18
0
IMP: Release 4.0: CentOS 6 packages will not be made available
On 11/01/2018 18:32, Shyam Ranganathan wrote: > Gluster Users, > > This is to inform you that from the 4.0 release onward, packages for > CentOS 6 will not be built by the gluster community. This also means > that the CentOS SIG will not receive updates for 4.0 gluster packages. > > Gluster release 3.12 and its predecessors will receive CentOS 6 updates > till Release 4.3
2018 Jan 11
3
IMP: Release 4.0: CentOS 6 packages will not be made available
Gluster Users, This is to inform you that from the 4.0 release onward, packages for CentOS 6 will not be built by the gluster community. This also means that the CentOS SIG will not receive updates for 4.0 gluster packages. Gluster release 3.12 and its predecessors will receive CentOS 6 updates till Release 4.3 of gluster (which is slated around Dec, 2018). The decision is due to the following,