
Displaying 20 results from an estimated 10000 matches similar to: "CPU Limit in Centos"

2015 Nov 10
6
Differences from upstream RHEL
At work, we use some commercial software that names RHEL 6 as a supported OS, but not CentOS 6. I would like to know the differences between CentOS and RHEL, in order to claim (or not) that we can support our users on CentOS instead of RHEL. I see the release notes, which say "Packages modified by CentOS," but it's not clear what the modifications are. I have been browsing around for
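One hedged way to see what CentOS changed in a given package is to compare the RPM changelogs from the two distributions, where rebuild changes (debranding, artwork, mirror lists) are typically recorded; the package name and SRPM filename below are only examples:

rpm -q --changelog centos-release | head -20
rpm -qp --changelog kernel-2.6.32-573.el6.src.rpm | head -40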
2011 Jul 15
22
Zil on multiple usb keys
This might be a stupid question, but here goes... Would adding, say, four 4 GB or 8 GB USB keys as a ZIL make enough of a difference for writes on an iSCSI shared volume? I am finding reads are not too bad (40ish MB/s over GigE on two 500 GB drives, striped) but writes top out at about 10 and drop a lot lower... If I were to add a couple of USB keys for ZIL, would it make a difference? Thanks.
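A minimal sketch of adding a separate log device, assuming a pool named tank and placeholder device names for the USB keys; mirroring the slog is the cautious choice, since older pool versions could not survive losing an unmirrored log:

zpool add tank log mirror c8t0d0 c9t0d0    # hypothetical USB device names
zpool status tank                          # confirm the logs vdev appears

Note that the slog only absorbs synchronous writes, and sync latency then tracks the slog's own write latency, so slow USB flash may not help much.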
2015 Nov 21
0
CPU Limit in Centos
> From: centos-bounces at centos.org [mailto:centos-bounces at centos.org] On > Behalf Of Edward Ned Harvey (centos) > > A few years ago, I vaguely recall some issue with RHEL needing a special > license or something like that, if you had more than a certain number of > CPUs or a certain amount of RAM. > > Does CentOS work fine with 2 CPUs, 16 cores, 32 threads,
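For what it's worth, a quick hedged check of what the OS actually sees, assuming a reasonably recent CentOS install (no entitlement gating applies to CentOS itself):

lscpu | egrep '^CPU\(s\)|Socket|Core|Thread'
grep -c processor /proc/cpuinfo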
2009 Dec 04
30
ZFS send | verify | receive
If there were a "zfs send" datastream saved someplace, is there a way to verify the integrity of that datastream without doing a "zfs receive" and occupying all that disk space? I am aware that "zfs send" is not a backup solution, due to vulnerability of even a single bit error, and lack of granularity, and other reasons. However ... There is an attraction to "zfs send" as an augmentation to the
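One hedged approach, assuming the stream has been saved to a file and a ZFS release that ships zstreamdump: it walks a send stream and checks the embedded record checksums without receiving anything (the path is a placeholder):

zstreamdump -v < /backup/rpool.zfssend > /dev/null

A clean run shows the stream is self-consistent; it does not prove the stream matches the source dataset, and a full zfs receive into a scratch pool remains the stronger test.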
2013 Feb 15
28
zfs-discuss mailing list & opensolaris EOL
So, I hear, in a couple weeks' time, opensolaris.org is shutting down. What does that mean for this mailing list? Should we all be moving over to something at illumos or something? I'm going to encourage somebody in an official capacity at opensolaris to respond... I'm going to discourage unofficial responses, like, illumos enthusiasts etc simply trying to get people
2011 Jan 29
27
ZFS and TRIM
My google-fu is coming up short on this one... I didn't see that it had been discussed in a while ... What is the status of ZFS support for TRIM? For the pool in general... and... Specifically for the slog and/or cache???
2010 Jun 17
9
Monitoring filessytem access
When somebody is hammering on the system, I want to be able to detect who's doing it, and hopefully even what they're doing. I can't seem to find any way to do that. Any suggestions? Everything I can find ... iostat, nfsstat, etc ... AFAIK, just show me performance statistics and so forth. I'm looking for something more granular. Either *who* the
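A hedged sketch using DTrace, which fits this OpenSolaris-era question; it counts read/write syscalls by program and user until interrupted:

dtrace -n 'syscall::read:entry,syscall::write:entry { @[execname, uid] = count(); }'

For NFS clients hammering the box, the nfsv3/nfsv4 DTrace providers (where present) can break activity down by client address and operation instead.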
2011 Jul 22
4
add device to mirror rpool in sol11exp
In my new Oracle server, sol11exp, it's using multipath device names... Presently I have two disks attached: (I removed the other 10 disks for now, because these device names are so confusing. This way I can focus on *just* the OS disks.) 0. c0t5000C5003424396Bd0 <SEAGATE-ST32000SSSUN2.0-0514 cyl 3260 alt 2 hd 255 sec 252> /scsi_vhci/disk@g5000c5003424396b
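A minimal sketch of the usual attach-and-bootblock dance, assuming an SMI-labeled boot disk with the OS in slice 0, as was conventional for root pools then (the second disk's name is a placeholder, XXXX standing in for its WWN):

zpool attach rpool c0t5000C5003424396Bd0s0 c0tXXXXXXXXXXXXXXXXd0s0
# after the resilver completes, make the new disk bootable (x86):
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0tXXXXXXXXXXXXXXXXd0s0

On SPARC, installboot with the zfs bootblk takes the place of installgrub.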
2010 Oct 17
10
RaidzN blocksize ... or blocksize in general ... and resilver
The default blocksize is 128K. If you are using mirrors, then each block on disk will be 128K whenever possible. But if you're using raidzN with a capacity of M disks (M disks useful capacity + N disks redundancy) then the block size on each individual disk will be 128K / M. Right? This is one of the reasons the raidzN resilver code is inefficient. Since you end up waiting for the
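A worked instance of that arithmetic, with illustrative numbers: a 6-disk raidz2 has M = 4 data disks, so a 128K block spreads as 128K / 4 = 32K per data disk; an 11-disk raidz3 (M = 8) drops that to 16K per disk. The more data disks in the vdev, the smaller each per-disk chunk, and the more seek-bound a resilver becomes.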
2010 Nov 21
10
Running on Dell hardware?
> From: Edward Ned Harvey [mailto:shill at nedharvey.com] > > I have a Dell R710 which has been flaky for some time. It crashes about once > per week. I have literally replaced every piece of hardware in it, and > reinstalled Sol 10u9 fresh and clean. It has been over 3 weeks now, with no crashes, and me doing everything I can to get it to crash again. So I'm going to
2010 Oct 13
40
Running on Dell hardware?
I have a Dell R710 which has been flaky for some time. It crashes about once per week. I have literally replaced every piece of hardware in it, and reinstalled Sol 10u9 fresh and clean. I am wondering if other people out there are using Dell hardware, with what degree of success, and in what configuration? The failure seems to be related to the PERC 6/i. For some period around the time
2012 Dec 21
4
zfs receive options (was S11 vs illumos zfs compatiblity)
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss- > bounces at opensolaris.org] On Behalf Of bob netherton > > You can, with recv, override any property in the sending stream that can > be set from the command line (i.e., a writable one). > > # zfs send repo/support@cpu-0412 | zfs recv -o version=4 repo/test > cannot receive: cannot override received
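For contrast, a hedged sketch of an override the quoted rule does cover, using an ordinary writable property (repo/test2 is a hypothetical target dataset):

zfs send repo/support@cpu-0412 | zfs recv -o compression=on repo/test2

The version property appears to be special-cased: the stream format pins it, so recv refuses the override even though version is otherwise settable from the command line.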
2011 Jan 05
6
ZFS on top of ZFS iSCSI share
I have a filer running OpenSolaris (snv_111b) and I am presenting an iSCSI share from a RAIDZ pool. I want to run ZFS on the share at the client. Is it necessary to create a mirror or use ditto blocks at the client to ensure ZFS can recover if it detects a failure at the client? Thanks, Bruin
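A hedged sketch of the ditto-block route on the client, assuming the initiator sees the LUN as a single disk (pool and device names are placeholders):

zpool create clienttank c2tXXXXd0     # the iSCSI LUN as seen by the client
zfs set copies=2 clienttank           # two copies of every data block

copies=2 lets client-side ZFS self-heal localized corruption it detects, but it cannot survive losing the whole LUN; a client-side mirror across two independent LUNs would cover that case.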
2010 Dec 17
6
copy complete zpool via zfs send/recv
Hi, I want to move all the ZFS fs from one pool to another, but I don't want to "gain" an extra level in the folder structure on the target pool. On the source zpool I used zfs snapshot -r tank@moveTank on the root fs and I got a new snapshot in all sub fs, as expected. Now, I want to use zfs send -R tank@moveTank | zfs recv targetTank/... which would place all zfs fs
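A hedged sketch of the usual fix, assuming a send/recv recent enough to support it: the -d flag to zfs recv strips the source pool name from the received path, so the hierarchy lands one level up (pool names follow the poster's):

zfs send -R tank@moveTank | zfs recv -d targetTank

With -d, tank/a/b arrives as targetTank/a/b instead of targetTank/tank/a/b; newer releases also offer -e, which keeps only the last path element.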
2010 Dec 18
10
a single nfs file system shared out twice with different permissions
I am trying to configure a system where I have two different NFS shares which point to the same directory. The idea is if you come in via one path, you will have read-only access and can't delete any files; if you come in the 2nd path, then you will have read/write access. For example, create the read/write NFS share:

zfs create tank/snapshots
zfs set sharenfs=on tank/snapshots

root
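A hedged sketch of getting both behaviors from one share, using the ro=/rw= access lists that the sharenfs property passes through to share_nfs (the network and host names are placeholders):

zfs set sharenfs='ro=@192.168.1.0/24,rw=buildhost' tank/snapshots

Clients in the listed network get read-only access and buildhost gets read/write, all through a single exported path; two literally separate share paths onto one directory is server-dependent, while access lists are the conventional route.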
2010 Apr 27
7
Mapping inode numbers to file names
Let's suppose you rename a file or directory. /tank/widgets/a/rel2049_773.13-4/somefile.txt becomes /tank/widgets/b/foogoo_release_1.9/README. Let's suppose you are now working on widget B, and you want to look at the past zfs snapshot of README, but you don't remember where it came from. That is, you don't know the previous name or location where that
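A hedged sketch of the inode-matching trick, assuming the rename stayed within one dataset and that /tank/widgets is the dataset root (the snapshot name and inode number are placeholders): rename does not change the object number, so the current inode can be searched for in old snapshots:

ls -i /tank/widgets/b/foogoo_release_1.9/README        # prints, e.g., 12345
find /tank/widgets/.zfs/snapshot/snap1/ -inum 12345

Object numbers are only meaningful within a single dataset, so this fails if the file crossed dataset boundaries, where a plain mv actually copies.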
2009 Aug 23
23
incremental backup with zfs to file
FULL backup to a file:

zfs snapshot -r rpool@0908
zfs send -Rv rpool@0908 > /net/remote/rpool/snaps/rpool.0908

INCREMENTAL backup to a file:

zfs snapshot -r rpool@090822
zfs send -Rv -i rpool@0908 rpool@090822 > /net/remote/rpool/snaps/rpool.090822

As I understand it, the latter gives a file with changes between 0908 and 090822. Is this correct? How do I restore those files? I know
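A hedged sketch of the restore side, receiving into a freshly created pool (newpool is a placeholder; -d rebuilds the dataset paths, -F forces the rollback between increments):

zfs recv -dF newpool < /net/remote/rpool/snaps/rpool.0908
zfs recv -dF newpool < /net/remote/rpool/snaps/rpool.090822

For individual files, the same receive into a scratch pool works; copy the files out afterwards instead of touching the original.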
2012 Nov 07
45
Dedicated server running ESXi with no RAID card, ZFS for storage?
Morning all... I have a dedicated server in a data center in Germany, and it has 2 3TB drives, but only software RAID. I have got them to install VMware ESXi and so far everything is going OK... I have the 2 drives as standard data stores... But I am paranoid... So, I installed Nexenta as a VM, gave it a small disk to boot off and 2 1TB disks on separate physical drives... I have created a
2012 Nov 20
6
zvol wrapped in a vmdk by Virtual Box and double writes?
Hi folks, (Long time no post...) Only starting to get into this one, so apologies if I'm light on detail, but... I have a shiny SSD I'm using to help make some VirtualBox stuff I'm doing go fast. I have a 240GB Intel 520 series jobbie. Nice. I chopped it into a few slices - p0 (partition table), p1 128GB, p2 60GB. As part of my work, I have used it both as a RAW
2010 Jan 15
14
Backing up a ZFS pool
What is the best way to back up a ZFS pool for recovery? Recover the entire pool, or files from a pool... Would you use snapshots and clones? I would like to move the "backup" to a different disk and not use tapes. Suggestions?? TIA --Kenny
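A hedged sketch of a disk-based scheme, assuming a spare disk for a second pool (pool, device, and snapshot names are placeholders):

zpool create backup c3t0d0
zfs snapshot -r tank@2010-01-15
zfs send -R tank@2010-01-15 | zfs recv -dF backup

Receiving into a live pool, rather than storing raw send files, means every stream is checksummed on arrival and individual files stay directly browsable for restore.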