2015 Nov 21
5
CPU Limit in CentOS
A few years ago, I vaguely recall some issue with RHEL needing a special license or something like that if you had more than a certain number of CPUs or a certain amount of RAM.
Does CentOS work fine with 2 CPUs, 16 cores, 32 threads, and 256 GB of RAM?
CentOS 6 specifically.
2009 Nov 12
8
"zfs send" from solaris 10/08 to "zfs receive" on solaris 10/09
I built a fileserver on Solaris 10u6 (10/08), intending to back it up to
another server via zfs send | ssh othermachine 'zfs receive'.
However, the new server is too new for 10u6 (10/08) and requires a later
version of Solaris; presently available is 10u8 (10/09).
Is it crazy for me to try the send/receive with these two different versions
of OSes?
Is it possible the
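A minimal sketch of the pipeline in question, with hypothetical dataset and
host names; comparing pool and filesystem versions on both machines first
gives a rough compatibility check:

  # compare versions on sender and receiver (pool name assumed: rpool)
  zpool get version rpool
  zfs get version rpool

  # the send/receive under discussion (dataset and snapshot names hypothetical)
  zfs snapshot rpool/data@backup1
  zfs send rpool/data@backup1 | ssh othermachine zfs receive tank/data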
2011 Jul 22
4
add device to mirror rpool in sol11exp
In my new Oracle server, sol11exp, it's using multipath device names...
Presently I have two disks attached: (I removed the other 10 disks for now,
because these device names are so confusing. This way I can focus on *just*
the OS disks.)
0. c0t5000C5003424396Bd0 <SEAGATE-ST32000SSSUN2.0-0514 cyl 3260 alt 2
hd 255 sec 252>
/scsi_vhci/disk@g5000c5003424396b
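A sketch of how the attach itself usually goes, assuming the second OS disk
is c0t5000C5003424396Cd0 (a hypothetical name) and, on x86, that the new
disk still needs boot blocks installed:

  # mirror the root pool by attaching the second disk to the first
  zpool attach rpool c0t5000C5003424396Bd0 c0t5000C5003424396Cd0

  # watch the resilver complete before relying on the mirror
  zpool status rpool

  # x86 only: make the new disk bootable (slice and path assumptions vary by release)
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t5000C5003424396Cd0s0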
2011 Jan 29
27
ZFS and TRIM
My google-fu is coming up short on this one... I didn't see that it had
been discussed in a while...
What is the status of ZFS support for TRIM?
For the pool in general...
and...
Specifically for the slog and/or cache???
2011 Jul 15
22
ZIL on multiple USB keys
This might be a stupid question, but here goes... Would adding, say, four 4 GB or 8 GB USB keys as a ZIL make enough of a difference for writes on an iSCSI shared volume?
I am finding reads are not too bad (40-ish MB/s over GigE on two 500 GB drives, striped) but writes top out at about 10 and drop a lot lower... If I were to add a couple of USB keys for ZIL, would it make a difference?
Thanks.
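For reference, a dedicated log device only helps synchronous writes (which
iSCSI traffic often is), and cheap USB flash tends to have poor sync-write
latency, so it may well make things worse. The commands themselves are
simple; a sketch with hypothetical device names and a pool named tank:

  # add one USB key as a log device
  zpool add tank log c5t0d0

  # safer: mirror the log so one failed key cannot take in-flight sync writes with it
  zpool add tank log mirror c5t0d0 c6t0d0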
2010 Nov 21
10
Running on Dell hardware?
> From: Edward Ned Harvey [mailto:shill at nedharvey.com]
>
> I have a Dell R710 which has been flaky for some time. It crashes about
> once per week. I have literally replaced every piece of hardware in it,
> and reinstalled Sol 10u9 fresh and clean.
It has been over 3 weeks now with no crashes, with me doing everything I
can to get it to crash again. So I'm going to
2013 Feb 15
28
zfs-discuss mailing list & opensolaris EOL
So, I hear, in a couple weeks' time, opensolaris.org is shutting down. What does that mean for this mailing list? Should we all be moving over to something at illumos or something?
I'm going to encourage somebody in an official capacity at opensolaris to respond...
I'm going to discourage unofficial responses, like illumos enthusiasts etc. simply trying to get people
2009 Dec 04
30
ZFS send | verify | receive
If there were a "zfs send" datastream saved someplace, is there a way to
verify the integrity of that datastream without doing a "zfs receive" and
occupying all that disk space?
I am aware that "zfs send" is not a backup solution, due to vulnerability to
even a single bit error, lack of granularity, and other reasons.
However... there is an attraction to "zfs send" as an augmentation to the
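One way to sanity-check a saved stream without receiving it is zstreamdump,
where available (OpenSolaris and later Solaris releases ship it): it walks
the stream and verifies the per-record checksums without writing any data.
A sketch with hypothetical file and snapshot names:

  # verify the checksums in an already-saved stream
  zstreamdump -v < /backup/rpool.0908

  # or verify on the fly while creating the backup, using tee
  zfs send rpool@0908 | tee /backup/rpool.0908 | zstreamdump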
2009 Aug 23
23
incremental backup with zfs to file
FULL backup to a file:
zfs snapshot -r rpool@0908
zfs send -Rv rpool@0908 > /net/remote/rpool/snaps/rpool.0908
INCREMENTAL backup to a file:
zfs snapshot -r rpool@090822
zfs send -Rv -i rpool@0908 rpool@090822 > /net/remote/rpool/snaps/rpool.090822
As I understand it, the latter gives a file with only the changes between 0908
and 090822. Is this correct?
How do I restore those files? I know
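A minimal sketch of the restore path, assuming the two stream files above:
receive the full stream first, then each incremental in order. (In practice
you would usually receive into a scratch pool rather than back over a live
rpool; the pool name here just follows the post.)

  # restore the full stream, then the incremental on top of it
  zfs receive -Fd rpool < /net/remote/rpool/snaps/rpool.0908
  zfs receive -Fd rpool < /net/remote/rpool/snaps/rpool.090822

  # individual files are then reachable through the snapshot directory
  ls /rpool/.zfs/snapshot/090822/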
2010 Oct 17
10
RaidzN blocksize ... or blocksize in general ... and resilver
The default blocksize is 128K. If you are using mirrors, then each block on
disk will be 128K whenever possible. But if you're using raidzN with a
capacity of M disks (M disks of useful capacity + N disks of redundancy),
then the block size on each individual disk will be 128K / M. Right? This is
one of the reasons the raidzN resilver code is inefficient. Since you end up
waiting for the
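As a worked example of that arithmetic, take a 6-disk raidz1 (M = 5 data
disks + N = 1 parity disk) at the default 128K recordsize:

  128K record / 5 data disks = 25.6K per disk per record
  (rounded to whole 512-byte sectors on each disk)

So reading one full record must touch every data disk in the group.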
2011 Jan 05
6
ZFS on top of ZFS iSCSI share
I have a filer running OpenSolaris (snv_111b) and I am presenting an
iSCSI share from a RAIDZ pool. I want to run ZFS on the share at the
client. Is it necessary to create a mirror or use ditto blocks at the
client to ensure ZFS can recover if it detects a failure at the client?
Thanks,
Bruin
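For reference, ditto blocks on the client side are just the copies property;
a minimal sketch, assuming the client-side pool is named clientpool
(hypothetical):

  # keep two copies of every data block (metadata already gets extra copies)
  zfs set copies=2 clientpool
  zfs get copies clientpool

Note that copies=2 only applies to data written after the property is set.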
2010 Jul 20
16
zfs raidz1 and traditional raid 5 performance comparison
Hi,
For zfs raidz1, I know that for random I/O the IOPS of a raidz1 vdev equal the IOPS of one physical disk. Since raidz1 is like raid5, does raid5 have the same performance as raidz1, i.e. random IOPS equal to one physical disk's IOPS?
Regards
Victor
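As a worked comparison, assuming a 5-disk group in both cases: raidz1
stripes every block across all data disks, so one random read busies all
four data disks at once, and the vdev delivers roughly one disk's worth of
random-read IOPS. Traditional RAID-5 keeps a small block on a single disk,
so four independent random reads can proceed in parallel; the two are not
equivalent for reads, and RAID-5 writes pay a separate read-modify-write
penalty.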
2010 Jun 17
9
Monitoring filesystem access
When somebody is hammering on the system, I want to be able to detect who's
doing it, and hopefully even what they're doing.
I can't seem to find any way to do that. Any suggestions?
Everything I can find... iostat, nfsstat, etc... AFAIK just shows me
performance statistics and so forth. I'm looking for something more
granular. Either *who* the
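On Solaris, DTrace can reach that level of detail; a rough sketch (not a
polished tool) that counts read/write syscalls by user and program for files
on ZFS:

  # who is doing the I/O, and with what program (run as root, Ctrl-C to print)
  dtrace -n 'syscall::read:entry, syscall::write:entry
      /fds[arg0].fi_fs == "zfs"/
      { @[uid, execname, probefunc] = count(); }'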
2010 Dec 23
31
SAS/short stroking vs. SSDs for ZIL
Hi,
as I was following the discussion about which SSD to use as a ZIL
drive, I stumbled across this article, which discusses short stroking
to increase IOPS on SAS and SATA drives:
http://www.tomshardware.com/reviews/short-stroking-hdd,2157.html
Now I am wondering if using a mirror of such 15k SAS drives would be a
good-enough fit for a ZIL on a zpool that is mainly used for file
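If one went that route, the ZFS side would just be a mirrored log device on
small outer-track slices; a sketch assuming slice 0 has already been laid
out at the start of each disk with format(1M) (device names hypothetical):

  # add the two short-stroked slices as a mirrored log device
  zpool add tank log mirror c2t0d0s0 c2t1d0s0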
2011 Aug 10
9
zfs destroy snapshot takes hours
Hi,
I am facing an issue with zfs destroy: it takes almost 3 hours to delete a snapshot of size 150G.
Could you please help me resolve this? Why does zfs destroy take this much time,
while taking a snapshot is done within a few seconds?
I have tried removing an older snapshot instead, but the problem is the same.
===========================
I am using :
Release : OpenSolaris
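One common culprit for multi-hour snapshot destroys on pools of that era is
deduplication, since every freed block then requires a dedup-table update; a
quick check, with a hypothetical pool name:

  # a dedupratio above 1.00x means dedup is (or has been) in use
  zpool get dedupratio tank
  zfs get dedup tank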
2010 Oct 13
40
Running on Dell hardware?
I have a Dell R710 which has been flaky for some time. It crashes about
once per week. I have literally replaced every piece of hardware in it, and
reinstalled Sol 10u9 fresh and clean.
I am wondering if other people out there are using Dell hardware, with what
degree of success, and in what configuration?
The failure seems to be related to the PERC 6i. For some period around the
time
2010 Dec 18
10
a single nfs file system shared out twice with different permissions
I am trying to configure a system where I have two different NFS shares
which point to the same directory. The idea is that if you come in via one
path you will have read-only access and can't delete any files; if you come
in via the 2nd path, you will have read/write access.
For example, create the read/write nfs share:
zfs create tank/snapshots
zfs set sharenfs=on tank/snapshots
root
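One way to get the second, read-only view on Solaris is a read-only
loopback (lofs) mount of the same directory, exported separately; a sketch
under those assumptions, with hypothetical paths (ro enforcement on lofs
had caveats on older releases):

  # read/write export of the dataset itself
  zfs create tank/snapshots
  zfs set sharenfs=rw tank/snapshots

  # read-only loopback view of the same data, shared under a second path
  mkdir -p /export/snapshots-ro
  mount -F lofs -o ro /tank/snapshots /export/snapshots-ro
  share -F nfs -o ro /export/snapshots-ro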
2010 Mar 24
21
ZFS on an 11TB HW RAID-5 controller
Hello all,
I am a complete newbie to OpenSolaris and must set up a ZFS NAS. I do have Linux experience, but have never used ZFS. I have tried to install OpenSolaris Developer 134 on an 11TB HW RAID-5 virtual disk, but after the installation I can only use one 2TB disk, and I cannot partition the rest. I realize that the maximum partition size is 2TB, but I guess the rest must be usable. For
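For context, the 2TB ceiling comes from the SMI (VTOC) label the installer
uses on the boot disk; a whole disk handed directly to ZFS gets an EFI label
and can exceed 2TB. A sketch, assuming the RAID volume appears as c1t0d0
(hypothetical) and is not the boot disk:

  # giving zpool the whole disk makes ZFS write an EFI label, lifting the 2TB limit
  zpool create tank c1t0d0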
2015 Nov 21
0
CPU Limit in CentOS
> From: centos-bounces at centos.org [mailto:centos-bounces at centos.org] On
> Behalf Of Edward Ned Harvey (centos)
>
> A few years ago, I vaguely recall some issue with RHEL needing a special
> license or something like that if you had more than a certain number of
> CPUs or a certain amount of RAM.
>
> Does CentOS work fine with 2 CPUs, 16 cores, 32 threads,
2010 Apr 10
21
What happens when unmirrored ZIL log device is removed ungracefully
Due to recent experiences and discussion on this list, my colleague and I
performed some tests:
Using Solaris 10, fully upgraded (zpool version 15 is the latest, which does
not have the log device removal that was introduced in zpool version 19): if
you lose an unmirrored log device in any way possible, the OS will crash, and
the whole zpool is permanently gone, even after reboots.
Using OpenSolaris,
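For contrast, on pool version 19 and later an unmirrored log device can at
least be removed administratively; a sketch with hypothetical names:

  # confirm the pool version supports log device removal (19+)
  zpool get version tank

  # remove the dedicated log device cleanly
  zpool remove tank c1t2d0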