Displaying 20 results from an estimated 4000 matches similar to: "Best Practises => Keep Pool Below 80%?"
2007 Sep 17
4
ZFS Evil Tuning Guide
In general, tuning should not be done; best practices should be
followed instead.
So get well acquainted with this first:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Then, if you must, this could soothe or sting:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
So drive carefully.
-r
2008 Dec 19
4
ZFS boot and data on same disk - is this supported?
I have read the ZFS Best Practices Guide located at
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
However, I have questions about whether we support using slices for data on the
same disk as we use for ZFS boot. What issues does this create if we
have a disk failure in a mirrored environment? Does anyone have examples
of customers doing this in production environments?
I
2009 Oct 22
1
raidz "ZFS Best Practices" wiki inconsistency
<http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#RAID-Z_Configuration_Requirements_and_Recommendations>
says that the number of disks in a RAIDZ should be (N+P) with
N = {2,4,8} and P = {1,2}.
But if you go down the page just a little further to the Thumper
configuration examples, none of the three examples follows this recommendation!
I will have 10 disks to put into a
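On the arithmetic for 10 disks: here is a small standalone C sketch (my own illustration, not from the wiki) that checks which even splits satisfy the page's N in {2,4,8}, P in {1,2} rule, and how much raw capacity each leaves usable:

    #include <stdio.h>

    int main(void)
    {
        const int total = 10;            /* disks available              */
        const int ns[] = { 2, 4, 8 };    /* recommended data-disk counts */
        const int ps[] = { 1, 2 };       /* recommended parity counts    */

        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 2; j++) {
                int n = ns[i], p = ps[j], vdev = n + p;
                if (total % vdev)        /* layout must tile all 10 disks */
                    continue;
                int groups = total / vdev;
                printf("%d x raidz%d (%d+%d): %d of %d disks usable (%.0f%%)\n",
                       groups, p, n, p, groups * n, total,
                       100.0 * groups * n / total);
            }
        }
        return 0;
    }

Both conforming 10-disk layouts, 2 x raidz1 (4+1) and 1 x raidz2 (8+2), come out at 80% usable capacity.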
2008 Sep 14
10
ZFS system requirements
Hi, this says that OpenSolaris only requires 512 MB of RAM: http://dlc.sun.com/osol/docs/content/IPS/sysreq.html
This says 1 GB of RAM and a 64-bit processor are recommended: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Memory_and_Swap_Space
Am I going to have problems if I run OpenSolaris and ZFS at the minimum requirements?
2010 Apr 28
3
Solaris 10 default caching segmap/vpm size
What's the default size of the file system cache for Solaris 10 x86, and can it be tuned?
I have read various posts on the subject, and it's confusing.
2008 Nov 26
9
ZPool and Filesystem Sizing - Best Practices?
Hello,
We have a new Thor here with 24 TB of disk in it (first of many, hopefully).
We are trying to determine the best practices with respect to file system
management and sizing. Previously, we have tried to keep each file system
to a max size of 500 GB to make sure we could fit it all on a single tape,
and to minimise restore times and impact should we experience some kind of
volume
2016 Oct 25
4
[PATCH] virtio-net: Update the mtu code to match virtio spec
From: Aaron Conole <aconole at bytheb.org>
The virtio committee recently ratified a change, VIRTIO-152, which
defines the mtu field to be 'max' MTU, not simply desired MTU.
This commit brings the virtio-net device in compliance with VIRTIO-152.
Additionally, drop the max_mtu branch - it cannot be taken since the u16
returned by virtio_cread16 will never exceed the initial value of max_mtu.
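The "cannot be taken" claim is pure type reasoning. A tiny standalone C model of the argument (an illustration only, not the kernel code; it assumes the driver initializes max_mtu to ETH_MAX_MTU, i.e. 65535):

    #include <stdint.h>
    #include <stdio.h>

    #define ETH_MAX_MTU 0xFFFFU   /* assumed initial max_mtu: 65535 */

    int main(void)
    {
        /* virtio_cread16() yields a u16; even its largest possible value
         * cannot exceed 65535, so an "mtu > max_mtu" branch is dead code. */
        uint16_t mtu = UINT16_MAX;
        printf("mtu %u exceeds ETH_MAX_MTU? %s\n",
               mtu, mtu > ETH_MAX_MTU ? "yes" : "no, never");
        return 0;
    }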
2016 Oct 21
2
[PATCH net-next v2 6/9] net: use core MTU range checking in virt drivers
On Thu, Oct 20, 2016 at 11:23:54PM +0300, Michael S. Tsirkin wrote:
> On Thu, Oct 20, 2016 at 01:55:21PM -0400, Jarod Wilson wrote:
...
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index fad84f3..720809f 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -1419,17 +1419,6 @@ static const struct ethtool_ops
2010 Jan 28
16
Large scale ZFS deployments out there (>200 disks)
While thinking about ZFS as the next-generation filesystem without limits, I am wondering if the real world is ready for this kind of incredible technology...
I'm actually speaking of hardware. :)
ZFS can handle a lot of devices. Once the import bug (http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6761786) is fixed, it should be able to handle a lot of disks.
I want to
2007 Nov 15
3
read/write NFS block size and ZFS
Hello all...
I'm migrating an NFS server from Linux to Solaris, and all clients (Linux) are using read/write block sizes of 8192. That was the best performance I got, and it's working pretty well (NFSv3). I want to use all of ZFS's advantages, and I know I may see a performance loss, so I want to know if there is a "recommendation" for block size on NFS/ZFS, or
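For context, those 8192 figures are the clients' rsize/wsize mount options; on a Linux client the mount looks something like this (server name and paths hypothetical):

    mount -t nfs -o vers=3,rsize=8192,wsize=8192 server:/export /mnt

On the ZFS side, the separate recordsize property (default 128K) is the server-side knob the question is really about.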
2009 Aug 05
2
?: SMI vs. EFI label and a disk's write cache
For Solaris 10 5/09...
There are supposed to be performance improvements if you create a zpool
on a full disk, such as one with an EFI label. Does the same apply if
the full disk is used with an SMI label, which is required to boot?
I am trying to determine the trade-off, if any, of having a single rpool
on cXtYd0s2 (if I can even do that) with improved performance, compared to
having two
2009 Mar 04
5
Oracle database on zfs
Hi,
I am wondering if there is a guideline on how to configure ZFS on a server
running an Oracle database.
We are experiencing some slowness on writes to the ZFS filesystem. It takes about
530 ms to write 2 KB of data.
We are running Solaris 10 u5 127127-11, and the back-end storage is a RAID5
EMC EMX.
This is a small database, with about 18 GB of storage allocated.
Are there tunable parameters that we can apply to
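Not an answer to the latency question, but the usual first knob from the Best Practices Guide for databases is matching the data filesystem's recordsize to the database block size before touching anything else, e.g. (dataset name hypothetical, assuming an 8 KB db_block_size):

    zfs set recordsize=8k tank/oradata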
2016 Oct 19
7
[PATCH net-next 5/6] net: use core MTU range checking in virt drivers
hyperv_net:
- set min/max_mtu
virtio_net:
- set min/max_mtu
- remove virtnet_change_mtu
vmxnet3:
- set min/max_mtu
CC: netdev at vger.kernel.org
CC: virtualization at lists.linux-foundation.org
CC: "K. Y. Srinivasan" <kys at microsoft.com>
CC: Haiyang Zhang <haiyangz at microsoft.com>
CC: "Michael S. Tsirkin" <mst at redhat.com>
CC: Shrikrishna Khare
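For readers outside netdev: this series moves MTU validation out of per-driver change_mtu hooks and into the networking core, with drivers only declaring their limits. A userspace C model of the idea (my own sketch, not kernel code; 68 is the usual Ethernet/IPv4 minimum):

    #include <stdio.h>

    struct netdev_model {
        unsigned int mtu;
        unsigned int min_mtu;    /* smallest MTU the device accepts */
        unsigned int max_mtu;    /* largest MTU the device accepts  */
    };

    /* One core-side check replaces N per-driver change_mtu validators. */
    static int core_change_mtu(struct netdev_model *dev, unsigned int new_mtu)
    {
        if (new_mtu < dev->min_mtu || new_mtu > dev->max_mtu)
            return -1;           /* the kernel would return -EINVAL */
        dev->mtu = new_mtu;
        return 0;
    }

    int main(void)
    {
        struct netdev_model dev = { .mtu = 1500, .min_mtu = 68, .max_mtu = 65535 };
        printf("set 9000  -> %d\n", core_change_mtu(&dev, 9000));  /* accepted */
        printf("set 70000 -> %d\n", core_change_mtu(&dev, 70000)); /* rejected */
        return 0;
    }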
2010 May 07
2
ZFS root ARC memory usage on VxFS system...
Hi folks,
We have started to convert our Veritas clustered systems over to ZFS root to
take advantage of the extreme simplification of using Live Upgrade. Moving the
data of these systems off VxVM and VxFS is not in scope, for reasons too numerous
to go into.
One thing my customers noticed immediately was a reduction in "free" memory as
reported by 'top'. By way
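If anyone wants to see exactly where the "missing" memory went, here is a minimal C sketch (assuming Solaris libkstat; compile with something like cc -o arcsize arcsize.c -lkstat) that reads the ARC's current and maximum size from the zfs arcstats kstat:

    #include <kstat.h>
    #include <stdio.h>

    int main(void)
    {
        kstat_ctl_t *kc = kstat_open();
        if (kc == NULL) { perror("kstat_open"); return 1; }

        /* The ARC publishes its counters under module "zfs", name "arcstats". */
        kstat_t *ksp = kstat_lookup(kc, "zfs", 0, "arcstats");
        if (ksp == NULL || kstat_read(kc, ksp, NULL) == -1) {
            fprintf(stderr, "arcstats kstat not found\n");
            return 1;
        }

        kstat_named_t *sz = kstat_data_lookup(ksp, "size");
        kstat_named_t *cm = kstat_data_lookup(ksp, "c_max");
        if (sz && cm)
            printf("ARC size: %llu MB (max %llu MB)\n",
                   (unsigned long long)sz->value.ui64 >> 20,
                   (unsigned long long)cm->value.ui64 >> 20);

        kstat_close(kc);
        return 0;
    }

Memory held by the ARC shows up as "used" to tools like top, but ZFS releases it under memory pressure.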
2016 Oct 25
1
[PATCH v2 net-next] virtio-net: Update the mtu code to match virtio spec
The virtio committee recently ratified a change, VIRTIO-152, which
defines the mtu field to be 'max' MTU, not simply desired MTU.
This commit brings the virtio-net device in compliance with VIRTIO-152.
Additionally, drop the max_mtu branch - it cannot be taken since the u16
returned by virtio_cread16 will never exceed the initial value of
max_mtu.
Signed-off-by: Aaron Conole <aconole
2007 Nov 27
4
SAN arrays with NVRAM cache : ZIL and zfs_nocacheflush
Hi,
I read some articles on solarisinternals.com like "ZFS_Evil_Tuning_Guide" on http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide . They clearly suggest to disable cache flush http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#FLUSH .
It seems to be the only serious article on the net about this subject.
Could someone here comment on this
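For reference, the FLUSH section being cited boils down to a single /etc/system line (shown here only as the syntax under discussion; per the guide, it is only safe when every device in the pool has nonvolatile, battery-backed cache, and it takes effect after a reboot):

    set zfs:zfs_nocacheflush = 1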