Displaying 20 results from an estimated 6000 matches similar to: "ZFS on ZFS based storage?"
2017 Apr 20
1
JavaMail setFlags on readonly folder
Hi,
I'm facing a strange issue using JavaMail: Dovecot lets me open a folder "READ_WRITE" even though its ACLs are read-only (the same happens on other IMAP servers), but then doesn't raise any error when I try to setFlags(...) on that folder.
The result is that the client code believes the folder marked the messages (deleted, flagged, etc.), while it actually did not.
Just refreshing the folder from
2023 Jan 26
2
User shares do not appear anymore
Hi, I have an Ubuntu system with Samba 4 that worked perfectly until this morning.
Normal shares in smb.conf appeared as always, while the per-user shares included via %U no longer did.
This is the global part of smb.conf:
        security = user
        username map = /etc/samba/smbusers
        include = /etc/samba/smb.conf.%U
        include = /etc/samba/smb.conf.%m
There are then various
2017 Jun 01
2
Possible RENAME bug
Hello, I'm having some trouble working on a webapp managing the imap tree on Dovecot.
The same doesn't happen on other imap servers (e.g. Cyrus).
It seems that if I receive shared folders from another user and try to rename a folder inside them, the RENAME command returns "OK", but if you then list the new folder name, it isn't there.
Looks like you've lost all the original folder
2011 Feb 05
12
ZFS Newbie question
I've spent a few hours reading through the forums and wiki and honestly my head is spinning. I have been trying to study up on either buying or building a box that would allow me to add drives of varying sizes/speeds/brands (adding more later etc) and still be able to use the full space of the drives (minus parity? [not sure if I got the terminology right]) with redundancy. I have found the "all in
2011 Jan 05
6
ZFS on top of ZFS iSCSI share
I have a filer running OpenSolaris (snv_111b) and I am presenting an
iSCSI share from a RAIDZ pool. I want to run ZFS on the share at the
client. Is it necessary to create a mirror or use ditto blocks at the
client to ensure ZFS can recover if it detects a failure at the client?
Thanks,
Bruin
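For the ditto-block option the thread asks about: redundant copies at the client are controlled per dataset through the `copies` property. A minimal sketch, assuming a hypothetical client-side pool named `clientpool` already built on the iSCSI LUN:

```shell
# Keep two copies of every block so ZFS can self-heal single-block
# corruption even without a second device at the client.
# Pool/dataset names here are hypothetical.
zfs set copies=2 clientpool/data
zfs get copies clientpool/data
```

Note that `copies` only applies to blocks written after the property is set; it does not rewrite existing data.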
2009 Nov 13
11
scrub differs in execute time?
I have a raidz2 and did a scrub, it took 8h. Then I reconnected some drives to other SATA ports, and now it takes 15h to scrub??
Why is that?
--
This message posted from opensolaris.org
2015 Oct 28
2
Dovecot, JavaMail, UIDs and Message Numbers
Hi,
new to this list, so a little prelude to my issue with Dovecot.
We have been using JavaMail against Cyrus for ages, and developed Webtop, a huge Java web collaboration application running on them in production in various installations for all this time.
Recently we had to run the same software against Dovecot pre-existing accounts running on Nethesis NethServer solution.
After some time of
2011 Apr 05
11
ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?
Hello,
I'm debating an OS change and also thinking about my options for data
migration to my next server, whether it is on new or the same hardware.
Migrating to a new machine I understand is a simple matter of ZFS
send/receive, but reformatting the existing drives to host my existing
data is an area I'd like to learn a little more about. In the past I've
asked about
2010 Nov 23
14
ashift and vdevs
zdb -C shows an ashift value on each vdev in my pool; I was just wondering if
it is vdev-specific or pool-wide. Google didn't seem to know.
I'm considering a mixed pool with some "advanced format" (4KB sector)
drives, and some normal 512B sector drives, and was wondering if the ashift
can be set per vdev, or only per pool. Theoretically, this would save me
some size on
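The ashift is in fact recorded per top-level vdev in the pool configuration, which is why `zdb -C` prints one value under each `children[n]` entry. A minimal sketch of reading it out, using an illustrative (not real) excerpt of `zdb -C` output for a hypothetical mixed pool:

```shell
# Illustrative zdb -C excerpt: each top-level vdev record carries
# its own ashift entry (values here are made up for the example).
zdb_excerpt="
    children[0]:
        type: 'raidz'
        ashift: 9
    children[1]:
        type: 'raidz'
        ashift: 12
"
# Pull out one ashift line per vdev:
printf '%s' "$zdb_excerpt" | grep 'ashift'
```

On a live system the equivalent would be `zdb -C <pool> | grep ashift`.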
2010 Aug 21
8
ZFS with Equallogic storage
I'm planning on setting up an NFS server for our ESXi hosts and plan on using a virtualized Solaris or Nexenta host to serve ZFS over NFS.
The storage I have available is provided by Equallogic boxes over 10Gbe iSCSI.
I am trying to figure out the best way to provide both performance and resiliency given the Equallogic provides the redundancy.
Since I am hoping to provide a 2TB
2007 May 12
3
zfs and jbod-storage
Hi.
I'm managing an HDS storage system which is slightly larger than 100 TB
and we have used approx. 3/4. We use vxfs. The storage system is
attached to a Solaris 9 host on SPARC via a fibre switch. The storage is
shared via nfs to our webservers.
If I was to replace vxfs with zfs I could utilize raidz(2) instead of
the built-in hardware raid-controller.
Are there any jbod-only storage
2008 Jul 23
72
The best motherboard for a home ZFS fileserver
I've been a fan of ZFS since I read about it last year.
Now I'm on the way to building a home fileserver and I'm thinking of going with OpenSolaris and eventually ZFS!!
Apart from the other components, the main problem is choosing the motherboard. The range on offer is incredibly wide and I'm lost.
Minimum requirements should be:
- working well with OpenSolaris ;-)
-
2009 Dec 10
6
Confusion regarding 'zfs send'
I'm playing around with snv_128 on one of my systems, trying to
see what kind of benefits enabling dedup will give me.
The standard practice for reprocessing data that's already stored, to
add compression and now dedup, seems to be a send/receive pipe
similar to:
zfs send -R <old fs>@snap | zfs recv -d <new fs>
However, according to the man page,
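Spelled out as a full sequence, the pipe above would look roughly like the sketch below. Pool and dataset names are hypothetical, and the properties must be set on the receiving side before the data lands, since dedup and compression only apply to newly written blocks:

```shell
# Hypothetical names; requires existing pools "oldpool" and "newpool".
zfs snapshot -r oldpool/data@migrate   # recursive snapshot of the source
zfs set dedup=on newpool               # applies to blocks written from now on
zfs set compression=on newpool         # likewise for compression
zfs send -R oldpool/data@migrate | zfs recv -d newpool
```

The `-R` flag replicates the whole dataset tree below the snapshot, and `-d` on the receive side rebuilds the source path under the new pool.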
2010 Jun 04
5
Depth of Scrub
Hi,
I have a small question about the depth of scrub in a raidz/2/3 configuration.
I'm quite sure scrub does not check spares or unused areas of the disks (it
could check whether the disks detect any errors there).
But what about the parity? Obviously it has to be checked, but I can't find
any indications for it in the literature. The man page only states that the
data is being
2010 Sep 07
3
zpool create using whole disk - do I add "p0"? E.g. c4t2d0 or c4t2d0p0
I have seen conflicting examples on how to create zpools using full disks. The zpool(1M) page uses "c0t0d0" but OpenSolaris Bible and others show "c0t0d0p0". E.g.:
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
zpool create tank raidz c0t0d0p0 c0t1d0p0 c0t2d0p0 c0t3d0p0 c0t4d0p0 c0t5d0p0
I have not been able to find any discussion on whether (or when) to
2010 Apr 24
6
Extremely slow raidz resilvering
Hello everyone,
As one of the steps of improving my ZFS home fileserver (snv_134) I wanted
to replace a 1TB disk with a newer one of the same vendor/model/size because
this new one has 64MB cache vs. 16MB in the previous one.
The removed disk will be used for backups, so I thought it's better to
have the 64MB-cache disk in the on-line pool than in the backup set sitting
off-line all
2006 Mar 03
1
zfs? page faults
Hi,
I have setup an old Compaq Proliant DL580 with 2 xeon @700MHz 2Gb RAM
two SmartArray 5300 controllers and 12 drives in an array enclosure.
I am running the latest OpenSolaris update bfu'ed from binaries, since I
could not build from source. I am controlling the drives with the
cpqary3 driver (Solaris 10) from HP.
Initially the array had 7 drives and I created a raidz zfs pool
2008 Dec 20
2
General question about ZFS and RAIDZ
Hello to the forum,
with my general question about ZFS and RAIDZ, I would like to know the following:
Must all hard disks in the storage pool have the same capacity, or is it possible to use hard disks with different capacities?
Many thanks for the answers.
Best regards
JueDan
2009 Oct 22
1
raidz "ZFS Best Practices" wiki inconsistency
<http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#RAID-Z_Configuration_Requirements_and_Recommendations>
says that the number of disks in a RAIDZ should be (N+P) with
N = {2,4,8} and P = {1,2}.
But if you go down the page just a little further to the thumper
configuration examples, none of the 3 examples follow this recommendation!
I will have 10 disks to put into a
2010 Jul 16
1
ZFS mirror to RAIDz?
Hi all,
I currently have four drives in my OpenSolaris box. The drives are split into two mirrors, one mirror containing my rpool (disks 1 & 2) and one containing other data (disks 3 & 4).
I'm running out of space on my data mirror and am thinking of upgrading it to two 2TB disks. I then considered replacing disk 2 with a 2TB disk and making a RAIDz from the three new drives.