Displaying 20 results from an estimated 8000 matches similar to: "Supporting ~10K users on ZFS"
2008 Feb 17
12
can't share a zfs
-bash-3.2$ zfs share tank
cannot share 'tank': share(1M) failed
-bash-3.2$
how do I figure out what's wrong?
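A minimal first-pass check for this kind of share failure, assuming a stock NFS setup (the pool name is the one from the post; nothing else here is from the thread):

zfs get sharenfs tank                      # sharing must be enabled on the dataset
svcs svc:/network/nfs/server               # share(1M) fails if the NFS server service is not online
svcadm enable -r svc:/network/nfs/server   # bring it and its dependencies online if needed
zfs share tank                             # then retry the share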
2008 Aug 26
5
Problem w/ b95 + ZFS (version 11) - seeing fair number of errors on multiple machines
Hi,
After upgrading to b95 of OSOL/Indiana, and doing a ZFS upgrade to the newer
revision, all arrays I have using ZFS mirroring are displaying errors. This
started happening immediately after ZFS upgrades. Here is an example:
ormandj at neutron.corenode.com:~$ zpool status
pool: rpool
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was
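For reference, the usual follow-up when a pool reports unrecoverable errors; the commands are generic rather than quoted from the thread, and the pool name comes from the output above:

zpool status -v rpool   # -v lists the files or metadata touched by the errors
zpool scrub rpool       # re-read everything and repair whatever the mirror copies allow
zpool status rpool      # watch scrub progress and the per-device error counters
zpool clear rpool       # reset the counters once the underlying cause is understood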
2007 Oct 26
1
data error in dataset 0. what's that?
Hi forum,
I did something stupid the other day, managed to connect an external disk that was part of zpool A such that it appeared in zpool B. I realised as soon as I had done zpool status that zpool B should not have been online, but it was. I immediately switched off the machine, booted without that disk connected and destroyed zpool B. I managed to get zpool A back and all of my data appears
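A hedged sketch of how a pool is usually re-verified after that kind of recovery; 'poolA' stands in for the real pool name:

zpool scrub poolA       # re-check every block against its checksum
zpool status -v poolA   # errors reported against dataset 0 generally point at
                        # pool-level metadata (the MOS) rather than a user filesystem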
2007 Aug 30
15
ZFS, XFS, and EXT4 compared
I have a lot of people whispering "zfs" in my virtual ear these days,
and at the same time I have an irrational attachment to xfs based
entirely on its lack of the 32000 subdirectory limit. I'm not afraid of
ext4's newness, since really a lot of that stuff has been in Lustre for
years. So a-benchmarking I went. Results at the bottom:
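For anyone unfamiliar with the limit being referred to: ext3 caps a directory's link count at 32000, so a loop like this (mount point hypothetical) stops at roughly 32000 subdirectories, while XFS, ext4, and ZFS keep going:

cd /mnt/ext3test          # hypothetical ext3 mount point
i=1
while mkdir "d$i"; do     # mkdir fails with 'Too many links' near 32000
    i=`expr $i + 1`
done
echo "stopped at $i"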
2009 Dec 15
7
ZFS Dedupe reporting incorrect savings
Hi,
Created a zpool with 64k recordsize and enabled dedupe on it.
zpool create -O recordsize=64k TestPool device1
zfs set dedup=on TestPool
I copied files onto this pool over nfs from a windows client.
Here is the output of zpool list
Prompt:~# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
TestPool 696G 19.1G 677G 2% 1.13x ONLINE -
When I ran a
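A hedged way to cross-check the DEDUP column reported above (not taken from the thread): zdb can dump the dedup table for the pool, showing how many blocks are stored once versus referenced several times:

zdb -DD TestPool    # prints the DDT histogram and an overall dedup ratio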
2009 Apr 19
21
[on-discuss] Reliability at power failure?
Casper.Dik at Sun.COM wrote:
>
> I would suggest that you follow my recipe: not check the boot-archive
> during a reboot. And then report back. (I'm assuming that that will take
> several weeks)
>
We are back at square one; or, at the subject line.
I did a zpool status -v, everything was hunky dory.
Next, a power failure, 2 hours later, and this is what zpool status
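For context, the boot-archive handling under discussion involves standard commands like these (not quoted from the thread; the pool name is assumed to be rpool):

bootadm update-archive   # rebuild the boot archive so it matches the running system
svcs boot-archive        # the SMF service that flags a stale archive at boot
zpool status -v rpool    # re-check the pool itself after the power failure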
2011 Oct 12
33
weird bug with Seagate 3TB USB3 drive
Banging my head against a Seagate 3TB USB3 drive.
Its marketing name is:
Seagate Expansion 3 TB USB 3.0 Desktop External Hard Drive STAY3000102
format(1M) shows it identifying itself as:
Seagate-External-SG11-2.73TB
Under both Solaris 10 and Solaris 11x, I receive the evil message:
| I/O request is not aligned with 4096 disk sector size.
| It is handled through Read Modify Write but the performance
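A hedged way to see what alignment the pool actually ended up with (pool name hypothetical); the ashift value is recorded per top-level vdev when the pool is created:

zdb -C mypool | grep ashift   # ashift: 9 means 512-byte alignment, 12 means 4096-byte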
2007 Sep 14
5
ZFS Space Map optimization
I have a huge problem with space maps on thumper. The space maps take over 3GB,
and write operations generate massive read operations.
Before every spa sync phase, ZFS reads the space maps from disk.
I decided to turn on compression for the pool (only for the pool, not the filesystems) and it helps.
Now space maps, intent log, and spa history are compressed.
Now I'm thinking about disabling checksums. All
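Roughly what the "compression for the pool only" setup described above looks like; the dataset names here are hypothetical:

zfs set compression=on thumper         # top-level dataset of the pool
zfs set compression=off thumper/data   # child filesystems explicitly left uncompressed
zfs get -r compression thumper         # confirm what each dataset ends up with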
2008 Apr 03
3
[Bug 971] New: zfs key -l fails after unloading (keyscope=dataset)
http://defect.opensolaris.org/bz/show_bug.cgi?id=971
Summary: zfs key -l fails after unloading (keyscope=dataset)
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: major
Priority: P2
Component: other
AssignedTo:
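A sketch of the load/unload sequence the bug title appears to describe, using the zfs-crypto prototype syntax as far as I understand it; the dataset name is hypothetical and the exact options may differ from the build in question:

zfs create -o encryption=on tank/secure   # prompts for a wrapping passphrase
zfs key -u tank/secure                    # unload the key for the dataset
zfs key -l tank/secure                    # reload it -- the step reported to fail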
2009 Dec 06
20
Accidentally added disk instead of attaching
Hi,
I wanted to add a disk to the tank pool to create a mirror. I accidentally used 'zpool add' instead of 'zpool attach', and now the disk is added. Is there a way to remove the disk without losing data? Or maybe change it to a mirror?
Thanks,
Martijn
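The difference between the two commands, sketched with hypothetical device names:

zpool attach tank c0t0d0 c0t1d0   # mirrors the new disk onto the existing one (what was intended)
zpool add tank c0t1d0             # adds the new disk as a separate top-level stripe (what happened)

At the time a plain top-level data vdev could not be removed from a pool, which is why the question comes up at all.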
2006 Dec 12
23
ZFS Storage Pool advice
This question concerns ZFS. We have a Sun Fire V890 attached to an EMC disk array. Here's our plan to incorporate ZFS:
On our EMC storage array we will create 3 LUNS. Now how would ZFS be used for the best performance?
What I'm trying to ask is: if you have 3 LUNs and you want to create a ZFS storage pool, would it be better to have a storage pool per LUN or combine the 3
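The two layouts being weighed, sketched with hypothetical LUN device names:

zpool create tank c2t0d0 c2t1d0 c2t2d0   # one pool striped across all three LUNs

zpool create tank1 c2t0d0                # versus one pool per LUN
zpool create tank2 c2t1d0
zpool create tank3 c2t2d0

One caveat that usually comes up: if all redundancy lives in the EMC array, a single striped pool spreads the I/O, but without ZFS-level redundancy (for example a mirror across LUNs) ZFS can detect corruption yet not repair it.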
2010 Apr 05
3
no hot spare activation?
While testing a zpool with a different storage adapter using my "blkdev"
device, I did a test which made a disk unavailable -- all attempts to
read from it report EIO.
I expected my configuration (which is a 3 disk test, with 2 disks in a
RAIDZ and a hot spare) to work such that the hot spare would automatically
be activated. But I'm finding that ZFS does not behave this way
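A sketch of the configuration described, with hypothetical device names; whether the spare kicks in on its own depends on the fault-management agents noticing the failed device, but it can always be pulled in by hand:

zpool create testpool raidz c1t0d0 c1t1d0 spare c1t2d0
zpool status testpool                  # the spare should be listed as AVAIL
zpool replace testpool c1t1d0 c1t2d0   # manually swap the spare in for the dead disk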
2009 Mar 27
18
Growing a zpool mirror breaks on Adaptec 1205sa PCI
Setup: Osol.11 build 109
Athlon64 3400+ Aopen AK-86L mobo
Adaptec 1205sa SATA PCI controller card
[re-posted from an accidental post to the osol 'general' group]
I'm having trouble with an Adaptec 1205sa (non-RAID) SATA PCI card.
It was all working fine when I plugged 2 used SATA 200GB disks from a
Windows XP machine into it. Booted my osol server and added a zpool
mirror using those
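For reference, the usual sequence for growing a mirror by swapping in larger disks (device names hypothetical; this is generic, not from the thread):

zpool attach tank c3t0d0 c3t2d0   # attach the first larger disk alongside an existing one
zpool status tank                 # wait until the resilver completes
zpool detach tank c3t0d0          # then drop the old, smaller disk
# repeat for the other side; the extra capacity shows up once both disks are
# replaced (on builds without the autoexpand property, after an export/import)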
2006 Mar 23
17
Poor performance on NFS-exported ZFS volumes
I'm seeing some pretty pitiful performance using ZFS on an NFS server, with a ZFS volume exported (only with rw=host.foo.com,root=host.foo.com opts) and mounted on a Linux host running kernel 2.4.31. The Linux kernel I'm working with is limited in that I can only do NFSv2 mounts... regardless of that aspect, I'm sure something's amiss.
I mounted the zfs-based
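For reference, the ZFS-native way to carry exactly those export options (the hostname is the one in the post, the dataset name is hypothetical):

zfs set sharenfs='rw=host.foo.com,root=host.foo.com' tank/export
zfs get sharenfs tank/export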
2006 Aug 22
1
Interesting zfs destroy failure
Saw this while writing a script today -- while debugging the script, I was ctrl-c-ing it a lot rather
than wait for the zfs create / zfs set commands to complete. After doing so, my cleanup script
failed to zfs destroy the new filesystem:
root at kronos:/ # zfs destroy -f raid/www/user-testuser
cannot unshare 'raid/www/user-testuser': /raid/www/user-testuser: not shared
root
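A hedged manual cleanup for that state, using the dataset name from the post; zfs destroy tries to unshare and unmount first, so doing those steps explicitly shows which one is actually stuck:

zfs get sharenfs,mountpoint raid/www/user-testuser
zfs unshare raid/www/user-testuser    # may itself complain 'not shared'
zfs umount raid/www/user-testuser
zfs destroy raid/www/user-testuser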
2007 Oct 24
1
S10u4 in kernel sharetab
There was a lot of talk about ZFS and NFS shares being a problem when there was a large number of filesystems. There was a fix that in part included an in-kernel sharetab (I think :). Does anyone know if this has made it into S10u4?
Thanks,
BlueUmp
2005 Nov 20
2
ZFS & small files
First - many, many congrats to team ZFS. Developing/writing a new Unix fs
is a very non-trivial exercise with zero tolerance for developer bugs.
I just loaded build 27a on a w1100z with a single AMD 150 CPU (2Gb RAM) and
a single (for now) SCSI disk drive: FUJITSU MAP3367NP (Revision: 0108)
hooked up to the built-in SCSI controller (the only device on the SCSI
bus).
My initial ZFS test was to
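A generic small-file micro-test of the kind the post goes on to describe (path and counts arbitrary; mkfile(1M) is the Solaris tool for creating a file of a given size):

cd /tank/smallfiles
time sh -c 'i=0; while [ $i -lt 10000 ]; do mkfile 2k f$i; i=`expr $i + 1`; done'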
2005 Jan 20
1
ChangeNotify help wanted
Hi all,
I've checked in some code to win32-changenotify. Unfortunately, it
doesn't work right. I need some help. I don't understand what,
exactly, I'm supposed to pass to ReadDirectoryChangesW() for the 2nd
argument, nor how to read the data back out.
There's also a WCHAR issue that needs to be worked out with regard to
the FileName
2007 Mar 06
16
2007/128 SMF services for Xen
I am sponsoring this fasttrack for John Levon. It is set to expire
on 3/14/2007. Note that this is an externally visible case.
liane
---
SMF services for Xen
1. Introduction
This case introduces the SMF services used by a Solaris-based domain 0 when
running on Xen, or a Xen-compatible hypervisor. All of these services only
run on domain 0 when booted under Xen virtualisation.
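For context, SMF services of this kind are inspected and managed with the usual commands; the FMRI below is illustrative only, and the case itself defines the real service names:

svcs -a | grep xvm                    # list the Xen/xVM-related services and their state
svcadm enable svc:/system/xvm/store   # example FMRI only
svcs -x                               # explain any service that failed to come online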
2003 Jun 17
2
fsh
Some people suggested fsh as a way of speeding up a build system which
sshes to different hosts to run jobs in parallel. fsh is very handy
but it works by keeping open a *single* connection. It won't work if
you want to execute more than one command in parallel on the same
host.
--
Ed Avis <ed at membled.com>
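The limitation being described, contrasted with plain ssh: separate ssh invocations each open their own connection, so two jobs on the same host can run in parallel, which a single fsh connection cannot do (hostname and paths hypothetical):

ssh buildhost 'cd proj/a && make' &
ssh buildhost 'cd proj/b && make' &
wait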