Displaying 20 results from an estimated 200 matches similar to: "Removing large holey file does not free space 6792701 (still)"

2008 Mar 12
3
Mixing RAIDZ and RAIDZ2 zvols in the same zpool
I have a customer who has implemented the following layout: As you can see, he has mostly raidz zvols but has one raidz2 in the same zpool. What are the implications here? Is this a bad thing to do? Please elaborate. Thanks, Scott Gaspard Scott.J.Gaspard at Sun.COM > NAME STATE READ WRITE CKSUM > > chipool1 ONLINE 0 0 0 > >
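A note on what happens in this situation: zpool normally refuses to mix replication levels in one pool and only proceeds with -f. A minimal sketch (device names are hypothetical; chipool1 is the pool name from the post):

  # adding a raidz2 vdev to a pool built from raidz vdevs triggers a
  # "mismatched replication level" complaint unless forced
  zpool create chipool1 raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0
  zpool add chipool1 raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0      # refused
  zpool add -f chipool1 raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0   # forced through

The resulting pool works, but the vdevs have different redundancy and different usable capacity, which is usually why the warning exists.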
2008 Apr 11
0
How to replace root drive if ZFS data is on it?
Hi, Experts: A customer has an X4500 and the boot drives mirrored (c5t0d0s0 and c5t4d0s0) by SVM. The ZFS uses the two other partitions on these two drives (c5t0d0s3 and c5t4d0s3). If we need to replace the disk drive c5t0d0, do we need to do anything on the ZFS (c5t0d0s3 and c5t4d0s3) first, or just follow the regular boot drive replacement procedure? Below is the summary of their current ZFS
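One common ordering for this kind of mixed SVM/ZFS disk swap, sketched with assumed pool and metadevice names (not the customer's actual config):

  # take the failing disk out of both SVM and ZFS before pulling it
  zpool offline zpool1 c5t0d0s3          # pool name is an assumption
  metadetach d10 d11                     # hypothetical mirror/submirror names
  # after swapping the drive, copy the partition table from the surviving disk
  prtvtoc /dev/rdsk/c5t4d0s2 | fmthard -s - /dev/rdsk/c5t0d0s2
  metattach d10 d11
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t0d0s0
  zpool replace zpool1 c5t0d0s3          # resilver the ZFS slice on the new disk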
2009 Feb 12
4
Two zvol devices one volume?
Hi, Can anyone explain the following to me? Two zpool devices point at the same data. I was installing osol 2008.11 in xVM when I saw that there already was a partition on the installation disk. An old dataset that I deleted, since I gave it a slightly different name than I intended, is not removed under /dev. I should not have used that name, but two device links should perhaps not
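Stale volume links under /dev/zvol can usually be rebuilt rather than removed by hand; a minimal sketch, assuming the leftover name is the only problem:

  # inspect the volume device links ZFS has created
  ls -l /dev/zvol/dsk/
  # remove dangling links and rebuild the /dev tree
  devfsadm -C -v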
2008 Apr 02
1
delete old zpool config?
Hi experts, zpool import shows some weird config of an old zpool. bash-3.00# zpool import pool: data1 id: 7539031628606861598 state: FAULTED status: One or more devices are missing from the system. action: The pool cannot be imported. Attach the missing devices and try again. see: http://www.sun.com/msg/ZFS-8000-3C config: data1 UNAVAIL insufficient replicas
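A stale entry like this comes from old ZFS labels still present on the disks. A hedged cleanup sketch, assuming the data is no longer wanted (zpool labelclear is only available on newer builds; the device path is hypothetical):

  # if the old pool can still be imported, destroy it cleanly
  zpool import -f data1 && zpool destroy data1
  # otherwise wipe the leftover ZFS labels on the devices it references
  zpool labelclear -f /dev/dsk/c1t2d0s0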
2009 Nov 04
1
ZFS non-zero checksum and permanent error with deleted file
Hello, I am actually using ZFS under FreeBSD, but maybe someone over here can help me anyway. I'd like some advice if I still can rely on one of my ZFS pools: [user at host ~]$ sudo zpool clear zpool01 ... [user at host ~]$ sudo zpool scrub zpool01 ... [user at host ~]$ sudo zpool status -v zpool01 pool: zpool01 state: ONLINE status: One or more devices has experienced an
2007 Apr 11
0
raidz2 another resilver problem
Hello zfs-discuss, One of the disks started to behave strangely. Apr 11 16:07:42 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /pci at 1,0/pci1022,7458 at 3/pci11ab,11ab at 1: Apr 11 16:07:42 thumper-9.srv port 6: device reset Apr 11 16:07:42 thumper-9.srv scsi: [ID 107833 kern.warning] WARNING: /pci at 1,0/pci1022,7458 at 3/pci11ab,11ab at 1/disk at 6,0 (sd27): Apr 11 16:07:42 thumper-9.srv
2009 Jun 19
8
x4500 resilvering spare taking forever?
I've got a Thumper running snv_57 and a large ZFS pool. I recently noticed a drive throwing some read errors, so I did the right thing and zfs replaced it with a spare. Everything went well, but the resilvering process seems to be taking an eternity: # zpool status pool: bigpool state: ONLINE status: One or more devices has experienced an unrecoverable error. An attempt was
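Progress can be watched without disturbing the resilver; a small sketch using the pool name from the post:

  # re-check resilver progress every minute
  while true; do zpool status bigpool | egrep 'resilver|progress'; sleep 60; done
  # per-device service times help spot one slow disk dragging the resilver out
  iostat -xn 5

On builds of that era, taking a snapshot reportedly restarted an in-progress resilver, which could make it appear to run forever.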
2006 Oct 24
3
determining raidz pool configuration
Hi all, Sorry for the newbie question, but I've looked at the docs and haven't been able to find an answer for this. I'm working with a system where the pool has already been configured and want to determine what the configuration is. I had thought that'd be with zpool status -v <poolname>, but it doesn't seem to agree with the
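For reference, the vdev layout is read from zpool status, while the list commands only show capacity; a minimal sketch with a hypothetical pool name:

  # vdev tree: each raidz/mirror stanza under the pool is one top-level vdev
  zpool status -v tank
  # capacity and health summary only, no layout
  zpool list
  # dataset view (filesystems/volumes), which never shows vdev structure
  zfs list -r tank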
2010 Jan 17
1
raidz2 import, some slices, some not
I am in the middle of converting a FreeBSD 8.0-RELEASE system to OpenSolaris b130. In order to import my stuff, the only way I knew to make it work (from testing in VirtualBox) was to do this: label a bunch of drives with an EFI label by using the OpenSolaris live CD, then use those drives in FreeBSD to create a zpool. This worked fine. (Though I did get a warning in FreeBSD about GPT
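When an import shows some members as slices and others as whole disks, pointing the scan at an explicit device directory often makes the paths consistent; a hedged sketch (pool name hypothetical):

  # show importable pools using only the /dev/dsk device nodes
  zpool import -d /dev/dsk
  # then import by name or by the numeric pool id once the listed config looks sane
  zpool import -d /dev/dsk -f mypool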
2010 Dec 05
4
Zfs ignoring spares?
Hi all, I have installed a new server with 77 2TB drives in 11 7-drive RAIDZ2 vdevs, all on WD Black drives. Now, it seems two of these drives were bad: one of them had a bunch of errors, the other was very slow. After zfs offlining these and then zfs replacing them with online spares, resilver ended and I thought it'd be OK. Apparently not. Although the resilver succeeds, the pool status
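A spare stays attached as a spare until the failed disk is detached; a minimal sketch of the usual sequence, with hypothetical pool and device names:

  zpool replace tank c8t3d0 c9t7d0   # bad disk -> hot spare (already done in the post)
  # once the resilver completes, detach the failed disk so the spare becomes permanent
  zpool detach tank c8t3d0
  zpool status tank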
2009 Jan 13
12
OpenSolaris better than Solaris 10u6 with regards to ARECA RAID card
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card, I got errors on all drives that result from SCSI timeout errors. yoda:~ # tail -f /var/adm/messages Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Requested Block: 239683776 Error Block: 239683776 Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Vendor: Seagate
2007 Oct 08
16
Fileserver performance tests
Hi all, I want to replace a bunch of Apple Xserves with Xraids and HFS+ (brr) by Sun x4200 with SAS JBODs and ZFS. The application will be the Helios UB+ fileserver suite. I installed the latest Solaris 10 on an x4200 with 8 GB of RAM and two Sun SAS controllers, attached two SAS JBODs with 8 SATA HDDs each and created a ZFS pool as a raid 10 by doing something like the following: zpool create
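The "raid 10" layout referred to is a stripe of two-way mirrors; a sketch with hypothetical device names, pairing one disk from each JBOD per mirror:

  zpool create helios \
      mirror c2t0d0 c3t0d0 \
      mirror c2t1d0 c3t1d0 \
      mirror c2t2d0 c3t2d0 \
      mirror c2t3d0 c3t3d0
  zpool status helios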
2008 Aug 12
2
ZFS, SATA, LSI and stability
After having massive problems with a Supermicro X7DBE box using AOC-SAT2-MV8 Marvell controllers and OpenSolaris snv79 (same as described here: http://sunsolve.sun.com/search/document.do?assetkey=1-66-233341-1) we just started over using new hardware and OpenSolaris 2008.05 upgraded to snv94. We used again a Supermicro X7DBE, but now with two LSI SAS3081E SAS controllers. And guess what? Now we get
2010 Jan 21
1
Zpool is a bit Pessimistic at failures
Hello, Anyone else noticed that zpool is kind of negative when reporting back from some error conditions? Like: cannot import 'zpool01': I/O error Destroy and re-create the pool from a backup source. or even worse: cannot import 'rpool': pool already exists Destroy and re-create the pool from a backup source. The first one I
2010 Jan 20
4
OSOL Bug 13743
Anyone know if this is something that will be looked at before b134 is released? Bug 13743 - virsh and xm is unable to start domain first time after boot http://defect.opensolaris.org/bz/show_bug.cgi?id=13743 Regards Henrik http://sparcv9.blogspot.com
2007 May 03
5
ZFS vs UFS2 overhead and may be a bug?
[originally reported for ZFS on FreeBSD, but Pawel Jakub Dawidek says this problem also exists on Solaris, hence this email.] Summary: on ZFS, the overhead for reading a hole seems far worse than actually reading from a disk. Small buffers are used to make this overhead more visible. I ran the following script on both ZFS and UFS2 filesystems. [Note that on FreeBSD cat uses a 4k buffer and md5 uses a 1k
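A rough way to reproduce the comparison (not the author's original script): build a sparse file, then time small-buffer reads of it against a fully written file of the same size.

  # 1 GB file that is almost entirely a hole (one byte written at the end)
  dd if=/dev/zero of=holey bs=1 count=1 seek=1073741823
  # read it back with a small buffer and time it
  time dd if=holey of=/dev/null bs=4k
  # compare with a fully allocated file of the same size
  dd if=/dev/zero of=dense bs=1024k count=1024
  time dd if=dense of=/dev/null bs=4k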
2008 Nov 24
2
replacing disk
Somehow I have an issue replacing my disk. [20:09:29] root at adas: /root > zpool status mypooladas pool: mypooladas state: DEGRADED status: One or more devices could not be opened. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Attach the missing device and online it using 'zpool online'. see:
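For a device that could not be opened, the usual sequence is to try onlining it first and fall back to a replace; a hedged sketch, device name hypothetical:

  # if the disk simply disappeared and came back, online it
  zpool online mypooladas c3t2d0
  # if the drive was physically swapped, resilver onto the new one in the same slot
  zpool replace mypooladas c3t2d0
  zpool status -x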
2018 Jan 23
0
Re: virt-resize changing cachedir
On Tue, Jan 23, 2018 at 06:01:08AM +0000, Ryan Lindsay wrote: > Hi Richard > > I have been playing around with your lovely libguestfs tools. I have however run into a bit of a problem > > Basically I have a 5.4T qcow2 virtual disk, which I made too small (bugger) > > So I had read that you can expand these with your virt-resize tools. > > So I tried this sort of
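For context, virt-resize does not grow an image in place; it copies into a larger target created beforehand, and its scratch/cache usage can be redirected with environment variables. A hedged sketch (sizes, paths, and the partition name are assumptions, not the poster's setup):

  # create a larger empty target, then copy/expand into it
  qemu-img create -f qcow2 bigger.qcow2 8T
  virt-resize --expand /dev/sda1 original.qcow2 bigger.qcow2
  # send libguestfs temporary and cache files to a filesystem with enough space
  export TMPDIR=/big/scratch
  export LIBGUESTFS_CACHEDIR=/big/scratch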
2011 Apr 18
3
kernel panic on 5.6
I am getting: kernel panic unable to mount root fs on unknown block (0,0) This is just a normal box that I have used many a time to test installs. Basic one-disk SATA 160G. Been using it for at least a year. Is this that glibc issue hitting me? Or something else? Thanks, Jerry
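On CentOS 5 this panic is usually a disk or filesystem driver missing from the initrd rather than a glibc problem; a hedged recovery sketch from rescue mode (the kernel version shown is only an example):

  # boot the install media with "linux rescue", then:
  chroot /mnt/sysimage
  # rebuild the initrd so the disk controller driver is included
  mkinitrd -f /boot/initrd-2.6.18-238.el5.img 2.6.18-238.el5
  exit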
2003 Sep 17
1
sftp reget/reput
Hello openssh@ I thought about sftp's reget/reput commands. Several days ago, Damien Miller wrote to tech at openbsd.org (it was a reply to my letter): > Herein lies a problem which is not easy to detect or solve. For > performance reasons, the sftp client does pipelined reads/writes when > transferring files. The protocol spec allows for a server to process > these requests out