similar to: Long import due to spares.

Displaying 20 results from an estimated 3000 matches similar to: "Long import due to spares."

2011 Jun 30
1
cross platform (freebsd) zfs pool replication
Hi, I have two servers running: FreeBSD with a zpool v28 and a Nexenta (OpenSolaris b134) running zpool v26. Replication (with zfs send/receive) from the Nexenta box to the FreeBSD box works fine, but I have a problem accessing my replicated volume. When I type the command cd /remotepool/us (for /remotepool/users) and autocomplete with the Tab key, I get a panic. Check the panic @
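
A minimal sketch of the replication step described above, assuming hypothetical host and snapshot names (a newer pool can generally receive streams produced by an older one, which is why v26 to v28 works):

    # On the Nexenta box: snapshot the dataset, then stream it to the
    # FreeBSD box over ssh; -F on the receiving side rolls the target
    # back to the last common snapshot before applying the stream.
    zfs snapshot remotepool/users@rep1
    zfs send remotepool/users@rep1 | ssh freebsd-host zfs receive -F remotepool/users
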
2010 Jul 02
14
NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?
I see in NexentaStor's announcement of Community Edition 3.0.3 they mention some backported patches in this release. Aside from their management features / UI, what is the core OS difference if we move to Nexenta from OpenSolaris b134? These dedup bugs are my main frustration - if a staff member does a rm * in a directory with dedup you can take down the whole storage server - all with
2011 May 13
0
sun (oracle) 7110 zfs low performance with high latency and high disc util.
Hello! Our company has two Sun 7110s with the following configuration: Primary: a 7110 with 2 quad-core 1.9GHz HE Opterons, 32GB RAM, and 16 2.5" 10Krpm SAS discs (2 system, 1 spare); a pool is configured from the rest, so we have 13 active working discs in raidz2 (called main). There is a Sun J4200 JBOD connected to this device with 12x750GB discs, 1 spare and 11 active discs. There is another pool
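
For per-disc latency and utilisation questions like this one, per-vdev I/O statistics are the usual first stop (the pool name main is from the post; the 5-second interval is arbitrary):

    # Print per-vdev I/O statistics every 5 seconds; a single disc with
    # far higher activity than its raidz2 siblings points at a slow or
    # failing drive rather than a pool-wide problem.
    zpool iostat -v main 5
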
2010 May 07
0
confused about zpool import -f and export
Hi, all, I think I'm missing a concept with import and export. I'm working on installing a Nexenta b134 system under Xen, and I have to run the installer under HVM mode, then try to get it back up under PV mode. In that process the controller names change, and that's where I'm getting tripped up. I do a successful install, then I boot OK,
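
For a data pool, the usual pattern is to export before the device paths change and let import rescan; a hedged sketch with a hypothetical pool name (a root pool cannot be exported from the running system, so this only covers the non-root case):

    # Before the controller names change (e.g. before leaving hvm mode):
    zpool export datapool
    # After booting under pv: rescan the device directory and re-import
    # by name; -d points the scan at a directory of device nodes.
    zpool import -d /dev/dsk datapool
    # If the pool was never exported, -f forces the import anyway:
    zpool import -f datapool
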
2010 Apr 15
6
ZFS for iSCSI NTFS backing store.
I'm looking to move our file storage from Windows to OpenSolaris/ZFS. The Windows box will be connected to the storage through 10G for iSCSI. The Windows box will continue to serve the Windows clients and will be hosting approximately 4TB of data. The physical box is a SunFire X4240: single AMD 2435 processor, 16G RAM, LSI 3801E HBA, ixgbe 10G card. I'm looking for suggestions
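
A sketch of what the OpenSolaris side of such a setup might look like with COMSTAR; the dataset name, size, and target are illustrative, not from the post:

    # Create a 4TB zvol to back the NTFS LUN (-s makes it sparse).
    zfs create -s -V 4t tank/ntfs-lun
    # Publish the zvol as a SCSI logical unit and make it visible.
    sbdadm create-lu /dev/zvol/rdsk/tank/ntfs-lun
    stmfadm add-view <GUID-printed-by-sbdadm>
    # Create an iSCSI target for the Windows initiator to log in to.
    itadm create-target
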
2010 May 04
8
iscsitgtd failed request to share on zpool import after upgrade from b104 to b134
Hi, I am posting my question to both storage-discuss and zfs-discuss as I am not quite sure what is causing the messages I am receiving. I have recently migrated my zfs volume from b104 to b134 and upgraded it from zfs version 14 to 22. It consists of two zvols, 'vol01/zvol01' and 'vol01/zvol02'. During zpool import I am getting a non-zero exit code,
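
One hedged guess for messages like this: b134 replaced the old iscsitgtd with COMSTAR, so zvols still carrying the legacy shareiscsi property can fail to share at import time. The property name is real; everything else is illustrative:

    # See whether the zvols still have the legacy property set.
    zfs get shareiscsi vol01/zvol01 vol01/zvol02
    # If so, clear it so import stops calling iscsitgtd, and re-share
    # the zvols through COMSTAR (sbdadm/stmfadm) instead.
    zfs set shareiscsi=off vol01/zvol01
    zfs set shareiscsi=off vol01/zvol02
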
2009 Aug 27
0
How are you supposed to remove faulted spares from pools?
We have a situation where all of the spares in a set of pools have gone into a faulted state and now, apparently, we can't remove them or otherwise de-fault them. I'm confident that the underlying disks are fine, but ZFS seems quite unwilling to do anything with the spares situation. (The specific faulted state is 'FAULTED corrupted data' in 'zpool
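
For reference, the normal removal path, with a hypothetical pool and device name; a faulted spare can sometimes be cleared first:

    # Try to clear the fault state, then detach the spare from the pool.
    zpool clear tank c5t3d0
    zpool remove tank c5t3d0
    zpool status tank    # confirm the spares stanza is gone
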
2010 Sep 29
10
Resilver making the system unresponsive
This must be resilver day :) I just had a drive failure. The hot spare kicked in, and access to the pool over NFS was effectively zero for about 45 minutes. Currently the pool is still resilvering, but for some reason I can access the file system now. Resilver speed has been beaten to death, I know, but is there a way to avoid this? For example, is more enterprisey hardware less susceptible to
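
On builds of this era the resilver throttle lived in kernel tunables; a heavily hedged sketch (the variable names exist in OpenSolaris/illumos of that vintage, but the values are illustrative and the effect varies by build):

    # /etc/system: make resilver I/O yield more to application I/O.
    # A higher delay means a slower resilver but a more responsive pool.
    set zfs:zfs_resilver_delay = 4
    set zfs:zfs_resilver_min_time_ms = 1000
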
2008 Aug 25
5
Unable to import zpool since system hang during zfs destroy
Hi all, I have a RAID-Z zpool made up of 4 x SATA drives running on Nexenta 1.0.1 (OpenSolaris b85 kernel). It has on it some ZFS filesystems and a few volumes that are shared to various Windows boxes over iSCSI. On one particular iSCSI volume, I discovered that I had mistakenly deleted some files from the FAT32 partition that is on it. The files were still in a ZFS snapshot that was made earlier
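
Pulling files back out of that snapshot would normally go one of two ways (names hypothetical; rollback discards everything written to the zvol after the snapshot):

    # Non-destructive: clone the snapshot, export the clone as a new
    # iSCSI LUN, and copy the deleted files back from it.
    zfs clone tank/iscsi-vol@before-delete tank/iscsi-recover
    # Destructive: roll the volume itself back to the snapshot.
    zfs rollback tank/iscsi-vol@before-delete
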
2010 Mar 29
0
FYI: Ben Rockwood: Solaris no longer free
Just FYI, flame wars please >/dev/null http://www.cuddletech.com/blog/pivot/entry.php?id=1120 Solaris No Longer Free 28 Mar '10 - 10:14 by benr Hot on the heels of Oracle's revamp of Solaris support, the licensing agreement for free downloads of Solaris 10 has changed. Infoworld broke the news on Friday. Here is the bit in question. Notice this paragraph in the Licensing
2010 Aug 18
1
Kernel panic on import / interrupted zfs destroy
I have a box running snv_134 that had a little boo-boo. The problem first started a couple of weeks ago with some corruption on two filesystems in an 11-disk 10TB raidz2 set. I ran a couple of scrubs that revealed a handful of corrupt files on my 2 de-duplicated zfs filesystems. No biggie. I thought that my problems had something to do with de-duplication in b134, so I went about the process of
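
For panic-on-import after an interrupted destroy, the transaction-rewind import added around b128 is the usual first attempt (pool name hypothetical):

    # Dry run: would discarding the last few transactions yield an
    # importable pool?
    zpool import -nF tank
    # If so, do it for real, accepting loss of the most recent writes.
    zpool import -F tank
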
2018 Feb 21
0
Duplicate column names created by base::merge() when by.x has the same name as a column in y
Hi all, For the record, this approach isn't 100% backwards compatible, because names(mergeddf) will be incompatibly different. That's why I claimed backwards compatible-ish. That said, it's still worth considering IMHO because of the reasons stated (and honestly, one particular simple reading of the docs might suggest that this was the intended behavior all along). I'm not a member of R-core though, so I
2010 Sep 10
3
zpool upgrade and zfs upgrade behavior on b145
Not sure what the best list to send this to is right now, so I have selected a few; apologies in advance. A couple of questions. First, I have a physical host (call him bob) that was just installed with b134 a few days ago. I upgraded to b145 using the instructions on the Illumos wiki yesterday. The pool has been upgraded (27) and the zfs file systems have been upgraded (5). chris at bob:~# zpool
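
For context, the versions involved can be inspected like this (pool name hypothetical; zpool upgrade -v lists every version the running build supports):

    # What does this build support, and where is the pool now?
    zpool upgrade -v
    zpool get version tank
    # Upgrade the pool, then every filesystem in it.
    zpool upgrade tank
    zfs upgrade -r tank
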
2018 Feb 23
0
Duplicate column names created by base::merge() when by.x has the same name as a column in y
Thanks Martin! Can you clarify the functionality of the 'no.dups' argument so I can change my patch to `data.table:::merge.data.table` accordingly? - When `no.dups=TRUE`, will the suffix be added to the by.x column name? Or will it take the second approach, where only the column in y has the suffix added? - When `no.dups=FALSE`, will the output be the same as it currently
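
For readers following the thread, the behaviour under discussion is easy to reproduce from a shell; a minimal sketch with made-up data (no.dups is the merge.data.frame argument added in R 3.5.0):

    # by.x ("name") collides with a non-key column of y, historically
    # yielding two output columns both called "name".
    Rscript -e '
      x <- data.frame(name = c("a", "b"), val = 1:2)
      y <- data.frame(id = c("a", "b"), name = c("A", "B"))
      print(names(merge(x, y, by.x = "name", by.y = "id")))
      # old behaviour: "name" "val" "name"
      # with no.dups = TRUE (the new default), the y copy gets ".y"
    '
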
2010 Apr 05
3
no hot spare activation?
While testing a zpool with a different storage adapter using my "blkdev" device, I did a test which made a disk unavailable -- all attempts to read from it report EIO. I expected my configuration (a 3-disk test, with 2 disks in a RAIDZ and a hot spare) to work such that the hot spare would automatically be activated. But I'm finding that ZFS does not behave this way
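
Worth noting: spares are engaged by the ZFS retire agent only after fault management actually diagnoses the disk as faulted; persistent EIO alone may never produce a diagnosis. A hedged sketch of the check and the manual fallback (device names hypothetical):

    # Has fault management diagnosed anything?
    fmadm faulty
    # Manual fallback: swap the hot spare in by hand.
    zpool replace tank c0t1d0 c0t2d0
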
2010 Sep 24
3
Kernel panic on ZFS import - how do I recover?
I posted this on the www.nexentastor.org forums, but no answer so far, so I apologize if you are seeing this twice. I am also engaged with Nexenta support, but was hoping to get some additional insights here. I am running Nexenta 3.0.3 community edition, based on b134. The box crashed yesterday, and goes into a reboot loop (kernel panic) when trying to import my data pool; screenshot attached.
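
One commonly cited (and heavily caveated) escape from a panic-on-import reboot loop on these builds; aok and zfs:zfs_recover are real kernel variables, but setting them risks further damage, and the pool name is hypothetical:

    # Boot with -m milestone=none and move /etc/zfs/zpool.cache aside so
    # the pool is not imported automatically, then add to /etc/system:
    set zfs:zfs_recover = 1
    set aok = 1
    # After a reboot, attempt a transaction-rewind import:
    zpool import -F datapool
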
2018 Feb 20
0
Duplicate column names created by base::merge() when by.x has the same name as a column in y
Hi Scott, I think that's a good idea and I tried your patch on my copy of the repository. But it looks to me like the recent patch is identical to the previous one; can you confirm this? Frederick On Mon, Feb 19, 2018 at 07:19:32AM +1100, Scott Ritchie wrote: > Thanks Gabriel, > > I think your suggested approach is 100% backwards compatible > > Currently in the case of
2008 Nov 17
14
Storage 7000
I'm not sure if this is the right place for the question or not, but I'll throw it out there anyways. Does anyone know, if you create your pool(s) with a system running fishworks, can that pool later be imported by a standard Solaris system? I.e., if for some reason the head running fishworks were to go away, could I attach the JBOD/disks to a system running snv/mainline
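
In principle the on-disk format is plain ZFS, so the question reduces to whether the snv build supports the appliance's pool version; the discovery step looks like this (pool name hypothetical):

    # With the JBOD attached to the snv box, list importable pools...
    zpool import
    # ...and import by name; -f is needed because the appliance head
    # never exported the pool.
    zpool import -f fishpool
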
2018 Feb 22
2
Duplicate column names created by base::merge() when by.x has the same name as a column in y
>>>>> Gabriel Becker <gmbecker at ucdavis.edu> >>>>> on Wed, 21 Feb 2018 07:11:44 -0800 writes: > Hi all, > For the record, this approach isn't 100% backwards compatible, because > names(mergeddf) will be incompatibly different. That's why I claimed > backwards compatible-ish exactly. > That said, it's still worth considering