search for: b147

Displaying 10 results from an estimated 10 matches for "b147".

2010 Sep 25
4
dedup testing?
Hi all, Has anyone done any testing with dedup on OI? On OpenSolaris there is a nifty "feature" that allows the system to hang for hours or days when attempting to delete a dataset on a deduped pool. This is said to be fixed, but I haven't seen that myself, so I'm just wondering... I'll get a 10TB test box released for testing OI in a few weeks, but before
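Before destroying anything on a deduped pool, it helps to check how large the dedup table (DDT) has grown, since a DDT that far exceeds RAM is what makes the destroy crawl or appear to hang. A minimal sketch, assuming a pool named "tank" (placeholder name):

    # Overall allocation and dedup ratio for the pool ("tank" is a placeholder)
    zpool list -o name,size,allocated,dedupratio tank

    # DDT histogram and estimated in-core size; if this is much larger than
    # available RAM, destroying a deduped dataset can take a very long time
    zdb -DD tank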
2017 Jun 13
0
How to remove dead peer, sorry urgent again :(
...We can also do "gluster peer detach <hostname> force", right? Just to be sure I set up a test 3-node VM gluster cluster :) then shut down one of the nodes and tried to remove it. root@gh1:~# gluster peer status Number of Peers: 2 Hostname: gh2.brian.softlog Uuid: b59c32a5-eb10-4630-b147-890a98d0e51d State: Peer in Cluster (Connected) Hostname: gh3.brian.softlog Uuid: 825afc5c-ead6-4c83-97a0-fbc9d8e19e62 State: Peer in Cluster (Disconnected) root@gh1:~# gluster peer detach gh3 force peer detach: failed: gh3 is not part of cluster -- Lindsay...
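For reference, the usual sequence for dropping a peer that has already died is sketched below; it assumes the dead host's bricks are removed from every volume first. Host "gh3", volume "gv0", and the brick path are placeholders, not the poster's actual layout:

    # Drop the dead host's brick from each volume it served (reduces the replica count)
    gluster volume remove-brick gv0 replica 2 gh3:/bricks/gv0 force

    # Detach the peer; "force" is needed because gh3 can no longer respond
    gluster peer detach gh3 force

    # Confirm it is gone
    gluster peer status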
2017 Jun 13
1
How to remove dead peer, sorry urgent again :(
...> force", right? > > > > Just to be sure I set up a test 3-node VM gluster cluster :) then shut down > one of the nodes and tried to remove it. > > > root@gh1:~# gluster peer status > Number of Peers: 2 > > Hostname: gh2.brian.softlog > Uuid: b59c32a5-eb10-4630-b147-890a98d0e51d > > State: Peer in Cluster (Connected) > > Hostname: gh3.brian.softlog > Uuid: 825afc5c-ead6-4c83-97a0-fbc9d8e19e62 > State: Peer in Cluster (Disconnected) > > > root@gh1:~# gluster peer detach gh3 force > peer detach: failed: gh3 is not part of cluster...
2017 Jun 12
3
How to remove dead peer, sorry urgent again :(
On Sun, Jun 11, 2017 at 2:12 PM, Atin Mukherjee <amukherj at redhat.com> wrote: > > On Sun, 11 Jun 2017 at 06:25, Lindsay Mathieson < > lindsay.mathieson at gmail.com> wrote: > >> On 11/06/2017 10:46 AM, WK wrote: >> > I thought you had removed vna as defective and then ADDED in vnh as >> > the replacement? >> > >> > Why is vna
2010 Nov 29
9
Seagate ST32000542AS and ZFS perf
Hi, Does anyone use Seagate ST32000542AS disks with ZFS? I wonder whether the performance is as poor as with the WD Green WD20EARS disks. Thanks, -- Piotr Jasiukajtis | estibi | SCA OS0072 http://estseg.blogspot.com
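A rough way to compare drives is to stream a few gigabytes of writes and watch per-vdev bandwidth; a minimal sketch, assuming a scratch pool named "tank" with compression off (pool name and file path are placeholders):

    # Write ~4GB of zeroes into the pool
    dd if=/dev/zero of=/tank/ddtest bs=1024k count=4096

    # In another terminal, watch per-vdev throughput every 5 seconds
    zpool iostat -v tank 5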
2010 Sep 13
3
HYPERVISOR_update_va_mapping failed
Installed build 134 on some IBM x3550 servers and keep getting the message: HYPERVISOR_update_va_mapping failed. Press any key to reboot. Is there anything I can do to get around it? Build 134 boots just fine without Xen. -- This message posted from opensolaris.org
2010 Nov 23
14
ashift and vdevs
zdb -C shows an ashift value on each vdev in my pool; I was just wondering if it is vdev-specific or pool-wide. Google didn't seem to know. I'm considering a mixed pool with some "advanced format" (4KB-sector) drives and some normal 512B-sector drives, and was wondering if the ashift can be set per vdev, or only per pool. Theoretically, this would save me some size on
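ashift is recorded per top-level vdev rather than per pool, so mixed values can coexist in one pool; ashift=9 corresponds to 512-byte sectors and ashift=12 to 4KB sectors. A minimal way to check, with "tank" as a placeholder pool name:

    # Each top-level vdev in the cached config reports its own ashift
    zdb -C tank | grep ashift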
2010 Nov 09
5
X4540 RIP
Oracle has discontinued the best ZFS platform I know of, the X4540. Does anyone know of an equivalent system? None of the current Oracle/Sun offerings come close. -- Ian.
2011 May 03
4
multiple disk failures cause zpool hang
Hi, There seem to be a few threads about zpool hangs; is there a workaround to resolve the hang without rebooting? In my case, I have a pool with disks from external LUNs attached via a fibre cable. When the cable is unplugged while there is I/O on the pool, all zpool-related commands hang (zpool status, zpool list, etc.), and plugging the cable back in does not solve the problem. Eventually, I
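One commonly suggested mitigation is the pool's failmode property, which controls how ZFS reacts to catastrophic device loss; it must be set before the outage and will not unstick a pool that is already hung. A hedged sketch, with "tank" as a placeholder pool name:

    # Default is "wait": I/O (and often zpool commands) block until the device returns.
    # "continue" returns EIO for new writes instead of blocking indefinitely.
    zpool set failmode=continue tank

    # Verify the current setting
    zpool get failmode tank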
2007 Aug 29
11
tc not matching
Dear all, I'm having real problems getting tc to do anything useful at all. I'm also under pressure to get this fixed before the students start arriving later this month (I work at a university). In short, I want each IP address to be hard-limited to 128kbit down and 64kbit up, never to be allowed more bandwidth than this. It is also important that the latency remains
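For a single address, the usual HTB pattern looks roughly like the sketch below; it is not the poster's configuration. "eth0" and 10.0.0.5 are placeholders, the download limit goes on the interface facing the clients, and the 64kbit upload limit is the same pattern on the WAN-facing interface matching "ip src" instead:

    # Root HTB qdisc on the client-facing interface; unclassified traffic stays unshaped
    tc qdisc add dev eth0 root handle 1: htb

    # Hard 128kbit ceiling for traffic going to 10.0.0.5 (their "download")
    tc class add dev eth0 parent 1: classid 1:10 htb rate 128kbit ceil 128kbit

    # Classify by destination IP
    tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
        match ip dst 10.0.0.5/32 flowid 1:10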