Similar to: "zpool attach problem"

Displaying 15 results from an estimated 15 matches similar to: "zpool attach problem"

2007 Jan 09
2
ZFS Hot Spare Behavior
I physically removed a disk (c3t8d0, used by ZFS pool 'pool01') from a 3310 JBOD connected to a V210 running s10u3 (11/06), and 'zpool status' reported this:

    # zpool status
      pool: pool01
     state: DEGRADED
    status: One or more devices could not be opened. Sufficient replicas exist
            for the pool to continue functioning in a degraded state.
    action: Attach the
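A hedged sketch of the usual recovery path for a pool in this state; c3t9d0 stands in for whatever replacement disk is actually available:

    # If the original disk reappears, bring it back online:
    zpool online pool01 c3t8d0
    # Otherwise swap in a replacement disk:
    zpool replace pool01 c3t8d0 c3t9d0
    # A hot spare makes the next failure heal automatically:
    zpool add pool01 spare c3t9d0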
2006 Jan 27
2
Do I have a problem? (longish)
Hi, to shorten the story, I describe the situation. I have 4 disks in a ZFS/SVM config:

    c2t9d0   9G
    c2t10d0  9G
    c2t11d0  18G
    c2t12d0  18G

c2t11d0 is divided in two:

    selecting c2t11d0
    [disk formatted]
    /dev/dsk/c2t11d0s0 is in use by zpool storedge. Please see zpool(1M).
    /dev/dsk/c2t11d0s1 is part of SVM volume stripe:d11. Please see metaclear(1M).
    /dev/dsk/c2t11d0s2 is in use by zpool storedge. Please
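A sketch of how one might confirm which consumer owns each slice before touching anything (pool, metadevice, and disk names taken from the post; s2 conventionally covers the whole disk, which is why s0 and s2 both being claimed by the pool is worth checking):

    zpool status storedge          # should list c2t11d0s0 and c2t11d0s2
    metastat d11                   # should list c2t11d0s1 in the stripe
    prtvtoc /dev/rdsk/c2t11d0s2    # slice layout: do s0/s1 overlap s2?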
2009 Jul 13
2
questions regarding RFE 6334757 and CR 6322205 disk write cache. thanks (case 11356581)
Hello experts, I would like to consult you on some questions regarding RFE 6334757 and CR 6322205 (disk write cache).

==========================================
RFE 6334757: disk write cache should be enabled and should have a tool to switch it on and off
CR 6322205: Enable disk write cache if ZFS owns the disk
==========================================

The customer found on a SPARC Enterprise T5140,
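The manual procedure these reports want automated is the expert mode of format(1M); a sketch of the menu sequence as it appears on Solaris 10, with the disk-selection step elided:

    # format -e            <- expert mode exposes the cache menus
    # (select the disk from the list, then:)
    format> cache
    cache> write_cache
    write_cache> display   # show current state of the volatile write cache
    write_cache> enable    # or: disable
    write_cache> quit

Per the CR title, ZFS enables the write cache itself when it is given the whole disk rather than a slice.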
2007 Sep 06
0
Zfs with storedge 6130
On 9/4/07 4:34 PM, "Richard Elling" <Richard.Elling at Sun.COM> wrote:

> Hi Andy,
> my comments below...
> note that I didn't see zfs-discuss at opensolaris.org in the CC for the
> original...
>
> Andy Lubel wrote:
>> Hi All,
>>
>> I have been asked to implement a zfs based solution using storedge 6130 and
>> I'm chasing my own
2006 Oct 25
0
Xen and Linux-RDAC driver for Engenio storage controller
I am trying to use Xen with a Sun StorEdge 6130 disk array, but there are some IBM TotalStorage arrays that use the same RDAC driver, so hopefully someone has run across this. The driver will not compile with Xen and newer 2.6 sources/gcc4. At first there are some invalid lvalue assignments to clean up, and a few places where static declarations need to be removed, but I finally get to a point
2005 Jun 22
0
Samba and Sun StorEdge NAS
Hi, I managed to get my Sun NAS filer to join my Samba domain (2.2.10). However, I cannot mount any of the shares from the NAS to a PC in the same domain, getting an error:

    06/21/05 14:15:53 NetrSamLogon[BIOSS\janet]: SAMBA ACCESS_DENIED (Samba PDC)

When I asked Sun about it they responded: "Further research has shown that a connection to SAMBA domain controller as is not supported by the
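A hedged sketch of the usual checks on a Samba 2.2 PDC when a domain member gets ACCESS_DENIED; the filer name 'nasfiler' is a placeholder:

    # The NAS needs a machine trust account on the PDC:
    grep -i 'nasfiler\$' /etc/passwd
    # Recreate it if missing (-m marks it as a machine account):
    smbpasswd -a -m nasfiler
    # Then test a share connection with a domain user:
    smbclient //nasfiler/share -U janet -W BIOSS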
2010 Aug 19
0
Unable to mount legacy pool in to zone
So I've inherited this Solaris 10 system with a Sun StorEdge attached.

      pool: tol-pool
     state: ONLINE
     scrub: none requested
    config:

            NAME        STATE     READ WRITE CKSUM
            tol-pool    ONLINE       0     0     0
              raidz1    ONLINE       0     0     0
                c1t8d0  ONLINE       0     0     0
                c1t9d0  ONLINE       0     0     0
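A sketch of the two usual ways to hand a ZFS filesystem to a Solaris 10 zone; the zone name and the dataset under tol-pool are placeholders:

    zonecfg -z myzone
    zonecfg:myzone> add dataset
    zonecfg:myzone:dataset> set name=tol-pool/data
    zonecfg:myzone:dataset> end
    zonecfg:myzone> commit

    # or, for a legacy-mountpoint filesystem, loan it in as an fs resource:
    zonecfg:myzone> add fs
    zonecfg:myzone:fs> set dir=/data
    zonecfg:myzone:fs> set special=tol-pool/data
    zonecfg:myzone:fs> set type=zfs
    zonecfg:myzone:fs> end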
2006 Apr 03
1
No UID associated with this user name
Hi, sorry if this is the wrong place to post, but I'm not sure where to go and I'm a bit desperate. We just brought our Sunfire 6800 server and StorEdge 9960 RAID array (Solaris 8) back up after some maintenance, and everything came back up 'clean', but I cannot get our SAMBA software to recognise any users or directories when logging in remotely from an Apple or Windows box.
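Before blaming Samba, it is worth confirming the OS itself can resolve the accounts; a sketch with a hypothetical username:

    getent passwd janet     # does the name service return a UID at all?
    id janet
    # If these fail, the 'No UID' error points at the passwd source
    # (files/NIS/LDAP) not coming back after the maintenance; check
    # /etc/nsswitch.conf rather than smbd.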
2006 Aug 21
12
SCSI synchronize cache cmd
Hi, I work on a support team for the Sun StorEdge 6920 and have a question about the use of the SCSI sync cache command in Solaris and ZFS. We have a bug in our 6920 software that exposes us to a memory leak when we receive the SCSI sync cache command:

    6456312 - SCSI Synchronize Cache Command is flawed

It will take some time for this bug fix to roll out to the field, so we need to understand
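One workaround discussed for arrays whose cache is nonvolatile (battery-backed) is to stop ZFS from issuing SYNCHRONIZE CACHE at all; a hedged sketch, assuming a Solaris release new enough to have the zfs_nocacheflush tunable — it is unsafe if the array cache can lose data on power failure:

    # /etc/system — takes effect after a reboot
    set zfs:zfs_nocacheflush = 1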
2020 Aug 30
1
Log spam "Failed to bind to uuid ..."
Hi, I recently replaced an ageing server running Samba 4.5.10 with one running Samba 4.11.8. They both run FreeBSD / ZFS. The site is set up as an AD (I wrote https://wiki.freebsd.org/Samba4ZFS while doing it). My original plan was to join the new server to the domain, shut down the old one, and rename it, but renamedc complained the old name was still present, so I just kept the name (gateway2).
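Some hedged diagnostics for bind/replication noise after swapping a DC; OLDDC is a placeholder for the dead server's name, and the demote step only applies if its objects really were left behind:

    samba-tool drs showrepl                  # replication partners and bind errors
    samba-tool dbcheck --cross-ncs           # look for stale objects
    samba-tool domain demote --remove-other-dead-server=OLDDC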
2008 Oct 14
1
FreeBSD 7-STABLE, isp(4), QLE2462: panic & deadlocks
Hello everybody, we recently got three Dell PowerEdge servers equipped with Qlogic QLE2462 cards (dual FC 4Gbps ports) and an EMC CLARiiON CX3-40 SAN. The servers with the FC cards were successfully and extensively tested with an older SUN StorEdge T3 SAN. However, when we connect them to the CX3-40, create and mount a new partition and then do something as simple as "tar -C /san -xf
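A sketch of how one might capture the panic for analysis on FreeBSD 7, assuming a swap partition at least as large as RAM:

    # /etc/rc.conf — savecore(8) writes the dump to /var/crash on reboot
    dumpdev="AUTO"

    # then inspect the core with kgdb:
    kgdb /boot/kernel/kernel /var/crash/vmcore.0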
2012 Mar 25
2
avoiding for loops
I have data that looks like this:

    > df1
      group id
    1   red  A
    2   red  B
    3   red  C
    4  blue  D
    5  blue  E
    6  blue  F

I want a list of the groups containing vectors with the ids. I am avoiding subset(), as it is only recommended for interactive use. Here's what I have so far:

    df1 <- data.frame(group=c("red", "red", "red", "blue",
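split() is one common base-R answer here; a minimal sketch run through Rscript from the shell, with the data reconstructed from the df1 printout above:

    Rscript -e 'df1 <- data.frame(group = c("red","red","red","blue","blue","blue"),
                                  id = LETTERS[1:6], stringsAsFactors = FALSE)
                print(split(df1$id, df1$group))'
    $blue
    [1] "D" "E" "F"

    $red
    [1] "A" "B" "C"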
2017 Oct 26
2
not healing one file
Hi Karthik, thanks for taking a look at this. I haven't been working with gluster long enough to make heads or tails out of the logs. The logs are attached to this mail and here is the other information:

    # gluster volume info home

    Volume Name: home
    Type: Replicate
    Volume ID: fe6218ae-f46b-42b3-a467-5fc6a36ad48a
    Status: Started
    Snapshot Count: 1
    Number of Bricks: 1 x 3 = 3
    Transport-type: tcp
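Typical first checks for a single file that will not heal (volume name from the post; the brick path is a placeholder):

    gluster volume heal home info                        # entries pending heal
    gluster volume heal home info split-brain            # check for split-brain
    getfattr -d -m . -e hex /bricks/home/path/to/file    # AFR xattrs, per brick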
2017 Oct 26
0
not healing one file
Hey Richard, could you share the following information please?

1. gluster volume info <volname>
2. getfattr output of that file from all the bricks:
   getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs

Regards,
Karthik

On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it diagnoses any issues in the setup. Currently you may have to run it on all three machines.

On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next