search for: jmcp

Displaying 20 results from an estimated 47 matches for "jmcp".

2011 Dec 21
8
Any rhyme or reason to disk dev names?
Hello, I am curious to know if there is an easy way to guess or identify the device names of disks. Previously the /dev/dsk/c0t0d0s0 system made sense to me... I had a SATA controller card with 8 ports, and they showed up with the numbers 1-8 in the "t" position of the device name. But I just built a new system with two LSI SAS HBAs in it, and my device names are along the lines of:
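For anyone trying to map names like these back to hardware, a few stock Solaris commands can help (a sketch; c0t0d0s0 below is a placeholder device, not one of the poster's disks):

format </dev/null          # list every disk the system sees, with its cXtYdZ name
ls -l /dev/dsk/c0t0d0s0    # follow the symlink to the physical /devices path
cfgadm -al                 # list controller/target attachment points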
2010 Nov 18
9
WarpDrive SLP-300
http://www.lsi.com/channel/about_channel/whatsnew/warpdrive_slp300/index.html Good stuff for ZFS. Fred
2008 Jun 21
12
Bfu xVM to build 92 problems
Bfu 91 to 92 looks good. Proceeded with the xVM upgrade. After running: # sunos.hg/bin/build-all nondebug bash-3.2# cd packages-nondebug bash-3.2# pwd /usr/tmp/packages-nondebug bash-3.2# ls -l total 20 drwxr-xr-x 4 root root 512 Jun 21 08:57 SUNWlibvirt drwxr-xr-x 4 root root 512 Jun 21 08:57 SUNWlibvirtr drwxr-xr-x 4 root root 512 Jun 21 08:57 SUNWurlgrabber
2010 May 05
3
Another MPT issue - kernel crash
Hi all, I have faced yet another kernel panic that seems to be related to the mpt driver. This time I was trying to add a new disk to a running system (snv_134) and this new disk was not being detected... following a tip I ran the lsitool to reset the bus and this led to a system panic. MPT driver : BAD TRAP: type=e (#pf Page fault) rp=ffffff001fc98020 addr=4 occurred in module "mpt" due
2010 May 28
21
expand zfs for OpenSolaris running inside vm
Hello all, I have constrained disk space (only 8GB) while running the OS inside a VM. Now I want to add more. It is easy to add for the VM, but how can I update the fs in the OS? I cannot use autoexpand because it isn't implemented in my system: $ uname -a SunOS sopen 5.11 snv_111b i86pc i386 i86pc If it was 171 it would be great, right? Doing the following: o added new virtual HDD (it becomes
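For reference, on a build where the autoexpand property does exist (it arrived after snv_111b), the usual sequence would look roughly like this; the pool and device names are placeholders:

zpool set autoexpand=on mypool    # let the pool grow when an underlying device grows
zpool online -e mypool c7t1d0     # or expand a single device explicitly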
2008 Jul 09
8
Using zfs boot with MPxIO on T2000
Here is what I have configured: T2000 with OBP 4.28.6 2008/05/23 12:07 with 2 x 72 GB disks as the root disks OpenSolaris Nevada Build 91 Solaris Express Community Edition snv_91 SPARC Copyright 2008 Sun Microsystems, Inc. All Rights Reserved. Use is subject to license terms. Assembled 03 June 2008
2008 Jul 08
3
sub-function
I'm a newbie at dtrace, but couldn't find an answer to this by searching: For example, I have a function "lsearch" which is called in many different places. It's also called by "composemessage". I have been able to successfully create a script which shows the return values of "lsearch". Now what I would like to do is for it to only show the
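A common way to restrict output to calls made from one particular caller is a thread-local flag, set on the caller's entry and cleared on its return. A sketch using the pid provider (the process id is a placeholder; the function names come from the post, and whether they are traceable this way is an assumption):

dtrace -p <pid> -n '
  pid$target::composemessage:entry  { self->in = 1; }
  pid$target::composemessage:return { self->in = 0; }
  pid$target::lsearch:return /self->in/ { printf("lsearch returned %d\n", (int)arg1); }'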
2008 May 29
1
>1TB ZFS thin provisioned partition prevents OpenSolaris from booting.
Not sure where to put this, but I am cc'ing the ZFS discussion board. I was successful in creating iSCSI shares using zfs set shareiscsi=on with 2 thin-provisioned partitions of 1TB each (zfs create -s -V 1tb idrive/d1). Access to the shares with an iSCSI initiator was successful, all was smooth, until the reboot. Upon reboot, the console reports the following errors. WARNING:
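For context, the setup described boils down to roughly these two commands (dataset name taken from the post; shareiscsi applied to the volume itself):

zfs create -s -V 1tb idrive/d1    # sparse (thin-provisioned) 1 TB volume
zfs set shareiscsi=on idrive/d1   # export the volume as an iSCSI target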
2009 Jul 31
4
Zfs deduplication
Will the material ever be posted? It looks like there are some huge bugs with zfs deduplication, which is why the organizers do not want to post it; also there is no indication on the Sun website of whether there will be a deduplication feature. I think it's best they concentrate on improving zfs performance and speed with compression enabled.
2008 Feb 08
4
List of supported multipath drivers
Where can I find a list of supported multipath drivers for ZFS? Keith McAndrew Senior Systems Engineer Northern California SUN Microsystems - Data Management Group Keith.McAndrew at SUN.com
2008 Feb 21
3
raidz2 resilience on 3 disks
Hello, 1) If I create a raidz2 pool on some disks, start to use it, then the disks' controllers change, what will happen to my zpool? Will it be lost, or is there some disk tagging which allows zfs to recognise the disks? 2) If I create a raidz2 on 3 HDs, do I have any resilience? If any one of those drives fails, do I lose everything? I've got one such pool and
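For scale, a minimal three-disk raidz2 would be created like this (hypothetical device names). Each raidz2 vdev carries two disks' worth of parity, so any two of the three can fail without data loss, at the cost of roughly two thirds of the raw capacity:

zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0   # 3-disk raidz2: survives any two failures
zpool status tank                               # shows the vdev layout and disk health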
2009 May 18
11
Zfs and b114 version
http://dlc.sun.com/osol/on/downloads/b114/ This URL makes me think that if I just sit down and figure out how to compile OpenSolaris, I can try b114 now^h^h^h eventually? I am really eager to try out the new quota support... has someone already tried compiling it perhaps? How complicated is compiling osol compared to, say, NetBSD/FreeBSD, Linux etc.? (IRIX and its quickstarting??)
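The new quota support referred to is the per-user and per-group properties; on a build that has them, usage would look something like this (user name, size, and dataset are placeholders):

zfs set userquota@alice=10G tank/home   # cap one user's space in a filesystem
zfs get userused@alice tank/home        # check how much that user has consumed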
2010 Jan 11
25
Is LSI SAS3081E-R suitable for a ZFS NAS ?
According to various posts the LSI SAS3081E-R seems to work well with OpenSolaris. But I've been left pretty wary by my recent problems with Areca-1680s. Could anyone please confirm that the LSI SAS3081E-R works well? Is hotplug supported? Anything else I should know before buying one of these cards? Thanks, Arnaud
2009 Feb 18
11
Confused about prerequisites for ZFS to work
I'm hoping to get some general clues about what all is required to get an experiment going with zfs. I've managed to install osol-11 in VMware on a Windows XP host from a recent *.iso. I'm following along with Simon's blog showing how to set up ZFS. I'm a newbie with both ZFS and Solaris but the instructions seem pretty clear. However I'm
2009 Nov 03
2
SunOS neptune 5.11 snv_127 sun4u sparc SUNW,Sun-Fire-880
I just went through a BFU update to snv_127 on a V880: neptune console login: root Password: Nov 3 08:19:12 neptune login: ROOT LOGIN /dev/console Last login: Mon Nov 2 16:40:36 on console Sun Microsystems Inc. SunOS 5.11 snv_127 Nov. 02, 2009 SunOS Internal Development: root 2009-Nov-02 [onnv_127-tonic] bfu'ed from /build/archives-nightly-osol/sparc on 2009-11-03 I have [
2008 Jun 17
3
LSI SAS SATA card and MB compatibility questions?
Hello, I am new to OpenSolaris and am trying to set up a ZFS based storage solution. I am looking at setting up a system with the following specs: Intel BOXDG33FBC Intel Core 2 Duo 2.66GHz 2 or 4 GB RAM For the drives I am looking at using an LSI SAS3081E-R I've been reading around and it sounds like LSI solutions work well in terms of compatibility with Solaris. Could someone help
2008 May 15
2
[storage-discuss] ZFS and fibre channel issues
The ZFS crew might be better placed to answer this question. (CC'd here) --jc William Yang wrote: > I am having issues creating a zpool using entire disks with a fibre channel array. The array is a Dell PowerVault 660F. When I run "zpool create bottlecap c6t21800080E512C872d14 c6t21800080E512C872d15", I get the following error: > invalid vdev
2008 Aug 25
5
Debugging Xen domain with mdb
Hi, I was just wondering if it is possible to debug a live Xen domain with mdb, i.e. use mdb on a running domain? I know that it is possible to debug a domain with mdb using a crash dump. I couldn't find anything about using mdb on a live domain though, so I thought I'd ask here. -Padraig
2008 Jul 10
49
Supermicro AOC-USAS-L8i
On Wed, Jul 9, 2008 at 1:12 PM, Tim <tim at tcsac.net> wrote: > Perfect. Which means good ol' supermicro would come through :) WOHOO! > > AOC-USAS-L8i > > http://www.supermicro.com/products/accessories/addon/AOC-USAS-L8i.cfm Is this card new? I'm not finding it at the usual places like Newegg, etc. It looks like the LSI SAS3081E-R, but probably at 1/2 the
2008 Jul 17
4
RFE: -t flag for 'zfs destroy'
I would like to request an additional flag for the command line zfs tools. Specifically, I'd like to have a -t flag for "zfs destroy", as shown below. Suppose I have a pool "home" with child filesystem "will", and a snapshot "home/will@yesterday". Then I run the following commands: # zfs destroy -t volume home/will@yesterday zfs: not
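Until such a flag exists, one way to approximate the requested safety check from a shell is to verify the dataset type before destroying; a sketch reusing the names from the post (a workaround, not part of the proposal itself):

[ "$(zfs get -H -o value type home/will@yesterday)" = "snapshot" ] && \
    zfs destroy home/will@yesterday   # only runs if the dataset really is a snapshot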