Displaying 20 results from an estimated 400 matches similar to: "Re: [caiman-discuss] Preliminary Text Install Images for b134"

2009 Jun 08
4
[caiman-discuss] Can not delete swap on AI sparc
Hi Richard, Richard Robinson wrote:
> I should add that I also used truss and saw the same ENOMEM error. I am on a 4Gb system with swap -l reporting
>
> swapfile                  dev    swaplo blocks  free
> /dev/zvol/dsk/rpool/swap  181,1  8      4194296 4194296
>
> and I was trying to follow the directions for increasing swap here:
>
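
For reference, a minimal sketch of the usual zvol swap resize procedure, assuming the default rpool/swap layout (the target size is illustrative):

    # Remove the existing swap device first; a zvol in use cannot be resized safely
    swap -d /dev/zvol/dsk/rpool/swap
    # Grow the backing zvol, then re-add it as swap
    zfs set volsize=8g rpool/swap
    swap -a /dev/zvol/dsk/rpool/swap
    # Confirm the new size
    swap -l
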
2010 Oct 23
2
No ACL inheritance with aclmode=passthrough in onnv-134
Hi list, while preparing for the changed ACL/mode_t mapping semantics coming with onnv-147 [1], I discovered that in onnv-134 on my system ACLs are not inherited when aclmode is set to passthrough for the filesystem. This very much puzzles me. Example:
$ uname -a
SunOS os 5.11 snv_134 i86pc i386 i86pc
$ pwd
/Volumes/ACLs/dir1
$ zfs list | grep /Volumes
rpool/Volumes 7,00G 39,7G 6,84G
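
Worth noting: inheritance is governed by aclinherit, not aclmode. A sketch of how inheritance is normally exercised and checked, using the poster's dataset name and an illustrative user/ACE:

    # aclmode only shapes chmod(2) behavior; aclinherit controls inheritance
    zfs get aclmode,aclinherit rpool/Volumes
    zfs set aclinherit=passthrough rpool/Volumes
    # Grant an inheritable ACE on the parent, then inspect a new child with ls -V
    /bin/chmod A+user:webservd:read_data:file_inherit/dir_inherit:allow /Volumes/ACLs/dir1
    touch /Volumes/ACLs/dir1/newfile
    /bin/ls -V /Volumes/ACLs/dir1/newfile
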
2010 Mar 27
14
b134 - Mirrored rpool won't boot unless both mirrors are present
I have two 500 GB drives on my system that are attached to built-in SATA ports on my Asus M4A785-M motherboard, running in AHCI mode. If I shut down the system, remove either drive, and then try to boot the system, it will fail to boot. If I disable the splash screen, I find that it will display the SunOS banner and the hostname, but it never gets as far as the "Reading ZFS config:"
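
One commonly suggested check for this symptom, sketched with illustrative device names, is to make sure both halves of the mirror carry boot blocks so either disk can boot alone (installgrub on x86):

    # Verify both disks are present in the mirror
    zpool status rpool
    # Install GRUB stage1/stage2 on each half of the mirror
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
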
2010 May 04
8
iscsitgtd failed request to share on zpool import after upgrade from b104 to b134
Hi, I am posting my question to both storage-discuss and zfs-discuss as I am not quite sure what is causing the messages I am receiving. I have recently migrated my zfs volume from b104 to b134 and upgraded it from zfs version 14 to 22. It consists of two zvols, 'vol01/zvol01' and 'vol01/zvol02'. During zpool import I am getting a non-zero exit code,
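
A likely suspect after a b104-to-b134 move is the legacy shareiscsi property still set on the zvols while the old iscsitgtd target is superseded by COMSTAR; a sketch using the poster's volume names:

    # Check whether the legacy iSCSI target property is still set
    zfs get shareiscsi vol01/zvol01 vol01/zvol02
    # Clear it and export the volumes through COMSTAR instead
    zfs set shareiscsi=off vol01/zvol01
    zfs set shareiscsi=off vol01/zvol02
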
2010 Apr 21
2
HELP! zpool corrupted data
Hello, Due to a power outage our file server running FreeBSD 8.0p2 will no longer come up due to zpool corruption. I get the following output when trying to import the ZFS pool using either a FreeBSD 8.0p2 cd or the latest OpenSolaris snv_143 cd:
FreeBSD mfsbsd 8.0-RELEASE-p2.vx.sk:/usr/obj/usr/src/sys/GENERIC amd64
mfsbsd# zpool import
  pool: tank
    id: 1998957762692994918
 state: FAULTED
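
For power-loss corruption, a rewind recovery import is usually worth trying from the snv_143 CD (the -F option exists from roughly snv_128 onward); a hedged sketch:

    # Dry run: report what a rewind recovery would discard, without importing
    zpool import -nfF tank
    # If the report looks acceptable, perform the actual rewind import
    zpool import -fF tank
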
2010 Aug 18
1
Kernel panic on import / interrupted zfs destroy
I have a box running snv_134 that had a little boo-boo. The problem first started a couple of weeks ago with some corruption on two filesystems in an 11-disk 10 TB raidz2 set. I ran a couple of scrubs that revealed a handful of corrupt files on my 2 de-duplicated zfs filesystems. No biggie. I thought that my problems had something to do with de-duplication in 134, so I went about the process of
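
When the panic fires while import replays an interrupted destroy, the workarounds commonly discussed at the time were a rewind or read-only import from a newer live image; a sketch, pool name illustrative:

    # Rewind to an earlier txg instead of replaying the interrupted destroy (dry run first)
    zpool import -nfF tank
    # On a live image new enough to support it, a read-only import avoids replay entirely
    zpool import -o readonly=on -f tank
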
2018 Mar 07
1
Build LLVM on RedHat 7
Hi All, I wonder if there is a procedure to build "official" RPMs for LLVM & Clang? I provide tools to dev teams and they would like to have recent versions of Clang on Red Hat 7 boxes. The one they currently use is the EPEL one: v3.4.2. Many thanks in advance for your help, Jean
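
There is no single official RPM recipe published by llvm.org for RHEL 7; a minimal from-source sketch for that era's release tarballs (versions and paths illustrative), which could then be packaged with rpmbuild:

    # Unpack llvm and clang (cfe) sources; clang builds in-tree under tools/
    tar xf llvm-5.0.1.src.tar.xz
    tar xf cfe-5.0.1.src.tar.xz -C llvm-5.0.1.src/tools
    mv llvm-5.0.1.src/tools/cfe-5.0.1.src llvm-5.0.1.src/tools/clang
    mkdir build && cd build
    cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Release ../llvm-5.0.1.src
    make -j"$(nproc)"
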
2011 Jan 25
0
VIF drop with snv_134
Hello, I have a Sun X4140 running snv_134 as a Dom0 with a CentOS 4.8 (for Oracle 9i) and an Opensolaris DomU running on it with 4 NICs configured. I'm experiencing the virtual NICs dropping on the host. The Dom0 NICs are up and fine, and I am able to xm console into the DomU. The only metric that I have which seems useful is that memory scan rate spiked from near-0 to 507 under a
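
A few checks commonly suggested for dropped VIFs, sketched with an illustrative domain name:

    # From dom0: confirm the guest is running and its virtual NICs are still attached
    xm list
    xm network-list centos-guest
    # Check the dom0 side of the data links
    dladm show-link
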
2017 Oct 26
0
Error in UseMethod("xmlAttrs", node) : no applicable method for 'xmlAttrs' applied to an object of class "NULL"
I'm running R 3.4.1 on Linux. I'm getting the following error message.
Error in UseMethod("xmlAttrs", node) :
  no applicable method for 'xmlAttrs' applied to an object of class "NULL"
Calls: rdKMeans -> rdLoadModel -> xmlAttrs
This appears to be a known issue in R on Linux that was supposed to be patched. Is this really fixed? Was it fixed in 3.4.2?
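
The message itself just means xmlAttrs() was handed a NULL node, typically because an XPath lookup found nothing; a quick reproduction and guard, runnable via Rscript assuming the XML package is installed:

    # Reproduce: xmlAttrs() on NULL raises exactly this UseMethod error
    Rscript -e 'library(XML); node <- NULL; xmlAttrs(node)'
    # Guard the lookup instead of assuming the node exists
    Rscript -e 'library(XML); node <- NULL; if (is.null(node)) message("node not found") else print(xmlAttrs(node))'
    # Confirm which XML package version is installed
    Rscript -e 'packageVersion("XML")'
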
2006 Jun 19
0
Hard Solaris 8 compile
We currently have a samba install on Solaris 8 providing a front-end to a Rational ClearCase system. Because of some recent changes, we are having Kerberos issues in validating files (too many open files). After some research, I found that the best way to resolve these is to re-compile samba as a 64-bit application to raise the open-file limit imposed on a 32-bit Solaris app. The problem I am
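
A hedged sketch of a 64-bit configure on Solaris 8 (flags depend on the compiler: -xarch=v9 for Sun Studio cc, -m64 for gcc; install prefix illustrative):

    # 64-bit build with Sun Studio cc; use CC=gcc CFLAGS="-m64" LDFLAGS="-m64" for gcc
    CC=cc CFLAGS="-xarch=v9" LDFLAGS="-xarch=v9" ./configure --prefix=/opt/samba64
    make && make install
    # Confirm the result is a 64-bit binary
    file /opt/samba64/sbin/smbd
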
2010 Nov 11
3
Booting fails with `Can not read the pool label' error
I'm still trying to find a fix/workaround for the problem described in Unable to mount root pool dataset http://opensolaris.org/jive/thread.jspa?messageID=492460 Since the Blade 1500's rpool is mirrored, I've decided to detach the second half of the mirror, relabel the disk, create an alternative rpool (rpool2) there, copy the current BE (snv_134) using beadm
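
For reference, the detach-and-copy sequence described would look roughly like this, with illustrative device names:

    # Detach the second half of the mirror and build an alternate root pool on it
    zpool detach rpool c0t1d0s0
    zpool create rpool2 c0t1d0s0
    # Copy the current boot environment into the new pool
    beadm create -p rpool2 snv_134-copy
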
2006 Feb 13
1
MinGW and the ld bug
Hi. I noticed that Brian Ripley found and corrected a bug in MinGW's ld.exe, see http://www.murdoch-sutherland.com/Rtools/. Thanks for this. I wonder if this is the same bug that caused my problems. I have a tiny toy package with C code that installs perfectly on R Version 2.2.1 beta (2005-12-18 r36792) [this version was mislabelled "beta" the first few hours on CRAN when the stable
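
To test whether the corrected ld fixes the problem, reinstalling the toy package from a shell is enough (package file name illustrative):

    # Rebuild and install the package; a bad ld typically fails at the DLL link step
    R CMD INSTALL toypkg_0.1.tar.gz
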
2010 May 05
3
Another MPT issue - kernel crash
Hi all, I have faced yet another kernel panic that seems to be related to the mpt driver. This time I was trying to add a new disk to a running system (snv_134) and this new disk was not being detected... following a tip I ran the lsitool to reset the bus and this led to a system panic.
MPT driver: BAD TRAP: type=e (#pf Page fault) rp=ffffff001fc98020 addr=4 occurred in module "mpt" due
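
Rather than a raw bus reset, the gentler rescan path is usually tried first; a sketch:

    # Rescan for newly attached devices without resetting the bus
    devfsadm -Cv
    cfgadm -al
    # Check which disks the system currently sees
    echo | format
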
2010 Jul 14
1
Share permission problem if user is member in more than 16 groups on AD
Hi! Running OpenSolaris snv_134 with Samba 3.0.37. Samba is successfully joined to the AD domain. AD user "user1" is a member of 17 AD groups including "group1", but he cannot access a Samba share which has read permissions for "group1". If the user account is modified and "group1" becomes the user's primary group, then he can access shares. If user is member of only
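
The 16-group ceiling is the Solaris kernel default (ngroups_max); a sketch of checking and raising it, assuming the build allows values up to 32:

    # Check the current per-process group limit
    getconf NGROUPS_MAX
    # Raise it via /etc/system (takes effect after a reboot)
    echo "set ngroups_max = 32" >> /etc/system
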
2011 Jan 11
1
Fatal crash during a user search
Well, it looks like it occurred during the search to me...
Jan 10 17:05:37 sysvolone dovecot: [ID 583609 mail.crit] imap(user at host.com): Panic: file istream-header-filter.c: line 520 (i_stream_create_header_filter): assertion failed: (ret < 0)
Jan 10 17:05:37 sysvolone dovecot: [ID 583609 mail.error] imap(user at host.com): Error: Raw backtrace:
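
The raw backtrace in the log is addresses only; a symbolic stack from a core dump is usually needed. A sketch using Solaris tooling, binary path illustrative:

    # Let the imap process dump core, then reproduce the panic
    coreadm -e proc-setid
    # Resolve the core against the imap binary with mdb ($C prints the stack)
    echo '::status; $C' | mdb /usr/local/libexec/dovecot/imap core
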
2010 Mar 29
19
sharing a ssd between rpool and l2arc
Hi, as Richard Elling wrote earlier: "For more background, low-cost SSDs intended for the boot market are perfect candidates. Take an X-25V @ 40GB and use 15-20 GB for root and the rest for an L2ARC. For small form factor machines or machines with max capacity of 8GB of RAM (a typical home system) this can make a pleasant improvement over a HDD-only implementation." For the upcoming
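
The split described would look roughly like this, assuming a data pool named tank and an SSD partitioned into slices (names illustrative):

    # Slice 0 (~20 GB) becomes the root pool at install time; the leftover slice
    # is added to the data pool as an L2ARC cache device
    zpool add tank cache c2t0d0s1
    # Verify the cache device was added
    zpool status tank
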
2011 Aug 10
9
zfs destroy snapshot takes hours
Hi, I am facing an issue with zfs destroy: it takes almost 3 hours to delete a snapshot of size 150G. Could you please help me resolve this issue: why does zfs destroy take this much time? Taking a snapshot completes within a few seconds. I have tried removing an older snapshot instead, but the problem is the same. =========================== I am using: Release: OpenSolaris
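
Long destroys on dedup-era builds usually mean the dedup table (DDT) is being walked; a couple of checks, pool name illustrative:

    # How much dedup is in play (a large DDT makes snapshot destroy very slow)
    zpool get dedupratio tank
    # Detailed dedup table statistics
    zdb -DD tank
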
2010 Jan 20
4
OSOL Bug 13743
Does anyone know if this is something that will be looked at before b134 is released? Bug 13743 - virsh and xm is unable to start domain first time after boot http://defect.opensolaris.org/bz/show_bug.cgi?id=13743 Regards Henrik http://sparcv9.blogspot.com
2010 Jul 16
1
Making a zvol unavailable to iSCSI trips up ZFS
I've been experimenting with a two-system setup in snv_134 where each system exports a zvol via COMSTAR iSCSI. One system imports both its own zvol and the one from the other system and puts them together in a ZFS mirror. I manually faulted the zvol on one system by physically removing some drives. What I expect to happen is that ZFS will fault the zvol pool and the iSCSI stack will
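
Checks for the COMSTAR side of such a setup, sketched with no specific names assumed:

    # State of the backing pool on the exporting host
    zpool status -x
    # COMSTAR's view of the logical units backed by the zvols
    stmfadm list-lu -v
    sbdadm list-lu
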
2010 Sep 09
3
Volsize for DomU
Hey all, I've created a Xen DomU on snv_134; it's Debian Lenny. For the disk, I've used a ZFS volume, which I accidentally set to 1GB. I've tried setting the volsize of the volume to 3GB and rebooting the domain, but this still only sees the initial 1GB disk. I've read about rebooting for volsize to take effect, but this seems to be in the context of either
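
The resize itself is one property change; the usual catch is that the guest must re-read the device size, which typically needs a full domain shutdown and recreate rather than a reboot inside the guest. A sketch with illustrative zvol and domain names:

    # Grow the volume backing the DomU disk and confirm
    zfs set volsize=3g rpool/xen/lenny-disk0
    zfs get volsize rpool/xen/lenny-disk0
    # Destroy and recreate the domain so Xen re-reads the backing device size
    xm shutdown lenny && xm create lenny.cfg
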