similar to: VIF drop with snv_134

Displaying 20 results from an estimated 1600 matches similar to: "VIF drop with snv_134"

2010 Aug 18
1
Kernel panic on import / interrupted zfs destroy
I have a box running snv_134 that had a little boo-boo. The problem first started a couple of weeks ago with some corruption on two filesystems in an 11-disk 10TB raidz2 set. I ran a couple of scrubs that revealed a handful of corrupt files on my two de-duplicated ZFS filesystems. No biggie. I thought that my problems had something to do with de-duplication in 134, so I went about the process of
2011 May 19
8
Mapping sas address to physical disk in enclosure
Hi, we have a SunFire X4140 connected to a Dell MD1220 SAS enclosure, single path, MPxIO disabled, via an LSI SAS9200-8e HBA. Disks are visible with SAS addresses such as this in "zpool status" output: NAME STATE READ WRITE CKSUM cuve ONLINE 0 0 0 mirror-0 ONLINE 0 0 0
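One common way to make this mapping on an LSI SAS-2 HBA is the `sas2ircu` utility; a minimal sketch, assuming sas2ircu is installed and the HBA is controller 0:

```shell
# Enumerate LSI controllers; the first column is the controller index.
sas2ircu LIST

# Dump per-drive details for controller 0: each drive's enclosure number,
# slot number, and SAS address appear in the DISPLAY output.
sas2ircu 0 DISPLAY
```

Cross-referencing the "SAS Address" field from the DISPLAY output against the device names in `zpool status` or `format` then gives the physical enclosure slot for each pool member.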
2010 Mar 13
0
Re: [caiman-discuss] Preliminary Text Install Images for b134
Hi, It doesn't work as a PVM domain within xVM: # uname -srv SunOS 5.11 snv_133 # virt-install --name osvm01 -p -r 1024 -f /export/xvm/osvm01/disk1 -l nfs://localhost/export/install --nographics Starting install... Retrieving file unix... 100% |=========================| 2.1 MB 00:00 Retrieving file boot_arch 100% |=========================| 44 MB 00:00 Creating
2014 Nov 20
0
Cannot find suitable CPU model for given data
Hello all, I have a new Sun fire X4140 server with two amd opteron 2435 CPUs running debian jessie, libvirt 1.2.9-3, virtinst 1.0.1-3, qemu/kvm 2.1. When attempting to create virtual machines (with --debug), I receive this: [Thu, 20 Nov 2014 13:56:36 virt-install 1842] DEBUG (cli:234) File "/usr/share/virt-manager/virt-install", line 876, in <module> sys.exit(main()) File
2010 Nov 11
3
Booting fails with `Can not read the pool label' error
I'm still trying to find a fix/workaround for the problem described in Unable to mount root pool dataset http://opensolaris.org/jive/thread.jspa?messageID=492460 Since the Blade 1500's rpool is mirrored, I've decided to detach the second half of the mirror, relabel the disk, create an alternative rpool (rpool2) there, copy the current BE (snv_134) using beadm
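The procedure described in that excerpt can be sketched as follows; the disk names and BE name are assumptions (adjust to the actual mirror halves and boot environment on the Blade 1500):

```shell
# Assumed layout: rpool mirrored on c0t0d0s0 and c0t1d0s0, current BE "snv_134".
zpool detach rpool c0t1d0s0            # split off the second half of the mirror
format c0t1d0                          # relabel the freed disk (SMI label for booting)
zpool create rpool2 c0t1d0s0           # build the alternative root pool
beadm create -p rpool2 snv_134-copy    # copy the current BE into rpool2

# Make the new pool bootable (SPARC bootblk path):
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0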
2005 Jul 08
4
MTU/802.1Q limits on vif?
Hello. I'm new to the list and I hope this is not a FAQ. I searched the archives and didn't find an answer... Quick question: is there a limitation in the vif code cutting packets at 1500 bytes? I'm trying to implement an IPv6 router/firewall (2 XenU). Xen 2.0.6, Linux 2.6 is used on xen0 and XenU. The physical machine has 2 interfaces, eth0 (which is connected to an
2010 May 05
3
Another MPT issue - kernel crash
Hi all, I have faced yet another kernel panic that seems to be related to the mpt driver. This time I was trying to add a new disk to a running system (snv_134) and the new disk was not being detected... Following a tip, I ran lsiutil to reset the bus and this led to a system panic. MPT driver: BAD TRAP: type=e (#pf Page fault) rp=ffffff001fc98020 addr=4 occurred in module "mpt" due
2010 Jul 14
1
Share permission problem if user is member in more than 16 groups on AD
Hi! Running OpenSolaris snv_134 with Samba 3.0.37. Samba is successfully joined to the AD domain. AD user "user1" is a member of 17 AD groups including "group1", but he cannot access a Samba share which has read permissions for "group1". If the user account is modified so that "group1" becomes the user's primary group, then he can access the share. If the user is a member of only
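A likely cause here (an assumption, but it matches the 16-group threshold exactly) is the Solaris kernel tunable `ngroups_max`, which historically defaults to 16: groups beyond the limit are silently dropped from the user's credential, so an access check against the 17th group fails. A minimal sketch of raising it:

```shell
# Raise the per-user group limit from the default of 16, then reboot so the
# new credential limit takes effect for fresh logins.
echo "set ngroups_max=32" >> /etc/system
reboot
```

After the reboot, `groups user1` run on the Solaris box should list all of the user's AD groups, and the share ACL check should see "group1".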
2005 Sep 30
0
error with vif parameter
I'm finding I cannot use the vif parameter with 'bridge' only For example: vif=bridge=xen-br2 This errors out: [2005-09-30 14:23:55 xend] DEBUG (XendDomainInfo:1107) Creating vif dom=16 vif=0 mac=None [2005-09-30 14:23:55 xend] DEBUG (XendDomainInfo:665) Destroying vifs for domain 16 [2005-09-30 14:23:55 xend] DEBUG (XendDomainInfo:674) Destroying vbds for domain
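One thing worth checking: xend-era domain config files are parsed as Python, and `vif` is expected to be a *list* of option strings, one per interface; a bare `vif=bridge=...` assigns a plain string, which xend cannot handle. A sketch of the corrected stanza (the config file path is an assumption):

```shell
# Append a correctly formed vif stanza to the domain config file.
# Each list element is one interface; options within it are comma-separated.
cat >> /etc/xen/mydomain <<'EOF'
vif = [ 'bridge=xen-br2' ]
EOF

# With an explicit MAC it would read:
#   vif = [ 'mac=00:16:3e:00:00:01, bridge=xen-br2' ]
```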
2011 Jan 11
1
Fatal crash during a user search
Well, it looks like it occurred during the search to me... Jan 10 17:05:37 sysvolone dovecot: [ID 583609 mail.crit] imap(user at host.com): Panic: file istream-header-filter.c: line 520 (i_stream_create_header_filter): assertion failed: (ret < 0) Jan 10 17:05:37 sysvolone dovecot: [ID 583609 mail.error] imap(user at host.com): Error: Raw backtrace:
2011 May 30
4
OpenSUSE 11.4 (2.6.39-30.1), Xen 4.0.2 - Device 0 (vif) Could not be connected
All, This is a fresh, un-f#$ked-with OpenSUSE install after adding the Tumbleweed repository and doing a dup. I installed the Hypervisor with Tools, which prompted me to choose Xen or QEMU; I chose Xen. No bridge was made. When I go to create a fully virtualised machine, it now shows the error: Error: Device 0 (vif) could not be connected. Could not find the bridge, and none was specified.
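Since the installer created no bridge, one workaround is to build it by hand with bridge-utils and then name it explicitly in the guest config; a sketch, assuming eth0 is the outward-facing NIC (on openSUSE this can also be done persistently via YaST network settings):

```shell
# Create the bridge and enslave the physical interface.
brctl addbr br0
brctl addif br0 eth0
ip link set br0 up

# Then reference it explicitly in the domU config so the toolstack
# doesn't have to guess:
#   vif = [ 'bridge=br0' ]
```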
2005 Oct 14
0
Domain 0 crashes when booting a single VM with a large # of VIFs
If I create a single VM, one that has 123 VIFs (by adding 123 VIF entries in my VM configuration file - all VIFs go to the same bridge), my machine crashes while XEN is trying to boot my VM. XEN crashes at the point of the boot where it is initializing the Ethernet interface (the machine reboots immediately). If the VM has 122 VIFs, it boots up and works fine (I use ifconfig to verify the VIFs are
2011 Aug 10
9
zfs destroy snapshot takes hours
Hi, I am facing an issue with zfs destroy: it takes almost 3 hours to delete a snapshot of size 150G. Could you please help me resolve this issue: why does zfs destroy take this much time? Taking a snapshot is done within a few seconds. I have tried removing an older snapshot instead, but the problem is the same. =========================== I am using : Release : OpenSolaris
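If the snapshot's blocks are deduplicated, destroying it must update the dedup table (DDT) for every freed block, which is very slow when the DDT does not fit in RAM; that is a plausible (though unconfirmed here) cause. A quick way to gauge whether dedup is in play, assuming the pool is named "rpool":

```shell
# The DEDUP column shows the pool's dedup ratio (1.00x means no dedup).
zpool list rpool

# Print a DDT histogram: number of entries and their in-core/on-disk size.
# A DDT much larger than available RAM explains multi-hour destroys.
zdb -DD rpool
```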
2007 Jul 13
12
XEN 3.1: critical bug: vif init failure after creating 15-17 VMs (XENBUS: Timeout connecting to device: device/vif)
We have found a critical problem with the XEN 3.1 release (for those who are running 15-20 VMs on a single server). We are using the official XEN 3.1 release on a rackable server (Dual-Core AMD Opteron, 8GB RAM). The problem we are seeing is that intermittently vifs fail to work properly in VMs after we create around 15-17 VMs on our server (all running at the same time, created one by
2010 Apr 21
2
HELP! zpool corrupted data
Hello, Due to a power outage our file server running FreeBSD 8.0p2 will no longer come up due to zpool corruption. I get the following output when trying to import the ZFS pool using either a FreeBSD 8.0p2 cd or the latest OpenSolaris snv_143 cd: FreeBSD mfsbsd 8.0-RELEASE-p2.vx.sk:/usr/obj/usr/src/sys/GENERIC amd64 mfsbsd# zpool import pool: tank id: 1998957762692994918 state: FAULTED
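For a pool faulted by a power outage, one avenue (available in both snv_143 and later FreeBSD ZFS) is the rewind import, which discards the last few transaction groups to reach an older, consistent state; a sketch using the pool name "tank" from the output above:

```shell
# Dry run first: report what a rewind would discard without changing anything.
zpool import -nF tank

# If the dry run looks acceptable, rewind to the last consistent txg and import.
# Recently written data in the discarded transaction groups is lost.
zpool import -F tank
```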
2013 Feb 26
1
Bug#701744: [xen] Update to hypervisor 4.0.1-5.6 or linux-image-2.6.32-5-xen-amd64 2.6.32-48 causes networking (VIF) failures
Package: xen Version: 4.0.1-5.5 Severity: critical --- Please enter the report below this line. --- Hi! Since the update last weekend in stable/squeeze I'm experiencing problems with running Xen on amd64, with multiple domUs losing their network connection/VIFs. From http://blog.windfluechter.net/content/blog/2013/02/26/1597-xen-problems-vms-2632-5-xen-amd64 Unfortunately this update
2010 Jul 16
1
Making a zvol unavailable to iSCSI trips up ZFS
I've been experimenting with a two-system setup in snv_134 where each system exports a zvol via COMSTAR iSCSI. One system imports both its own zvol and the one from the other system and puts them together in a ZFS mirror. I manually faulted the zvol on one system by physically removing some drives. What I expect to happen is that ZFS will fault the zvol pool and the iSCSI stack will
2010 May 03
2
Is the J4200 SAS array suitable for Sun Cluster?
I'm setting up a two-node cluster with 1U x86 servers. It needs a small amount of shared storage, with two or four disks. I understand that the J4200 with SAS disks is approved for this use, although I haven't seen this information in writing. Does anyone have experience with this sort of configuration? I have a few questions. I understand that the J4200 with SATA disks will
2010 Oct 23
2
No ACL inheritance with aclmode=passthrough in onnv-134
Hi list, while preparing for the changed ACL/mode_t mapping semantics coming with onnv-147 [1], I discovered that in onnv-134 on my system ACLs are not inherited when aclmode is set to passthrough for the filesystem. This very much puzzles me. Example: $ uname -a SunOS os 5.11 snv_134 i86pc i386 i86pc $ pwd /Volumes/ACLs/dir1 $ zfs list | grep /Volumes rpool/Volumes 7,00G 39,7G 6,84G
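A minimal check sketch for this situation (dataset and principal names are assumptions): note that ACE inheritance itself is governed by the `aclinherit` property, while `aclmode` only controls how chmod(2) interacts with existing ACLs, so both are worth inspecting.

```shell
# Inspect both properties; inheritance behavior follows aclinherit.
zfs get aclmode,aclinherit rpool/Volumes

# Set an inheritable ACE on the directory, then test with a new file.
# (Use the Solaris /bin/chmod, which understands the A+ ACL syntax.)
/bin/chmod A+user:webservd:read_data:file_inherit:allow dir1
touch dir1/newfile
/bin/ls -V dir1/newfile     # -V prints the full ACL; the inherited ACE should appear
```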
2010 Sep 09
3
Volsize for DomU
Hey all, I've created a Xen DomU on snv_134; it's Debian Lenny. For the disk, I've used a ZFS volume, which I accidentally set to 1GB. I've tried setting the volsize of the volume to 3GB and rebooting the domain, but it still only sees the initial 1GB disk. I've read about rebooting for volsize to take effect, but this seems to be in the context of either
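One thing that often matters here: a reboot issued from inside the guest may not re-read the backend disk size, whereas a full destroy/create of the domain does. A sketch, with the volume path and domain name as assumptions:

```shell
# Grow the zvol and confirm the property actually changed.
zfs set volsize=3G rpool/xvm/lenny-disk
zfs get volsize rpool/xvm/lenny-disk

# Fully shut the domU down and start it again, rather than rebooting it,
# so the virtual block device is re-created at the new size.
xm shutdown lenny
xm create /etc/xen/lenny.cfg
```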