Displaying 20 results from an estimated 2000 matches similar to: "RE: Help: Xen HVM Domain can ONLY support four hard drives at most???"

2006 Sep 11
3
Is RAMDISK required for domU boot?
Hi, I found one weird thing: I heard a RAMDISK is not required for domU, but it seems I have to add a RAMDISK to my domU config file; otherwise the domU hangs while booting. Please see the message below: the domU boot hangs after "Continuing..." Does anyone have a clue about this issue? BTW, the Xen-friendly glibc is already installed. Thanks, Liang ---Begin of domU
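For comparison, a minimal PV domU config sketch (paths and names are illustrative, not from the thread); the ramdisk line is only needed when the kernel's root-device drivers are built as modules rather than built in:
    kernel  = "/boot/vmlinuz-2.6-xen"
    ramdisk = "/boot/initrd-2.6-xen.img"   # omit if the disk drivers are compiled in
    memory  = 256
    name    = "domu-test"
    disk    = ['file:/path/to/domu.img,sda1,w']
    root    = "/dev/sda1 ro"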
2006 Sep 12
5
32E (64bit) VMX keyboard is out of control, if given an additional 'hde'
Hi, This issue only happens on my IA32E VMX domain; an IA32 VMX domain is okay. I am trying a VBD disk in an IA32E VMX domain. I used the following disk configuration to create an IA32E VMX domain: disk = [ 'file:/mnt/disk1.img,hda,w', 'file:/mnt/disk2.img,hde,w' ] After creating the VMX domain, its keyboard cannot be used properly. For example, if pressing
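For context, qemu's IDE emulation behind an HVM/VMX guest models only the first two controllers (hda through hdd), so 'hde' lands outside what the device model expects. A sketch that stays inside those slots (image paths reused from the thread):
    disk = [ 'file:/mnt/disk1.img,hda,w',
             'file:/mnt/disk2.img,hdb,w' ]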
2006 Sep 12
2
RE: 32E (64bit) VMX keyboard is out of control, if given an additional 'hde'
>-----Original Message----- >From: Jan Beulich [mailto:jbeulich@novell.com] >Sent: 12 September 2006 17:06 >To: You, Yongkang >Cc: xen-devel >Subject: Re: [Xen-devel] 32E (64bit) VMX keyboard is out of control, if given an >additional 'hde' > >>After creating the VMX domain, its keyboard cannot be used properly. For example, if >pressing
2006 Sep 12
1
RE: 32E (64bit) VMX keyboard is out of control, if given an additional 'hde'
>-----Original Message----- >From: Jan Beulich [mailto:jbeulich@novell.com] >Sent: 12 September 2006 18:05 >To: You, Yongkang >Cc: xen-devel >Subject: RE: [Xen-devel] 32E (64bit) VMX keyboard is out of control, if given an >additional 'hde' > > >Not sure what RC3 is, but the other two are Linux-es: Can you try rebuilding >the guest kernels with the attached
2016 May 25
1
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 2016-05-25 19:13, Kelly Lesperance wrote: > Hdparm didn't get far: > > [root@r1k1 ~]# hdparm -tT /dev/sda > > /dev/sda: > Timing cached reads: Alarm clock > > [root@r1k1 ~]# Hi Kelly, Try running 'iostat -xdmc 1'. Look for a single drive that has substantially greater await than ~10 msec. If all the drives except one are taking 6-8 msec, but one is very
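A sketch of what that check looks like in practice (device names and numbers are illustrative; iostat -x reports await per device):
    $ iostat -xdmc 1
    Device:  ...   await  ...
    sda      ...    7.2   ...
    sdd      ...   85.3   ...   <-- outlier; a likely failing drive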
2012 Sep 05
3
BTRFS thinks device is busy [kernel 3.5.3]
Hi, I'm running OpenSuse 12.2 with kernel 3.5.3. HBA = LSI 1068e using the MPTSAS driver (patched) (https://patchwork.kernel.org/patch/1379181/) SANOS1:/media # uname -a Linux SANOS1 3.5.3 #3 SMP Sun Sep 2 18:44:37 CEST 2012 x86_64 x86_64 x86_64 GNU/Linux I've tried to simulate a disk replacement, but it seems that now /dev/sdg is stuck in the btrfs pool (RAID10) SANOS1:/media #
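For reference, a common replacement sequence on btrfs of that era (device names illustrative; the dedicated 'btrfs replace' command only arrived in later kernels):
    # mount -o degraded /dev/sdf /media/pool
    # btrfs device add /dev/sdnew /media/pool
    # btrfs device delete missing /media/pool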
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it. We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs: 2x E5-2650 128 GB RAM 12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA Dual port 10 GB NIC The drives are configured as one large
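One stopgap while investigating is to throttle the md check so production I/O is not starved (value illustrative, in KB/s per device):
    # sysctl -w dev.raid.speed_limit_max=10000
    # cat /proc/mdstat        # watch check progress and current speed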
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one. Here is an iostat example from a host within the same cluster, but without the RAID check running: [root@r2k1 ~]# iostat -xdmc 1 10 Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
2014 Oct 05
1
CentOS 7 - Have 2 disks, each with a biosboot partition, can only boot off one of them
Hi all, I used a kickstart script to set up a new machine of mine with RAID 1 (I couldn't get anaconda to create matching partition schemes). So I've now got /dev/sdg1 and /dev/sdh1 as 'bios_grub' (/dev/sd{a-f} are a separate array). 0 root@an-nas02:~# parted /dev/sdg print free Model: ATA ST3000NC000 (scsi) Disk /dev/sdg: 3001GB Sector size (logical/physical):
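With GPT plus a bios_grub partition, GRUB must be installed to each member disk explicitly for either disk to be bootable on its own; a sketch using the disks from the thread:
    # grub2-install /dev/sdg
    # grub2-install /dev/sdh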
2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using CentOS 4.5 right now, and I had a RAID 5 array stop because two drives became unavailable. After adjusting the cables on several occasions and shutting down and restarting, I was able to see the drives again. This is when I snatched defeat from the jaws of victory. Please, someone with vast knowledge of how RAID 5 with mdadm works, tell me if I have any chance at all
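The usual last resort for a two-drive RAID 5 failure, once the drives are readable again, is a forced assemble; a sketch (array and device names illustrative), worth attempting before anything destructive:
    # mdadm --stop /dev/md0
    # mdadm --assemble --force /dev/md0 /dev/sd[b-f]1
    # cat /proc/mdstat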
2009 Jan 18
9
Limited number of phy disks?
In my experiments with setting up a NAS as a VM, I can only successfully import three drives: the root drive, the CD, and one more. However, I have plenty that I want to use: disk=['file:/vserver/vm_disks/Patch.disk.xm,hda,w', 'phy:/dev/sda,ioemu:hdd,w', 'phy:/dev/sdb,ioemu:hde,w', 'phy:sdc,ioemu:hdf,w',
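If the limit here is qemu's four emulated IDE slots (hda-hdd), one workaround sometimes suggested on these threads (unverified here) is to put the extra disks on the emulated SCSI bus instead; a sketch reusing the thread's paths:
    disk = [ 'file:/vserver/vm_disks/Patch.disk.xm,hda,w',
             'phy:/dev/sda,ioemu:sda,w',
             'phy:/dev/sdb,ioemu:sdb,w' ]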
2014 Jun 20
1
iostat results for multi path disks
Here is a sample of running iostat on a server that has a LUN from a SAN with multiple paths. I am specifying a device list that just grabs the bits related to the multipath device:
$ iostat -dxkt 1 2 sdf sdg sdh sdi dm-7 dm-8 dm-9
Linux 2.6.18-371.8.1.el5 (db21b.den.sans.org)  06/20/2014
Time: 02:30:23 PM
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await
2019 Jun 14
3
zfs
Hi, folks, testing ZFS. I'd created a raidz2 zpool and ran a large backup onto it. Then I pulled one drive (an 11-drive pool with one hot spare), and it resilvered with the hot spare. zpool status -x shows me:
 state: DEGRADED
status: One or more devices could not be used because the label is missing or invalid. Sufficient replicas exist for the pool to continue functioning in a degraded state.
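Once the resilver onto the spare completes, the spare can be promoted by detaching the failed disk (pool and device names illustrative):
    # zpool status -x
    # zpool detach tank c0t5d0    # drop the failed disk; the spare becomes a permanent member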
2010 May 28
2
permanently add md device
Hi All, Currently I'm setting up a 5.4 server and trying to create a third RAID device. When I run:
$ mdadm --create /dev/md2 -v --raid-devices=15 --chunk=32 --level=raid6 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq
the device file "md2" is created and the RAID is being configured, but somehow
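To make an array persist across reboots, the usual step is to append its definition to mdadm.conf (path varies by distro; /etc/mdadm.conf on CentOS/RHEL):
    # mdadm --detail --scan >> /etc/mdadm.conf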
2018 Jan 10
2
Issues accessing ZFS-shares on Linux
I just noticed that by running the commands /usr/sbin/smbd -D or /usr/sbin/smbd -i without systemd's unit, all shares work perfectly, so the problem must be somehow related to systemd. Let the testing continue. I also tested what happens if I comment out everything and just use ExecStart=/usr/sbin/smbd -D, as that command worked on the console. That did not help. For the record, this is
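For comparison, a minimal smbd unit sketch (paths and PIDFile are illustrative and distro-dependent, not taken from the thread):
    [Unit]
    Description=Samba SMB Daemon
    After=network.target

    [Service]
    Type=forking
    ExecStart=/usr/sbin/smbd -D
    PIDFile=/run/samba/smbd.pid

    [Install]
    WantedBy=multi-user.target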
2009 Jan 13
2
mounted.ocfs2 -f return Unknown: Bad magic number in inode
Hello, I have installed ocfs2 without problems and use it for a RAC10gR2; only the Clusterware files are of type ocfs2. Multipath is also used. When I issue mounted.ocfs2 -f, I get a strange result:
Device     FS     Nodes
/dev/sda   ocfs2  Unknown: Bad magic number in inode
/dev/sda1  ocfs2  pocrhel2, pocrhel1
/dev/sdb   ocfs2  Not mounted
/dev/sdf
2011 Feb 05
2
Strangeness on btrfs balance..
Hi there... I have kernel version 2.6.36.3, compiled with gcc 4.4.5, btrfs-tools version 0.19+20101101. I have a btrfs filesystem (/data) consisting of two 1 TB hard disks, raid0. I added in another 1 TB hard drive.
root@X86-64:~# btrfs filesystem show
failed to read /dev/sdh
failed to read /dev/sdg
failed to read /dev/sdf
failed to read /dev/sde
failed to read /dev/sr0
failed to read /dev/fd0u800
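For reference, the era-appropriate sequence to grow a pool onto a new disk and then spread existing data across it (mount point from the thread, device name illustrative):
    # btrfs device add /dev/sdnew /data
    # btrfs filesystem balance /data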
2013 Jan 15
1
xen device mapping/translation
Hello, list. Yesterday I was pleased to see that CentOS has released official images on the AWS Marketplace. Nice job. Today I started playing with the CentOS 6.3 image (https://aws.amazon.com/marketplace/pp/B00A6L6F9I, on which I plan to deploy a Gluster cluster in production soon) and noticed a weird thing: EBS volumes attached to sd<X> are translated to xvd<Y> at the OS level.
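A quick way to confirm the mapping from inside the guest (standard commands; output varies by kernel and AMI):
    $ dmesg | grep -i xvd
    $ ls -l /dev/xvd*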
2010 Sep 17
1
multipath troubleshoot
Hi, My storage admin just assigned a LUN (fibre) to my server. Then I rescanned using:
echo "1" > /sys/class/fc_host/host5/issue_lip
echo "1" > /sys/class/fc_host/host6/issue_lip
I can see the SCSI device using dmesg, but mpath devices are not created for this LUN. Please see below; the last 4 should be active, and I think this is the problem. Kernel:
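Typical next steps when maps fail to appear (standard multipath-tools usage; adjust to the distro's init system):
    # multipath -v2               # attempt to (re)create maps, verbosely
    # multipath -ll               # list the resulting topology
    # service multipathd restart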
2011 Nov 22
1
Recovering data from old corrupted file system
I have a multi-device file system that got corrupted ages ago (as I recall, one of the drives stopped responding, causing btrfs to panic). I am hoping to recover some of the data. For what it's worth, here is the dmesg output from trying to mount the file system on a 3.0 kernel:
device label Media devid 6 transid 816153 /dev/sdq
device label Media devid 7 transid 816153
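If the filesystem will not mount even read-only, btrfs-progs' restore tool can sometimes copy files out without mounting (availability depends on the progs version; in older releases it shipped as a standalone 'restore' binary; paths illustrative):
    # btrfs restore /dev/sdq /mnt/recovery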