Displaying 20 results from an estimated 41 matches for "xvda3".
2017 Jul 21
0
kernel-4.9.37-29.el7 (and el6)
...still with SLUB). Have you
>> checked this build? It was moved to the stable repo on July 4th.
>
> Yes, 4.9.34 was failing too. And this was actually the worst case, with
> I/O error on guest:
>
> Jul 16 06:01:03 dom0 kernel: [452360.743312] CPU: 0 PID: 28450 Comm:
> 12.xvda3-0 Tainted: G O 4.9.34-29.el6.x86_64 #1
> Jul 16 06:01:03 guest kernel: end_request: I/O error, dev xvda3, sector
> 9200640
> Jul 16 06:01:03 dom0 kernel: [452360.758931] SLUB: Unable to allocate
> memory on node -1, gfp=0x2000000(GFP_NOWAIT)
> Jul 16 06:01:03 guest kerne...
2017 Jul 20
4
kernel-4.9.37-29.el7 (and el6)
...lems seem to be gone with 4.9.34 (still with SLUB). Have you
> checked this build? It was moved to the stable repo on July 4th.
Yes, 4.9.34 was failing too. And this was actually the worst case, with I/O error on guest:
Jul 16 06:01:03 dom0 kernel: [452360.743312] CPU: 0 PID: 28450 Comm: 12.xvda3-0 Tainted: G O 4.9.34-29.el6.x86_64 #1
Jul 16 06:01:03 guest kernel: end_request: I/O error, dev xvda3, sector 9200640
Jul 16 06:01:03 dom0 kernel: [452360.758931] SLUB: Unable to allocate memory on node -1, gfp=0x2000000(GFP_NOWAIT)
Jul 16 06:01:03 guest kernel: Buffer I/O error on de...
2017 Jul 20
2
kernel-4.9.37-29.el7 (and el6)
On Wed, 19 Jul 2017, Johnny Hughes wrote:
> On 07/19/2017 09:23 AM, Johnny Hughes wrote:
>> On 07/19/2017 04:27 AM, Piotr Gackiewicz wrote:
>>> On Mon, 17 Jul 2017, Johnny Hughes wrote:
>>>
>>>> Are the testing kernels (kernel-4.9.37-29.el7 and kernel-4.9.37-29.el6,
>>>> with the one config file change) working for everyone:
>>>>
2009 Aug 16
9
increase size for dom guest using lvm online
Could we increase the disk size for a domU guest online, without rebooting or unmounting?
Please kindly advise any methods.
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
2008 Sep 12
2
Can't see changes in LV Size inside domU (after lvextend on dom0)
...it: lv_guest1_root,
lv_guest1_swap, lv_guest1_export.
I have them mapped on my xen config file:
disk=[ 'phy:/dev/vm_guests/lv_guest1_root,xvda2,w',
'phy:/dev/vm_guests/lv_guest1_swap,xvda1,w',
'phy:/dev/vm_guests/lv_guest1_export,xvda3,w' ]
This works fine for me.
The problem here is:
On dom0, if I use lvextend to resize lv_guest1_export, for example, I can't
see the changes on my domU. I don't know why. If I understand it correctly
(based on the xen documentation and also on the post linked in th...
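The workaround usually suggested in threads like this one is a sketch along the following lines (guest name, LV path, and device names are examples, and the `xm` syntax matches the Xen 3.x era): a running domU caches the backend size at attach time, so after `lvextend` the virtual block device has to be detached and re-attached before the guest sees the new size, and only then can the filesystem be grown.

```shell
# On dom0: grow the LV backing the domU disk (names are examples)
lvextend -L +10G /dev/vm_guests/lv_guest1_export

# The running guest still sees the old size; detach and re-attach
# the vbd so blkfront re-reads the backend geometry:
xm block-detach guest1 xvda3
xm block-attach guest1 phy:/dev/vm_guests/lv_guest1_export xvda3 w

# Inside the domU: grow the filesystem to fill the new device
resize2fs /dev/xvda3
```

Note the detach unmounts the device from the guest's point of view, so this only avoids a full reboot, not downtime for that one filesystem.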
2009 Sep 30
9
du vs df size difference
...issue.. looking in to how much disk space is being used on a
machine (CentOS 5.3). When I compare the output of du vs df, I am
seeing a 12GB difference with du saying 8G used and df saying 20G used.
# du -hcx /
8.0G total
# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/xvda3 22G 20G 637M 97% /
I recognize that in most cases du and df are not going to report the
same but I am concerned about having a 12GB disparity. Does anyone have
any thoughts about this or reason as to why there is a big difference?
I have read a few articles online about it and...
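A frequent cause of this kind of gap is a process holding a deleted file open: df counts allocated blocks, while du walks directory entries, so space pinned by an unlinked-but-open file shows up only in df. A minimal sketch of the effect (the temporary file path is whatever mktemp returns):

```shell
# Hold a file open, delete it, and show its blocks are still allocated:
# df counts allocated blocks, du walks directory entries, so space
# pinned by an unlinked-but-open file appears only in df.
tmp=$(mktemp)
exec 3>"$tmp"                        # keep a descriptor on the file
dd if=/dev/zero bs=1M count=8 >&3 2>/dev/null
rm -f "$tmp"                         # name removed: du no longer sees it
# /proc/self/fd/3 still reaches the inode through the inherited fd:
info=$(stat -Lc 'links=%h blocks=%b' /proc/self/fd/3)
echo "$info"                         # links=0, blocks still nonzero
exec 3>&-                            # closing the fd releases the space
```

On a real system the usual diagnostic is `lsof +L1`, which lists open files with a link count of zero; restarting the processes that hold them reconciles du and df.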
2010 Nov 05
2
i/o scheduler deadlocks with loopback devices
...sage.
The task varies between any of the tasks that might be active
(kjournald, loop0, etc.)
My setup is:
Xen dom0 version 3.4.2.
domU: Ubuntu 10.04, 2.6.36-rc6 based on Stefano Stabellini's
v2.6.36-rc6-urgent-fixes tree.
Paravirtual disks and network interfaces.
Root filesystem on /dev/xvda3, formatted ext3, mounted with default options.
Both dom0 and domU are using the CFQ i/o scheduler.
The xvbd is based on LVM, on top of a local SATA RAID array.
To produce this, I can do one of the following:
Set up domU as a primary drbd node, with my drbd volume on top of a
local loopback devi...
2011 May 05
2
Kernel panic - not syncing: Attempted to kill init!
...ory = '512'
root = '/dev/xvda1 ro'
disk = [
'phy:/dev/atlantis/openfiler-disk,xvda1,w',
'phy:/dev/atlantis/openfiler-swap,xvda2,w',
'phy:/dev/sdb,xvda3,w'
]
name = 'openfiler'
vif = [ 'ip=192.168.0.204,mac=00:16:3E:C4:D8:AC' ]
#vfb = [ "type=vnc,vncunused=1,keymap=fr" ]
extra = 'console=hcv0 xencons=tty0'
on_poweroff = 'destr...
2014 Jan 10
0
Slow IO on DomUs under Xen 4.1 kernel 3.2.0.4
...#
bootloader = '/usr/lib/xen-4.1/bin/pygrub'
vcpus = '1'
memory = '2048'
# Disk device(s).
root = '/dev/xvda2 ro'
disk = [
'phy:/dev/vg0/cri4-git-root,xvda2,w',
'phy:/dev/vg0/cri4-git-home,xvda3,w',
'phy:/dev/vg0/cri4-git-swap,xvda1,w',
]
#
# Hostname
#
name = 'cri4-git'
# Networking
vif = [ 'ip=10.5.62.7 ,mac=00:16:3E:ED:82:FC,bridge=xenbr0' ]
on_poweroff = 'destroy'
on_reboot = 'restart'
o...
2009 Mar 18
1
ERROR of gluster.2.0.rc1 client on suse reiserfs
...ructure needs cleaning", which happens when running "ls" or other Linux commands in the gluster client mounted directory. The client system is SUSE Linux Enterprise Server 10 SP2 (x86_64), 2.6.16.60-0.21-xen.
glusterfs-2.0.0rc1 + fuse-2.7.4glfs11 + SUSE10 SP2
~ # mount
/dev/xvda3 on / type reiserfs (rw,acl,user_xattr)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw)
udev on /dev type tmpfs (rw,size=8g)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/xvda1 on /boot type reiserfs (rw,acl,user_xattr)
securityfs...
2007 May 02
0
hdparm strange behaviour on centos 5.0 using the latest kernel
...w the hard drive performance goes down dramatically in domain 0.
#> hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 384 MB in 2.00 seconds = 192.08 MB/sec
Timing buffered disk reads: 130 MB in 3.03 seconds = 42.93 MB/sec
The performance of domu is also very bad:
#>hdparm -tT /dev/xvda3
/dev/xvda3:
Timing cached reads: 72 MB in 2.08 seconds = 34.66 MB/sec
Timing buffered disk reads: 14 MB in 3.27 seconds = 4.28 MB/sec
Any idea how to troubleshoot this?
--
Vikas
2008 Feb 27
0
Error: Had a bootloader specified, but no disks are bootable
..."VM1"
memory = 1890
vcpus=1
uuid="0a4a2999-bac9-6561-f9e0-a1494751c275"
on_crash="destroy"
on_poweroff="destroy"
on_reboot="restart"
localtime=0
builder="linux"
bootloader="/usr/lib/xen/boot/domUloader.py"
bootargs="--entry=xvda3:/boot/vmlinux-2.6.18.8-0.9-xen.gz,/boot/initrd-2.6.18.8-0.9-xen"
extra="TERM=xterm xencons=tty "
nographic=1
vif = ['mac=00:18:A4:78:F4:34,bridge=bridge0']
disk=[ 'phy:/dev/sda8,xvda3,w',
'phy:/dev/sda7,xvda2,w',...
2009 Apr 12
1
Missing xvd devices in DomU
...g:
----------------------------------------------
root = '/dev/xvda2 ro'
disk = [
'drbd:pgone,xvda2,w',
'phy:/dev/vg/pgone-swap,xvda1,w',
'phy:/dev/vg/pgone_backup_work,xvda3,w',
]
However, I don't have any /dev/xvd* devices when I boot up my DomU. The
funny thing is that hda2 is mounted as root:
/dev/hda2 on / type ext3 (rw,noatime,nodiratime,errors=remount-ro)
fdisk -l does not show any disks. Any idea? The Xen blockdevice is
compiled in (n...
2010 Apr 13
1
Help with domU config file (convert from kvm)
..."kpartx -a
/dev/vg/VM".
Usually my domu configs look like this:
bootloader='/usr/bin/pygrub'
name="ipfire"
memory=512
acpi=1
apic=1
vif=['mac=00:16:3e:be:a1:9a,bridge=brI']
disk=['phy:/dev/vg/ipfire-2.x,xvda1,w']
root="xvda3"
extra='xencons=tty console=hvc0'
but this is not working at all. If I use pygrub, it does not pick up the
grub.cfg that is in /boot of the VM and says that no data were returned.
If I use a kernel and initrd directly inside the domU config, it does not
boot at all.
so, does...
2007 Jun 25
1
I/O errors in domU with LVM on DRBD
...'
else:
arch_libdir = 'lib'
name = "centos45domU"
memory = "512"
disk = [ 'phy:/dev/mapper/VGvm1-LVroot,xvda1,w',
'phy:/dev/mapper/VGvm1-LVtmp,xvda2,w',
'phy:/dev/mapper/VGvm1-LVswap,xvda3,w', ]
vif = [ 'mac=00:16:3e:d2:14:70, bridge=xenbr0', ]
vfb = ["type=vnc,vncunused=1"]
bootloader="/usr/bin/pygrub"
vcpus=2
on_reboot = 'restart'
on_crash = 'restart'
DomU /etc/fstab
/dev/xvda1 /...
2014 Sep 16
1
quota doesn't appear to work - repquota only updates when quotacheck is run
Hi,
I have exactly the same problem that you experienced in Nov, 2013.
I am using ext4 with journaled quota and the quota usage is only updating when I run quotacheck manually.
Have you found a solution?
Regards,
Alex
> I have set up user quotas on an ext4 filesystem. It does not appear that
> the quota system is being updated, except when I manually run quotacheck.
>
> More detail:
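For background on the setup described here: with ext4 and journaled quota, accounting is kept consistent through the journal when the filesystem is mounted with the jqfmt and usrjquota options, so quotacheck should normally be needed only once at setup or after corruption. Usage that only updates when quotacheck runs can indicate those options are missing from the mount. A sketch of such an fstab entry (device, mount point, and quota file name are examples):

```
/dev/xvda3  /home  ext4  defaults,usrjquota=aquota.user,jqfmt=vfsv0  0 2
```

After remounting, `quotaon` enables enforcement and `repquota` should then track usage without further quotacheck runs.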
2011 Oct 02
0
Bug#644100: pygrub error if the root disk value is not the first in the list.
...'mode', 'w']]], ['device', ['vbd', ['uname',
'phy:/dev/data/builder-usr'], ['dev', 'xvda4'], ['mode', 'w']]],
['device', ['vbd', ['uname', 'phy:/dev/data/builder-tmp'],
['dev', 'xvda3'], ['mode', 'w']]], ['device', ['vbd',
['uname', 'phy:/dev/data/builder-root'], ['dev', 'xvda2'],
['mode', 'w']]], ['device', ['vbd', ['uname',
'phy:/dev/data/builder-swap'], ['d...
2010 May 28
1
Multi-partition domain not recognizing extra partitions
...'phy:/dev/vg/test-home,xvda7,w',
'phy:/dev/vg/test-data,xvda6,w',
'phy:/dev/vg/test-tmp,xvda5,w',
'phy:/dev/vg/test-usr,xvda4,w',
'phy:/dev/vg/test-var,xvda3,w',
'phy:/dev/vg/test-root,xvda2,w',
'phy:/dev/vg/test-swap,xvda1,w',
]
What have I done wrong? I have tried many things... DomU is a fresh Debian
Lenny and the partitions are ext3. dom0 is also Debian...
2013 Mar 03
1
sysvolreset failing on glusterfs
...rget fs without ACLs, although they do
work, as said before, and although I have mounted the fs using -o
acl,rw. The underlying ext3 fs is of course running with ACLs enabled,
too. This is what mount looks like for the involved fs's:
fusectl on /sys/fs/fuse/connections type fusectl (rw)
/dev/xvda3 on /var/glusterfs/brick1 type ext3 (rw,acl,user_xattr)
localhost:/dc-vol on /export/dc-vol type fuse.glusterfs
(rw,allow_other,max_read=131072)
Andreas
--
Andreas Gaiser, Berlin, Germany
2013 Nov 27
2
[BUG] domU kernel crash at igbvf module loading / __msix_mask_irq
...'] <= same crash
pci = ['02:10.0', '02:10.1']
vif = ['bridge=xenbr1','bridge=xenbr2']
disk = ['phy:/dev/loop4,xvda1,w', 'phy:/dev/loop5,xvda2,w',
'phy:/dev/loop6,xvda3,w']
root = "/dev/xvda1 ro rootfstype=ext4 iommu=soft
xen-pcifront.verbose_request=1"
DomU crash log:
[ 71.124852] pcifront pci-0: write dev=0000:00:00.0 - offset 72 size 2 val c002
[ 71.124888] BUG: unable to handle kernel paging request at ffffc9000015400c
[ 71.124...