similar to: Questions about pass through block devices

Displaying 20 results from an estimated 1000 matches similar to: "Questions about pass through block devices"

2007 Apr 29
3
Building Modules Against Xen Sources
I'm currently trying to build modules against the kernels created with Xen 3.0.5rc4. This used to not be much of a problem, as Xen created a kernel directory and then built in it. Plain Jane, nothing fancy. I've noticed that somewhere since I last did this (which was as recently as 3.0.4-1) the kernel build now does things a bit differently. Apparently there is some sort of
2007 Feb 09
0
Xvd Problem
We recently added a disk to our Coraid SAN. We've used tons of devices off of this SAN with Xen before, but this time we experienced some weird behavior. I configured a DomU to have the device offered directly as a disk (i.e. 'phy:/dev/etherd/e1.1,sdb1,rw'). When I tried to boot it, an xvd process showed up as usual. However, it consumed 100% processor
2008 Nov 06
0
SAN (Shared LUN) with CLVM
Hi@all System: 2 server nodes connected to a SAN storage (shared LUNs). The shared storage holds a volume group (lvm2) with all my HVM guests. Live migration works nicely, but snapshots of logical volumes are only usable if I first deactivate the logical volume on the second node - otherwise metadata errors come up and the snapshot is broken/inconsistent. Both Xen 3.3 hosts are installed on Ubuntu 8.04. Can
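The workaround described above - making sure only one node has the LV active before snapshotting - can be sketched roughly as follows. The VG and LV names are placeholders, not from the original post; this assumes clvmd is running on both nodes:

```shell
# On the second node: deactivate the LV so only one node holds it open
lvchange -an /dev/vg_guests/guest01

# On the first node: activate exclusively, then snapshot; with clustered
# LVM, snapshot metadata is only safe when a single node writes it
lvchange -aey /dev/vg_guests/guest01
lvcreate --snapshot --size 2G --name guest01-snap /dev/vg_guests/guest01

# After backing up, remove the snapshot and reactivate cluster-wide
lvremove -f /dev/vg_guests/guest01-snap
lvchange -ay /dev/vg_guests/guest01
```

The exclusive activation (`-aey`) is the key step: it is what prevents the two nodes from disagreeing about the snapshot's metadata.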
2009 Apr 29
3
GFS and Small Files
Hi all, We are running CentOS 5.2 64bit as our file server. Currently, we use GFS (with CLVM underneath it) as our filesystem (for our multiple 2TB SAN volume exports), since we plan to add more file servers (serving the same contents) later on. The issue we are facing at the moment is that commands such as 'ls' give a very slow response (e.g. 3-4 minutes for the output of ls
2004 Dec 24
0
dst cache overflow in 2.6.8
There appears to be a pretty serious router bug in kernel 2.6.8. One reference to it is here: http://www.debiantalk.com/_Bug279666_kernel-image-2_6_8-1-k7_Runs_out_of_network_buffers-10116882-5788-a.html and a followup that it may now be fixed in later kernels here: http://lists.debian.org/debian-kernel/2004/12/msg00233.html. This is my personal experience with it.... My router fails few
2006 Oct 30
1
new BackgroundRB
Hey Greg- Yes, I am sorry, the new architecture uses fork and named pipes and a bunch of Unix stuff to do its magic. You may be able to port it to work on Windows, but I don't think it is possible :( I'm really sorry about this, but I need this thing to be as robust and solid as it can be, and in the end Windows isn't compatible. Now you may be able to
2006 Oct 24
1
Status Update
Heya folks- I just wanted to give a little status update on the new version of BackgrounDRb. Skaar has stepped up big time and done a ton of work getting the new architecture going, so big thanks to him. I will be doing some documentation and a few more tweaks, and we should have a new release shortly. Cheers- -- Ezra Zygmuntowicz -- Lead Rails Evangelist -- ez at engineyard.com --
2002 Feb 14
1
[BUG] [PATCH]: handling bad inodes in 2.4.x kernels
hi folks, i already posted this to the kernel mailing list a few days ago but nobody there seems to be interested in what i found out. since i believe this is a serious bug, i'm posting my findings again... the bug is about the handling of bad inodes in at least the 2.4.16, .17, .18-pre9 and .9 kernel releases (i suspect all 2.4 kernels are affected) and causes the names_cache to get
2012 Feb 23
2
lockmanager for use with clvm
Hi, I am setting up a cluster of KVM hypervisors managed with libvirt. The storage pool is on iSCSI with CLVM. To prevent a VM from being started on more than one hypervisor, I want to use a lock manager with libvirt. I could only find sanlock as a lock manager, but AFAIK sanlock will not work in my setup as I don't have a shared filesystem. I have dlm running for CLVM. Are there lockmanager
2010 Mar 08
4
Error with clvm
Hi, I get this error when I try to start clvm (Debian Lenny). This is a clvm version with openais: # /etc/init.d/clvm restart Deactivating VG ::. Stopping Cluster LVM Daemon: clvm. Starting Cluster LVM Daemon: clvm CLVMD[86475770]: Mar 8 11:25:27 CLVMD started CLVMD[86475770]: Mar 8 11:25:27 Our local node id is -1062730132 CLVMD[86475770]: Mar 8 11:25:27 Add_internal_client, fd = 7
2009 Feb 25
2
1/2 OFF-TOPIC: How to use CLVM (on top AoE vblades) instead just plain LVM for Xen based VMs on Debian 5.0?
Guys, I have set up my hard disk with 3 partitions: 1- 256MB on /boot; 2- 2GB on / for my dom0 (Debian 5.0) (eth0 default bridge for guests' LAN); 3- 498GB exported with vblade-persist to my network (eth1 for the AoE protocol). On dom0 hypervisor01: vblade-persist setup 0 0 eth1 /dev/sda3 vblade-persist start all How do I create a CLVM VG with /dev/etherd/e0.0 on each of my dom0s? Including the
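For the question above, creating a clustered VG on the exported AoE device would look roughly like this. The VG name is a placeholder, and this assumes the cluster stack and clvmd are already running on every dom0:

```shell
# On one dom0, once the AoE device is visible:
pvcreate /dev/etherd/e0.0

# -c y marks the VG as clustered, so clvmd coordinates metadata changes
# between nodes instead of each host editing them independently
vgcreate -c y vg_guests /dev/etherd/e0.0

# On the remaining dom0s, just rescan; the clustered VG shows up
pvscan
vgscan
```

The important part is creating the VG with the clustered flag; a plain (non-clustered) VG shared this way will eventually corrupt its metadata.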
2012 Nov 04
3
Problem with CLVM (really openais)
I'm desperately looking for more ideas on how to debug what's going on with our CLVM cluster. Background: 4 node "cluster"-- machines are Dell blades with Dell M6220/M6348 switches. Sole purpose of the Cluster Suite tools is to use CLVM against an iSCSI storage array. Machines are running CentOS 5.8 with the Xen kernels. These blades host various VMs for a project. The iSCSI
2011 Oct 13
1
pvresize on a cLVM
Hi, I need to expand a LUN on my EMC CX4-120 SAN (well, I have already done it). On this LUN I had a PV of a cLVM VG. Now I need to run pvresize on it. Has anybody done this on a cLVM PV? I am trying to rescan the devices, but I can't "see" the new size. And, googling on it, I can only find RHEL 5.2 responses. Thanks in advance,
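The usual sequence for picking up a grown LUN is sketched below. The device name is a placeholder; a multipath setup needs an additional multipathd resize step before pvresize:

```shell
# Ask the SCSI layer to re-read the device's capacity
echo 1 > /sys/block/sdc/device/rescan

# Confirm the kernel now reports the new size in bytes
blockdev --getsize64 /dev/sdc

# Grow the PV to fill the enlarged device; on cLVM, run this while
# clvmd is up so the metadata update is coordinated across nodes
pvresize /dev/sdc
```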
2004 Oct 07
1
kmem_cache_destroy: Can't free all objects
Hello! I am writing an FS filter that will sit above the ext3 filesystem. For my own purposes I need several bytes in the inode. There is not enough space in the current inode, so I need to create my own inode functions alloc_inode() & destroy_inode(). The problem occurs when destroying the slab cache while removing my module (rmmod): kmem_cache_destroy: Can't free all objects What I do: - install my
2005 Nov 08
0
Re: ATA-over-Ethernet v's iSCSI -- CORAID is NOT SAN , also check multi-target SAS
From: Bryan J. Smith [mailto:thebs413 at earthlink.net] > > CORAID will _refuse_ to allow anything to access the volume after one > system mounts it. It is not multi-targettable. SCSI-2, iSCSI and > FC/FC-AL are. AoE is not. As I understand it, Coraid will allow multiple machines to mount a volume; it just doesn't handle the synchronization. So you can have more than one
2007 Nov 13
2
lvm over nbd?
I have a system with a large LVM VG partition. I was wondering if there is a way I could share the partition using nbd and have the nbd-client access the LVM as if it were local. SYSTEM A: /dev/sda3 is an LVM partition and is assigned to VG volgroup1. I want to share /dev/sda3 via nbd-server. SYSTEM B: receives A's /dev/sda3 as /dev/nbd0. I want to access it as VG volgroup1. I am
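A rough sketch of the setup being asked about. The port number and hostname are arbitrary; note that the VG must never be active on both machines at once, since plain (non-clustered) LVM does no locking between hosts:

```shell
# SYSTEM A: deactivate the VG locally, then export the raw partition
vgchange -an volgroup1
nbd-server 2000 /dev/sda3

# SYSTEM B: attach the export, then let LVM discover the VG on it
nbd-client systema 2000 /dev/nbd0
pvscan
vgchange -ay volgroup1
```

LVM finds the VG by scanning for PV labels, so the same volgroup1 appears on SYSTEM B - but activating it on both systems simultaneously risks metadata corruption.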
2006 Nov 05
1
Testing custom mongrel handlers?
Hey Folks- I'm trying to set up a new test/spec harness for testing Merb. I was wondering if there is a way to mock the Mongrel request and response objects easily, to test my handler without actually running a server? I can easily do the env hash, but I'm not entirely sure what needs to go in the request StringIO object that gets passed into my mongrel handler's
2006 Nov 29
0
BackgrounDRb 0.2.1 Release
It's that time again, friends. skaar has been at it again and has greatly improved the stability of the new system. And Ara Howard has helped a ton by working with us on the slave library to iron out the wrinkles. The result is a much nicer BackgrounDRb for everyone. I have to say another huge thanks to skaar. He has single-handedly written almost all of this new version and many
2006 Nov 01
0
Scheduler not useable yet
Hey there Gang- The scheduler in the new BackgrounDRb is not really useable yet. So please avoid it for a few days until we get the kinks worked out. Thanks- -- Ezra Zygmuntowicz -- Lead Rails Evangelist -- ez at engineyard.com -- Engine Yard, Serious Rails Hosting -- (866) 518-YARD (9273)
2006 Sep 25
3
Engine Yard blog
Just received the news from Tom Mornini. Congrats Ezra for the new Engine Yard site and the blog you will be collaborating on. Hope to read you there soon. http://www.engineyard.com/ Jonathan