similar to: iSCSI, windows, & local linux access

Displaying 20 results from an estimated 10000 matches similar to: "iSCSI, windows, & local linux access"

2013 Jan 20
10
iscsi on xen
I wonder if someone can point me in the right direction. I have two Dell servers; I set up iSCSI so I have four 2 TB hard drives, and I used LVM to create one big partition and share it using iSCSI. How do I go about assigning sections of the iSCSI storage for virtual hard drives? Should I export the whole 8TB as one iSCSI LUN and then use LVM to create smaller virtual disks? Or should I
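
A minimal sketch of one common layout, assuming the whole array is exported as a single LUN and carved into per-guest LVs on the Xen dom0 (the target IQN, portal address, and volume names below are placeholders):

  # dom0: discover and log in to the exported LUN
  iscsiadm -m discovery -t sendtargets -p 192.168.0.10
  iscsiadm -m node -T iqn.2013-01.com.example:storage -p 192.168.0.10 -l
  # put LVM on top of the imported block device and carve one LV per domU disk
  pvcreate /dev/sdc
  vgcreate vg_iscsi /dev/sdc
  lvcreate -L 40G -n vm01-disk0 vg_iscsi
  # the domU config then references phy:/dev/vg_iscsi/vm01-disk0 as its disk
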
2008 Aug 31
2
LVM and hotswap (USB/iSCSI) devices?
Hi list, I'm having one of those 'I'm stupid' problems with LVM on CentOS 5.2. I've been working with traditional partitions until now, but I've finally been sold on the theoretical benefits of using LVM; for now, though, I only have a huge pile of broken filesystems to show for my efforts. My scenario: I attach a disk, either over USB or iSCSI. I create a PV on this
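
For reference, a minimal sequence for a hot-swappable disk, assuming it appears as /dev/sdb (device, VG and mount point names are placeholders); a common cause of broken filesystems is detaching the disk while the VG is still active, so the key step is deactivating the VG before removal:

  pvcreate /dev/sdb
  vgcreate vg_usb /dev/sdb
  lvcreate -l 100%FREE -n data vg_usb
  mkfs.ext3 /dev/vg_usb/data
  mount /dev/vg_usb/data /mnt/usb
  # before unplugging the disk or logging out of the iSCSI target:
  umount /mnt/usb
  vgchange -an vg_usb
  # after re-attaching:
  vgscan
  vgchange -ay vg_usb
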
2010 Jun 14
49
iSCSI and LVM
Hi Everyone, I am going to get a storage server which will be connected to my Xen hosts via iSCSI/Ethernet. I wish to use LVM for the DomU disks. The storage server will have a RAID10 array, and 2 Xen hosts will connect to this (each will have a 50% share of the RAID10 array, space-wise). What is the best way to go about this? Should I: a) Split the RAID10 array into 2 partitions on the
2015 Sep 17
3
Guest agent is not responding
Hello, in my Windows VM I installed qemu-guest-agent and rebooted the VM. In the settings for the VM I added, via virt-manager, a new channel: "unix socket", "org.qemu.guest_agent.0", "virtio". When I try to take a snapshot via the shell I get: virsh snapshot-create-as --domain win7new win7new-snap1 --disk-only --atomic --quiesce error: Guest agent is not responding:
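
Before retrying the quiesced snapshot, it can help to confirm the agent channel is actually up; a quick check using the domain name from the post:

  # returns {"return":{}} when the agent inside the guest is reachable
  virsh qemu-agent-command win7new '{"execute":"guest-ping"}'
  # confirm the virtio-serial channel is present in the domain definition
  virsh dumpxml win7new | grep -A3 guest_agent
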
2010 Apr 15
6
ZFS for ISCSI ntfs backing store.
I'm looking to move our file storage from Windows to OpenSolaris/ZFS. The Windows box will be connected through 10G for iSCSI to the storage. The Windows box will continue to serve the Windows clients and will be hosting approximately 4TB of data. The physical box is a Sun Fire X4240, single AMD 2435 processor, 16G RAM, LSI 3801E HBA, ixgbe 10G card. I'm looking for suggestions
2013 Jan 18
8
migrate from physical disk problems in xen
I've been trying to migrate a Windows NT 4 machine to a Xen domU for the past few months with no success. However, on my current attempt, the original hardware no longer boots, so I'm trying to resolve the issues with Xen properly, or else take a long holiday... Anyway, the physical machine had a 9G drive (OS drive), a 147G drive (not in use) and a 300G drive (all SCSI Ultra320 on
2016 Feb 11
2
safest way to mount iscsi loopback..
On 2/11/2016 5:14 AM, lejeczek wrote: > nobody does use iscsi loopback over an lvm? I'm not sure what 'iSCSI loopback' even means. iSCSI is used to mount a virtual block device hosted on another system (initiator mode) or to share a virtual block device (target mode), while loopback is used to mount a local file as a device, such as an .iso image of an optical disc. Can you
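
To make the distinction concrete, a small sketch with placeholder paths and IQNs:

  # loopback: expose a local file as a block device on the same machine
  losetup /dev/loop0 /srv/images/disk.img
  # iSCSI initiator: log in to a block device exported by another host
  iscsiadm -m node -T iqn.2016-02.com.example:vol1 -p 10.0.0.5 -l
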
2012 Nov 17
2
iSCSI Question
Hey everyone, Is anybody aware of a /true/ active/active multi-head and multi-target clustered iSCSI daemon? I.e.: Server 1: Hostname: host1.test.com IP Address: 10.0.0.1 Server 2: Hostname: host2.test.com IP Address: 10.0.0.2 They would then utilize a CLVM disk between them, let's call that VG "disk", and then directly map each LUN (1,2,3,4, etc.) to LVs named 1,2,3,4,... and
2008 Jun 25
6
dm-multipath use
Are folks in the CentOS community successfully using device-mapper-multipath? I am looking to deploy it for error handling on our iSCSI setup, but there seems to be little traffic about this package on the CentOS forums, as far as I can tell, and there seem to be a number of small issues based on my reading of the dm-multipath developer lists and related resources. -geoff Geoff Galitz Blankenheim
2011 Nov 03
5
Fully-Virtualized XEN domU not Booting over iSCSI
Hello, I am currently trying to move my VMs from running on local host storage to a shared storage (trying out iSCSI) but I am facing a bit of a booting dilemma. The domUs are a mix of paravirtualized and fully-virtualized VMs. They all boot and run like clockwork when on local storage. The paravirtualized domUs appear not to have a problem when I relocate and boot them from the shared storage
2009 Sep 07
3
iSCSI domU - introducing more stability
Hi there, during peak load on some running domUs, I noticed random iSCSI "Reported LUNs data has changed" messages, which forced me to shut down the respective domU, re-login to the target and do an fsck before starting the domU again. This occurred on a 16-core machine running only about 14 domUs. Spare memory has been occupied by dom0 (about 40G). Each domU has its own iSCSI target.
2011 Feb 02
1
iSCSI storage pool questions
Hi All, I've been trying to figure out the best way of using an iSCSI SAN with KVM and thanks to a helpful post by Tom Georgoulias that I found on this list (https://www.redhat.com/archives/libvirt-users/2010-May/msg00008.html), it appears I have a solution. What I'm wondering is the following: 1) If I use an iSCSI LUN as the storage pool (instead of creating an LVM VG from this iSCSI
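
For context, defining an iSCSI-backed storage pool directly (option 1 above) looks roughly like this, with a placeholder portal address and IQN; each LUN on the target then shows up as a volume that can be handed to a guest whole:

  virsh pool-define-as iscsipool iscsi --source-host 192.168.1.50 \
        --source-dev iqn.2011-02.com.example:kvmpool --target /dev/disk/by-path
  virsh pool-start iscsipool
  virsh pool-autostart iscsipool
  virsh vol-list iscsipool
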
2010 Jan 14
8
XCP - GFS - ISCSI
Hi everyone! I have 2 hosts + 1 iSCSI device. I want to create a shared storage repository that both hosts use together. I won't use NFS. Prepared SR: xe sr-create host-uuid=xxx content-type=user name-label=NAS1 shared=true type=iscsi device-config:target=xxxx device-config:targetIQN=xxxx The hosts see the iSCSI device: scsi4 : iSCSI Initiator over TCP/IP scsi 4:0:0:0: Direct-Access NAS
2009 Jan 15
8
Can you convert Windows LVM domU to sparse img file?
I have a Windows 2000 domU running in an LVM partition. I need to move it to another host, but none of my other Xen servers have LVM or free space to create an LVM volume. So I'd like to convert it to a sparse img file. The file system in the domU is NTFS. Can anyone suggest how to do this? Thanks, James
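
One commonly suggested approach, assuming the LV is /dev/vg0/win2k and the destination path is arbitrary (both are placeholders); either variant skips zero blocks so the resulting raw image stays sparse:

  # with qemu-img (raw output, zero blocks left as holes)
  qemu-img convert -O raw /dev/vg0/win2k /var/lib/xen/images/win2k.img
  # or with GNU dd
  dd if=/dev/vg0/win2k of=/var/lib/xen/images/win2k.img bs=1M conv=sparse
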
2013 Sep 25
3
Best Practice to remove an ISCSI LVM from a system
Hi, I'd like to know what would be the best way to remove an iSCSI LVM storage from a server (removing all references to that storage, etc.). The storage in question will be reset and reformatted and used on a different server, so no LVM export is needed. Do I have to do lvremove ..., vgremove ..., pvremove ... and then an iscsiadm -m node -T ... -p ... -u and iscsiadm -m node -o delete -T ...
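
A sketch of that sequence with placeholder mount point, VG, device, target IQN, and portal (the real values would come from the server in question):

  umount /mnt/iscsi_data
  lvremove /dev/vg_iscsi/lv_data
  vgremove vg_iscsi
  pvremove /dev/sdc
  # log out of the session, then delete the node record
  iscsiadm -m node -T iqn.2013-09.com.example:store1 -p 192.168.1.100 -u
  iscsiadm -m node -o delete -T iqn.2013-09.com.example:store1 -p 192.168.1.100
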
2009 Jun 18
12
Best way to use iSCSI in domU
Hello, We need to use iSCSI in some of our domUs. At the moment, iSCSI is not for the system filesystem, but for a data filesystem. I am wondering what is the best way to use it. Is it better to configure it in dom0 and then attach the device to the domU? Or is it better to configure it directly in the domU? I am thinking that if we configure it in the dom0, then we can't share that iSCSI
2015 Jan 13
3
[PATCH] mkfs: add 'label' optional argument
Add the 'label' optional argument to the mkfs action, so it is possible to set a filesystem label directly when creating it. There may be filesystems that do not support changing the label of an existing filesystem but only setting it at creation time, so this new optarg will help. Implement it for the most common filesystems (ext*, fat, ntfs, btrfs, xfs), giving an error for all the others, just
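
For reference, the per-filesystem label flags that such an optarg would typically map to look like this (the device path is a placeholder):

  mkfs.ext4  -L mydata /dev/sda1
  mkfs.xfs   -L mydata /dev/sda1
  mkfs.btrfs -L mydata /dev/sda1
  mkfs.ntfs  -L mydata /dev/sda1
  mkfs.vfat  -n MYDATA /dev/sda1
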
2008 Jun 18
1
mkfs.ocfs2: double free or corruption
Dear Sirs, I get this error when running "mkfs.ocfs2": ================================================================================= # mkfs.ocfs2 -b 4K -C 32K -N 255 -L backup_ocfs2_001 /dev/sdb1 mkfs.ocfs2 1.2.7 Filesystem label=backup_ocfs2_001 Block size=4096 (bits=12) Cluster size=32768 (bits=15) Volume size=6000488677376 (183120382 clusters) (1464963056 blocks) 5678 cluster
2015 Jan 10
3
LVM - pvmove and multiple servers
Hi All. Looking for some guidance/experience with LVM and pvmove. I have a LUN/PV being presented from an iSCSI SAN. The LUN/PV is presented to 5 servers as a shared VG; they all have LVs they use for data, and they are all connected via iSCSI. As the SAN I am using is being replaced, I need to move onto a new unit. My migration strategy at this time is to 1. Present a new LUN from the new SAN
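
A sketch of the LVM-level move, with placeholder device and VG names; note that with a VG shared by five servers this would need to be run on one node with clustered locking (clvmd) active, or with the other initiators quiesced:

  pvcreate /dev/mapper/newsan_lun0
  vgextend vg_shared /dev/mapper/newsan_lun0
  pvmove /dev/mapper/oldsan_lun0 /dev/mapper/newsan_lun0
  vgreduce vg_shared /dev/mapper/oldsan_lun0
  pvremove /dev/mapper/oldsan_lun0
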
2010 Jan 02
27
Pool import with failed ZIL device now possible ?
Hello list, someone (actually Neil Perrin (CC)) mentioned in this thread: http://mail.opensolaris.org/pipermail/zfs-discuss/2009-December/034340.html that it should be possible to import a pool with failed log devices (with or without data loss?). > Has the following error no consequences? > Bug ID 6538021 > Synopsis: Need a way to force pool startup when
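
On later ZFS releases a missing-log import is exposed directly; a hedged example with a placeholder pool and device name (the -m flag may not exist in the build being discussed here):

  # import while tolerating a missing or failed log device
  zpool import -m tank
  # once imported, the dead log vdev can be removed
  zpool remove tank c1t2d0
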