Displaying 20 results from an estimated 31656 matches for "diske".
2014 Mar 12
2
OT: missing /dev paths
Looking for help, kind of in a hurry. I've been searching google but not
finding any options.
Is there any way to fix missing /dev paths to LUNs without rebooting?
For example, see the output from lsscsi below. The only way I know to
fix this is with a reboot, but I REALLY need to avoid that if possible.
Thanks
James
[2:0:1:150] disk DataCore Virtual Disk DCS -
[2:0:1:151]
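A SCSI host rescan usually brings such paths back without a reboot; a
minimal sketch, assuming the HBA is host2 as in the lsscsi output above
(the LUN address is an example):

    # rescan all channels/targets/LUNs on host2
    echo "- - -" > /sys/class/scsi_host/host2/scan
    # or refresh one already-known device, e.g. 2:0:1:150
    echo 1 > /sys/class/scsi_device/2:0:1:150/device/rescan
    lsscsi    # verify the /dev node is back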
2016 Aug 17
2
[PATCH 06/15] genhd: Add return code to device_add_disk
On Wed, 17 Aug 2016 15:15:06 +0800
Fam Zheng <famz at redhat.com> wrote:
> @@ -613,10 +614,8 @@ void device_add_disk(struct device *parent, struct gendisk *disk)
> disk->flags |= GENHD_FL_UP;
>
> retval = blk_alloc_devt(&disk->part0, &devt);
> - if (retval) {
> - WARN_ON(1);
> - return;
> - }
> + if (retval)
> + goto fail;
>
2013 Dec 05
0
Re: correct way to hot-add cdrom ?
Alexandr wrote 2013-12-02 09:36:
> Good day to all. I have problems with my cdrom hot-add code. Currently
> I am using virDomainAttachDevice with type=file, device=cdrom, dev=hdc;
> this code works for a machine with one IDE hdd and one IDE cdrom, but it
> does not work for a machine with only one IDE hdd, and I am looking for
> a solution to hot-add a cdrom to a machine independent of its existing
2013 Dec 02
2
correct way to hot-add cdrom ?
Good day to all. I have problems with my cdrom hot-add code. Currently I
am using virDomainAttachDevice with type=file, device=cdrom, dev=hdc; this
code works for a machine with one IDE hdd and one IDE cdrom, but it does
not work for a machine with only one IDE hdd. I am looking for a solution
to hot-add a cdrom to a machine independent of its existing devices, or a
way to determine which target device
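Hot-adding a cdrom this way generally comes down to handing
virDomainAttachDevice (or virsh attach-device) a complete <disk> element;
a minimal sketch, where the guest name, ISO path, and target dev are
placeholders:

    # cdrom.xml
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/boot.iso'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
    </disk>

    virsh attach-device guest1 cdrom.xml --live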
2019 Apr 26
2
5.2.0 xen and maxGrantFrames
Hi
libvirt 5.2.0 should support the maxGrantFrames setting for xen (per the
changelog). I always get an error when I use it in the config:
<domain type='xen'>
<name>satan.chao5.int</name>
<uuid>f1f96b1c-fb75-4707-afb7-604d696d29cc</uuid>
<memory unit='KiB'>3145728</memory>
<currentMemory unit='KiB'>3145728</currentMemory>
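The setting added in 5.2.0 hangs off the xenbus controller rather than a
top-level domain element; a minimal sketch of the relevant fragment (the
value 64 is an example):

    <devices>
      <controller type='xenbus' maxGrantFrames='64'/>
    </devices>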
2017 Apr 20
1
[PATCH] tests: Replace test-max-disks with several tests.
Replace the monolithic 'test-max-disks.pl' script with a test program
written in C. The program is equivalent to the old script, except that it
can also detect disks added to the appliance in the wrong order.
The tests themselves are split out into some shell scripts:
- test-27-disks: Fully tests 27 disks.
This is the minimum supported
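The same kind of check can be reproduced by hand with guestfish; a
minimal sketch, assuming qemu-img and guestfish are available (file names
and sizes are placeholders, not the actual test code):

    for i in $(seq 1 27); do qemu-img create -f qcow2 d$i.qcow2 1M; done
    guestfish $(for i in $(seq 1 27); do printf ' -a d%s.qcow2' $i; done) \
        run : list-devices    # devices should list in the order added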
2016 Aug 17
20
[PATCH 00/15] Fix issue with KOBJ_ADD uevent versus disk attributes
This is an attempt to fix the issue that some disks' sysfs attributes are not
ready at the time the disk's KOBJ_ADD uevent is sent.
The symptom is during device hotplug, udev may fail to find certain attributes,
such as serial or wwn, of the disk. As a result the /dev/disk/by-id entries are
not created.
The cause is that device_add_disk emits the uevent before returning, while
the callers have to create
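The symptom is easy to confirm from userspace; a quick check (the device
node is an example):

    udevadm info --query=property /dev/vda | grep -E 'ID_SERIAL|ID_WWN'
    ls -l /dev/disk/by-id/    # entries are missing when the race is hit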
2018 May 16
3
[PATCH] tests: Increase appliance memory when testing 256+ disks.
Currently the tests fail on x86 with recent kernels:
FAIL: test-255-disks.sh
This confused me for a while because our other test program
(utils/max-disks/max-disks.pl) reports that it should be possible to
add 255 disks.
Well it turns out that the default amount of appliance memory is
sufficient if you're just adding disks, but if you try to add _and_
partition those disks there's
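The appliance memory can also be raised without patching anything, via the
standard libguestfs override; a minimal sketch (the value is an example):

    export LIBGUESTFS_MEMSIZE=768    # appliance memory in MB
    ./test-255-disks.sh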
2007 May 31
3
zfs boot error recovery
hi all,
I would like to ask some questions regarding best practices for zfs
recovery if disk errors occur.
Currently I have zfs boot (nv62) and the following setup:
2 si3224 controllers (each with 4 sata disks)
8 sata disks, same size, same type
I have two pools:
a) rootpool
b) datapool
The rootpool is a mirrored pool, where every disk has a slice (s0,
which is 5% of the whole disk) and this
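Recovery for a layout like this mostly reduces to a couple of zpool
commands; a minimal sketch (device names are placeholders):

    zpool status rootpool                      # spot read/write/cksum errors
    zpool replace rootpool c1t0d0s0 c2t0d0s0   # swap in a good slice; resilver follows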
2005 May 20
3
How NOT to have a disk recognized by grub?
Greetings:
I'm upgrading a fileserver running 3.4 (upgrade to a larger disk). I
backed up the data from the "old" disk and slapped in a newer, larger
disk and installed Centos-3.4. No problems.
Now, there are some files on the "old" disk that I forgot to move to the
back-up disk, so I'd like to mount the "old" disk as /dev/hdd and reboot
the system and
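With the grub legacy shipped in that era, the device map decides which
disks grub knows about, and mounting the old disk needs no grub
involvement at all; a minimal sketch (paths are examples):

    # /boot/grub/device.map -- list only the disk grub should boot from
    (hd0) /dev/hda

    mount /dev/hdd1 /mnt/old    # then copy the forgotten files across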
2009 Aug 04
6
[Q] how can the O.S. predict that a disk is going to fail?
We have CentOS 4.X on a Dell server, and one of the virtual disks is built from 4 disks configured as RAID5 (plus one more disk as a hot spare). I saw the /var/log/messages file has:
Aug 4 06:27:02 host1 Server Administrator: Storage Service EventID: 2094 Predictive Failure reported: Physical Disk 1:5 Controller 0, Connector 1
Aug 4 06:27:02 host1 Server Administrator: Storage Service EventID: 2051 Physical
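These warnings come from SMART predictive-failure data, which can also be
queried directly; a minimal sketch (the megaraid unit number is a
placeholder that depends on the controller layout):

    smartctl -H -a /dev/sda                  # plain disk
    smartctl -H -a -d megaraid,5 /dev/sda    # disk behind a PERC/MegaRAID controller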
2016 Aug 17
2
[PATCH 06/15] genhd: Add return code to device_add_disk
On Wed, 17 Aug 2016 16:48:23 +0800
Fam Zheng <famz at redhat.com> wrote:
> On Wed, 08/17 10:49, Cornelia Huck wrote:
> > On Wed, 17 Aug 2016 15:15:06 +0800
> > Fam Zheng <famz at redhat.com> wrote:
> >
> > > @@ -613,10 +614,8 @@ void device_add_disk(struct device *parent, struct gendisk *disk)
> > > disk->flags |= GENHD_FL_UP;
> >
2012 Nov 30
13
Remove disk
Hi all,
I would like to know if with ZFS it's possible to do something like this:
http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html
meaning:
I have a zpool with 48 disks in 4 raidz2 vdevs (12 disks each). Of those
48 disks, I have 36x 3T and 12x 2T.
Can I buy 12 new 4T disks, put them in the server, add them to the zpool,
and ask zpool to migrate all data from those 12 old disks onto the new and
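Top-level raidz vdevs cannot be removed or shrunk, but the small disks can
be swapped in place; a minimal sketch (pool and device names are
placeholders):

    zpool set autoexpand=on tank
    zpool replace tank c0t12d0 c0t48d0    # repeat for each of the 12x 2T disks
    # after the last resilver, each raidz2 grows to the new disk size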
2016 Jun 30
17
[PATCH v2 00/12] gendisk: Generate uevent after attribute available
The race condition is noticed between disk_add() and disk attributes, on
virtio-blk hotplug.
Userspace listens to the KOBJ_ADD uevent generated in add_disk(). At that
point we haven't created the serial attribute file, therefore depending
on how fast udev reacts, the /dev/disk/by-id/ entry doesn't always get
created.
As pointed out by Christoph Hellwig in the specific fix [1], virtio-blk
2013 Jun 26
2
Re: snapshot-create-as for a single disk not all disks
try snapshot-create-as like below:
virsh snapshot-create-as vm --disk-only --diskspec "vda,snapshot=external"
2013/6/25 cmcc.dylan <dx10years@126.com>
>
> Hi, everyone,
> I have found the API snapshotCreateXML() can create a snapshot for a
> virtual machine, and the xml configuration file - snapshot.xml is as follows:
> <domainsnapshot>
>
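To snapshot only one disk of a multi-disk guest, the remaining disks can
be excluded explicitly; a minimal sketch (disk targets are examples):

    virsh snapshot-create-as vm snap1 --disk-only \
        --diskspec vda,snapshot=external \
        --diskspec vdb,snapshot=no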
2008 Nov 19
7
Upgrading from a single disk.
Suppose I have a single ZFS pool on a single disk;
I want to upgrade the system to use two different, larger disks
and I want to mirror.
Can I do something like:
- I start with disk #0
- add mirror on disk #1
(resilver)
- replace first disk (#0) with disk #2
(resilver)
Casper
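That sequence maps directly onto two zpool commands; a minimal sketch
(disk names are placeholders):

    zpool attach rpool disk0 disk1     # turns the pool into a mirror; wait for resilver
    zpool replace rpool disk0 disk2    # swap out the small disk; resilver again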