Displaying 12 results from an estimated 12 matches for "ndisks".
2017 Apr 20
1
[PATCH] tests: Replace test-max-disks with several tests.
...;
+#include <sys/time.h>
+#include <sys/resource.h>
+#include <assert.h>
+
+#include <guestfs.h>
+#include "guestfs-internal-frontend.h"
+
+#include "getprogname.h"
+
+static ssize_t get_max_disks (guestfs_h *g);
+static void do_test (guestfs_h *g, size_t ndisks, bool just_add);
+static void make_disks (const char *tmpdir);
+static void rm_disks (void);
+
+static size_t ndisks;
+static char **disks;
+
+static void __attribute__((noreturn))
+usage (int status)
+{
+ if (status != EXIT_SUCCESS)
+ fprintf (stderr, "Try ‘%s --help’ for more information...
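A minimal sketch, not taken from the patch, of what such a test loop might do with the public libguestfs API (the function name add_and_count is invented here): add a number of scratch disks, launch the appliance, and check that they are all visible.
/* Sketch only, not the patch: add `n` scratch disks and verify the
 * appliance sees them.  Uses only public libguestfs calls.
 */
#include <stdlib.h>
#include <guestfs.h>
static int
add_and_count (guestfs_h *g, size_t n)
{
  size_t i;
  char **devices;
  for (i = 0; i < n; ++i) {
    /* 1 GB sparse scratch disk, thrown away when the handle closes. */
    if (guestfs_add_drive_scratch (g, 1024 * 1024 * 1024, -1) == -1)
      return -1;
  }
  if (guestfs_launch (g) == -1)
    return -1;
  devices = guestfs_list_devices (g);
  if (devices == NULL)
    return -1;
  for (i = 0; devices[i] != NULL; ++i)
    free (devices[i]);
  free (devices);
  return i == n ? 0 : -1;
}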
2005 Dec 07
2
raidz mismatched disk sizes
Is it possible to build a raidz with different disk sizes? If so, are you limited to the
size of the smallest disk (*ndisks)?
-frank
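(Not part of the original thread, but for context: a raidz vdev treats every member as if it were the size of its smallest disk, so usable space is roughly (ndisks - parity) * smallest. For example, raidz1 over 250 GB, 500 GB and 750 GB disks gives about (3 - 1) * 250 GB = 500 GB; the extra space on the larger disks goes unused.)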
2020 Oct 26
1
Re: unable to migrate: virPortAllocatorSetUsed:299 : internal error: Failed to reserve port 49153
...t:
>
> https://gitlab.com/libvirt/libvirt/-/commit/e74d627bb3b
>
> The problem is, if you have two or more disks that need to be copied
> over to the destination, the @server_started variable is not set after
> the first iteration of the "for (i = 0; i < vm->def->ndisks; i++)" loop.
> I think this should be the fix:
>
Actually, you will need a second patch too. Here's the series:
https://www.redhat.com/archives/libvir-list/2020-October/msg01358.html
Michal
2018 May 16
3
[PATCH] tests: Increase appliance memory when testing 256+ disks.
Currently the tests fail on x86 with recent kernels:
FAIL: test-255-disks.sh
This confused me for a while because our other test program
(utils/max-disks/max-disks.pl) reports that it should be possible to
add 255 disks.
Well it turns out that the default amount of appliance memory is
sufficient if you're just adding disks, but if you try to add _and_
partition those disks there's
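A rough sketch, not taken from the patch, of why the memory bump matters: raise the appliance memory before launch, then partition every attached disk after launch (the function name and the factor of 2 are illustrative only).
/* Sketch only: assumes the caller has already added the drives. */
#include <stdlib.h>
#include <guestfs.h>
static int
partition_all_disks (guestfs_h *g)
{
  int m, r = 0;
  char **devices;
  size_t i;
  /* The default memsize is enough to detect hundreds of disks, but
   * not to partition them all as well, so give the appliance more.
   */
  m = guestfs_get_memsize (g);
  if (m == -1 || guestfs_set_memsize (g, m * 2) == -1)
    return -1;
  if (guestfs_launch (g) == -1)
    return -1;
  devices = guestfs_list_devices (g);
  if (devices == NULL)
    return -1;
  for (i = 0; devices[i] != NULL; ++i) {
    if (guestfs_part_disk (g, devices[i], "mbr") == -1)
      r = -1;
    free (devices[i]);
  }
  free (devices);
  return r;
}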
2013 Apr 17
10
xl network-attach SEGV in 4.2 and 4.1
Hi all,
4.2 and 4.1 suffer from a SEGV during xl network-attach in
libxl__device_nic_add. In 4.3-unstable it is fixed by:
5420f2650 libxl: Set vfb and vkb devid if not done so by the caller
So either the patch needs to be backported to 4.1 and 4.2, or it can be fixed by this one:
------
libxl: Fix SEGV in network-attach
When "device/vif" directory
2020 Oct 12
3
unable to migrate: virPortAllocatorSetUsed:299 : internal error: Failed to reserve port 49153
On libvirt 6.8.0 and qemu 5.1.0, when trying to live migrate, the error
"internal error: Failed to reserve port" is received and the
migration does not succeed:
virsh # migrate cartridge qemu+tls://ratchet.lan/system --live
--persistent --undefinesource --copy-storage-all --verbose
error: internal error: Failed to reserve port 49153
virsh #
On target host with debug logs, nothing
2018 May 16
0
[PATCH] tests: Increase appliance memory when testing 256+ disks.
...++ b/tests/disks/test-add-disks.c
@@ -98,6 +98,7 @@ main (int argc, char *argv[])
guestfs_h *g;
char *tmpdir;
ssize_t n = -1; /* -1: not set 0: max > 0: specific value */
+ int m;
g = guestfs_create ();
if (g == NULL)
@@ -158,6 +159,14 @@ main (int argc, char *argv[])
}
ndisks = n;
+ /* Increase memory available to the appliance. On x86 the default
+ * is not enough to both detect and partition 256 disks.
+ */
+ m = guestfs_get_memsize (g);
+ if (m == -1 ||
+ guestfs_set_memsize (g, m * 20 / 5) == -1)
+ error (EXIT_FAILURE, 0, "get or set memsize f...
2020 Oct 26
0
Re: unable to migrate: virPortAllocatorSetUsed:299 : internal error: Failed to reserve port 49153
...roduced by the following commit:
https://gitlab.com/libvirt/libvirt/-/commit/e74d627bb3b
The problem is, if you have two or more disks that need to be copied
over to the destination, the @server_started variable is not set after
the first iteration of the "for (i = 0; i < vm->def->ndisks; i++)" loop.
I think this should be the fix:
diff --git i/src/qemu/qemu_migration.c w/src/qemu/qemu_migration.c
index 2f5d61f8e7..6f764b0c73 100644
--- i/src/qemu/qemu_migration.c
+++ w/src/qemu/qemu_migration.c
@@ -479,9 +479,11 @@ qemuMigrationDstStartNBDServer(virQEMUDriverPtr driver,...
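The diff above is cut off by the archive; as a simplified, self-contained illustration of the pattern being fixed (not libvirt's actual code; start_nbd_server and export_disk are stand-ins), the point is that the flag must be set inside the loop, otherwise the second disk tries to reserve the same port again:
/* Simplified illustration only -- NOT libvirt code. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
static int
start_nbd_server (int port)           /* stand-in for the real call */
{
  printf ("reserving port %d and starting the NBD server\n", port);
  return 0;
}
static int
export_disk (size_t i)                /* stand-in for the real call */
{
  printf ("exporting disk %zu over the running server\n", i);
  return 0;
}
int
main (void)
{
  const size_t ndisks = 3;
  bool server_started = false;
  size_t i;
  for (i = 0; i < ndisks; i++) {
    if (!server_started) {
      if (start_nbd_server (49153) < 0)
        return 1;
      server_started = true;          /* the assignment the fix adds */
    }
    if (export_disk (i) < 0)
      return 1;
  }
  return 0;
}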
2018 May 16
1
Re: [PATCH] tests: Increase appliance memory when testing 256+ disks.
...@ main (int argc, char *argv[])
> guestfs_h *g;
> char *tmpdir;
> ssize_t n = -1; /* -1: not set 0: max > 0: specific value */
> + int m;
>
> g = guestfs_create ();
> if (g == NULL)
> @@ -158,6 +159,14 @@ main (int argc, char *argv[])
> }
> ndisks = n;
>
> + /* Increase memory available to the appliance. On x86 the default
> + * is not enough to both detect and partition 256 disks.
> + */
> + m = guestfs_get_memsize (g);
> + if (m == -1 ||
> + guestfs_set_memsize (g, m * 20 / 5) == -1)
> + error (EXI...
2011 Feb 23
0
[PATCH 1/2] libvirt/qemu : allow persistent modification of disks via A(De)ttachDeviceFlags
...0f25a2a..703f86a 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4082,16 +4082,174 @@ cleanup:
return ret;
}
+static int qemuDomainFindDiskByName(virDomainDefPtr vmdef, const char *name)
+{
+ virDomainDiskDefPtr vdisk;
+ int i;
+
+ for (i = 0; i < vmdef->ndisks; i++) {
+ vdisk = vmdef->disks[i];
+ if (!strcmp(vdisk->dst, name))
+ return i;
+ }
+ return -1;
+}
+/*
+ * Attach a device given by XML, the change will be persistent
+ * and domain XML definition file is updated.
+ */
+static int qemuDomainAttachDevicePersiste...
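The patch is truncated above; as a hypothetical sketch (not from the patch) of how such a lookup is typically used when a disk is detached from the persistent definition, the matching entry is found by target name and removed from the vmdef->disks array (virDomainDiskDefFree is assumed to come from libvirt's domain_conf.h):
/* Hypothetical sketch, not from the patch. */
#include <string.h>
static int
qemuDomainRemoveDiskByName (virDomainDefPtr vmdef, const char *name)
{
    int i = qemuDomainFindDiskByName (vmdef, name);
    if (i < 0)
        return -1;                          /* no disk with that target */
    virDomainDiskDefFree (vmdef->disks[i]); /* assumed libvirt helper */
    /* Close the gap in the array and shrink the count. */
    memmove (vmdef->disks + i, vmdef->disks + i + 1,
             sizeof (*vmdef->disks) * (vmdef->ndisks - i - 1));
    vmdef->ndisks--;
    return 0;
}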
2011 Apr 21
7
[PATCHv11 0/6] libvirt/qemu - persistent modification of devices
Here is v11. Fixed comments/bugs and updated against the latest libvirt.git.
Changes v10->v11:
- fixed comments on each patch
- fixed cgroup handling in patch 3.
- fixed MODIFY_CURRENT handling in patch 4.
most of diff comes from refactoring qemu/qemu_driver.c
--
conf/domain_conf.c | 40 ++
conf/domain_conf.h | 5
libvirt_private.syms | 3
qemu/qemu_driver.c | 727
2007 Jul 07
17
Raid-Z expansion
Apologies for the blank message (if it came through).
I have heard here and there that there might be in development a plan
to make it such that a raid-z can grow its "raid-z'ness" to
accommodate a new disk added to it.
Example:
I have 4 disks in a raid-z[12] configuration. I am uncomfortably low on
space, and would like to add a 5th disk. The idea is to pop in disk 5
and have