Stefano Stabellini
2011-Oct-27 16:19 UTC
[Xen-devel] [PATCH DOCDAY] introduce an xl man page in pod format
This is the initial version of an xl man page, based on the old xm man page. Almost every command implemented in xl should be present, a notable exception are the tmem commands that are currently missing. Further improvements and clarifications to this man page are very welcome. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> diff -r 39aa9b2441da docs/man/xl.pod.1 --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/docs/man/xl.pod.1 Thu Oct 27 15:59:03 2011 +0000 @@ -0,0 +1,805 @@ +=head1 NAME + +XL - Xen management tool, based on LibXenlight + +=head1 SYNOPSIS + +B<xl> I<subcommand> [I<args>] + +=head1 DESCRIPTION + +The B<xl> program is the new tool for managing Xen guest +domains. The program can be used to create, pause, and shutdown +domains. It can also be used to list current domains, enable or pin +VCPUs, and attach or detach virtual block devices. +The old B<xm> tool is deprecated and should not be used. + +The basic structure of every B<xl> command is almost always: + +=over 2 + +B<xl> I<subcommand> [I<OPTIONS>] I<domain-id> + +=back + +Where I<subcommand> is one of the subcommands listed below, I<domain-id> +is the numeric domain id, or the domain name (which will be internally +translated to domain id), and I<OPTIONS> are subcommand specific +options. There are a few exceptions to this rule in the cases where +the subcommand in question acts on all domains, the entire machine, +or directly on the Xen hypervisor. Those exceptions will be clear for +each of those subcommands. + +=head1 NOTES + +Most B<xl> operations rely upon B<xenstored> and B<xenconsoled>: make +sure you start the script B</etc/init.d/xencommons> at boot time to +initialize all the daemons needed by B<xl>. + +In the most common network configuration, you need to setup a bridge in dom0 +named B<xenbr0> in order to have a working network in the guest domains. +Please refer to the documentation of your Linux distribution to know how to +setup the bridge. + +Most B<xl> commands require root privileges to run due to the +communications channels used to talk to the hypervisor. Running as +non root will return an error. + +=head1 DOMAIN SUBCOMMANDS + +The following subcommands manipulate domains directly. As stated +previously, most commands take I<domain-id> as the first parameter. + +=over 4 + +=item B<create> [I<OPTIONS>] I<configfile> + +The create subcommand requires a config file: see L<xldomain.cfg> for +full details of that file format and possible options. + +I<configfile> can either be an absolute path to a file, or a relative +path to a file located in /etc/xen. + +Create will return B<as soon> as the domain is started. This B<does +not> mean the guest OS in the domain has actually booted, or is +available for input. + +B<OPTIONS> + +=over 4 + +=item B<-q>, B<--quiet> + +No console output. + +=item B<-f=FILE>, B<--defconfig=FILE> + +Use the given configuration file. + +=item B<-n>, B<--dryrun> + +Dry run - prints the resulting configuration in SXP but does not create +the domain. + +=item B<-p> + +Leave the domain paused after it is created. + +=item B<-c> + +Attach console to the domain as soon as it has started. This is +useful for determining issues with crashing domains. + +=back + +B<EXAMPLES> + +=over 4 + +=item I<with config file> + + xl create DebianLenny + +This creates a domain with the file /etc/xen/DebianLenny, and returns as +soon as it is run. + +=back + +=item B<console> I<domain-id> + +Attach to domain I<domain-id>''s console. If you''ve set up your domains to +have a traditional log in console this will look much like a normal +text log in screen. + +Use the key combination Ctrl+] to detach the domain console. + +=item B<vncviewer> [I<OPTIONS>] I<domain-id> + +Attach to domain''s VNC server, forking a vncviewer process. + +B<OPTIONS> + +=over 4 + +=item I<--autopass> + +Pass VNC password to vncviewer via stdin. + +=back + +=item B<destroy> I<domain-id> + +Immediately terminate the domain I<domain-id>. This doesn''t give the +domain OS any chance to react, and is the equivalent of ripping the +power cord out on a physical machine. In most cases you will want to +use the B<shutdown> command instead. + +=item B<domid> I<domain-name> + +Converts a domain name to a domain id. + +=item B<domname> I<domain-id> + +Converts a domain id to a domain name. + +=item B<rename> I<domain-id> I<new-name> + +Change the domain name of I<domain-id> to I<new-name>. + +=item B<dump-core> I<domain-id> [I<filename>] + +Dumps the virtual machine''s memory for the specified domain to the +I<filename> specified, without pausing the domain. The dump file will +be written to a distribution specific directory for dump files. Such +as: /var/lib/xen/dump or /var/xen/dump. + +=item B<help> [I<--long>] + +Displays the short help message (i.e. common commands). + +The I<--long> option prints out the complete set of B<xl> subcommands, +grouped by function. + +=item B<list> [I<OPTIONS>] [I<domain-id> ...] + +Prints information about one or more domains. If no domains are +specified it prints out information about all domains. + + +B<OPTIONS> + +=over 4 + +=item B<-l>, B<--long> + +The output for B<xl list> is not the table view shown below, but +instead presents the data in SXP compatible format. + +=item B<-Z>, B<--context> +Also prints the security labels. + +=item B<-v>, B<--verbose> + +Also prints the domain UUIDs, the shutdown reason and security labels. + +=back + +B<EXAMPLE> + +An example format for the list is as follows: + + Name ID Mem VCPUs State Time(s) + Domain-0 0 750 4 r----- 11794.3 + win 1 1019 1 r----- 0.3 + linux 2 2048 2 r----- 5624.2 + +Name is the name of the domain. ID the numeric domain id. Mem is the +desired amount of memory to allocate to the domain (although it may +not be the currently allocated amount). VCPUs is the number of +virtual CPUs allocated to the domain. State is the run state (see +below). Time is the total run time of the domain as accounted for by +Xen. + +B<STATES> + +The State field lists 6 states for a Xen domain, and which ones the +current domain is in. + +=over 4 + +=item B<r - running> + +The domain is currently running on a CPU. + +=item B<b - blocked> + +The domain is blocked, and not running or runnable. This can be caused +because the domain is waiting on IO (a traditional wait state) or has +gone to sleep because there was nothing else for it to do. + +=item B<p - paused> + +The domain has been paused, usually occurring through the administrator +running B<xl pause>. When in a paused state the domain will still +consume allocated resources like memory, but will not be eligible for +scheduling by the Xen hypervisor. + +=item B<s - shutdown> + +FIXME: Why would you ever see this state? + +=item B<c - crashed> + +The domain has crashed, which is always a violent ending. Usually +this state can only occur if the domain has been configured not to +restart on crash. See L<xldomain.cfg> for more info. + +=item B<d - dying> + +The domain is in process of dying, but hasn''t completely shutdown or +crashed. + +FIXME: Is this right? + +=back + +B<NOTES> + +=over 4 + +The Time column is deceptive. Virtual IO (network and block devices) +used by domains requires coordination by Domain0, which means that +Domain0 is actually charged for much of the time that a DomainU is +doing IO. Use of this time value to determine relative utilizations +by domains is thus very suspect, as a high IO workload may show as +less utilized than a high CPU workload. Consider yourself warned. + +=back + +=item B<mem-max> I<domain-id> I<mem> + +Specify the maximum amount of memory the domain is able to use, appending ''t'' +for terabytes, ''g'' for gigabytes, ''m'' for megabytes, ''k'' for kilobytes and ''b'' +for bytes. + +The mem-max value may not correspond to the actual memory used in the +domain, as it may balloon down its memory to give more back to the OS. + +=item B<mem-set> I<domain-id> I<mem> + +Set the domain''s used memory using the balloon driver; append ''t'' for +terabytes, ''g'' for gigabytes, ''m'' for megabytes, ''k'' for kilobytes and ''b'' for +bytes. + +Because this operation requires cooperation from the domain operating +system, there is no guarantee that it will succeed. This command will +definitely not work unless the domain has the required paravirt +driver. + +B<Warning:> There is no good way to know in advance how small of a +mem-set will make a domain unstable and cause it to crash. Be very +careful when using this command on running domains. + +=item B<migrate> [I<OPTIONS>] I<domain-id> I<host> + +Migrate a domain to another host machine. By default B<xl> relies on ssh as a +transport mechanism between the two hosts. + +B<OPTIONS> + +=over 4 + +=item B<-s> I<sshcommand> + +Use <sshcommand> instead of ssh. String will be passed to sh. If empty, run +<host> instead of ssh <host> xl migrate-receive [-d -e]. + +=item B<-e> + +On the new host, do not wait in the background (on <host>) for the death of the +domain. + +=item B<-C> I<config> + +Send <config> instead of config file from creation. + +=back + +=item B<pause> I<domain-id> + +Pause a domain. When in a paused state the domain will still consume +allocated resources such as memory, but will not be eligible for +scheduling by the Xen hypervisor. + +=item B<reboot> [I<OPTIONS>] I<domain-id> + +Reboot a domain. This acts just as if the domain had the B<reboot> +command run from the console. The command returns as soon as it has +executed the reboot action, which may be significantly before the +domain actually reboots. + +The behavior of what happens to a domain when it reboots is set by the +B<on_reboot> parameter of the xldomain.cfg file when the domain was +created. + +=item B<restore> [I<OPTIONS>] [I<ConfigFile>] I<CheckpointFile> + +Build a domain from an B<xl save> state file. See B<save> for more info. + +B<OPTIONS> + +=over 4 + +=item B<-p> + +Do not unpause domain after restoring it. + +=item B<-e> + +Do not wait in the background for the death of the domain on the new host. + +=item B<-d> + +Enable debug messages. + +=back + +=item B<save> [I<OPTIONS>] I<domain-id> I<CheckpointFile> [I<ConfigFile>] + +Saves a running domain to a state file so that it can be restored +later. Once saved, the domain will no longer be running on the +system, unless the -c option is used. +B<xl restore> restores from this checkpoint file. +Passing a config file argument allows the user to manually select the VM config +file used to create the domain. + + +=over 4 + +=item B<-c> + +Leave domain running after creating the snapshot. + +=back + + +=item B<shutdown> [I<OPTIONS>] I<domain-id> + +Gracefully shuts down a domain. This coordinates with the domain OS +to perform graceful shutdown, so there is no guarantee that it will +succeed, and may take a variable length of time depending on what +services must be shutdown in the domain. The command returns +immediately after signally the domain unless that B<-w> flag is used. + +The behavior of what happens to a domain when it reboots is set by the +B<on_shutdown> parameter of the xldomain.cfg file when the domain was +created. + +B<OPTIONS> + +=over 4 + +=item B<-w> + +Wait for the domain to complete shutdown before returning. + +=back + +=item B<sysrq> I<domain-id> I<letter> + +Send a I<Magic System Request> signal to the domain. For more +information on available magic sys req operations, see sysrq.txt in +your Linux Kernel sources. + +=item B<unpause> I<domain-id> + +Moves a domain out of the paused state. This will allow a previously +paused domain to now be eligible for scheduling by the Xen hypervisor. + +=item B<vcpu-set> I<domain-id> I<vcpu-count> + +Enables the I<vcpu-count> virtual CPUs for the domain in question. +Like mem-set, this command can only allocate up to the maximum virtual +CPU count configured at boot for the domain. + +If the I<vcpu-count> is smaller than the current number of active +VCPUs, the highest number VCPUs will be hotplug removed. This may be +important for pinning purposes. + +Attempting to set the VCPUs to a number larger than the initially +configured VCPU count is an error. Trying to set VCPUs to < 1 will be +quietly ignored. + +Because this operation requires cooperation from the domain operating +system, there is no guarantee that it will succeed. This command will +not work with a full virt domain. + +=item B<vcpu-list> [I<domain-id>] + +Lists VCPU information for a specific domain. If no domain is +specified, VCPU information for all domains will be provided. + +=item B<vcpu-pin> I<domain-id> I<vcpu> I<cpus> + +Pins the VCPU to only run on the specific CPUs. The keyword +B<all> can be used to apply the I<cpus> list to all VCPUs in the +domain. + +Normally VCPUs can float between available CPUs whenever Xen deems a +different run state is appropriate. Pinning can be used to restrict +this, by ensuring certain VCPUs can only run on certain physical +CPUs. + +=item B<button-press> I<domain-id> I<button> + +Indicate an ACPI button press to the domain. I<button> is may be ''power'' or +''sleep''. + +=item B<trigger> I<domain-id> I<nmi|reset|init|power|sleep> [I<VCPU>] + +Send a trigger to a domain, where the trigger can be: nmi, reset, init, power +or sleep. Optionally a specific vcpu number can be passed as an argument. + +=item B<getenforce> + +Returns the current enforcing mode of the Flask Xen security module. + +=item B<setenforce> I<1|0|Enforcing|Permissive> + +Sets the current enforcing mode of the Flask Xen security module + +=item B<loadpolicy> I<policyfile> + +Loads a new policy int the Flask Xen security module. + +=back + +=head1 XEN HOST SUBCOMMANDS + +=over 4 + +=item B<debug-keys> I<keys> + +Send debug I<keys> to Xen. + +=item B<dmesg> [B<-c>] + +Reads the Xen message buffer, similar to dmesg on a Linux system. The +buffer contains informational, warning, and error messages created +during Xen''s boot process. If you are having problems with Xen, this +is one of the first places to look as part of problem determination. + +B<OPTIONS> + +=over 4 + +=item B<-c>, B<--clear> + +Clears Xen''s message buffer. + +=back + +=item B<info> [B<-n>, B<--numa>] + +Print information about the Xen host in I<name : value> format. When +reporting a Xen bug, please provide this information as part of the +bug report. + +Sample output looks as follows (lines wrapped manually to make the man +page more readable): + + host : talon + release : 2.6.12.6-xen0 + version : #1 Mon Nov 14 14:26:26 EST 2005 + machine : i686 + nr_cpus : 2 + nr_nodes : 1 + cores_per_socket : 1 + threads_per_core : 1 + cpu_mhz : 696 + hw_caps : 0383fbff:00000000:00000000:00000040 + total_memory : 767 + free_memory : 37 + xen_major : 3 + xen_minor : 0 + xen_extra : -devel + xen_caps : xen-3.0-x86_32 + xen_scheduler : credit + xen_pagesize : 4096 + platform_params : virt_start=0xfc000000 + xen_changeset : Mon Nov 14 18:13:38 2005 +0100 + 7793:090e44133d40 + cc_compiler : gcc version 3.4.3 (Mandrakelinux + 10.2 3.4.3-7mdk) + cc_compile_by : sdague + cc_compile_domain : (none) + cc_compile_date : Mon Nov 14 14:16:48 EST 2005 + xend_config_format : 4 + +B<FIELDS> + +Not all fields will be explained here, but some of the less obvious +ones deserve explanation: + +=over 4 + +=item B<hw_caps> + +A vector showing what hardware capabilities are supported by your +processor. This is equivalent to, though more cryptic, the flags +field in /proc/cpuinfo on a normal Linux machine. + +=item B<free_memory> + +Available memory (in MB) not allocated to Xen, or any other domains. + +=item B<xen_caps> + +The Xen version and architecture. Architecture values can be one of: +x86_32, x86_32p (i.e. PAE enabled), x86_64, ia64. + +=item B<xen_changeset> + +The Xen mercurial changeset id. Very useful for determining exactly +what version of code your Xen system was built from. + +=back + +B<OPTIONS> + +=over 4 + +=item B<-n>, B<--numa> + +List host NUMA topology information + +=back + +=item B<top> + +Executes the B<xentop> command, which provides real time monitoring of +domains. Xentop is a curses interface, and reasonably self +explanatory. + +=item B<uptime> + +Prints the current uptime of the domains running. + +=item B<pci-list-assignable-devices> + +List all the assignable PCI devices. + +=back + +=head1 SCHEDULER SUBCOMMANDS + +Xen ships with a number of domain schedulers, which can be set at boot +time with the B<sched=> parameter on the Xen command line. By +default B<credit> is used for scheduling. + +=over 4 + +=item B<sched-credit> [ B<-d> I<domain-id> [ B<-w>[B<=>I<WEIGHT>] | B<-c>[B<=>I<CAP>] ] ] + +Set credit scheduler parameters. The credit scheduler is a +proportional fair share CPU scheduler built from the ground up to be +work conserving on SMP hosts. + +Each domain (including Domain0) is assigned a weight and a cap. + +B<PARAMETERS> + +=over 4 + +=item I<WEIGHT> + +A domain with a weight of 512 will get twice as much CPU as a domain +with a weight of 256 on a contended host. Legal weights range from 1 +to 65535 and the default is 256. + +=item I<CAP> + +The cap optionally fixes the maximum amount of CPU a domain will be +able to consume, even if the host system has idle CPU cycles. The cap +is expressed in percentage of one physical CPU: 100 is 1 physical CPU, +50 is half a CPU, 400 is 4 CPUs, etc. The default, 0, means there is +no upper cap. + +=back + +=back + +=head1 CPUPOOLS COMMANDS + +Xen can group the physical cpus of a server in cpu-pools. Each physical CPU is +assigned at most to one cpu-pool. Domains are each restricted to a single +cpu-pool. Scheduling does not cross cpu-pool boundaries, so each cpu-pool has +an own scheduler. +Physical cpus and domains can be moved from one pool to another only by an +explicit command. + +=over 4 + +=item B<cpupool-create> [I<OPTIONS>] I<ConfigFile> + +Create a cpu pool based an I<ConfigFile>. + +B<OPTIONS> + +=over 4 + +=item B<-f=FILE>, B<--defconfig=FILE> + +Use the given configuration file. + +=item B<-n>, B<--dryrun> + +Dry run - prints the resulting configuration. + +=back + +=item B<cpupool-list> [I<-c|--cpus> I<cpu-pool>] + +List CPU pools on the host. +If I<-c> is specified, B<xl> prints a list of CPUs used by I<cpu-pool>. + +=item B<cpupool-destroy> I<cpu-pool> + +Deactivates a cpu pool. + +=item B<cpupool-rename> I<cpu-pool> <newname> + +Renames a cpu pool to I<newname>. + +=item B<cpupool-cpu-add> I<cpu-pool> I<cpu-nr|node-nr> + +Adds a cpu or a numa node to a cpu pool. + +=item B<cpupool-cpu-remove> I<cpu-nr|node-nr> + +Removes a cpu or a numa node from a cpu pool. + +=item B<cpupool-migrate> I<domain-id> I<cpu-pool> + +Moves a domain into a cpu pool. + +=item B<cpupool-numa-split> + +Splits up the machine into one cpu pool per numa node. + +=back + +=head1 VIRTUAL DEVICE COMMANDS + +Most virtual devices can be added and removed while guests are +running. The effect to the guest OS is much the same as any hotplug +event. + +=head2 BLOCK DEVICES + +=over 4 + +=item B<block-attach> I<domain-id> I<disc-spec-component(s)> ... + +Create a new virtual block device. This will trigger a hotplug event +for the guest. + +B<OPTIONS> + +=over 4 + +=item I<domain-id> + +The domain id of the guest domain that the device will be attached to. + +=item I<disc-spec-component> + +A disc specification in the same format used for the B<disk> variable in +the domain config file. See L<xldomain.cfg>. + +=back + +=item B<block-detach> I<domain-id> I<devid> [B<--force>] + +Detach a domain''s virtual block device. I<devid> may be the symbolic +name or the numeric device id given to the device by domain 0. You +will need to run B<xl block-list> to determine that number. + +Detaching the device requires the cooperation of the domain. If the +domain fails to release the device (perhaps because the domain is hung +or is still using the device), the detach will fail. The B<--force> +parameter will forcefully detach the device, but may cause IO errors +in the domain. + +=item B<block-list> I<domain-id> + +List virtual block devices for a domain. + +=item B<cd-insert> I<domain-id> I<VirtualDevice> I<be-dev> + +Insert a cdrom into a guest domain''s cd drive. Only works with HVM domains. + +B<OPTIONS> + +=over 4 + +=item I<VirtualDevice> + +How the device should be presented to the guest domain; for example /dev/hdc. + +=item I<be-dev> + +the device in the backend domain (usually domain 0) to be exported; it can be a +path to a file (file://path/to/file.iso). See B<disk> in L<xldomain.cfg> for the +details. + +=back + +=item B<cd-eject> I<domain-id> I<VirtualDevice> + +Eject a cdrom from a guest''s cd drive. Only works with HVM domains. +I<VirtualDevice> is the cdrom device in the guest to eject. + +=back + +=head2 NETWORK DEVICES + +=over 4 + +=item B<network-attach> I<domain-id> I<network-device> + +Creates a new network device in the domain specified by I<domain-id>. +I<network-device> describes the device to attach, using the same format as the +B<vif> string in the domain config file. See L<xldomain.cfg> for the +description. + +=item B<network-detach> I<domain-id> I<devid|mac> + +Removes the network device from the domain specified by I<domain-id>. +I<devid> is the virtual interface device number within the domain +(i.e. the 3 in vif22.3). Alternatively the I<mac> address can be used to +select the virtual interface to detach. + +=item B<network-list> I<domain-id> + +List virtual network interfaces for a domain. + +=back + +=head2 PCI PASS-THROUGH + +=over 4 + +=item B<pci-attach> I<domain-id> I<BDF> + +Hot-plug a new pass-through pci device to the specified domain. +B<BDF> is the PCI Bus/Device/Function of the physical device to pass-through. + +=item B<pci-detach> [I<-f>] I<domain-id> I<BDF> + +Hot-unplug a previously assigned pci device from a domain. B<BDF> is the PCI +Bus/Device/Function of the physical device to be removed from the guest domain. + +If B<-f> is specified, B<xl> is going to forcefully remove the device even +without guest''s collaboration. + +=item B<pci-list> I<domain-id> + +List pass-through pci devices for a domain. + +=back + +=head1 SEE ALSO + +B<xldomain.cfg>(5), B<xentop>(1) + +=head1 AUTHOR + + Stefano Stabellini <stefano.stabellini@eu.citrix.com> + Vincent Hanquez <vincent.hanquez@eu.citrix.com> + Ian Jackson <ian.jackson@eu.citrix.com> + Ian Campbell <Ian.Campbell@citrix.com> + +=head1 BUGS + +Send bugs to xen-devel@lists.xensource.com. _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2011-Oct-28 10:10 UTC
Re: [Xen-devel] [PATCH DOCDAY] introduce an xl man page in pod format
Hi Juergen, Are you the best person to review this part of the xl manpage? Can you provide a reference to the documentation for I<ConfigFile> mentioned below? If nothing exists could you maybe write something up, e.g. a man page or markdown document. Thanks, Ian. On Thu, 2011-10-27 at 17:19 +0100, Stefano Stabellini wrote:> > +=head1 CPUPOOLS COMMANDS > + > +Xen can group the physical cpus of a server in cpu-pools. Each > physical CPU is > +assigned at most to one cpu-pool. Domains are each restricted to a > single > +cpu-pool. Scheduling does not cross cpu-pool boundaries, so each > cpu-pool has > +an own scheduler. > +Physical cpus and domains can be moved from one pool to another only > by an > +explicit command. > + > +=over 4 > + > +=item B<cpupool-create> [I<OPTIONS>] I<ConfigFile> > + > +Create a cpu pool based an I<ConfigFile>. > + > +B<OPTIONS> > + > +=over 4 > + > +=item B<-f=FILE>, B<--defconfig=FILE> > + > +Use the given configuration file. > + > +=item B<-n>, B<--dryrun> > + > +Dry run - prints the resulting configuration. > + > +=back > + > +=item B<cpupool-list> [I<-c|--cpus> I<cpu-pool>] > + > +List CPU pools on the host. > +If I<-c> is specified, B<xl> prints a list of CPUs used by > I<cpu-pool>. > + > +=item B<cpupool-destroy> I<cpu-pool> > + > +Deactivates a cpu pool. > + > +=item B<cpupool-rename> I<cpu-pool> <newname> > + > +Renames a cpu pool to I<newname>. > + > +=item B<cpupool-cpu-add> I<cpu-pool> I<cpu-nr|node-nr> > + > +Adds a cpu or a numa node to a cpu pool. > + > +=item B<cpupool-cpu-remove> I<cpu-nr|node-nr> > + > +Removes a cpu or a numa node from a cpu pool. > + > +=item B<cpupool-migrate> I<domain-id> I<cpu-pool> > + > +Moves a domain into a cpu pool. > + > +=item B<cpupool-numa-split> > + > +Splits up the machine into one cpu pool per numa node. > + > +=back > +_______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2011-Oct-28 10:43 UTC
Re: [Xen-devel] [PATCH DOCDAY] introduce an xl man page in pod format
On Thu, 2011-10-27 at 17:19 +0100, Stefano Stabellini wrote:> This is the initial version of an xl man page, based on the old xm man > page. > Almost every command implemented in xl should be present, a notable > exception are the tmem commands that are currently missing.I think it''s worth enumerating all the commands, even with a TBD, since it marks what is missing.> Further improvements and clarifications to this man page are very welcome. > > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> > > diff -r 39aa9b2441da docs/man/xl.pod.1 > --- /dev/null Thu Jan 01 00:00:00 1970 +0000 > +++ b/docs/man/xl.pod.1 Thu Oct 27 15:59:03 2011 +0000 > @@ -0,0 +1,805 @@ > +=head1 NAME > + > +XL - Xen management tool, based on LibXenlight > + > +=head1 SYNOPSIS > + > +B<xl> I<subcommand> [I<args>]B<xl> [I<global-args>] I<subcommand> [I<args>] The interesting global-args are -v (verbose, can be used repeatedly) and -N (dry-run).> + > +=head1 DESCRIPTION > + > +The B<xl> program is the new tool for managing Xen guest > +domains. The program can be used to create, pause, and shutdown > +domains. It can also be used to list current domains, enable or pin > +VCPUs, and attach or detach virtual block devices. > +The old B<xm> tool is deprecated and should not be used. > + > +The basic structure of every B<xl> command is almost always: > + > +=over 2 > + > +B<xl> I<subcommand> [I<OPTIONS>] I<domain-id> > + > +=back > + > +Where I<subcommand> is one of the subcommands listed below, I<domain-id> > +is the numeric domain id, or the domain name (which will be internally > +translated to domain id), and I<OPTIONS> are subcommand specific > +options. There are a few exceptions to this rule in the cases where > +the subcommand in question acts on all domains, the entire machine, > +or directly on the Xen hypervisor. Those exceptions will be clear for > +each of those subcommands. > + > +=head1 NOTES > + > +Most B<xl> operations rely upon B<xenstored> and B<xenconsoled>: make > +sure you start the script B</etc/init.d/xencommons> at boot time to > +initialize all the daemons needed by B<xl>. > + > +In the most common network configuration, you need to setup a bridge in dom0 > +named B<xenbr0> in order to have a working network in the guest domains. > +Please refer to the documentation of your Linux distribution to know how to > +setup the bridge. > + > +Most B<xl> commands require root privileges to run due to the > +communications channels used to talk to the hypervisor. Running as > +non root will return an error. > + > +=head1 DOMAIN SUBCOMMANDS > + > +The following subcommands manipulate domains directly. As stated > +previously, most commands take I<domain-id> as the first parameter. > + > +=over 4 > + > +=item B<create> [I<OPTIONS>] I<configfile>The I<configfile> is optional and if it present it must come before the options. In addition to the normal --option stuff you can also pass key=value to provide options as if they were written in a configuration file, these override whatever is in the config file. While checking this I noticed that before processing arguments main_create() does: if (argv[1] && argv[1][0] != ''-'' && !strchr(argv[1], ''='')) { filename = argv[1]; argc--; argv++; } that use of argv[1] without checking argc is a little dubious (ok if argc<1 then argc==0 and therefore argv[argc+1]==NULL, but still...).> + > +The create subcommand requires a config file: see L<xldomain.cfg> for > +full details of that file format and possible options. > + > +I<configfile> can either be an absolute path to a file, or a relative > +path to a file located in /etc/xen.This isn''t actually true for xl. Arguably that''s a bug in xl rather than this doc but I seem to recall that someone had a specific reason for not doing this.> + > +Create will return B<as soon> as the domain is started. This B<does > +not> mean the guest OS in the domain has actually booted, or is > +available for input. > + > +B<OPTIONS> > + > +=over 4 > + > +=item B<-q>, B<--quiet> > + > +No console output. > + > +=item B<-f=FILE>, B<--defconfig=FILE> > + > +Use the given configuration file. > + > +=item B<-n>, B<--dryrun> > + > +Dry run - prints the resulting configuration in SXP but does not create > +the domain. > + > +=item B<-p> > + > +Leave the domain paused after it is created. > + > +=item B<-c> > + > +Attach console to the domain as soon as it has started. This is > +useful for determining issues with crashing domains.... and just as a general convenience since you often want to watch the domain boot.> + > +=back > + > +B<EXAMPLES> > + > +=over 4 > + > +=item I<with config file> > + > + xl create DebianLenny > + > +This creates a domain with the file /etc/xen/DebianLenny, and returns as > +soon as it is run. > + > +=back > + > +=item B<console> I<domain-id> > + > +Attach to domain I<domain-id>''s console. If you''ve set up your domains to > +have a traditional log in console this will look much like a normal > +text log in screen. > + > +Use the key combination Ctrl+] to detach the domain console.This takes -t [pv|serial] and -n (num) options.> + > +=item B<vncviewer> [I<OPTIONS>] I<domain-id> > + > +Attach to domain''s VNC server, forking a vncviewer process. > + > +B<OPTIONS> > + > +=over 4 > + > +=item I<--autopass> > + > +Pass VNC password to vncviewer via stdin.What is the behaviour if you don''t do this? Are the sub-commands intended to be in some sort of order. In general they seem to be alphabetical but in that case vncviewer does not belong here. [...]> +=item B<list> [I<OPTIONS>] [I<domain-id> ...] > + > +Prints information about one or more domains. If no domains are > +specified it prints out information about all domains. > + > + > +B<OPTIONS> > + > +=over 4 > + > +=item B<-l>, B<--long> > + > +The output for B<xl list> is not the table view shown below, but > +instead presents the data in SXP compatible format. > + > +=item B<-Z>, B<--context> > +Also prints the security labels. > + > +=item B<-v>, B<--verbose> > + > +Also prints the domain UUIDs, the shutdown reason and security labels. > + > +=back > + > +B<EXAMPLE> > + > +An example format for the list is as follows: > + > + Name ID Mem VCPUs State Time(s) > + Domain-0 0 750 4 r----- 11794.3 > + win 1 1019 1 r----- 0.3 > + linux 2 2048 2 r----- 5624.2 > + > +Name is the name of the domain. ID the numeric domain id. Mem is the > +desired amount of memory to allocate to the domain (although it may > +not be the currently allocated amount). VCPUs is the number of > +virtual CPUs allocated to the domain. State is the run state (see > +below). Time is the total run time of the domain as accounted for by > +Xen. > + > +B<STATES> > + > +The State field lists 6 states for a Xen domain, and which ones the > +current domain is in. > + > +=over 4 > + > +=item B<r - running> > + > +The domain is currently running on a CPU. > + > +=item B<b - blocked> > + > +The domain is blocked, and not running or runnable. This can be caused > +because the domain is waiting on IO (a traditional wait state) or has > +gone to sleep because there was nothing else for it to do. > + > +=item B<p - paused> > + > +The domain has been paused, usually occurring through the administrator > +running B<xl pause>. When in a paused state the domain will still > +consume allocated resources like memory, but will not be eligible for > +scheduling by the Xen hypervisor. > + > +=item B<s - shutdown> > + > +FIXME: Why would you ever see this state?This is XEN_DOMINF_shutdown which just says "/* The guest OS has shut down. */". It is set in response to the guest calling SCHEDOP_shutdown. I think it corresponds to the period between the guest shutting down and the toolstack noticing and beginning to tear it down (when it moves to dying).> +=item B<c - crashed> > + > +The domain has crashed, which is always a violent ending. Usually > +this state can only occur if the domain has been configured not to > +restart on crash. See L<xldomain.cfg> for more info. > + > +=item B<d - dying> > + > +The domain is in process of dying, but hasn''t completely shutdown or > +crashed. > + > +FIXME: Is this right?I think so. This is XEN_DOMINF_dying which says "/* Domain is scheduled to die. */"> + > +=item B<migrate> [I<OPTIONS>] I<domain-id> I<host> > + > +Migrate a domain to another host machine. By default B<xl> relies on ssh as a > +transport mechanism between the two hosts. > + > +B<OPTIONS> > + > +=over 4 > + > +=item B<-s> I<sshcommand> > + > +Use <sshcommand> instead of ssh. String will be passed to sh. If empty, run > +<host> instead of ssh <host> xl migrate-receive [-d -e]. > + > +=item B<-e> > + > +On the new host, do not wait in the background (on <host>) for the death of the > +domain.Would be useful to reference the equivalent option to "xl create" here just to clarify that they mean the same.> +=item B<reboot> [I<OPTIONS>] I<domain-id> > + > +Reboot a domain. This acts just as if the domain had the B<reboot> > +command run from the console.This relies on PV drivers, I think. Not all guests have the option of typing "reboot" on the console but I suppose it is clear enough what you mean.> The command returns as soon as it has > +executed the reboot action, which may be significantly before the > +domain actually reboots. > + > +The behavior of what happens to a domain when it reboots is set by the > +B<on_reboot> parameter of the xldomain.cfg file when the domain was > +created. > + > +=item B<restore> [I<OPTIONS>] [I<ConfigFile>] I<CheckpointFile> > + > +Build a domain from an B<xl save> state file. See B<save> for more info. > + > +B<OPTIONS> > + > +=over 4 > + > +=item B<-p> > + > +Do not unpause domain after restoring it. > + > +=item B<-e> > + > +Do not wait in the background for the death of the domain on the new host.Reference xl create?> + > +=item B<-d> > + > +Enable debug messages. > + > +=back > + > +=item B<save> [I<OPTIONS>] I<domain-id> I<CheckpointFile> [I<ConfigFile>] > + > +Saves a running domain to a state file so that it can be restored > +later. Once saved, the domain will no longer be running on the > +system, unless the -c option is used. > +B<xl restore> restores from this checkpoint file. > +Passing a config file argument allows the user to manually select the VM config > +file used to create the domain. > + > + > +=over 4 > + > +=item B<-c> > + > +Leave domain running after creating the snapshot. > + > +=back > + > + > +=item B<shutdown> [I<OPTIONS>] I<domain-id> > + > +Gracefully shuts down a domain. This coordinates with the domain OS > +to perform graceful shutdown, so there is no guarantee that it will > +succeed, and may take a variable length of time depending on what > +services must be shutdown in the domain. The command returns > +immediately after signally the domain unless that B<-w> flag is used.Does this rely on pv drivers or does it inject ACPI events etc on HVM?> + > +The behavior of what happens to a domain when it reboots is set by thebehaviour ?> +B<on_shutdown> parameter of the xldomain.cfg file when the domain was > +created. > + > +B<OPTIONS> > + > +=over 4 > + > +=item B<-w> > + > +Wait for the domain to complete shutdown before returning. > + > +=back > + > +=item B<sysrq> I<domain-id> I<letter> > + > +Send a I<Magic System Request> signal to the domain. For more > +information on available magic sys req operations, see sysrq.txt in > +your Linux Kernel sources.It would be nice to word this in a more generic fashion and point out that the specific implementation on Linux behaves like sysrq. Other guests might do other things? Relies on PV drivers.> [...] > + > +=item B<vcpu-set> I<domain-id> I<vcpu-count> > + > +Enables the I<vcpu-count> virtual CPUs for the domain in question. > +Like mem-set, this command can only allocate up to the maximum virtual > +CPU count configured at boot for the domain. > + > +If the I<vcpu-count> is smaller than the current number of active > +VCPUs, the highest number VCPUs will be hotplug removed. This may be > +important for pinning purposes. > + > +Attempting to set the VCPUs to a number larger than the initially > +configured VCPU count is an error. Trying to set VCPUs to < 1 will be > +quietly ignored. > + > +Because this operation requires cooperation from the domain operating > +system, there is no guarantee that it will succeed. This command will > +not work with a full virt domain.I thought we supported some VCPU hotplug for HVM (using ACPI and such) these days? [...]> +=item B<button-press> I<domain-id> I<button> > + > +Indicate an ACPI button press to the domain. I<button> is may be ''power'' or > +''sleep''.HVM only?> + > +=item B<trigger> I<domain-id> I<nmi|reset|init|power|sleep> [I<VCPU>] > + > +Send a trigger to a domain, where the trigger can be: nmi, reset, init, power > +or sleep. Optionally a specific vcpu number can be passed as an argument.HVM only? nmi might work for PV, not sure about the rest.> +=item B<getenforce> > + > +Returns the current enforcing mode of the Flask Xen security module. > + > +=item B<setenforce> I<1|0|Enforcing|Permissive> > + > +Sets the current enforcing mode of the Flask Xen security module > + > +=item B<loadpolicy> I<policyfile> > + > +Loads a new policy int the Flask Xen security module.I suppose flask is something which needs to go onto the "to be documented" list such that we can reference it from here.> +=back > + > +=head1 XEN HOST SUBCOMMANDS > + > +=over 4 > + > +=item B<debug-keys> I<keys> > + > +Send debug I<keys> to Xen.The same as pressing the Xen "conswitch" (Ctrl-A by default) three times and then pressing "keys".> + > +=item B<dmesg> [B<-c>] > + > +Reads the Xen message buffer, similar to dmesg on a Linux system. Thedmesg(1) ^Unix or ;-)> +buffer contains informational, warning, and error messages created > +during Xen''s boot process. If you are having problems with Xen, this > +is one of the first places to look as part of problem determination. > + > +B<OPTIONS> > + > +=over 4 > + > +=item B<-c>, B<--clear> > + > +Clears Xen''s message buffer. > + > +=back > + > +=item B<info> [B<-n>, B<--numa>] > + > +Print information about the Xen host in I<name : value> format. When > +reporting a Xen bug, please provide this information as part of the > +bug report.I''m not sure this is useful people reporting bugs will look for information on reporting bugs (which should include this info) rather than scanning the xl man page for options which say "please include.." I have added the need for this to http://wiki.xen.org/xenwiki/ReportingBugs> + > +Sample output looks as follows (lines wrapped manually to make the man > +page more readable):> + > + host : talon > + release : 2.6.12.6-xen0Heh. Perhaps a more up to date example if one is needed at all?> + version : #1 Mon Nov 14 14:26:26 EST 2005 > + machine : i686 > + nr_cpus : 2 > + nr_nodes : 1 > + cores_per_socket : 1 > + threads_per_core : 1 > + cpu_mhz : 696 > + hw_caps : 0383fbff:00000000:00000000:00000040 > + total_memory : 767 > + free_memory : 37 > + xen_major : 3 > + xen_minor : 0 > + xen_extra : -devel > + xen_caps : xen-3.0-x86_32 > + xen_scheduler : credit > + xen_pagesize : 4096 > + platform_params : virt_start=0xfc000000 > + xen_changeset : Mon Nov 14 18:13:38 2005 +0100 > + 7793:090e44133d40 > + cc_compiler : gcc version 3.4.3 (Mandrakelinux > + 10.2 3.4.3-7mdk) > + cc_compile_by : sdague > + cc_compile_domain : (none) > + cc_compile_date : Mon Nov 14 14:16:48 EST 2005 > + xend_config_format : 4 > + > +B<FIELDS> > + > +Not all fields will be explained here, but some of the less obvious > +ones deserve explanation: > + > +=over 4 > + > +=item B<hw_caps> > + > +A vector showing what hardware capabilities are supported by your > +processor. This is equivalent to, though more cryptic, the flags > +field in /proc/cpuinfo on a normal Linux machine.Does this correspond to some cpuid output somewhere? That might be a good thing to reference. (checks, hmm, it all very processor specific)> +=back > + > +B<OPTIONS> > + > +=over 4 > + > +=item B<-n>, B<--numa> > + > +List host NUMA topology information > + > +=back[...]> +=item B<pci-list-assignable-devices> > + > +List all the assignable PCI devices.Perhaps add: That is, though devices in the system which are configured to be available for passthrough and are bound to a suitable PCI backend driver in domain 0 rather than a real driver.> +=head1 CPUPOOLS COMMANDS > + > +Xen can group the physical cpus of a server in cpu-pools. Each physical CPU is > +assigned at most to one cpu-pool. Domains are each restricted to a single > +cpu-pool. Scheduling does not cross cpu-pool boundaries, so each cpu-pool has > +an own scheduler. > +Physical cpus and domains can be moved from one pool to another only by an > +explicit command. > + > +=over 4 > + > +=item B<cpupool-create> [I<OPTIONS>] I<ConfigFile> > + > +Create a cpu pool based an I<ConfigFile>. > + > +B<OPTIONS> > + > +=over 4 > + > +=item B<-f=FILE>, B<--defconfig=FILE> > + > +Use the given configuration file. > + > +=item B<-n>, B<--dryrun> > + > +Dry run - prints the resulting configuration.Is this deprecated in favour of global -N option? I think it should be.> + > +=back > + > +=item B<cpupool-list> [I<-c|--cpus> I<cpu-pool>] > + > +List CPU pools on the host. > +If I<-c> is specified, B<xl> prints a list of CPUs used by I<cpu-pool>.Is cpu-pool a name or a number, or both? (this info would be useful in the intro to the section I suppose).> + > +=item B<cpupool-destroy> I<cpu-pool> > + > +Deactivates a cpu pool. > + > +=item B<cpupool-rename> I<cpu-pool> <newname> > + > +Renames a cpu pool to I<newname>. > + > +=item B<cpupool-cpu-add> I<cpu-pool> I<cpu-nr|node-nr> > + > +Adds a cpu or a numa node to a cpu pool. > + > +=item B<cpupool-cpu-remove> I<cpu-nr|node-nr> > + > +Removes a cpu or a numa node from a cpu pool. > + > +=item B<cpupool-migrate> I<domain-id> I<cpu-pool> > + > +Moves a domain into a cpu pool. > + > +=item B<cpupool-numa-split> > + > +Splits up the machine into one cpu pool per numa node. > + > +=back > + > +=head1 VIRTUAL DEVICE COMMANDS > + > +Most virtual devices can be added and removed while guests are > +running.... assuming the necessary support exists in the guest.> The effect to the guest OS is much the same as any hotplug > +event. > + > +=head2 BLOCK DEVICES > + > +=over 4 > + > +=item B<block-attach> I<domain-id> I<disc-spec-component(s)> ... > + > +Create a new virtual block device. This will trigger a hotplug event > +for the guest.Should add a reference to the docs/misc/xl-disk-configuration.txt doc to your SEE ALSO section.> + > +B<OPTIONS> > + > +=over 4 > + > +=item I<domain-id> > + > +The domain id of the guest domain that the device will be attached to. > + > +=item I<disc-spec-component> > + > +A disc specification in the same format used for the B<disk> variable in > +the domain config file. See L<xldomain.cfg>. > + > +=back > + > +=item B<block-detach> I<domain-id> I<devid> [B<--force>] > + > +Detach a domain''s virtual block device. I<devid> may be the symbolic > +name or the numeric device id given to the device by domain 0. You > +will need to run B<xl block-list> to determine that number. > + > +Detaching the device requires the cooperation of the domain. If the > +domain fails to release the device (perhaps because the domain is hung > +or is still using the device), the detach will fail. The B<--force> > +parameter will forcefully detach the device, but may cause IO errors > +in the domain. > + > +=item B<block-list> I<domain-id> > + > +List virtual block devices for a domain. > + > +=item B<cd-insert> I<domain-id> I<VirtualDevice> I<be-dev> > + > +Insert a cdrom into a guest domain''s cd drive. Only works with HVM domains. > + > +B<OPTIONS> > + > +=over 4 > + > +=item I<VirtualDevice> > + > +How the device should be presented to the guest domain; for example /dev/hdc. > + > +=item I<be-dev> > + > +the device in the backend domain (usually domain 0) to be exported; it can be a > +path to a file (file://path/to/file.iso). See B<disk> in L<xldomain.cfg> for the > +details. > + > +=back > + > +=item B<cd-eject> I<domain-id> I<VirtualDevice> > + > +Eject a cdrom from a guest''s cd drive. Only works with HVM domains. > +I<VirtualDevice> is the cdrom device in the guest to eject. > + > +=back > + > +=head2 NETWORK DEVICES > + > +=over 4 > + > +=item B<network-attach> I<domain-id> I<network-device> > + > +Creates a new network device in the domain specified by I<domain-id>. > +I<network-device> describes the device to attach, using the same format as the > +B<vif> string in the domain config file. See L<xldomain.cfg> for the > +description.I sent out a patch to add docs/misc/xl-network-configuration.markdown as well.> + > +=item B<network-detach> I<domain-id> I<devid|mac> > + > +Removes the network device from the domain specified by I<domain-id>. > +I<devid> is the virtual interface device number within the domain > +(i.e. the 3 in vif22.3). Alternatively the I<mac> address can be used to > +select the virtual interface to detach. > + > +=item B<network-list> I<domain-id> > + > +List virtual network interfaces for a domain. > + > +=back > + > +=head2 PCI PASS-THROUGH > + > +=over 4 > + > +=item B<pci-attach> I<domain-id> I<BDF> > + > +Hot-plug a new pass-through pci device to the specified domain. > +B<BDF> is the PCI Bus/Device/Function of the physical device to pass-through. > + > +=item B<pci-detach> [I<-f>] I<domain-id> I<BDF> > + > +Hot-unplug a previously assigned pci device from a domain. B<BDF> is the PCI > +Bus/Device/Function of the physical device to be removed from the guest domain. > + > +If B<-f> is specified, B<xl> is going to forcefully remove the device even > +without guest''s collaboration. > + > +=item B<pci-list> I<domain-id> > + > +List pass-through pci devices for a domain. > + > +=back > + > +=head1 SEE ALSO > + > +B<xldomain.cfg>(5), B<xentop>(1) > + > +=head1 AUTHOR > + > + Stefano Stabellini <stefano.stabellini@eu.citrix.com> > + Vincent Hanquez <vincent.hanquez@eu.citrix.com> > + Ian Jackson <ian.jackson@eu.citrix.com> > + Ian Campbell <Ian.Campbell@citrix.com>This list seems so incomplete/unlikely to be updated that it may as well not be included. (also I think AUTHOR in a man page refers to the author of the page, not the authors of the software)> +=head1 BUGS > + > +Send bugs to xen-devel@lists.xensource.com.Reference http://wiki.xen.org/xenwiki/ReportingBugs Ian. _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Juergen Gross
2011-Oct-28 11:19 UTC
Re: [Xen-devel] [PATCH DOCDAY] introduce an xl man page in pod format
On 10/28/2011 12:10 PM, Ian Campbell wrote:> Hi Juergen, > > Are you the best person to review this part of the xl manpage?I think so...> Can you provide a reference to the documentation for I<ConfigFile> > mentioned below? If nothing exists could you maybe write something up, > e.g. a man page or markdown document.Sure. I''ll respond to Stefanos original mail.> Thanks, > Ian. > > On Thu, 2011-10-27 at 17:19 +0100, Stefano Stabellini wrote: >> +=head1 CPUPOOLS COMMANDSJuergen -- Juergen Gross Principal Developer Operating Systems PDG ES&S SWE OS6 Telephone: +49 (0) 89 3222 2967 Fujitsu Technology Solutions e-mail: juergen.gross@ts.fujitsu.com Domagkstr. 28 Internet: ts.fujitsu.com D-80807 Muenchen Company details: ts.fujitsu.com/imprint.html _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2011-Oct-28 11:38 UTC
Re: [Xen-devel] [PATCH DOCDAY] introduce an xl man page in pod format
On Thu, 2011-10-27 at 17:19 +0100, Stefano Stabellini wrote:> > +The create subcommand requires a config file: see L<xldomain.cfg> for > +full details of that file format and possible options.[...]> +B<xldomain.cfg>(5), B<xentop>(1)The doc IanJ has been writing (although I think I''m going to pickup the remainder) is docs/user/xl-domain-config.markdown rather than a manpage xldomain.cfg(5). Referencing such documents is a bit tricky, given the various paths and formats this might live in (e.g. http://www,xen.org/docs/xl-domain-config.html, /usr/share/doc/xen/xl-domain-config.txt etc) I think just referring to them by basename makes most sense. Ian. _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Lars Kurth
2011-Oct-28 12:51 UTC
Re: [Xen-devel] [PATCH DOCDAY] introduce an xl man page in pod format
When I went through the Wiki, there were a few documents that looked like man page material. Do check: - XenConfigurationFileOptions - XenHypervisorBootOptions - XenBooting Please do consider these Lars _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2011-Oct-31 09:25 UTC
Re: [Xen-devel] [PATCH DOCDAY] introduce an xl man page in pod format
On Thu, 2011-10-27 at 17:19 +0100, Stefano Stabellini wrote:> This is the initial version of an xl man page, based on the old xm man > page. > Almost every command implemented in xl should be present, a notable > exception are the tmem commands that are currently missing. > Further improvements and clarifications to this man page are very welcome.I had a thought over the weekend... It took me a while to get used to but the git style of a manpage per subcommand and having "git COMMAND --help" spawn man of that page works really well. A long list of subcommands such as this gets a bit unwieldy. Having a page per command also means you can take advantage of e.g. a SEE ALSO for each command individually and things like that. I think you''ve got all the content already it''s just a matter of separating it. If we were feeling sneaky we could probably arrange for the build to fail if no man page is available for a command listed in xl_cmdtable.c };-) Ian.> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> > > diff -r 39aa9b2441da docs/man/xl.pod.1 > --- /dev/null Thu Jan 01 00:00:00 1970 +0000 > +++ b/docs/man/xl.pod.1 Thu Oct 27 15:59:03 2011 +0000 > @@ -0,0 +1,805 @@ > +=head1 NAME > + > +XL - Xen management tool, based on LibXenlight > + > +=head1 SYNOPSIS > + > +B<xl> I<subcommand> [I<args>] > + > +=head1 DESCRIPTION > + > +The B<xl> program is the new tool for managing Xen guest > +domains. The program can be used to create, pause, and shutdown > +domains. It can also be used to list current domains, enable or pin > +VCPUs, and attach or detach virtual block devices. > +The old B<xm> tool is deprecated and should not be used. > + > +The basic structure of every B<xl> command is almost always: > + > +=over 2 > + > +B<xl> I<subcommand> [I<OPTIONS>] I<domain-id> > + > +=back > + > +Where I<subcommand> is one of the subcommands listed below, I<domain-id> > +is the numeric domain id, or the domain name (which will be internally > +translated to domain id), and I<OPTIONS> are subcommand specific > +options. There are a few exceptions to this rule in the cases where > +the subcommand in question acts on all domains, the entire machine, > +or directly on the Xen hypervisor. Those exceptions will be clear for > +each of those subcommands. > + > +=head1 NOTES > + > +Most B<xl> operations rely upon B<xenstored> and B<xenconsoled>: make > +sure you start the script B</etc/init.d/xencommons> at boot time to > +initialize all the daemons needed by B<xl>. > + > +In the most common network configuration, you need to setup a bridge in dom0 > +named B<xenbr0> in order to have a working network in the guest domains. > +Please refer to the documentation of your Linux distribution to know how to > +setup the bridge. > + > +Most B<xl> commands require root privileges to run due to the > +communications channels used to talk to the hypervisor. Running as > +non root will return an error. > + > +=head1 DOMAIN SUBCOMMANDS > + > +The following subcommands manipulate domains directly. As stated > +previously, most commands take I<domain-id> as the first parameter. > + > +=over 4 > + > +=item B<create> [I<OPTIONS>] I<configfile> > + > +The create subcommand requires a config file: see L<xldomain.cfg> for > +full details of that file format and possible options. > + > +I<configfile> can either be an absolute path to a file, or a relative > +path to a file located in /etc/xen. > + > +Create will return B<as soon> as the domain is started. This B<does > +not> mean the guest OS in the domain has actually booted, or is > +available for input. > + > +B<OPTIONS> > + > +=over 4 > + > +=item B<-q>, B<--quiet> > + > +No console output. > + > +=item B<-f=FILE>, B<--defconfig=FILE> > + > +Use the given configuration file. > + > +=item B<-n>, B<--dryrun> > + > +Dry run - prints the resulting configuration in SXP but does not create > +the domain. > + > +=item B<-p> > + > +Leave the domain paused after it is created. > + > +=item B<-c> > + > +Attach console to the domain as soon as it has started. This is > +useful for determining issues with crashing domains. > + > +=back > + > +B<EXAMPLES> > + > +=over 4 > + > +=item I<with config file> > + > + xl create DebianLenny > + > +This creates a domain with the file /etc/xen/DebianLenny, and returns as > +soon as it is run. > + > +=back > + > +=item B<console> I<domain-id> > + > +Attach to domain I<domain-id>''s console. If you''ve set up your domains to > +have a traditional log in console this will look much like a normal > +text log in screen. > + > +Use the key combination Ctrl+] to detach the domain console. > + > +=item B<vncviewer> [I<OPTIONS>] I<domain-id> > + > +Attach to domain''s VNC server, forking a vncviewer process. > + > +B<OPTIONS> > + > +=over 4 > + > +=item I<--autopass> > + > +Pass VNC password to vncviewer via stdin. > + > +=back > + > +=item B<destroy> I<domain-id> > + > +Immediately terminate the domain I<domain-id>. This doesn''t give the > +domain OS any chance to react, and is the equivalent of ripping the > +power cord out on a physical machine. In most cases you will want to > +use the B<shutdown> command instead. > + > +=item B<domid> I<domain-name> > + > +Converts a domain name to a domain id. > + > +=item B<domname> I<domain-id> > + > +Converts a domain id to a domain name. > + > +=item B<rename> I<domain-id> I<new-name> > + > +Change the domain name of I<domain-id> to I<new-name>. > + > +=item B<dump-core> I<domain-id> [I<filename>] > + > +Dumps the virtual machine''s memory for the specified domain to the > +I<filename> specified, without pausing the domain. The dump file will > +be written to a distribution specific directory for dump files. Such > +as: /var/lib/xen/dump or /var/xen/dump. > + > +=item B<help> [I<--long>] > + > +Displays the short help message (i.e. common commands). > + > +The I<--long> option prints out the complete set of B<xl> subcommands, > +grouped by function. > + > +=item B<list> [I<OPTIONS>] [I<domain-id> ...] > + > +Prints information about one or more domains. If no domains are > +specified it prints out information about all domains. > + > + > +B<OPTIONS> > + > +=over 4 > + > +=item B<-l>, B<--long> > + > +The output for B<xl list> is not the table view shown below, but > +instead presents the data in SXP compatible format. > + > +=item B<-Z>, B<--context> > +Also prints the security labels. > + > +=item B<-v>, B<--verbose> > + > +Also prints the domain UUIDs, the shutdown reason and security labels. > + > +=back > + > +B<EXAMPLE> > + > +An example format for the list is as follows: > + > + Name ID Mem VCPUs State Time(s) > + Domain-0 0 750 4 r----- 11794.3 > + win 1 1019 1 r----- 0.3 > + linux 2 2048 2 r----- 5624.2 > + > +Name is the name of the domain. ID the numeric domain id. Mem is the > +desired amount of memory to allocate to the domain (although it may > +not be the currently allocated amount). VCPUs is the number of > +virtual CPUs allocated to the domain. State is the run state (see > +below). Time is the total run time of the domain as accounted for by > +Xen. > + > +B<STATES> > + > +The State field lists 6 states for a Xen domain, and which ones the > +current domain is in. > + > +=over 4 > + > +=item B<r - running> > + > +The domain is currently running on a CPU. > + > +=item B<b - blocked> > + > +The domain is blocked, and not running or runnable. This can be caused > +because the domain is waiting on IO (a traditional wait state) or has > +gone to sleep because there was nothing else for it to do. > + > +=item B<p - paused> > + > +The domain has been paused, usually occurring through the administrator > +running B<xl pause>. When in a paused state the domain will still > +consume allocated resources like memory, but will not be eligible for > +scheduling by the Xen hypervisor. > + > +=item B<s - shutdown> > + > +FIXME: Why would you ever see this state? > + > +=item B<c - crashed> > + > +The domain has crashed, which is always a violent ending. Usually > +this state can only occur if the domain has been configured not to > +restart on crash. See L<xldomain.cfg> for more info. > + > +=item B<d - dying> > + > +The domain is in process of dying, but hasn''t completely shutdown or > +crashed. > + > +FIXME: Is this right? > + > +=back > + > +B<NOTES> > + > +=over 4 > + > +The Time column is deceptive. Virtual IO (network and block devices) > +used by domains requires coordination by Domain0, which means that > +Domain0 is actually charged for much of the time that a DomainU is > +doing IO. Use of this time value to determine relative utilizations > +by domains is thus very suspect, as a high IO workload may show as > +less utilized than a high CPU workload. Consider yourself warned. > + > +=back > + > +=item B<mem-max> I<domain-id> I<mem> > + > +Specify the maximum amount of memory the domain is able to use, appending ''t'' > +for terabytes, ''g'' for gigabytes, ''m'' for megabytes, ''k'' for kilobytes and ''b'' > +for bytes. > + > +The mem-max value may not correspond to the actual memory used in the > +domain, as it may balloon down its memory to give more back to the OS. > + > +=item B<mem-set> I<domain-id> I<mem> > + > +Set the domain''s used memory using the balloon driver; append ''t'' for > +terabytes, ''g'' for gigabytes, ''m'' for megabytes, ''k'' for kilobytes and ''b'' for > +bytes. > + > +Because this operation requires cooperation from the domain operating > +system, there is no guarantee that it will succeed. This command will > +definitely not work unless the domain has the required paravirt > +driver. > + > +B<Warning:> There is no good way to know in advance how small of a > +mem-set will make a domain unstable and cause it to crash. Be very > +careful when using this command on running domains. > + > +=item B<migrate> [I<OPTIONS>] I<domain-id> I<host> > + > +Migrate a domain to another host machine. By default B<xl> relies on ssh as a > +transport mechanism between the two hosts. > + > +B<OPTIONS> > + > +=over 4 > + > +=item B<-s> I<sshcommand> > + > +Use <sshcommand> instead of ssh. String will be passed to sh. If empty, run > +<host> instead of ssh <host> xl migrate-receive [-d -e]. > + > +=item B<-e> > + > +On the new host, do not wait in the background (on <host>) for the death of the > +domain. > + > +=item B<-C> I<config> > + > +Send <config> instead of config file from creation. > + > +=back > + > +=item B<pause> I<domain-id> > + > +Pause a domain. When in a paused state the domain will still consume > +allocated resources such as memory, but will not be eligible for > +scheduling by the Xen hypervisor. > + > +=item B<reboot> [I<OPTIONS>] I<domain-id> > + > +Reboot a domain. This acts just as if the domain had the B<reboot> > +command run from the console. The command returns as soon as it has > +executed the reboot action, which may be significantly before the > +domain actually reboots. > + > +The behavior of what happens to a domain when it reboots is set by the > +B<on_reboot> parameter of the xldomain.cfg file when the domain was > +created. > + > +=item B<restore> [I<OPTIONS>] [I<ConfigFile>] I<CheckpointFile> > + > +Build a domain from an B<xl save> state file. See B<save> for more info. > + > +B<OPTIONS> > + > +=over 4 > + > +=item B<-p> > + > +Do not unpause domain after restoring it. > + > +=item B<-e> > + > +Do not wait in the background for the death of the domain on the new host. > + > +=item B<-d> > + > +Enable debug messages. > + > +=back > + > +=item B<save> [I<OPTIONS>] I<domain-id> I<CheckpointFile> [I<ConfigFile>] > + > +Saves a running domain to a state file so that it can be restored > +later. Once saved, the domain will no longer be running on the > +system, unless the -c option is used. > +B<xl restore> restores from this checkpoint file. > +Passing a config file argument allows the user to manually select the VM config > +file used to create the domain. > + > + > +=over 4 > + > +=item B<-c> > + > +Leave domain running after creating the snapshot. > + > +=back > + > + > +=item B<shutdown> [I<OPTIONS>] I<domain-id> > + > +Gracefully shuts down a domain. This coordinates with the domain OS > +to perform graceful shutdown, so there is no guarantee that it will > +succeed, and may take a variable length of time depending on what > +services must be shutdown in the domain. The command returns > +immediately after signally the domain unless that B<-w> flag is used. > + > +The behavior of what happens to a domain when it reboots is set by the > +B<on_shutdown> parameter of the xldomain.cfg file when the domain was > +created. > + > +B<OPTIONS> > + > +=over 4 > + > +=item B<-w> > + > +Wait for the domain to complete shutdown before returning. > + > +=back > + > +=item B<sysrq> I<domain-id> I<letter> > + > +Send a I<Magic System Request> signal to the domain. For more > +information on available magic sys req operations, see sysrq.txt in > +your Linux Kernel sources. > + > +=item B<unpause> I<domain-id> > + > +Moves a domain out of the paused state. This will allow a previously > +paused domain to now be eligible for scheduling by the Xen hypervisor. > + > +=item B<vcpu-set> I<domain-id> I<vcpu-count> > + > +Enables the I<vcpu-count> virtual CPUs for the domain in question. > +Like mem-set, this command can only allocate up to the maximum virtual > +CPU count configured at boot for the domain. > + > +If the I<vcpu-count> is smaller than the current number of active > +VCPUs, the highest number VCPUs will be hotplug removed. This may be > +important for pinning purposes. > + > +Attempting to set the VCPUs to a number larger than the initially > +configured VCPU count is an error. Trying to set VCPUs to < 1 will be > +quietly ignored. > + > +Because this operation requires cooperation from the domain operating > +system, there is no guarantee that it will succeed. This command will > +not work with a full virt domain. > + > +=item B<vcpu-list> [I<domain-id>] > + > +Lists VCPU information for a specific domain. If no domain is > +specified, VCPU information for all domains will be provided. > + > +=item B<vcpu-pin> I<domain-id> I<vcpu> I<cpus> > + > +Pins the VCPU to only run on the specific CPUs. The keyword > +B<all> can be used to apply the I<cpus> list to all VCPUs in the > +domain. > + > +Normally VCPUs can float between available CPUs whenever Xen deems a > +different run state is appropriate. Pinning can be used to restrict > +this, by ensuring certain VCPUs can only run on certain physical > +CPUs. > + > +=item B<button-press> I<domain-id> I<button> > + > +Indicate an ACPI button press to the domain. I<button> is may be ''power'' or > +''sleep''. > + > +=item B<trigger> I<domain-id> I<nmi|reset|init|power|sleep> [I<VCPU>] > + > +Send a trigger to a domain, where the trigger can be: nmi, reset, init, power > +or sleep. Optionally a specific vcpu number can be passed as an argument. > + > +=item B<getenforce> > + > +Returns the current enforcing mode of the Flask Xen security module. > + > +=item B<setenforce> I<1|0|Enforcing|Permissive> > + > +Sets the current enforcing mode of the Flask Xen security module > + > +=item B<loadpolicy> I<policyfile> > + > +Loads a new policy int the Flask Xen security module. > + > +=back > + > +=head1 XEN HOST SUBCOMMANDS > + > +=over 4 > + > +=item B<debug-keys> I<keys> > + > +Send debug I<keys> to Xen. > + > +=item B<dmesg> [B<-c>] > + > +Reads the Xen message buffer, similar to dmesg on a Linux system. The > +buffer contains informational, warning, and error messages created > +during Xen''s boot process. If you are having problems with Xen, this > +is one of the first places to look as part of problem determination. > + > +B<OPTIONS> > + > +=over 4 > + > +=item B<-c>, B<--clear> > + > +Clears Xen''s message buffer. > + > +=back > + > +=item B<info> [B<-n>, B<--numa>] > + > +Print information about the Xen host in I<name : value> format. When > +reporting a Xen bug, please provide this information as part of the > +bug report. > + > +Sample output looks as follows (lines wrapped manually to make the man > +page more readable): > + > + host : talon > + release : 2.6.12.6-xen0 > + version : #1 Mon Nov 14 14:26:26 EST 2005 > + machine : i686 > + nr_cpus : 2 > + nr_nodes : 1 > + cores_per_socket : 1 > + threads_per_core : 1 > + cpu_mhz : 696 > + hw_caps : 0383fbff:00000000:00000000:00000040 > + total_memory : 767 > + free_memory : 37 > + xen_major : 3 > + xen_minor : 0 > + xen_extra : -devel > + xen_caps : xen-3.0-x86_32 > + xen_scheduler : credit > + xen_pagesize : 4096 > + platform_params : virt_start=0xfc000000 > + xen_changeset : Mon Nov 14 18:13:38 2005 +0100 > + 7793:090e44133d40 > + cc_compiler : gcc version 3.4.3 (Mandrakelinux > + 10.2 3.4.3-7mdk) > + cc_compile_by : sdague > + cc_compile_domain : (none) > + cc_compile_date : Mon Nov 14 14:16:48 EST 2005 > + xend_config_format : 4 > + > +B<FIELDS> > + > +Not all fields will be explained here, but some of the less obvious > +ones deserve explanation: > + > +=over 4 > + > +=item B<hw_caps> > + > +A vector showing what hardware capabilities are supported by your > +processor. This is equivalent to, though more cryptic, the flags > +field in /proc/cpuinfo on a normal Linux machine. > + > +=item B<free_memory> > + > +Available memory (in MB) not allocated to Xen, or any other domains. > + > +=item B<xen_caps> > + > +The Xen version and architecture. Architecture values can be one of: > +x86_32, x86_32p (i.e. PAE enabled), x86_64, ia64. > + > +=item B<xen_changeset> > + > +The Xen mercurial changeset id. Very useful for determining exactly > +what version of code your Xen system was built from. > + > +=back > + > +B<OPTIONS> > + > +=over 4 > + > +=item B<-n>, B<--numa> > + > +List host NUMA topology information > + > +=back > + > +=item B<top> > + > +Executes the B<xentop> command, which provides real time monitoring of > +domains. Xentop is a curses interface, and reasonably self > +explanatory. > + > +=item B<uptime> > + > +Prints the current uptime of the domains running. > + > +=item B<pci-list-assignable-devices> > + > +List all the assignable PCI devices. > + > +=back > + > +=head1 SCHEDULER SUBCOMMANDS > + > +Xen ships with a number of domain schedulers, which can be set at boot > +time with the B<sched=> parameter on the Xen command line. By > +default B<credit> is used for scheduling. > + > +=over 4 > + > +=item B<sched-credit> [ B<-d> I<domain-id> [ B<-w>[B<=>I<WEIGHT>] | B<-c>[B<=>I<CAP>] ] ] > + > +Set credit scheduler parameters. The credit scheduler is a > +proportional fair share CPU scheduler built from the ground up to be > +work conserving on SMP hosts. > + > +Each domain (including Domain0) is assigned a weight and a cap. > + > +B<PARAMETERS> > + > +=over 4 > + > +=item I<WEIGHT> > + > +A domain with a weight of 512 will get twice as much CPU as a domain > +with a weight of 256 on a contended host. Legal weights range from 1 > +to 65535 and the default is 256. > + > +=item I<CAP> > + > +The cap optionally fixes the maximum amount of CPU a domain will be > +able to consume, even if the host system has idle CPU cycles. The cap > +is expressed in percentage of one physical CPU: 100 is 1 physical CPU, > +50 is half a CPU, 400 is 4 CPUs, etc. The default, 0, means there is > +no upper cap. > + > +=back > + > +=back > + > +=head1 CPUPOOLS COMMANDS > + > +Xen can group the physical cpus of a server in cpu-pools. Each physical CPU is > +assigned at most to one cpu-pool. Domains are each restricted to a single > +cpu-pool. Scheduling does not cross cpu-pool boundaries, so each cpu-pool has > +an own scheduler. > +Physical cpus and domains can be moved from one pool to another only by an > +explicit command. > + > +=over 4 > + > +=item B<cpupool-create> [I<OPTIONS>] I<ConfigFile> > + > +Create a cpu pool based an I<ConfigFile>. > + > +B<OPTIONS> > + > +=over 4 > + > +=item B<-f=FILE>, B<--defconfig=FILE> > + > +Use the given configuration file. > + > +=item B<-n>, B<--dryrun> > + > +Dry run - prints the resulting configuration. > + > +=back > + > +=item B<cpupool-list> [I<-c|--cpus> I<cpu-pool>] > + > +List CPU pools on the host. > +If I<-c> is specified, B<xl> prints a list of CPUs used by I<cpu-pool>. > + > +=item B<cpupool-destroy> I<cpu-pool> > + > +Deactivates a cpu pool. > + > +=item B<cpupool-rename> I<cpu-pool> <newname> > + > +Renames a cpu pool to I<newname>. > + > +=item B<cpupool-cpu-add> I<cpu-pool> I<cpu-nr|node-nr> > + > +Adds a cpu or a numa node to a cpu pool. > + > +=item B<cpupool-cpu-remove> I<cpu-nr|node-nr> > + > +Removes a cpu or a numa node from a cpu pool. > + > +=item B<cpupool-migrate> I<domain-id> I<cpu-pool> > + > +Moves a domain into a cpu pool. > + > +=item B<cpupool-numa-split> > + > +Splits up the machine into one cpu pool per numa node. > + > +=back > + > +=head1 VIRTUAL DEVICE COMMANDS > + > +Most virtual devices can be added and removed while guests are > +running. The effect to the guest OS is much the same as any hotplug > +event. > + > +=head2 BLOCK DEVICES > + > +=over 4 > + > +=item B<block-attach> I<domain-id> I<disc-spec-component(s)> ... > + > +Create a new virtual block device. This will trigger a hotplug event > +for the guest. > + > +B<OPTIONS> > + > +=over 4 > + > +=item I<domain-id> > + > +The domain id of the guest domain that the device will be attached to. > + > +=item I<disc-spec-component> > + > +A disc specification in the same format used for the B<disk> variable in > +the domain config file. See L<xldomain.cfg>. > + > +=back > + > +=item B<block-detach> I<domain-id> I<devid> [B<--force>] > + > +Detach a domain''s virtual block device. I<devid> may be the symbolic > +name or the numeric device id given to the device by domain 0. You > +will need to run B<xl block-list> to determine that number. > + > +Detaching the device requires the cooperation of the domain. If the > +domain fails to release the device (perhaps because the domain is hung > +or is still using the device), the detach will fail. The B<--force> > +parameter will forcefully detach the device, but may cause IO errors > +in the domain. > + > +=item B<block-list> I<domain-id> > + > +List virtual block devices for a domain. > + > +=item B<cd-insert> I<domain-id> I<VirtualDevice> I<be-dev> > + > +Insert a cdrom into a guest domain''s cd drive. Only works with HVM domains. > + > +B<OPTIONS> > + > +=over 4 > + > +=item I<VirtualDevice> > + > +How the device should be presented to the guest domain; for example /dev/hdc. > + > +=item I<be-dev> > + > +the device in the backend domain (usually domain 0) to be exported; it can be a > +path to a file (file://path/to/file.iso). See B<disk> in L<xldomain.cfg> for the > +details. > + > +=back > + > +=item B<cd-eject> I<domain-id> I<VirtualDevice> > + > +Eject a cdrom from a guest''s cd drive. Only works with HVM domains. > +I<VirtualDevice> is the cdrom device in the guest to eject. > + > +=back > + > +=head2 NETWORK DEVICES > + > +=over 4 > + > +=item B<network-attach> I<domain-id> I<network-device> > + > +Creates a new network device in the domain specified by I<domain-id>. > +I<network-device> describes the device to attach, using the same format as the > +B<vif> string in the domain config file. See L<xldomain.cfg> for the > +description. > + > +=item B<network-detach> I<domain-id> I<devid|mac> > + > +Removes the network device from the domain specified by I<domain-id>. > +I<devid> is the virtual interface device number within the domain > +(i.e. the 3 in vif22.3). Alternatively the I<mac> address can be used to > +select the virtual interface to detach. > + > +=item B<network-list> I<domain-id> > + > +List virtual network interfaces for a domain. > + > +=back > + > +=head2 PCI PASS-THROUGH > + > +=over 4 > + > +=item B<pci-attach> I<domain-id> I<BDF> > + > +Hot-plug a new pass-through pci device to the specified domain. > +B<BDF> is the PCI Bus/Device/Function of the physical device to pass-through. > + > +=item B<pci-detach> [I<-f>] I<domain-id> I<BDF> > + > +Hot-unplug a previously assigned pci device from a domain. B<BDF> is the PCI > +Bus/Device/Function of the physical device to be removed from the guest domain. > + > +If B<-f> is specified, B<xl> is going to forcefully remove the device even > +without guest''s collaboration. > + > +=item B<pci-list> I<domain-id> > + > +List pass-through pci devices for a domain. > + > +=back > + > +=head1 SEE ALSO > + > +B<xldomain.cfg>(5), B<xentop>(1) > + > +=head1 AUTHOR > + > + Stefano Stabellini <stefano.stabellini@eu.citrix.com> > + Vincent Hanquez <vincent.hanquez@eu.citrix.com> > + Ian Jackson <ian.jackson@eu.citrix.com> > + Ian Campbell <Ian.Campbell@citrix.com> > + > +=head1 BUGS > + > +Send bugs to xen-devel@lists.xensource.com. > > _______________________________________________ > Xen-devel mailing list > Xen-devel@lists.xensource.com > http://lists.xensource.com/xen-devel_______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2011-Oct-31 09:32 UTC
Re: [Xen-devel] [PATCH DOCDAY] introduce an xl man page in pod format
On Thu, 2011-10-27 at 17:19 +0100, Stefano Stabellini wrote:> > Almost every command implemented in xl should be present, a notable > exception are the tmem commands that are currently missing.That should be easy to fix -- Dan could you provide some words about these commands please? I assume Stefano is referring to: $ grep \"tmem tools/libxl/xl_cmdtable.c { "tmem-list", { "tmem-freeze", { "tmem-destroy", { "tmem-thaw", { "tmem-set", { "tmem-shared-auth", { "tmem-freeable", These mirror the xm commands but there''s nothing we can crib there either. POD format per Stefano''s original post would be ideal but if you don''t feel like learning that (although it is pretty simple) I think we can offer to format stuff up if you just provide the words. Ian. _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Jackson
2011-Nov-01 18:45 UTC
Re: [Xen-devel] [PATCH DOCDAY] introduce an xl man page in pod format
Stefano Stabellini writes ("[Xen-devel] [PATCH DOCDAY] introduce an xl man page in pod format"):> This is the initial version of an xl man page, based on the old xm man > page.Thanks. I have applied this. There were various suggestions for improvements in the thread, but I think this manpage is better than nothing so it should go in ASAP. Further improvents are indeed welcome and should come as patches against this. Ian. _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2011-Nov-02 19:34 UTC
Re: [Xen-devel] [PATCH DOCDAY] introduce an xl man page in pod format
On Tue, 2011-11-01 at 14:45 -0400, Ian Jackson wrote:> Stefano Stabellini writes ("[Xen-devel] [PATCH DOCDAY] introduce an xl man page in pod format"): > > This is the initial version of an xl man page, based on the old xm man > > page. > > Thanks. I have applied this. There were various suggestions for > improvements in the thread, but I think this manpage is better than > nothing so it should go in ASAP. Further improvents are indeed > welcome and should come as patches against this.Sure. Stefano, are you going to address the review or shall I do it? Ian. _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Stefano Stabellini
2011-Nov-08 17:50 UTC
Re: [Xen-devel] [PATCH DOCDAY] introduce an xl man page in pod format
On Wed, 2 Nov 2011, Ian Campbell wrote:> On Tue, 2011-11-01 at 14:45 -0400, Ian Jackson wrote: > > Stefano Stabellini writes ("[Xen-devel] [PATCH DOCDAY] introduce an xl man page in pod format"): > > > This is the initial version of an xl man page, based on the old xm man > > > page. > > > > Thanks. I have applied this. There were various suggestions for > > improvements in the thread, but I think this manpage is better than > > nothing so it should go in ASAP. Further improvents are indeed > > welcome and should come as patches against this. > > Sure. Stefano, are you going to address the review or shall I do it?I am going to address the review. However I am not so sure about splitting the man page, after all using / to search through it has worked very well in the past 40 years. Are you sure you haven''t been drinking the kool-aid? ;) _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Ian Campbell
2011-Nov-08 19:57 UTC
Re: [Xen-devel] [PATCH DOCDAY] introduce an xl man page in pod format
On Tue, 2011-11-08 at 17:50 +0000, Stefano Stabellini wrote:> On Wed, 2 Nov 2011, Ian Campbell wrote: > > On Tue, 2011-11-01 at 14:45 -0400, Ian Jackson wrote: > > > Stefano Stabellini writes ("[Xen-devel] [PATCH DOCDAY] introduce an xl man page in pod format"): > > > > This is the initial version of an xl man page, based on the old xm man > > > > page. > > > > > > Thanks. I have applied this. There were various suggestions for > > > improvements in the thread, but I think this manpage is better than > > > nothing so it should go in ASAP. Further improvents are indeed > > > welcome and should come as patches against this. > > > > Sure. Stefano, are you going to address the review or shall I do it? > > I am going to address the review.Great.> However I am not so sure about splitting the man page, after all using / > to search through it has worked very well in the past 40 years. Are you > sure you haven''t been drinking the kool-aid? ;)By that rationale we only need one manpage for all of POSIX and another for SysV and we are done ;-) Long manpages documenting lots of commands don''t scale that well, have you ever tried to use e.g. bash-builtins(7) or perlfunc(1) to find the documentation for a particular function? They are basically unusable, even with search, because they glom everything into a single page. The xl one is reasonably short right now but I expect it will grow as it gets flesh out, we add more examples etc etc. As well as keeping each individual doc shorter (which I think makes them more manageable, less intimidating and easier on the reader etc) splitting things up also means that each command can be documented in the more traditional style, i.e. with its own SYNOPSYS, DESCRIPTION, OPTIONS, RETURNS, EXAMPLES, SEE ALSO etc. If you merge them altogether then this becomes cumbersome. xl happens to be a single binary but in reality it is implementing multiple commands, in some sense those commands are even unrelated (e.g. what has vcpu pinning really got to do with migration?). It could just as well been implemented that way too (e.g. the xl-<subcommand> form) and then we would naturally have had separate pages. Ian. _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Stefano Stabellini
2011-Nov-09 14:41 UTC
Re: [Xen-devel] [PATCH DOCDAY] introduce an xl man page in pod format
On Fri, 28 Oct 2011, Ian Campbell wrote:> On Thu, 2011-10-27 at 17:19 +0100, Stefano Stabellini wrote: > > This is the initial version of an xl man page, based on the old xm man > > page. > > Almost every command implemented in xl should be present, a notable > > exception are the tmem commands that are currently missing. > > I think it''s worth enumerating all the commands, even with a TBD, since > it marks what is missing.the only ones that are missing are the tmem commands so I am going to add them> > Further improvements and clarifications to this man page are very welcome. > > > > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> > > > > diff -r 39aa9b2441da docs/man/xl.pod.1 > > --- /dev/null Thu Jan 01 00:00:00 1970 +0000 > > +++ b/docs/man/xl.pod.1 Thu Oct 27 15:59:03 2011 +0000 > > @@ -0,0 +1,805 @@ > > +=head1 NAME > > + > > +XL - Xen management tool, based on LibXenlight > > + > > +=head1 SYNOPSIS > > + > > +B<xl> I<subcommand> [I<args>] > > B<xl> [I<global-args>] I<subcommand> [I<args>] > > The interesting global-args are -v (verbose, can be used repeatedly) and > -N (dry-run).OK> > + > > +=head1 DESCRIPTION > > + > > +The B<xl> program is the new tool for managing Xen guest > > +domains. The program can be used to create, pause, and shutdown > > +domains. It can also be used to list current domains, enable or pin > > +VCPUs, and attach or detach virtual block devices. > > +The old B<xm> tool is deprecated and should not be used. > > + > > +The basic structure of every B<xl> command is almost always: > > + > > +=over 2 > > + > > +B<xl> I<subcommand> [I<OPTIONS>] I<domain-id> > > + > > +=back > > + > > +Where I<subcommand> is one of the subcommands listed below, I<domain-id> > > +is the numeric domain id, or the domain name (which will be internally > > +translated to domain id), and I<OPTIONS> are subcommand specific > > +options. There are a few exceptions to this rule in the cases where > > +the subcommand in question acts on all domains, the entire machine, > > +or directly on the Xen hypervisor. Those exceptions will be clear for > > +each of those subcommands. > > + > > +=head1 NOTES > > + > > +Most B<xl> operations rely upon B<xenstored> and B<xenconsoled>: make > > +sure you start the script B</etc/init.d/xencommons> at boot time to > > +initialize all the daemons needed by B<xl>. > > + > > +In the most common network configuration, you need to setup a bridge in dom0 > > +named B<xenbr0> in order to have a working network in the guest domains. > > +Please refer to the documentation of your Linux distribution to know how to > > +setup the bridge. > > + > > +Most B<xl> commands require root privileges to run due to the > > +communications channels used to talk to the hypervisor. Running as > > +non root will return an error. > > + > > +=head1 DOMAIN SUBCOMMANDS > > + > > +The following subcommands manipulate domains directly. As stated > > +previously, most commands take I<domain-id> as the first parameter. > > + > > +=over 4 > > + > > +=item B<create> [I<OPTIONS>] I<configfile> > > The I<configfile> is optional and if it present it must come before the > options. > In addition to the normal --option stuff you can also pass key=value to > provide options as if they were written in a configuration file, these > override whatever is in the config file.OK> While checking this I noticed that before processing arguments > main_create() does: > > if (argv[1] && argv[1][0] != ''-'' && !strchr(argv[1], ''='')) { > filename = argv[1]; > argc--; argv++; > } > > that use of argv[1] without checking argc is a little dubious (ok if > argc<1 then argc==0 and therefore argv[argc+1]==NULL, but still...). > > > + > > +The create subcommand requires a config file: see L<xldomain.cfg> for > > +full details of that file format and possible options. > > + > > +I<configfile> can either be an absolute path to a file, or a relative > > +path to a file located in /etc/xen. > > This isn''t actually true for xl. Arguably that''s a bug in xl rather than > this doc but I seem to recall that someone had a specific reason for not > doing this.OK, I am going to update the doc> > + > > +Create will return B<as soon> as the domain is started. This B<does > > +not> mean the guest OS in the domain has actually booted, or is > > +available for input. > > + > > +B<OPTIONS> > > + > > +=over 4 > > + > > +=item B<-q>, B<--quiet> > > + > > +No console output. > > + > > +=item B<-f=FILE>, B<--defconfig=FILE> > > + > > +Use the given configuration file. > > + > > +=item B<-n>, B<--dryrun> > > + > > +Dry run - prints the resulting configuration in SXP but does not create > > +the domain. > > + > > +=item B<-p> > > + > > +Leave the domain paused after it is created. > > + > > +=item B<-c> > > + > > +Attach console to the domain as soon as it has started. This is > > +useful for determining issues with crashing domains. > > ... and just as a general convenience since you often want to watch the > domain boot.OK> > + > > +=back > > + > > +B<EXAMPLES> > > + > > +=over 4 > > + > > +=item I<with config file> > > + > > + xl create DebianLenny > > + > > +This creates a domain with the file /etc/xen/DebianLenny, and returns as > > +soon as it is run. > > + > > +=back > > + > > +=item B<console> I<domain-id> > > + > > +Attach to domain I<domain-id>''s console. If you''ve set up your domains to > > +have a traditional log in console this will look much like a normal > > +text log in screen. > > + > > +Use the key combination Ctrl+] to detach the domain console. > > This takes -t [pv|serial] and -n (num) options.I''ll add those options> > + > > +=item B<vncviewer> [I<OPTIONS>] I<domain-id> > > + > > +Attach to domain''s VNC server, forking a vncviewer process. > > + > > +B<OPTIONS> > > + > > +=over 4 > > + > > +=item I<--autopass> > > + > > +Pass VNC password to vncviewer via stdin. > > What is the behaviour if you don''t do this?I am not sure. Maybe Ian knows.> Are the sub-commands intended to be in some sort of order. In general > they seem to be alphabetical but in that case vncviewer does not belong > here.I''ll order them alphabetically.> [...] > > +=item B<list> [I<OPTIONS>] [I<domain-id> ...] > > + > > +Prints information about one or more domains. If no domains are > > +specified it prints out information about all domains. > > + > > + > > +B<OPTIONS> > > + > > +=over 4 > > + > > +=item B<-l>, B<--long> > > + > > +The output for B<xl list> is not the table view shown below, but > > +instead presents the data in SXP compatible format. > > + > > +=item B<-Z>, B<--context> > > +Also prints the security labels. > > + > > +=item B<-v>, B<--verbose> > > + > > +Also prints the domain UUIDs, the shutdown reason and security labels. > > + > > +=back > > + > > +B<EXAMPLE> > > + > > +An example format for the list is as follows: > > + > > + Name ID Mem VCPUs State Time(s) > > + Domain-0 0 750 4 r----- 11794.3 > > + win 1 1019 1 r----- 0.3 > > + linux 2 2048 2 r----- 5624.2 > > + > > +Name is the name of the domain. ID the numeric domain id. Mem is the > > +desired amount of memory to allocate to the domain (although it may > > +not be the currently allocated amount). VCPUs is the number of > > +virtual CPUs allocated to the domain. State is the run state (see > > +below). Time is the total run time of the domain as accounted for by > > +Xen. > > + > > +B<STATES> > > + > > +The State field lists 6 states for a Xen domain, and which ones the > > +current domain is in. > > + > > +=over 4 > > + > > +=item B<r - running> > > + > > +The domain is currently running on a CPU. > > + > > +=item B<b - blocked> > > + > > +The domain is blocked, and not running or runnable. This can be caused > > +because the domain is waiting on IO (a traditional wait state) or has > > +gone to sleep because there was nothing else for it to do. > > + > > +=item B<p - paused> > > + > > +The domain has been paused, usually occurring through the administrator > > +running B<xl pause>. When in a paused state the domain will still > > +consume allocated resources like memory, but will not be eligible for > > +scheduling by the Xen hypervisor. > > + > > +=item B<s - shutdown> > > + > > +FIXME: Why would you ever see this state? > > This is XEN_DOMINF_shutdown which just says "/* The guest OS has shut > down. */". It is set in response to the guest calling SCHEDOP_shutdown. > I think it corresponds to the period between the guest shutting down and > the toolstack noticing and beginning to tear it down (when it moves to > dying).OK> > +=item B<c - crashed> > > + > > +The domain has crashed, which is always a violent ending. Usually > > +this state can only occur if the domain has been configured not to > > +restart on crash. See L<xldomain.cfg> for more info. > > + > > +=item B<d - dying> > > + > > +The domain is in process of dying, but hasn''t completely shutdown or > > +crashed. > > + > > +FIXME: Is this right? > > I think so. This is XEN_DOMINF_dying which says "/* Domain is scheduled > to die. */"OK> > + > > +=item B<migrate> [I<OPTIONS>] I<domain-id> I<host> > > + > > +Migrate a domain to another host machine. By default B<xl> relies on ssh as a > > +transport mechanism between the two hosts. > > + > > +B<OPTIONS> > > + > > +=over 4 > > + > > +=item B<-s> I<sshcommand> > > + > > +Use <sshcommand> instead of ssh. String will be passed to sh. If empty, run > > +<host> instead of ssh <host> xl migrate-receive [-d -e]. > > + > > +=item B<-e> > > + > > +On the new host, do not wait in the background (on <host>) for the death of the > > +domain. > > Would be useful to reference the equivalent option to "xl create" here > just to clarify that they mean the same.Yes, good idea.> > +=item B<reboot> [I<OPTIONS>] I<domain-id> > > + > > +Reboot a domain. This acts just as if the domain had the B<reboot> > > +command run from the console. > > This relies on PV drivers, I think.yes, I''ll add that> Not all guests have the option of typing "reboot" on the console but I > suppose it is clear enough what you mean. > > > The command returns as soon as it has > > +executed the reboot action, which may be significantly before the > > +domain actually reboots. > > + > > +The behavior of what happens to a domain when it reboots is set by the > > +B<on_reboot> parameter of the xldomain.cfg file when the domain was > > +created. > > + > > +=item B<restore> [I<OPTIONS>] [I<ConfigFile>] I<CheckpointFile> > > + > > +Build a domain from an B<xl save> state file. See B<save> for more info. > > + > > +B<OPTIONS> > > + > > +=over 4 > > + > > +=item B<-p> > > + > > +Do not unpause domain after restoring it. > > + > > +=item B<-e> > > + > > +Do not wait in the background for the death of the domain on the new host. > > Reference xl create? >yep> > + > > +=item B<-d> > > + > > +Enable debug messages. > > + > > +=back > > + > > +=item B<save> [I<OPTIONS>] I<domain-id> I<CheckpointFile> [I<ConfigFile>] > > + > > +Saves a running domain to a state file so that it can be restored > > +later. Once saved, the domain will no longer be running on the > > +system, unless the -c option is used. > > +B<xl restore> restores from this checkpoint file. > > +Passing a config file argument allows the user to manually select the VM config > > +file used to create the domain. > > + > > + > > +=over 4 > > + > > +=item B<-c> > > + > > +Leave domain running after creating the snapshot. > > + > > +=back > > + > > + > > +=item B<shutdown> [I<OPTIONS>] I<domain-id> > > + > > +Gracefully shuts down a domain. This coordinates with the domain OS > > +to perform graceful shutdown, so there is no guarantee that it will > > +succeed, and may take a variable length of time depending on what > > +services must be shutdown in the domain. The command returns > > +immediately after signally the domain unless that B<-w> flag is used. > > Does this rely on pv drivers or does it inject ACPI events etc on HVM?Yes, it requires PV drivers, I''ll add that.> > + > > +The behavior of what happens to a domain when it reboots is set by the > behaviour ? > > > +B<on_shutdown> parameter of the xldomain.cfg file when the domain was > > +created. > > + > > +B<OPTIONS> > > + > > +=over 4 > > + > > +=item B<-w> > > + > > +Wait for the domain to complete shutdown before returning. > > + > > +=back > > + > > +=item B<sysrq> I<domain-id> I<letter> > > + > > +Send a I<Magic System Request> signal to the domain. For more > > +information on available magic sys req operations, see sysrq.txt in > > +your Linux Kernel sources. > > It would be nice to word this in a more generic fashion and point out > that the specific implementation on Linux behaves like sysrq. Other > guests might do other things? > > Relies on PV drivers.OK> > [...] > > + > > +=item B<vcpu-set> I<domain-id> I<vcpu-count> > > + > > +Enables the I<vcpu-count> virtual CPUs for the domain in question. > > +Like mem-set, this command can only allocate up to the maximum virtual > > +CPU count configured at boot for the domain. > > + > > +If the I<vcpu-count> is smaller than the current number of active > > +VCPUs, the highest number VCPUs will be hotplug removed. This may be > > +important for pinning purposes. > > + > > +Attempting to set the VCPUs to a number larger than the initially > > +configured VCPU count is an error. Trying to set VCPUs to < 1 will be > > +quietly ignored. > > + > > +Because this operation requires cooperation from the domain operating > > +system, there is no guarantee that it will succeed. This command will > > +not work with a full virt domain. > > I thought we supported some VCPU hotplug for HVM (using ACPI and such) > these days?Yes you are right, I''ll remove it.> [...] > > +=item B<button-press> I<domain-id> I<button> > > + > > +Indicate an ACPI button press to the domain. I<button> is may be ''power'' or > > +''sleep''. > > HVM only?yes> > + > > +=item B<trigger> I<domain-id> I<nmi|reset|init|power|sleep> [I<VCPU>] > > + > > +Send a trigger to a domain, where the trigger can be: nmi, reset, init, power > > +or sleep. Optionally a specific vcpu number can be passed as an argument. > > HVM only? nmi might work for PV, not sure about the rest.I think the current implementation is HVM only> > +=item B<getenforce> > > + > > +Returns the current enforcing mode of the Flask Xen security module. > > + > > +=item B<setenforce> I<1|0|Enforcing|Permissive> > > + > > +Sets the current enforcing mode of the Flask Xen security module > > + > > +=item B<loadpolicy> I<policyfile> > > + > > +Loads a new policy int the Flask Xen security module. > > I suppose flask is something which needs to go onto the "to be > documented" list such that we can reference it from here.I am going to add a TO BE DOCUMENTED section at the end> > +=back > > + > > +=head1 XEN HOST SUBCOMMANDS > > + > > +=over 4 > > + > > +=item B<debug-keys> I<keys> > > + > > +Send debug I<keys> to Xen. > > The same as pressing the Xen "conswitch" (Ctrl-A by default) three times > and then pressing "keys".I''ll add that> > + > > +=item B<dmesg> [B<-c>] > > + > > +Reads the Xen message buffer, similar to dmesg on a Linux system. The > dmesg(1) ^Unix or ;-) > > > +buffer contains informational, warning, and error messages created > > +during Xen''s boot process. If you are having problems with Xen, this > > +is one of the first places to look as part of problem determination. > > + > > +B<OPTIONS> > > + > > +=over 4 > > + > > +=item B<-c>, B<--clear> > > + > > +Clears Xen''s message buffer. > > + > > +=back > > + > > +=item B<info> [B<-n>, B<--numa>] > > + > > +Print information about the Xen host in I<name : value> format. When > > +reporting a Xen bug, please provide this information as part of the > > +bug report. > > I''m not sure this is useful people reporting bugs will look for > information on reporting bugs (which should include this info) rather > than scanning the xl man page for options which say "please include.." > > I have added the need for this to > http://wiki.xen.org/xenwiki/ReportingBugsOK> > + > > +Sample output looks as follows (lines wrapped manually to make the man > > +page more readable): > > > + > > + host : talon > > + release : 2.6.12.6-xen0 > > Heh. Perhaps a more up to date example if one is needed at all?Good point> > + version : #1 Mon Nov 14 14:26:26 EST 2005 > > + machine : i686 > > + nr_cpus : 2 > > + nr_nodes : 1 > > + cores_per_socket : 1 > > + threads_per_core : 1 > > + cpu_mhz : 696 > > + hw_caps : 0383fbff:00000000:00000000:00000040 > > + total_memory : 767 > > + free_memory : 37 > > + xen_major : 3 > > + xen_minor : 0 > > + xen_extra : -devel > > + xen_caps : xen-3.0-x86_32 > > + xen_scheduler : credit > > + xen_pagesize : 4096 > > + platform_params : virt_start=0xfc000000 > > + xen_changeset : Mon Nov 14 18:13:38 2005 +0100 > > + 7793:090e44133d40 > > + cc_compiler : gcc version 3.4.3 (Mandrakelinux > > + 10.2 3.4.3-7mdk) > > + cc_compile_by : sdague > > + cc_compile_domain : (none) > > + cc_compile_date : Mon Nov 14 14:16:48 EST 2005 > > + xend_config_format : 4 > > + > > +B<FIELDS> > > + > > +Not all fields will be explained here, but some of the less obvious > > +ones deserve explanation: > > + > > +=over 4 > > + > > +=item B<hw_caps> > > + > > +A vector showing what hardware capabilities are supported by your > > +processor. This is equivalent to, though more cryptic, the flags > > +field in /proc/cpuinfo on a normal Linux machine. > > Does this correspond to some cpuid output somewhere? That might be a > good thing to reference. > > (checks, hmm, it all very processor specific)Yes, they do. I''ll add a reference to that.> > +=back > > + > > +B<OPTIONS> > > + > > +=over 4 > > + > > +=item B<-n>, B<--numa> > > + > > +List host NUMA topology information > > + > > +=back > [...] > > > +=item B<pci-list-assignable-devices> > > + > > +List all the assignable PCI devices. > > Perhaps add: > That is, though devices in the system which are configured to be > available for passthrough and are bound to a suitable PCI > backend driver in domain 0 rather than a real driver.OK> > +=head1 CPUPOOLS COMMANDS > > + > > +Xen can group the physical cpus of a server in cpu-pools. Each physical CPU is > > +assigned at most to one cpu-pool. Domains are each restricted to a single > > +cpu-pool. Scheduling does not cross cpu-pool boundaries, so each cpu-pool has > > +an own scheduler. > > +Physical cpus and domains can be moved from one pool to another only by an > > +explicit command. > > + > > +=over 4 > > + > > +=item B<cpupool-create> [I<OPTIONS>] I<ConfigFile> > > + > > +Create a cpu pool based an I<ConfigFile>. > > + > > +B<OPTIONS> > > + > > +=over 4 > > + > > +=item B<-f=FILE>, B<--defconfig=FILE> > > + > > +Use the given configuration file. > > + > > +=item B<-n>, B<--dryrun> > > + > > +Dry run - prints the resulting configuration. > > Is this deprecated in favour of global -N option? I think it should be.Yeah, there is no point since we have a global option.> > + > > +=back > > + > > +=item B<cpupool-list> [I<-c|--cpus> I<cpu-pool>] > > + > > +List CPU pools on the host. > > +If I<-c> is specified, B<xl> prints a list of CPUs used by I<cpu-pool>. > > Is cpu-pool a name or a number, or both? (this info would be useful in > the intro to the section I suppose).I think it is a name, but I would need a confirmation from Juergen.> > + > > +=item B<cpupool-destroy> I<cpu-pool> > > + > > +Deactivates a cpu pool. > > + > > +=item B<cpupool-rename> I<cpu-pool> <newname> > > + > > +Renames a cpu pool to I<newname>. > > + > > +=item B<cpupool-cpu-add> I<cpu-pool> I<cpu-nr|node-nr> > > + > > +Adds a cpu or a numa node to a cpu pool. > > + > > +=item B<cpupool-cpu-remove> I<cpu-nr|node-nr> > > + > > +Removes a cpu or a numa node from a cpu pool. > > + > > +=item B<cpupool-migrate> I<domain-id> I<cpu-pool> > > + > > +Moves a domain into a cpu pool. > > + > > +=item B<cpupool-numa-split> > > + > > +Splits up the machine into one cpu pool per numa node. > > + > > +=back > > + > > +=head1 VIRTUAL DEVICE COMMANDS > > + > > +Most virtual devices can be added and removed while guests are > > +running. > > ... assuming the necessary support exists in the guest. >OK> > The effect to the guest OS is much the same as any hotplug > > +event. > > + > > +=head2 BLOCK DEVICES > > + > > +=over 4 > > + > > +=item B<block-attach> I<domain-id> I<disc-spec-component(s)> ... > > + > > +Create a new virtual block device. This will trigger a hotplug event > > +for the guest. > > Should add a reference to the docs/misc/xl-disk-configuration.txt doc to > your SEE ALSO section.OK> > + > > +B<OPTIONS> > > + > > +=over 4 > > + > > +=item I<domain-id> > > + > > +The domain id of the guest domain that the device will be attached to. > > + > > +=item I<disc-spec-component> > > + > > +A disc specification in the same format used for the B<disk> variable in > > +the domain config file. See L<xldomain.cfg>. > > + > > +=back > > + > > +=item B<block-detach> I<domain-id> I<devid> [B<--force>] > > + > > +Detach a domain''s virtual block device. I<devid> may be the symbolic > > +name or the numeric device id given to the device by domain 0. You > > +will need to run B<xl block-list> to determine that number. > > + > > +Detaching the device requires the cooperation of the domain. If the > > +domain fails to release the device (perhaps because the domain is hung > > +or is still using the device), the detach will fail. The B<--force> > > +parameter will forcefully detach the device, but may cause IO errors > > +in the domain. > > + > > +=item B<block-list> I<domain-id> > > + > > +List virtual block devices for a domain. > > + > > +=item B<cd-insert> I<domain-id> I<VirtualDevice> I<be-dev> > > + > > +Insert a cdrom into a guest domain''s cd drive. Only works with HVM domains. > > + > > +B<OPTIONS> > > + > > +=over 4 > > + > > +=item I<VirtualDevice> > > + > > +How the device should be presented to the guest domain; for example /dev/hdc. > > + > > +=item I<be-dev> > > + > > +the device in the backend domain (usually domain 0) to be exported; it can be a > > +path to a file (file://path/to/file.iso). See B<disk> in L<xldomain.cfg> for the > > +details. > > + > > +=back > > + > > +=item B<cd-eject> I<domain-id> I<VirtualDevice> > > + > > +Eject a cdrom from a guest''s cd drive. Only works with HVM domains. > > +I<VirtualDevice> is the cdrom device in the guest to eject. > > + > > +=back > > + > > +=head2 NETWORK DEVICES > > + > > +=over 4 > > + > > +=item B<network-attach> I<domain-id> I<network-device> > > + > > +Creates a new network device in the domain specified by I<domain-id>. > > +I<network-device> describes the device to attach, using the same format as the > > +B<vif> string in the domain config file. See L<xldomain.cfg> for the > > +description. > > I sent out a patch to add docs/misc/xl-network-configuration.markdown as > well.I''ll add a reference to it> > + > > +=item B<network-detach> I<domain-id> I<devid|mac> > > + > > +Removes the network device from the domain specified by I<domain-id>. > > +I<devid> is the virtual interface device number within the domain > > +(i.e. the 3 in vif22.3). Alternatively the I<mac> address can be used to > > +select the virtual interface to detach. > > + > > +=item B<network-list> I<domain-id> > > + > > +List virtual network interfaces for a domain. > > + > > +=back > > + > > +=head2 PCI PASS-THROUGH > > + > > +=over 4 > > + > > +=item B<pci-attach> I<domain-id> I<BDF> > > + > > +Hot-plug a new pass-through pci device to the specified domain. > > +B<BDF> is the PCI Bus/Device/Function of the physical device to pass-through. > > + > > +=item B<pci-detach> [I<-f>] I<domain-id> I<BDF> > > + > > +Hot-unplug a previously assigned pci device from a domain. B<BDF> is the PCI > > +Bus/Device/Function of the physical device to be removed from the guest domain. > > + > > +If B<-f> is specified, B<xl> is going to forcefully remove the device even > > +without guest''s collaboration. > > + > > +=item B<pci-list> I<domain-id> > > + > > +List pass-through pci devices for a domain. > > + > > +=back > > + > > +=head1 SEE ALSO > > + > > +B<xldomain.cfg>(5), B<xentop>(1) > > + > > +=head1 AUTHOR > > + > > + Stefano Stabellini <stefano.stabellini@eu.citrix.com> > > + Vincent Hanquez <vincent.hanquez@eu.citrix.com> > > + Ian Jackson <ian.jackson@eu.citrix.com> > > + Ian Campbell <Ian.Campbell@citrix.com> > > This list seems so incomplete/unlikely to be updated that it may as well > not be included. (also I think AUTHOR in a man page refers to the author > of the page, not the authors of the software)OK, I''ll remove it> > +=head1 BUGS > > + > > +Send bugs to xen-devel@lists.xensource.com. > > Reference http://wiki.xen.org/xenwiki/ReportingBugs >Sure _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Juergen Gross
2011-Nov-09 14:47 UTC
Re: [Xen-devel] [PATCH DOCDAY] introduce an xl man page in pod format
On 11/09/2011 03:41 PM, Stefano Stabellini wrote:> On Fri, 28 Oct 2011, Ian Campbell wrote: >> On Thu, 2011-10-27 at 17:19 +0100, Stefano Stabellini wrote: >>> +=head1 CPUPOOLS COMMANDS >>> + >>> +Xen can group the physical cpus of a server in cpu-pools. Each physical CPU is >>> +assigned at most to one cpu-pool. Domains are each restricted to a single >>> +cpu-pool. Scheduling does not cross cpu-pool boundaries, so each cpu-pool has >>> +an own scheduler. >>> +Physical cpus and domains can be moved from one pool to another only by an >>> +explicit command. >>> + >>> +=over 4 >>> + >>> +=item B<cpupool-create> [I<OPTIONS>] I<ConfigFile> >>> + >>> +Create a cpu pool based an I<ConfigFile>. >>> + >>> +B<OPTIONS> >>> + >>> +=over 4 >>> + >>> +=item B<-f=FILE>, B<--defconfig=FILE> >>> + >>> +Use the given configuration file. >>> + >>> +=item B<-n>, B<--dryrun> >>> + >>> +Dry run - prints the resulting configuration. >> Is this deprecated in favour of global -N option? I think it should be. > Yeah, there is no point since we have a global option. > >>> + >>> +=back >>> + >>> +=item B<cpupool-list> [I<-c|--cpus> I<cpu-pool>] >>> + >>> +List CPU pools on the host. >>> +If I<-c> is specified, B<xl> prints a list of CPUs used by I<cpu-pool>. >> Is cpu-pool a name or a number, or both? (this info would be useful in >> the intro to the section I suppose). > I think it is a name, but I would need a confirmation from Juergen. >Already specified by cs 24066. Juergen -- Juergen Gross Principal Developer Operating Systems PDG ES&S SWE OS6 Telephone: +49 (0) 89 3222 2967 Fujitsu Technology Solutions e-mail: juergen.gross@ts.fujitsu.com Domagkstr. 28 Internet: ts.fujitsu.com D-80807 Muenchen Company details: ts.fujitsu.com/imprint.html _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel