On Mar 8, 2014 10:08 AM, "Gene Cumm" <gene.cumm at gmail.com> wrote:
> On Mar 8, 2014 9:27 AM, "Steven Shiau" <steven at nchc.org.tw> wrote:
>>
>> On 03/08/2014 10:06 PM, Gene Cumm wrote:
>>>> Hi Gene,
>>>> Thanks. As you suggested, I did a test with 6.03-pre6, and I still got
>>>> the same issue. My client machine still only shows:
>>>> ===================
>>>> Getting cached packets
>>>> My IP is 192.168.120.1
>>>> ===================
>>>> The syslog shows:
>>>> ===================
>>>> Mar 8 21:25:06 drbldbn dhcpd: Client 0:c:29:6e:ac:93 requests
>>>
>>> Which vNIC?
>>>
>>> --Gene
>>
>> Hi Gene,
>> My VMware WS 10 is running on Debian wheezy. The client machine connects
>> to the host machine via vmnet1. I have the dhcpd service on Debian wheezy:
>> root      8433     1  0 21:19 ?        00:00:00 /usr/sbin/dhcpd -q -cf
>> /etc/dhcp/dhcpd.conf -pf /var/run/dhcpd.pid eth1 vmnet1
>> Thanks.
>
> vNIC, not VMNet: AMD PCNet32 vlance, Intel e1000, Intel e1000e, or VMware
> VMXNet3? Feel free to directly email me your .vmx file if you don't know.

So that should be vlance or flexible (which can be PCNet32 or e1000).
Could you verify with a real OS?

I see it has two cores per socket. Is it one socket? What exact make/model
is the CPU in the host? You might be choking the VM. Have you considered
just 1 socket and 1 core per socket?

--Gene
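P.S. If you're not sure which vNIC a VM has, look for the ethernet entries
in the .vmx. A typical set looks something like this (the values here are
only an example; yours will differ):

    ethernet0.present = "TRUE"
    ethernet0.connectionType = "custom"
    ethernet0.vnet = "/dev/vmnet1"
    ethernet0.virtualDev = "e1000"

If ethernet0.virtualDev is absent, the vNIC falls back to the default for
the configured guest OS type (often vlance/"flexible").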
On 2014/03/09 01:39, Gene Cumm wrote:
> So that should be vlance or flexible (which can be PCNet32 or e1000).
> Could you verify with a real OS?
>
> I see it has two cores per socket. Is it one socket? What exact make/model
> is the CPU in the host? You might be choking the VM. Have you considered
> just 1 socket and 1 core per socket?
>
> --Gene

Hi Gene,
I made some progress. I created a new VMware WS 10 EFI client and made sure
the ethernet device uses "e1000" in the .vmx file:
ethernet0.virtualDev = "e1000"
As you mentioned, 6.03-pre7 has some regression, so this testing was done
with 6.03-pre6. Now my client is able to EFI PXE boot. The log shows the
files PXELINUX requires were downloaded, and it finally booted:
==============
Mar 9 10:36:23 drbldbn dhcpd: DHCPACK on 192.168.120.3 to 00:0c:29:68:f3:23 via vmnet1
Mar 9 10:36:23 drbldbn atftpd[10590]: Serving bootx64.efi to 192.168.120.3:1808
Mar 9 10:36:23 drbldbn atftpd[10590]: Serving bootx64.efi to 192.168.120.3:1809
Mar 9 10:36:23 drbldbn atftpd[10590]: Serving ldlinux.e64 to 192.168.120.3:1810
Mar 9 10:36:23 drbldbn atftpd[10590]: Serving pxelinux.cfg/564d880a-57b6-3c13-26d5-a2af8968f323 to 192.168.120.3:1811
Mar 9 10:36:23 drbldbn atftpd[10590]: Serving pxelinux.cfg/01-00-0c-29-68-f3-23 to 192.168.120.3:1812
Mar 9 10:36:23 drbldbn atftpd[10590]: Serving pxelinux.cfg/C0A87803 to 192.168.120.3:1813
Mar 9 10:36:23 drbldbn atftpd[10590]: Serving pxelinux.cfg/C0A8780 to 192.168.120.3:1814
Mar 9 10:36:23 drbldbn atftpd[10590]: Serving pxelinux.cfg/C0A878 to 192.168.120.3:1815
Mar 9 10:36:23 drbldbn atftpd[10590]: Serving pxelinux.cfg/C0A87 to 192.168.120.3:1816
Mar 9 10:36:23 drbldbn atftpd[10590]: Serving pxelinux.cfg/C0A8 to 192.168.120.3:1817
Mar 9 10:36:23 drbldbn atftpd[10590]: Serving pxelinux.cfg/C0A to 192.168.120.3:1818
Mar 9 10:36:23 drbldbn atftpd[10590]: Serving pxelinux.cfg/C0 to 192.168.120.3:1819
Mar 9 10:36:23 drbldbn atftpd[10590]: Serving pxelinux.cfg/C to 192.168.120.3:1820
Mar 9 10:36:23 drbldbn atftpd[10590]: Serving pxelinux.cfg/default to 192.168.120.3:1821
Mar 9 10:36:23 drbldbn atftpd[10590]: Serving vesamenu.c32 to 192.168.120.3:1822
Mar 9 10:36:23 drbldbn atftpd[10590]: Serving efi64/vesamenu.c32 to 192.168.120.3:1823
Mar 9 10:36:23 drbldbn atftpd[10590]: Serving libcom32.c32 to 192.168.120.3:1824
Mar 9 10:36:23 drbldbn atftpd[10590]: Serving efi64/libcom32.c32 to 192.168.120.3:1825
Mar 9 10:36:24 drbldbn atftpd[10590]: Serving libutil.c32 to 192.168.120.3:1826
Mar 9 10:36:24 drbldbn atftpd[10590]: Serving efi64/libutil.c32 to 192.168.120.3:1827
Mar 9 10:36:24 drbldbn atftpd[10590]: Serving pxelinux.cfg/default to 192.168.120.3:1828
Mar 9 10:36:24 drbldbn atftpd[10590]: Serving drblwp.png to 192.168.120.3:1829
Mar 9 10:36:25 drbldbn atftpd[10590]: Serving vmlinuz-pxe to 192.168.120.3:1830
Mar 9 10:36:28 drbldbn atftpd[10590]: Serving initrd-pxe.img to 192.168.120.3:1831
Mar 9 10:37:32 drbldbn dhcpd: Client 0:c:29:68:f3:23 requests 1:3:6:c:f:1c:2a - DRBLClient - no dhcp-client-id
Mar 9 10:37:32 drbldbn dhcpd: DHCPDISCOVER from 00:0c:29:68:f3:23 via vmnet1
Mar 9 10:37:33 drbldbn dhcpd: DHCPOFFER on 192.168.120.3 to 00:0c:29:68:f3:23 via vmnet1
Mar 9 10:37:33 drbldbn dhcpd: Client 0:c:29:68:f3:23 requests 1:3:6:c:f:1c:2a - DRBLClient - no dhcp-client-id
Mar 9 10:37:33 drbldbn dhcpd: DHCPREQUEST for 192.168.120.3 (192.168.120.254) from 00:0c:29:68:f3:23 via vmnet1
Mar 9 10:37:33 drbldbn dhcpd: DHCPACK on 192.168.120.3 to 00:0c:29:68:f3:23 via vmnet1
Mar 9 10:37:33 drbldbn rpc.mountd[8609]: authenticated mount request from 192.168.120.3:934 for /tftpboot/node_root (/tftpboot/node_root)
==============
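By the way, the long list of pxelinux.cfg lookups above is not an error;
it is PXELINUX's normal search order: the client UUID, then 01-<MAC>, then
the client IP in uppercase hex shortened one character at a time, and
finally "default". The hex name can be reproduced with a shell one-liner:

    printf '%02X%02X%02X%02X\n' 192 168 120 3   # -> C0A87803

so C0A87803 is simply 192.168.120.3.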
One thing I noticed is that it took almost 1 minute to download
initrd-pxe.img (13 MB). Before I saw that, I thought it had hung. :)

To compare with BIOS PXE booting, I commented out the 'firmware = "efi"'
line in my .vmx file, changed the line "PATH efi64/" to "PATH bios/" in my
pxelinux config file, and did a BIOS PXE boot this time. The log shows:
===============
Mar 9 11:04:07 drbldbn atftpd[10590]: Serving pxelinux.0 to 192.168.120.3:2070
Mar 9 11:04:07 drbldbn atftpd[10590]: Serving pxelinux.0 to 192.168.120.3:2071
Mar 9 11:04:07 drbldbn atftpd[10590]: Serving ldlinux.c32 to 192.168.120.3:49152
Mar 9 11:04:07 drbldbn atftpd[10590]: Serving pxelinux.cfg/564d880a-57b6-3c13-26d5-a2af8968f323 to 192.168.120.3:49153
Mar 9 11:04:07 drbldbn atftpd[10590]: Serving pxelinux.cfg/01-00-0c-29-68-f3-23 to 192.168.120.3:49154
Mar 9 11:04:07 drbldbn atftpd[10590]: Serving pxelinux.cfg/C0A87803 to 192.168.120.3:49155
Mar 9 11:04:07 drbldbn atftpd[10590]: Serving pxelinux.cfg/C0A8780 to 192.168.120.3:49156
Mar 9 11:04:07 drbldbn atftpd[10590]: Serving pxelinux.cfg/C0A878 to 192.168.120.3:49157
Mar 9 11:04:07 drbldbn atftpd[10590]: Serving pxelinux.cfg/C0A87 to 192.168.120.3:49158
Mar 9 11:04:07 drbldbn atftpd[10590]: Serving pxelinux.cfg/C0A8 to 192.168.120.3:49159
Mar 9 11:04:07 drbldbn atftpd[10590]: Serving pxelinux.cfg/C0A to 192.168.120.3:49160
Mar 9 11:04:07 drbldbn atftpd[10590]: Serving pxelinux.cfg/C0 to 192.168.120.3:49161
Mar 9 11:04:07 drbldbn atftpd[10590]: Serving pxelinux.cfg/C to 192.168.120.3:49162
Mar 9 11:04:07 drbldbn atftpd[10590]: Serving pxelinux.cfg/default to 192.168.120.3:49163
Mar 9 11:04:07 drbldbn atftpd[10590]: Serving vesamenu.c32 to 192.168.120.3:49164
Mar 9 11:04:07 drbldbn atftpd[10590]: Serving bios/vesamenu.c32 to 192.168.120.3:49165
Mar 9 11:04:07 drbldbn atftpd[10590]: Serving libcom32.c32 to 192.168.120.3:49166
Mar 9 11:04:07 drbldbn atftpd[10590]: Serving bios/libcom32.c32 to 192.168.120.3:49167
Mar 9 11:04:07 drbldbn atftpd[10590]: Serving libutil.c32 to 192.168.120.3:49168
Mar 9 11:04:07 drbldbn atftpd[10590]: Serving bios/libutil.c32 to 192.168.120.3:49169
Mar 9 11:04:07 drbldbn atftpd[10590]: Serving pxelinux.cfg/default to 192.168.120.3:49170
Mar 9 11:04:07 drbldbn atftpd[10590]: Serving drblwp.png to 192.168.120.3:49171
Mar 9 11:04:08 drbldbn atftpd[10590]: Serving vmlinuz-pxe to 192.168.120.3:49172
Mar 9 11:04:08 drbldbn atftpd[10590]: Serving initrd-pxe.img to 192.168.120.3:49173
Mar 9 11:04:14 drbldbn dhcpd: Client 0:c:29:68:f3:23 requests 1:3:6:c:f:1c:2a - DRBLClient - no dhcp-client-id
Mar 9 11:04:14 drbldbn dhcpd: DHCPDISCOVER from 00:0c:29:68:f3:23 via vmnet1
Mar 9 11:04:14 drbldbn dhcpd: DHCPOFFER on 192.168.120.3 to 00:0c:29:68:f3:23 via vmnet1
Mar 9 11:04:14 drbldbn dhcpd: Client 0:c:29:68:f3:23 requests 1:3:6:c:f:1c:2a - DRBLClient - no dhcp-client-id
Mar 9 11:04:14 drbldbn dhcpd: DHCPREQUEST for 192.168.120.3 (192.168.120.254) from 00:0c:29:68:f3:23 via vmnet1
Mar 9 11:04:14 drbldbn dhcpd: DHCPACK on 192.168.120.3 to 00:0c:29:68:f3:23 via vmnet1
===============
You can see that the client took only 6 seconds to download the
initrd-pxe.img file, so for the time being the performance of pxelinux
efi64 is not as good as that of pxelinux bios.

Besides that, I also did a test with "e1000e" by assigning:
ethernet0.virtualDev = "e1000e"
in my .vmx file. The result is the same as with the PCnet32 NIC.
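In config terms, the exact knobs I have been flipping are these (lines
taken from my own setup; names like "ethernet0" depend on the VM):

    # .vmx: firmware type of the VM client
    firmware = "efi"                 # comment this out to PXE boot in BIOS mode

    # .vmx: virtual NIC model
    ethernet0.virtualDev = "e1000"   # "e1000e" and the PCnet32 default fail here

    # pxelinux config: module search path
    PATH efi64/                      # use "PATH bios/" when BIOS booting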
So my conclusions for the time being:
1. Use the "e1000" vNIC for the VM client, not "PCnet32" or "e1000e".
2. Stay with syslinux 6.03-pre6 until the next testing release.

I will do some tests with real machine clients when I get to the office
tomorrow. Thank you very much.

Steven.

-- 
Steven Shiau <steven _at_ nchc org tw> <steven _at_ stevenshiau org>
National Center for High-performance Computing, Taiwan.
http://www.nchc.org.tw
Public Key Server PGP Key ID: 4096R/47CF935C
Fingerprint: 0240 1FEB 695D 7112 62F0 8796 11C1 12DA 47CF 935C
On Mar 8, 2014 12:39 PM, "Gene Cumm" <gene.cumm at gmail.com> wrote:
> On Mar 8, 2014 10:08 AM, "Gene Cumm" <gene.cumm at gmail.com> wrote:
>>
>> vNIC, not VMNet: AMD PCNet32 vlance, Intel e1000, Intel e1000e, or VMware
>> VMXNet3? Feel free to directly email me your .vmx file if you don't know.
>
> I see it has two cores per socket. Is it one socket? What exact make/model
> is the CPU in the host? You might be choking the VM. Have you considered
> just 1 socket and 1 core per socket?
>
> --Gene

Steven, this last part is important for performance in a VM.

--Gene
On Sat, Mar 8, 2014 at 12:39 PM, Gene Cumm <gene.cumm at gmail.com> wrote:
> On Mar 8, 2014 10:08 AM, "Gene Cumm" <gene.cumm at gmail.com> wrote:
>>
>> vNIC, not VMNet: AMD PCNet32 vlance, Intel e1000, Intel e1000e, or VMware
>> VMXNet3? Feel free to directly email me your .vmx file if you don't know.
>
> So that should be vlance or flexible (which can be PCNet32 or e1000).
> Could you verify with a real OS?

I finally got the opportunity to test your VMX and I can see the same
issue. Interestingly, I also see issues with booting from the second
NIC (null IP).

> I see it has two cores per socket. Is it one socket? What exact make/model
> is the CPU in the host? You might be choking the VM. Have you considered
> just 1 socket and 1 core per socket?
>
> --Gene

-- 
-Gene
On Sun, Mar 9, 2014 at 8:09 AM, Gene Cumm <gene.cumm at gmail.com> wrote:
> On Sat, Mar 8, 2014 at 12:39 PM, Gene Cumm <gene.cumm at gmail.com> wrote:
>> On Mar 8, 2014 10:08 AM, "Gene Cumm" <gene.cumm at gmail.com> wrote:
>>>
>>> vNIC, not VMNet: AMD PCNet32 vlance, Intel e1000, Intel e1000e, or VMware
>>> VMXNet3? Feel free to directly email me your .vmx file if you don't know.
>>
>> So that should be vlance or flexible (which can be PCNet32 or e1000).
>> Could you verify with a real OS?
>
> I finally got the opportunity to test your VMX and I can see the same
> issue. Interestingly, I also see issues with booting from the second
> NIC (null IP).
>
>> I see it has two cores per socket. Is it one socket? What exact make/model
>> is the CPU in the host? You might be choking the VM. Have you considered
>> just 1 socket and 1 core per socket?

1) My assumption would be that the VMware virtualized AMD 79C970A (PCNet32
driver; vlance virtualDev) lacks proper EFI64 support.

2) I have 0 speed issues using your VMX. If you only have two real cores
for this 2-vCPU VM, you're choking it, as the host also needs time to run.
If you choke it, you mess with its timers; if you mess with the timers,
interface polling and a HUGE slew of other items (including the guest OS's
clock) will slow to a crawl.

In my experience (both on ESXi and VMware Workstation), general sizing
guidelines are: the first core on the first socket and 1-2 GiB of RAM
should be considered dedicated to the host; an arbitrary VM should not
have a vCPU count greater than the number of cores of an available socket,
and its vRAM should not exceed the RAM directly available to that socket's
NUMA node.

2 Intel Xeon X54xx CPUs constitute 1 NUMA node, so you can have (1) 4-vCPU
VM with most of the RAM of the entire server. 2 Intel Xeon X55xx CPUs
constitute 2 NUMA nodes, so you can go up to the vCPU count of the number
of available cores in a socket and up to the RAM directly controlled by
that socket. If we had 2 Intel Xeon E5530s with (6) 8-GiB DIMMs, the best
we could do for a single VM on this otherwise idle host is 4 vCPUs and
22-24 GiB RAM.

-- 
-Gene
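To spell out the arithmetic of that last example:

    2 x Xeon E5530    -> 2 sockets x 4 cores = 8 cores, 2 NUMA nodes
    6 x 8 GiB DIMMs   -> 48 GiB total, 24 GiB attached to each NUMA node
    Host reserve      -> first core of the first socket, plus 1-2 GiB RAM
    Biggest single VM -> 4 vCPUs (one socket's cores), roughly 22-24 GiB vRAM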