Hi,

I am resending my previous mail about saving the state of a vTPM, with some
new questions and information.

I am using Xen 3.1.3 with these patches applied:
http://lists.xensource.com/archives/html/xen-devel/2008-02/msg01092.html
http://lists.xensource.com/archives/html/xense-devel/2007-04/msg00005.html

When I start the vtpm_manager I get the following output:

INFO[VTPM]: Starting VTPM.
INFO[TCS]: Constructing new TCS:
INFO[TCS]: Calling TCS_OpenContext:
INFO[VTSP]: OIAP.
INFO[VTSP]: Loading Key into TPM.
INFO[VTSP]: Unbinding 256 bytes of data.
INFO[VTPM]: Loaded saved state (dmis = 1).
INFO[VTSP]: Loading Key into TPM.
INFO[VTPM]: Creating new DMI instance 0 attached.
INFO[TCS]: Calling TCS_OpenContext:
INFO[VTPM]: [Backend Listener]: Backend Listener waiting for messages.
INFO[VTPM]: [VTPM Listener]: VTPM Listener waiting for messages.
INFO[VTPM]: [Hotplug Listener]: Hotplug Listener waiting for messages.
INFO[VTPM]: Re-attaching DMI instance 1.
INFO[TCS]: Calling TCS_OpenContext:
INFO[VTPM]: Launching DMI on PID = 4407
INFO[VTSP]: Binding 16 bytes of data.
INFO[VTPM]: Saved 256 bytes of E(symkey) + 656 bytes of E(data)
INFO[VTPM]: Saved VTPM Manager state (status = 0, dmis = 1)
INFO[VTPM]: [Hotplug Listener]: Hotplug Listener waiting for messages.

When I start a domain with the option

vtpm = [ 'instance=1, backend=0' ]

vtpm_manager on dom0 correctly starts a new vtpmd process with the following
options:

vtpmd clear pvm 1

I can perform all TPM operations on this vTPM from domU, and I can see the
instance is recorded correctly in the vTPM database:

cat /etc/xen/vtpm.db
#Database for VM to vTPM association
#1st column: domain name
#2nd column: TPM instance number
pardus-client 1

However, vtpm_manager complains about loading NVM:

TPMD[1]: tpmd.c:126: Info: Initializing tpm state: clear, type: pvm, id: 1
TPMD[1]: tpm/tpm_cmd_handler.c:4143: Debug: tpm_emulator_init()
TPMD[1]: tpm/tpm_startup.c:30: Info: TPM_Init()
TPMD[1]: tpm/tpm_testing.c:242: Info: TPM_SelfTestFull()
TPMD[1]: tpm/tpm_testing.c:42: Debug: tpm_test_prng()
TPMD[1]: tpm/tpm_testing.c:70: Debug: Monobit: 9880
TPMD[1]: tpm/tpm_testing.c:71: Debug: Poker: 19.3
TPMD[1]: tpm/tpm_testing.c:72: Debug: run_1: 2413, 2466
TPMD[1]: tpm/tpm_testing.c:73: Debug: run_2: 1205, 1219
TPMD[1]: tpm/tpm_testing.c:74: Debug: run_3: 658, 627
TPMD[1]: tpm/tpm_testing.c:75: Debug: run_4: 316, 320
TPMD[1]: tpm/tpm_testing.c:76: Debug: run_5: 179, 148
TPMD[1]: tpm/tpm_testing.c:77: Debug: run_6+: 166, 156
TPMD[1]: tpm/tpm_testing.c:78: Debug: run_34: 0
TPMD[1]: tpm/tpm_testing.c:112: Debug: tpm_test_sha1()
TPMD[1]: tpm/tpm_testing.c:156: Debug: tpm_test_hmac()
TPMD[1]: tpm/tpm_testing.c:183: Debug: tpm_test_rsa_EK()
TPMD[1]: tpm/tpm_testing.c:185: Debug: rsa_generate_key()
TPMD[1]: tpm/tpm_testing.c:190: Debug: testing endorsement key
TPMD[1]: tpm/tpm_testing.c:196: Debug: rsa_sign(RSA_SSA_PKCS1_SHA1)
TPMD[1]: tpm/tpm_testing.c:199: Debug: rsa_verify(RSA_SSA_PKCS1_SHA1)
TPMD[1]: tpm/tpm_testing.c:202: Debug: rsa_sign(RSA_SSA_PKCS1_DER)
TPMD[1]: tpm/tpm_testing.c:205: Debug: rsa_verify(RSA_SSA_PKCS1_DER)
TPMD[1]: tpm/tpm_testing.c:209: Debug: rsa_encrypt(RSA_ES_PKCSV15)
TPMD[1]: tpm/tpm_testing.c:213: Debug: rsa_decrypt(RSA_ES_PKCSV15)
TPMD[1]: tpm/tpm_testing.c:217: Debug: verify plain text
TPMD[1]: tpm/tpm_testing.c:220: Debug: rsa_encrypt(RSA_ES_OAEP_SHA1)
TPMD[1]: tpm/tpm_testing.c:224: Debug: rsa_decrypt(RSA_ES_OAEP_SHA1)
TPMD[1]: tpm/tpm_testing.c:228: Debug: verify plain text
TPMD[1]: tpm/tpm_testing.c:260: Info: Self-Test succeeded
TPMD[1]: tpm/tpm_startup.c:45: Info: TPM_Startup(1)
Loading NVM.
Sending LoadNVM command
ERROR[VTPM]: Failed to load NVM.
INFO[VTPM]: [VTPM Listener]: VTPM Listener waiting for messages.
Reading LoadNVM header

I have searched the code and the lists for this NVM loading and realized that
the NVM used to save the state of all vTPM instances resides in the /var/vtpm
directory. I also found on the lists that vtpm_manager creates the necessary
directory structure when it starts running. However, my vtpm_manager does not
create any such directories:

/var/vtpm/:
total 20
drwxr-xr-x  4 root root 4096 Mar 26 16:40 .
drwxr-xr-x 17 root root 4096 Mar 26 16:35 ..
drwxr-xr-x  2 root root 4096 Mar 26 16:40 fifos
drwxr-xr-x  2 root root 4096 Mar 26 16:35 socks
-rw-------  1 root root 1532 Mar 27 15:54 VTPM

/var/vtpm/fifos:
total 8
drwxr-xr-x 2 root root 4096 Mar 26 16:40 .
drwxr-xr-x 4 root root 4096 Mar 26 16:40 ..
prw------- 1 root root    0 Mar 27 15:54 from_console.fifo
prw------- 1 root root    0 Mar 27 15:54 to_console.fifo
prw------- 1 root root    0 Mar 26 16:40 tpm_cmd_to_1.fifo
prw------- 1 root root    0 Mar 26 16:35 tpm_rsp_from_all.fifo
prw------- 1 root root    0 Mar 27 15:54 vtpm_cmd_from_all.fifo
prw------- 1 root root    0 Mar 27 15:54 vtpm_rsp_to_1.fifo

/var/vtpm/socks:
total 8
drwxr-xr-x 2 root root 4096 Mar 26 16:35 .
drwxr-xr-x 4 root root 4096 Mar 26 16:40 ..

When I shut down or reboot the domain and start it again, vtpmd starts the
vTPM instance with the 'clear' option again, which I think is wrong. All of
my previously created keys are then lost on the new instance, because the
previous SRK is lost. So the most important question is: how do I save the
state of a vTPM across domU reboots?

I checked the code for this 'clear' parameter, and my understanding is as
follows: the vTPM is based on tpm_emulator, and tpm_emulator has three
startup modes: deactivated, save and clear. Whenever I start a new domain,
Xen starts the vTPM with the 'clear' parameter. vtpm_create_instance()
creates a new vTPM instance and decides what to do with it based on the
return value of vtpm_get_create_reason(), which returns the value of
xenbus/resume. vtpm_create_instance() then sends a command to the TPM over a
fifo telling it whether to resume or start the vTPM instance. When the
command sent is 'start', the vTPM simply clears all the PCRs and keys of the
existing vTPM instance.

Is this vtpm resume path only related to domain save/restore and
suspend/resume, and therefore completely irrelevant to my problem (e.g. when
the backend driver is restarted, all frontend connections must be resumed)?
I assume this because I saw a similar resume command, sent over xenbus, in
the netfront and blkfront driver code, but the TPM frontend (xenu) driver
does not contain anything about this.

How do I save the state of the vTPM across domU shutdowns?

Kind regards,
Erdem Bayer
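P.S. To make my reading of the code above concrete, here is a minimal sketch
of the decision I believe vtpm_create_instance() makes. This is not the
actual vtpm_manager source: only the names vtpm_create_instance() and
vtpm_get_create_reason(), and the start-vs-resume behaviour, come from the
code I read; the enum, the xenbus_resume parameter and the printf
placeholders are my own illustration.

/* Sketch only -- not the real vtpm_manager code. */
#include <stdio.h>

enum create_reason {
    CREATE_REASON_START,   /* fresh start: existing PCRs and keys are cleared */
    CREATE_REASON_RESUME   /* resume a previously saved instance              */
};

/* In the real code this value appears to come from the xenbus/resume node;
 * here it is just a parameter. */
static enum create_reason vtpm_get_create_reason(int xenbus_resume)
{
    return xenbus_resume ? CREATE_REASON_RESUME : CREATE_REASON_START;
}

/* My understanding of what happens when a domU vtpm device appears:
 * only domain save/restore sets xenbus/resume, so a plain shutdown
 * followed by a new start always takes the 'start' branch. */
static void vtpm_create_instance(unsigned int instance, int xenbus_resume)
{
    if (vtpm_get_create_reason(xenbus_resume) == CREATE_REASON_RESUME)
        printf("dmi %u: send 'resume' command over the fifo\n", instance);
    else
        printf("dmi %u: send 'start' command over the fifo "
               "(clears PCRs and keys)\n", instance);
}

int main(void)
{
    vtpm_create_instance(1, 0);  /* ordinary domU boot -> start/clear */
    vtpm_create_instance(1, 1);  /* domain restore     -> resume      */
    return 0;
}

If that reading is right, a plain domU shutdown followed by a fresh start
never takes the resume branch, which would explain why my SRK and keys are
gone after every reboot.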