On my current instance, the meta_data.json is the following:
{
  "availability_zone": "nova",
  "devices": [],
  "hostname": "ims-host-1",
  "keys": [
    {
      "data": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCoq69QeFy0E9A4uMf5be62ENRAAMh/VQ3Uva/HXMY0nz0I0KwyVs2gzim2f5u51BtTJf9Hr5rDrwBtsmXliPlkCwlCi6oLLe9+06jEZsATdLak9rxbtbuRBiCYcHMAuQWIbVzo1IW1w+WE6DDLc2qwkb0RCozq3wzEJgVUTNMRa9gEzRtD3WGeV2wegt/FNpM1/lXM9T1Ki577vCcv0zFAr4JoNW2YjtFO83t8N5+rDgE4Ar9jFGZWjhB7NuYUN2MlzS1DjXi3SxbBm8gd1ReiqNA7MruUudqQ8I/TgSE5CxQL0UH67c3Y17hyzQDT/r8DAqfDm2P6HzSJQXBVZ7+j IMS Migration",
      "name": "fdupont",
      "type": "ssh"
    }
  ],
  "launch_index": 0,
  "name": "ims_host_1",
  "project_id": "641a64a1a42d429f9606b345f328d306",
  "public_keys": {
    "fdupont": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCoq69QeFy0E9A4uMf5be62ENRAAMh/VQ3Uva/HXMY0nz0I0KwyVs2gzim2f5u51BtTJf9Hr5rDrwBtsmXliPlkCwlCi6oLLe9+06jEZsATdLak9rxbtbuRBiCYcHMAuQWIbVzo1IW1w+WE6DDLc2qwkb0RCozq3wzEJgVUTNMRa9gEzRtD3WGeV2wegt/FNpM1/lXM9T1Ki577vCcv0zFAr4JoNW2YjtFO83t8N5+rDgE4Ar9jFGZWjhB7NuYUN2MlzS1DjXi3SxbBm8gd1ReiqNA7MruUudqQ8I/TgSE5CxQL0UH67c3Y17hyzQDT/r8DAqfDm2P6HzSJQXBVZ7+j IMS Migration"
  },
  "random_seed": "2Fo3Ys4aEO0JdrjOtIsLzq89/nQZ5ojFqIESCRkP23wIcNOB4rpLAMxNwe1mGKja0XMKefBAYqCvOYL9u7X8L50dkA6WqnsKQP6MSJToF19YU7QAURixo13ZQQI4l/f2ou4cDE6yUyB/NlqaHEwjUF3mfYgUZfHLHdSgrv7YaSyZ0etUvtHxAseiXiBYdB3boQhVD8YE7EKZ8gKWHgDbOk3wAd/FsTDya70O01QlKZiJPv0MMCFbanvo6rN3PJN+qN6xhBoRvg0ZRY6bDz2PtQltLhqCCP+M6kj4qvrW6uW/Mg+qNcxwFSvASXoTPxTnaII0MPx6tL7AqgIechBBtBkEgXKvFA/p7SAPRvcucwwNvNYytiBqTbKAS+kLYUJbDYouqqychYbh4kwtpXkMTrpoI070R6uNamHpfGXfyJv/7ancW42K1EhOb52tMRNCPUWEi4TqOozFCTNeozopdcG7cyAf9w0NxAdcvtVOxKtgSJipjNE87e6kL3s3etxANQwMf/rN5406P0up9qTR1owxlRxSUf0ydKSdqcsZH/m5Ua4w5t9fPCHxCmOXiVfTW5iFKZOIGQwdcH8f5j5UnX/Q50dyrvy2QIw+nzLQcGHa5ZZKPFcXmAO7NBx7pIdrf76FtIVwnF4FA8AAKrsZc4We/IAAhX/SUNYZ3JHjp/M=",
  "uuid": "7ee62bdc-1c9d-4193-bb8f-fdbbbdfded0f"
}
The information we need is the uuid.
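For reference, a minimal sketch of how the uuid could be pulled from the metadata service (assuming Python is available in the conversion appliance; this is not the virt-v2v code itself), with a short timeout so it fails fast when the service isn't reachable:

import json
import urllib.request

# Hypothetical helper, not part of virt-v2v: fetch meta_data.json from the
# metadata service and return the "uuid" field.  The short timeout avoids
# hanging for minutes when the service isn't there.
METADATA_URL = "http://169.254.169.254/openstack/latest/meta_data.json"

def get_server_id(timeout=5):
    try:
        with urllib.request.urlopen(METADATA_URL, timeout=timeout) as resp:
            return json.load(resp)["uuid"]
    except OSError:
        # No metadata service, or it didn't answer in time.
        return None

print(get_server_id())  # 7ee62bdc-1c9d-4193-bb8f-fdbbbdfded0f on this instance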
@Nenad Peric <nperic@redhat.com>, this comes from our lab. I've enabled
DHCP on the external network, but that doesn't mean that all networks will
use it. In the conversion host project, I'd enable it.
On Mon, Sep 24, 2018 at 9:16 PM Nenad Peric <nperic@redhat.com> wrote:
>
>
> On Mon, Sep 24, 2018 at 6:30 PM Richard W.M. Jones <rjones@redhat.com>
> wrote:
>
>> On Mon, Sep 24, 2018 at 10:00:21AM +0200, Fabien Dupont wrote:
>> > Hi,
>>
>> Hi Fabien, sorry I didn't respond to this earlier as I was doing some
>> work. If you CC me on emails then you can usually get a quicker
>> response.
>>
>> > I've read the virt-v2v OpenStack output code to understand how it works and
>> > I've seen this:
>> >
>> > > (* The server name or UUID of the conversion appliance where
>> > > * virt-v2v is currently running. In future we may be able
>> > > * to make this optional and derive it from the OpenStack
>> > > * metadata service instead.
>> > > *)
>> > > server_id : string;
>> >
>> > Indeed, it can be derived from the OpenStack metadata service. The following
>> > URL, called from within the conversion appliance, will return the metadata:
>> > http://169.254.169.254/openstack/latest/meta_data.json. As you can see, the
>> > IP address is 169.254.169.254, which is the metadata service. The JSON
>> > body contains a uuid entry that is the current appliance UUID, hence the
>> > server_id used by virt-v2v.
>>
>> We certainly do want to do this, although there was some concern about
>> whether the metadata service is enabled on every OpenStack instance
>> out there. (Also there are two different types of metadata service IIRC?)
>>
>>
> This concrete approach will not work in our current deployment, since the
> metadata service is not there. The infrastructure was made in such a way
> that the IP addressing and network configuration is done on the provider
> side. This means that all the information the VMs are getting comes from
> the lab network. I am thinking of a way around this, if possible. I'll try
> out different OSP network configurations and see if I can come up with
> something which will keep IP, MAC and routing consistent after migration,
> and still have an isolated metadata service on the OSP side.
>
>
>
>> (Unfortunately the connection hung
>> for minutes instead of timing out quickly, which is not great.)
>>
>
> yeah ... That is not the friendliest of approaches, but it waits for a
> pre-defined timeout someplace.
>
> Cheers,
>
> Nenad
>
>
--
Fabien Dupont
PRINCIPAL SOFTWARE ENGINEER
Red Hat - Solutions Engineering
fabien@redhat.com M: +33 (0) 662 784 971