Displaying 13 results from an estimated 13 matches for "48d3".
2020 Apr 15
2
Can't start vm with enc backing files, No secret with id 'sec0' ?
...'qemu' type='qcow2'/>
<source file='/root/enc.qcow2'/>
<backingStore/>
<target dev='hda' bus='ide'/>
<encryption format='luks'>
<secret type='passphrase' uuid='694bdf38-214e-48d3-8c4c-9dbbcf0f5fa0'/>
</encryption>
<alias name='ide0-0-0'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
4. According to the qemu documentation, an encrypted snap.qc...
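One thing to check when QEMU reports a missing secret is whether the passphrase secret referenced by the <encryption> element has actually been defined in libvirt and given a value. A minimal sketch, reusing the UUID and image path from the XML above; the passphrase itself is a placeholder:

cat > enc-secret.xml <<'EOF'
<secret ephemeral='no' private='yes'>
  <description>LUKS passphrase for /root/enc.qcow2</description>
  <uuid>694bdf38-214e-48d3-8c4c-9dbbcf0f5fa0</uuid>
  <usage type='volume'>
    <volume>/root/enc.qcow2</volume>
  </usage>
</secret>
EOF
virsh secret-define enc-secret.xml

# store the placeholder passphrase as the secret's base64-encoded value
virsh secret-set-value 694bdf38-214e-48d3-8c4c-9dbbcf0f5fa0 "$(printf '%s' 'my-passphrase' | base64)"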
2020 Apr 15
0
Re: Can't start vm with enc backing files, No secret with id 'sec0' ?
...<source file='/root/enc.qcow2'/>
> <backingStore/>
> <target dev='hda' bus='ide'/>
> <encryption format='luks'>
> <secret type='passphrase' uuid='694bdf38-214e-48d3-8c4c-9dbbcf0f5fa0'/>
> </encryption>
> <alias name='ide0-0-0'/>
> <address type='drive' controller='0' bus='0' target='0' unit='0'/>
> </disk>
>
> 4. Accordi...
2018 Feb 04
1
Troubleshooting glusterfs
Please help troubleshoot glusterfs with the following setup:
Distributed volume without replication. Sharding enabled.
[root@master-5f81bad0054a11e8bf7d0671029ed6b8 uploads]# gluster volume info
Volume Name: gv0
Type: Distribute
Volume ID: 1a7e05f6-4aa8-48d3-b8e3-300637031925
Status: Started
Snapshot Count: 0
Number of Bricks: 27
Transport-type: tcp
Bricks:
Brick1: gluster3.qencode.com:/var/storage/brick/gv0
Brick2: encoder-376cac0405f311e884700671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick3: encoder-ee6761c0091c11e891ba0671029ed6b8.qenc...
2018 Feb 04
1
Fwd: Troubleshooting glusterfs
...:
Distributed volume without replication. Sharding enabled.
# cat /etc/centos-release
CentOS release 6.9 (Final)
# glusterfs --version
glusterfs 3.12.3
[root@master-5f81bad0054a11e8bf7d0671029ed6b8 uploads]# gluster volume info
Volume Name: gv0
Type: Distribute
Volume ID: 1a7e05f6-4aa8-48d3-b8e3-300637031925
Status: Started
Snapshot Count: 0
Number of Bricks: 27
Transport-type: tcp
Bricks:
Brick1: gluster3.qencode.com:/var/storage/brick/gv0
Brick2: encoder-376cac0405f311e884700671029ed6b8.qencode.com:/var/storage/brick/gv0
Brick3: encoder-ee6761c0091c11e891ba0671029ed6b8.qenc...
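A typical first pass at troubleshooting a distribute volume like this collects process, peer, and log state. A minimal sketch against the gv0 volume above; the log file names assume GlusterFS's default locations rather than anything quoted in the thread:

# brick processes, ports and usage for the volume
gluster volume status gv0 detail

# peer membership and connectivity as seen from this node
gluster peer status

# management daemon and brick logs (default paths; the brick log name mirrors the brick path)
tail -n 100 /var/log/glusterfs/glusterd.log
tail -n 100 /var/log/glusterfs/bricks/var-storage-brick-gv0.log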
2015 Aug 18
1
Live migration & storage copy broken since 1.2.17
Hi,
It seems that live migration using storage copy is broken since libvirt
1.2.17.
Here is the command line used to do the migration using virsh:
virsh migrate --live --p2p --persistent --undefinesource \
--copy-storage-all d2b545d3-db32-48d3-b7fa-f62ff3a7fa18
qemu+tcp://dest/system
XML dump of my storage:
<pool type='logical'>
<name>local</name>
<uuid>276bda97-d6c2-4681-bc3f-0c8c221bd1b1</uuid>
<capacity unit='bytes'>1024207093760</capacity>
<allocation un...
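For a copy-storage migration like the one above, the destination host generally needs a pre-created volume of matching size for each non-shared disk before virsh migrate is invoked. A rough sketch of the workflow; the volume name and size are hypothetical:

# on the destination host: pre-create the target volume in the 'local' pool
virsh vol-create-as local vm-disk 100G

# on the source host: live-migrate with a full copy of non-shared storage
virsh migrate --live --p2p --persistent --undefinesource \
      --copy-storage-all d2b545d3-db32-48d3-b7fa-f62ff3a7fa18 qemu+tcp://dest/system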
2019 Sep 06
1
could not create snapshotxml on encryption image
...xports_data/168bc099-3ff1-44e8-ba2f-face1594db63/images/eed78b32-f257-445f-8f2e-b5c969ee38e8/da869c1d-f6bb-4816-be6a-f4d1d2ae2af2" type="file" /> #enc
<encryption format="luks">
<secret type="passphrase" uuid="694bdf38-214e-48d3-8c4c-9dbbcf0f5fa0" />
</encryption>
</disk>
</disks>
<memory file="/rhev/data-center/mnt/192.168.0.91:_exports_data/168bc099-3ff1-44e8-ba2f-face1594db63/images/6236ecf4-0eb5-4879-bb6d-251ae84b55f8/f086c613-d1c2-4ab3-851e-97961173d1d6" snapshot="externa...
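For reference, domainsnapshot XML like the fragment above is normally handed to libvirt with virsh snapshot-create. A minimal sketch, with a hypothetical domain name and XML file name:

# create a snapshot of the running guest from a prepared domainsnapshot XML
virsh snapshot-create my-vm snapshot.xml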
2018 Feb 05
2
Fwd: Troubleshooting glusterfs
...; glusterfs 3.12.3
>>>
>>> [root@master-5f81bad0054a11e8bf7d0671029ed6b8 uploads]# gluster volume
>>> info
>>>
>>> Volume Name: gv0
>>> Type: Distribute
>>> Volume ID: 1a7e05f6-4aa8-48d3-b8e3-300637031925
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 27
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: gluster3.qencode.com:/var/storage/...
2018 Feb 05
0
Fwd: Troubleshooting glusterfs
...>> [root@master-5f81bad0054a11e8bf7d0671029ed6b8 uploads]# gluster volume
>>>> info
>>>>
>>>> Volume Name: gv0
>>>> Type: Distribute
>>>> Volume ID: 1a7e05f6-4aa8-48d3-b8e3-300637031925
>>>> Status: Started
>>>> Snapshot Count: 0
>>>> Number of Bricks: 27
>>>> Transport-type: tcp
>>>> Bricks:
>>>...
2018 Feb 05
2
Fwd: Troubleshooting glusterfs
...a11e8bf7d0671029ed6b8 uploads]# gluster
>>>>> volume info
>>>>>
>>>>> Volume Name: gv0
>>>>> Type: Distribute
>>>>> Volume ID: 1a7e05f6-4aa8-48d3-b8e3-300637031925
>>>>> Status: Started
>>>>> Snapshot Count: 0
>>>>> Number of Bricks: 27
>>>>> Transport-type: tcp
>>>>&...
2018 Feb 07
0
Fwd: Troubleshooting glusterfs
...ter
>>>>>> volume info
>>>>>>
>>>>>> Volume Name: gv0
>>>>>> Type: Distribute
>>>>>> Volume ID: 1a7e05f6-4aa8-48d3-b8e3-300637031925
>>>>>> Status: Started
>>>>>> Snapshot Count: 0
>>>>>> Number of Bricks: 27
>>>>>> Transport-type: tcp
>&g...
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following informations please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
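As a concrete form of item 2 in the excerpt above, a sketch of the xattr dump; the brick path and file name are placeholders. On a replicate volume, non-zero trusted.afr.* values in the output point at copies with pending heals:

# run on each brick that should hold a copy of the file
getfattr -d -e hex -m . /bricks/brick1/path/to/unhealed-file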
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
2017 Oct 26
2
not healing one file
...common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed metadata selfheal on 14d61959-9da1-47ed-a34e-a7b466b01fb0. sources=0 [2] sinks=1
[2017-10-25 10:40:25.532816] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 41d4e663-87a5-48d3-9f8d-a8dd89ea5c79. sources=0 [2] sinks=1
[2017-10-25 10:40:25.535043] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-home-replicate-0: performing metadata selfheal on 41d4e663-87a5-48d3-9f8d-a8dd89ea5c79
[2017-10-25 10:40:25.668606] I [MSGID: 108026] [afr-self-heal-c...
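The log prefix 0-home-replicate-0 suggests a replicate volume named home (an assumption from the log lines, not stated elsewhere in the excerpt). The usual follow-up is to check what the self-heal daemon still considers pending; a minimal sketch:

# entries still queued for healing, and any detected split-brain
gluster volume heal home info
gluster volume heal home info split-brain

# trigger a full self-heal crawl if individual files remain stuck
gluster volume heal home full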