
Displaying 7 results from an estimated 7 matches similar to: "Looking for Instance backed Centos 6 x86_64 images"

2013 Nov 04
1
root password required
Hi, I've lost the private key for my instance, so I've generated a new private/public key pair, created an image from my instance, and launched a new instance. Now I can ssh to this new instance, but it asks me for a root password (which it did not on the original instance). It looks like this is a problem with the Marketplace AMI I was using originally:
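When an image's sshd disallows direct root logins (or the distro uses a non-root default user), public-key auth is skipped for root and ssh falls back to prompting for a password. A minimal diagnostic sketch, assuming a standard EC2-style setup; the key path and hostname here are hypothetical:

  # Verbose SSH shows which auth methods the server offers and which key is tried
  ssh -v -i ~/.ssh/new-key.pem root@new-instance.example.com

  # Many Marketplace AMIs only allow the image's default user
  # (name varies by image: ec2-user, centos, admin, ...)
  ssh -i ~/.ssh/new-key.pem centos@new-instance.example.com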
2011 Aug 23
1
pool assumed to have 512B sector
Hi, When creating a dedicated storage pool on FC-attached storage with a sector size of 4096B and a size of 2TB, the storage pool is created but reported as 1/8 of its size, and the same follows for any volume created in the same storage pool: # virsh pool-info guest_images_disk Name: guest_images_disk UUID: 30247222-c1fb-8749-b833-a73782198d26 State:
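That 1/8 ratio is exactly what a 512B-sector assumption would produce: if pool capacity is computed as sector_count x 512 on a device whose real sectors are 4096B, the result is 512/4096 = 1/8 of the true size (2TB shows up as 256GB). A quick way to confirm the mismatch, as a sketch with a hypothetical device path:

  # Logical and physical sector sizes of the backing device
  blockdev --getss /dev/disk/by-id/example-fc-lun
  blockdev --getpbsz /dev/disk/by-id/example-fc-lun

  # True capacity in bytes, to compare against what libvirt reports
  blockdev --getsize64 /dev/disk/by-id/example-fc-lun
  virsh pool-info guest_images_disk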
2013 Apr 20
1
PuppetDB / inventory service configuration problem
Hi, I've just been configuring my new Puppet 3.1.1 / Dashboard setup with Passenger to use PuppetDB for the inventory service. I configured it via the puppetdb forge module, and it all seems to be configured correctly as far as the docs describe. When I look at a node in the dashboard, under the inventory section, I just see: Could not retrieve facts from inventory service: 404
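In Puppet 3 the inventory service is wired up through the master's routes.yaml, and a 404 from the dashboard's inventory tab is commonly the facts terminus not pointing at PuppetDB. A sketch of the usual configuration, assuming default file locations:

  # /etc/puppet/routes.yaml -- store and retrieve facts via PuppetDB
  cat > /etc/puppet/routes.yaml <<'EOF'
  master:
    facts:
      terminus: puppetdb
      cache: yaml
  EOF

  # Restart the Passenger-hosted master so the route takes effect
  # (the rack directory varies by setup)
  touch /usr/share/puppet/rack/puppetmasterd/tmp/restart.txt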
2012 Jul 20
2
Bug#682202: xcp-squeezed not started on boot
Package: xcp-squeezed Version: 1.3.2-9 Severity: important When trying to start a guest I get: # xe vm-start name-label=squeeze-2 The server failed to handle your request, due to an internal error. The given message may give details useful for debugging the problem. message: Failure("The ballooning daemon is not running") If I then manually run "/etc/init.d/xcp-squeezed
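As a workaround until the boot ordering is fixed, the daemon can be started by hand and the init script (re-)enabled; a sketch, assuming the sysvinit setup shipped by the package:

  # Start the ballooning daemon, then retry the guest
  /etc/init.d/xcp-squeezed start
  xe vm-start name-label=squeeze-2

  # Ensure the script is linked into the default runlevels
  update-rc.d xcp-squeezed defaults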
2017 Oct 26
0
not healing one file
Hey Richard, Could you share the following information please? 1. gluster volume info <volname> 2. getfattr output of that file from all the bricks getfattr -d -e hex -m . <brickpath/filepath> 3. glustershd & glfsheal logs Regards, Karthik On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try recently released health
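For reference, item 2 spelled out against each replica, with a hypothetical brick path; non-zero trusted.afr.<volname>-client-* values in the output indicate pending heals blamed on the corresponding brick:

  # Run on every brick server against its local copy of the file
  getfattr -d -e hex -m . /bricks/home/path/to/unhealed-file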
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it diagnoses any issues in the setup. Currently you may have to run it on all three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague, will be checking this and respond next
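Assuming the tool meant here is the gluster-health-report project (released around this time; package and command names may differ by version), a sketch of running it on each node:

  # Install and run on each of the three machines
  pip install gluster-health-report
  gluster-health-report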
2017 Oct 26
2
not healing one file
Hi Karthik, thanks for taking a look at this. I haven't been working with Gluster long enough to make heads or tails of the logs. The logs are attached to this mail, and here is the other information: # gluster volume info home Volume Name: home Type: Replicate Volume ID: fe6218ae-f46b-42b3-a467-5fc6a36ad48a Status: Started Snapshot Count: 1 Number of Bricks: 1 x 3 = 3 Transport-type: tcp
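Given the 1 x 3 replica layout, the per-file heal state can also be checked directly; a sketch using the volume name from the post (the output lists which bricks still have pending entries):

  # Entries pending heal, and any in split-brain, on volume "home"
  gluster volume heal home info
  gluster volume heal home info split-brain

  # Kick off a full self-heal crawl if the self-heal daemon is lagging
  gluster volume heal home full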