
Displaying 11 results from an estimated 11 matches for "4a09".

2014 Jun 27
1
libvirt on OpenStack
...After running a vm instance with some cgroup limit applied, I can’t find any related cgroup settings. 2. Can I change the limit value after the instance is running? For example, change disk_read_iops_sec from 10 to 20. One of the XML files is like below. <domain type="kvm"> <uuid>27f49e5c-8ee0-4a09-8269-5fa31acd2983</uuid> <name>instance-000000da</name> <memory>2097152</memory> <vcpu cpuset="1-12">1</vcpu> <sysinfo type="smbios"> <system> <entry name="manufacturer">Red Hat Inc.</entr...
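If the goal is to change disk_read_iops_sec on a domain that is already running, one way is virsh blkdeviotune, which can update I/O throttling live. A minimal sketch, assuming the disk target inside the guest is vda (the domain name is taken from the XML above; the target is an assumption):

    # virsh blkdeviotune instance-000000da vda --read-iops-sec 20 --live
    # virsh blkdeviotune instance-000000da vda

The second invocation, given no tuning options, prints the current values so the change can be verified.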
2018 Dec 12
0
No inbound or outbound
...A object GUID: a149e40b-9b85-40f0-b87b-eeec61796ef4 > DSA invocationId: 382767aa-ffec-4fd9-8a25-c1c3b8b87012 > > ==== INBOUND NEIGHBORS ==== > > ==== OUTBOUND NEIGHBORS ==== > > ==== KCC CONNECTION OBJECTS ==== > > Connection -- >         Connection name: e622af6a-6bd2-4a09-a846-2e835a6b7a97 >         Enabled        : TRUE >         Server DNS name : ads1.samdom.com >         Server DN name  : CN=NTDS > Settings,CN=ADS1,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=samdom,DC=com >                 TransportType: RPC >            ...
2017 Jul 07
2
I/O error for one folder within the mountpoint
...45c6-a52a-b3e2402d0316> <gfid:01409b23-eff2-4bda-966e-ab6133784001> <gfid:c723e484-63fc-4267-b3f0-4090194370a0> <gfid:fb1339a8-803f-4e29-b0dc-244e6c4427ed> <gfid:056f3bba-6324-4cd8-b08d-bdf0fca44104> <gfid:a8f6d7e5-0ff2-4747-89f3-87592597adda> <gfid:3f6438a0-2712-4a09-9bff-d5a3027362b4> <gfid:392c8e2f-9da4-4af8-a387-bfdfea2f404e> <gfid:37e1edfd-9f58-4da3-8abe-819670c70906> <gfid:15b7cdb3-aae8-4ca5-b28c-e87a3e599c9b> <gfid:1d087e51-fb40-4606-8bb5-58936fb11a4c> <gfid:bb0352b9-4a5e-4075-9179-05c3a5766cf4> <gfid:40133fcf-a1fb-4d60...
2017 Jul 07
0
I/O error for one folder within the mountpoint
...; <gfid:01409b23-eff2-4bda-966e-ab6133784001> > <gfid:c723e484-63fc-4267-b3f0-4090194370a0> > <gfid:fb1339a8-803f-4e29-b0dc-244e6c4427ed> > <gfid:056f3bba-6324-4cd8-b08d-bdf0fca44104> > <gfid:a8f6d7e5-0ff2-4747-89f3-87592597adda> > <gfid:3f6438a0-2712-4a09-9bff-d5a3027362b4> > <gfid:392c8e2f-9da4-4af8-a387-bfdfea2f404e> > <gfid:37e1edfd-9f58-4da3-8abe-819670c70906> > <gfid:15b7cdb3-aae8-4ca5-b28c-e87a3e599c9b> > <gfid:1d087e51-fb40-4606-8bb5-58936fb11a4c> > <gfid:bb0352b9-4a5e-4075-9179-05c3a5766cf4> &g...
2017 Jul 07
2
I/O error for one folder within the mountpoint
...eff2-4bda-966e-ab6133784001> >> <gfid:c723e484-63fc-4267-b3f0-4090194370a0> >> <gfid:fb1339a8-803f-4e29-b0dc-244e6c4427ed> >> <gfid:056f3bba-6324-4cd8-b08d-bdf0fca44104> >> <gfid:a8f6d7e5-0ff2-4747-89f3-87592597adda> >> <gfid:3f6438a0-2712-4a09-9bff-d5a3027362b4> >> <gfid:392c8e2f-9da4-4af8-a387-bfdfea2f404e> >> <gfid:37e1edfd-9f58-4da3-8abe-819670c70906> >> <gfid:15b7cdb3-aae8-4ca5-b28c-e87a3e599c9b> >> <gfid:1d087e51-fb40-4606-8bb5-58936fb11a4c> >> <gfid:bb0352b9-4a5e-4075-9179...
2017 Jul 07
0
I/O error for one folder within the mountpoint
...3784001> >>> <gfid:c723e484-63fc-4267-b3f0-4090194370a0> >>> <gfid:fb1339a8-803f-4e29-b0dc-244e6c4427ed> >>> <gfid:056f3bba-6324-4cd8-b08d-bdf0fca44104> >>> <gfid:a8f6d7e5-0ff2-4747-89f3-87592597adda> >>> <gfid:3f6438a0-2712-4a09-9bff-d5a3027362b4> >>> <gfid:392c8e2f-9da4-4af8-a387-bfdfea2f404e> >>> <gfid:37e1edfd-9f58-4da3-8abe-819670c70906> >>> <gfid:15b7cdb3-aae8-4ca5-b28c-e87a3e599c9b> >>> <gfid:1d087e51-fb40-4606-8bb5-58936fb11a4c> >>> <gfid:bb0...
2017 Jul 07
0
I/O error for one folder within the mountpoint
On 07/07/2017 01:23 PM, Florian Leleu wrote: > > Hello everyone, > > first time on the ML so excuse me if I'm not following the rules well, > I'll improve if I get comments. > > We got one volume "applicatif" on three nodes (2 and 1 arbiter); each of the > following commands was run on node ipvr8.xxx: > > # gluster volume info applicatif > > Volume
2017 Jul 07
2
I/O error for one folder within the mountpoint
Hello everyone, first time on the ML so excuse me if I'm not following the rules well, I'll improve if I get comments. We got one volume "applicatif" on three nodes (2 and 1 arbiter); each of the following commands was run on node ipvr8.xxx: # gluster volume info applicatif Volume Name: applicatif Type: Replicate Volume ID: ac222863-9210-4354-9636-2c822b332504 Status: Started
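The lists of <gfid:...> entries quoted elsewhere in this thread are the kind of output the heal status commands produce; a minimal sketch for this volume (name taken from the message above) would be:

    # gluster volume heal applicatif info
    # gluster volume heal applicatif info split-brain

The first lists entries still pending heal on each brick; the second lists only those detected as split-brain.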
2017 Oct 26
0
not healing one file
Hey Richard, Could you share the following information, please? 1. gluster volume info <volname> 2. getfattr output of that file from all the bricks getfattr -d -e hex -m . <brickpath/filepath> 3. glustershd & glfsheal logs Regards, Karthik On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try recently released health
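As an illustration of item 2, a run on one brick might look like this (the volume name and brick path below are only placeholders for the real ones):

    # gluster volume info homevol
    # getfattr -d -e hex -m . /bricks/brick1/homevol/path/to/file

The trusted.afr.* attributes in the getfattr output are what indicate whether the file still has heals pending against the other bricks.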
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it diagnoses any issues in your setup. Currently you may have to run it on all three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague, will be checking this and respond next
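Assuming the tool referred to here is gluster-health-report (distributed as a Python package), running it on each node is roughly:

    # pip install gluster-health-report
    # gluster-health-report

Both the package name and the invocation are assumptions based on the tool's announcement; check the release notes for the exact command for your version.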
2017 Oct 26
2
not healing one file
...0955B546B63507B (396a3cb4-9a9e-4af5-ab70-9333c69ab440) on home-client-2 [2017-10-25 10:14:12.199239] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/CCAC61193717D387D8A2BAA472C075CC648D6EBE (582f4c2a-15c0-4a09-9ae9-69d2d3f08b00) on home-client-2 [2017-10-25 10:14:12.218269] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/25792D43B1F7EB054AB3F87C67F64C6CF1A6127F (8a51a59d-3423-4db4-98fa-3c98eb0d9227) on home-cli...