similar to: Unable to get transaction opinfo for transaction ID gluster version 3.6

Displaying 20 results from an estimated 200 matches similar to: "Unable to get transaction opinfo for transaction ID gluster version 3.6"

2009 Jul 08
4
[LLVMdev] Internal compiler error in SelectionDAGBuild.cpp
Hello, While I was trying to cross-compile the Linux OMAP kernel with llvm, I got the following error message. CC arch/arm/kernel/traps.o cc1: /home/wonjeon/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuild.cpp:5388: void llvm::SelectionDAGLowering::visitInlineAsm(llvm::CallSite): Assertion `(OpInfo.ConstraintType == TargetLowering::C_RegisterClass || OpInfo.ConstraintType ==
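For context, the assertion fires in SelectionDAGLowering::visitInlineAsm while classifying an inline asm operand's constraint. A hypothetical reduction (not the actual code from traps.c, which is attached later in the thread) of the kind of kernel construct that reaches this path:

// Hypothetical reduction: ARM inline asm with an "r" (register class)
// constraint, the sort of operand visitInlineAsm classifies before the
// failing assertion.
void write_cp15_control(unsigned val) {
  __asm__ volatile("mcr p15, 0, %0, c1, c0, 0" : : "r"(val));
}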
2016 May 09
2
Removing pointers from MCInstrDesc for less relocations
Hi everybody, I noticed today that my libLLVM-3.9svn.so has a ~1.7MB .data.rel.ro segment - i.e. data that needs to be touched by the dynamic linker even though it's ultimately read-only, and data that cannot be shared between multiple processes using LLVM. It turns out that a solid ~1.3MB of that data is in the tablegen'd MCInstrDesc tables - there are pointers for ImplicitUses,
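The direction discussed in this thread is to replace the embedded pointers with offsets into one shared table. A minimal sketch (not LLVM's actual declarations) of why that removes the relocations:

// A pointer member in a const table needs a dynamic relocation per entry,
// so the whole table is placed in .data.rel.ro and written at load time.
struct DescWithPointer {
  const unsigned short *ImplicitUses;
};

// An offset into one shared array is position-independent data: the table
// stays in .rodata and its pages can be shared between processes.
extern const unsigned short AllImplicitUses[];
struct DescWithOffset {
  unsigned ImplicitUsesOffset; // index into AllImplicitUses
};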
2016 May 09
2
Removing pointers from MCInstrDesc for less relocations
On 09.05.2016 05:19, Benjamin Kramer wrote: > On Mon, May 9, 2016 at 5:35 AM, Nicolai Hähnle <llvm-dev at lists.llvm.org> wrote: >> Hi everybody, >> >> I noticed today that my libLLVM-3.9svn.so has a ~1.7MB .data.rel.ro segment >> - i.e. data that needs to be touched by the dynamic linker even though it's >> ultimately read-only, and data that cannot be
2009 Jul 08
2
[LLVMdev] Internal compiler error in SelectionDAGBuild.cpp
Bug #4521 has been filed. traps.c has also been attached. Thanks, Won On Wed, Jul 8, 2009 at 1:38 PM, Bob Wilson <bob.wilson at apple.com> wrote: > > On Jul 8, 2009, at 11:16 AM, Won J Jeon wrote: > > Hello, > > While I was trying to cross-compile the Linux OMAP kernel with llvm, I got the > following error message. > > CC arch/arm/kernel/traps.o >
2008 May 20
2
[LLVMdev] [ia64] Assertion failed: (!OpInfo.AssignedRegs.Regs.empty() && "Couldn't allocate input reg!")
All, The following IR is causing the assert: \begin{ll} ; ModuleID = 'x.bc' target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-f80:128:128" target triple = "ia64-portbld-freebsd8.0" define void @__ia64_set_fast_math() nounwind { entry: tail call void asm sideeffect "mov.m
2009 Jul 08
0
[LLVMdev] Internal compiler error in SelectionDAGBuild.cpp
On Jul 8, 2009, at 11:16 AM, Won J Jeon wrote: > Hello, > > While I was trying to cross-compile the Linux OMAP kernel with llvm, I > got the following error message. > > CC arch/arm/kernel/traps.o > cc1: /home/wonjeon/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuild.cpp:5388: void > llvm::SelectionDAGLowering::visitInlineAsm(llvm::CallSite): >
2012 Sep 20
1
[LLVMdev] How to locate the start of an address mode in an X86 MachineInstr?
My team is interested in doing some post-RA optimizations on X86 instructions, which would require identifying memory reference instructions. In the X86 back end, memory addresses consist of a set of five operands. The offset to the start of the five operands depends on the format of the instruction. For instance, the instructions ADC32rm, ADD32rm, AND32rm, ANDN32rm, CMOVA32rm,
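Rather than hard-coding per-opcode offsets, the backend's own encoding of the operand layout can be queried. A sketch under the assumption that the in-tree helpers X86II::getMemoryOperandNo and X86II::getOperandBias (from X86BaseInfo.h; exact signatures vary between releases) are available:

// Sketch: index of the first of the five X86 memory operands
// (base, scale, index, displacement, segment) in a MachineInstr.
static int getMemRefBeginIdx(const llvm::MachineInstr &MI) {
  const llvm::MCInstrDesc &Desc = MI.getDesc();
  int MemIdx = llvm::X86II::getMemoryOperandNo(Desc.TSFlags);
  if (MemIdx < 0)
    return -1; // this instruction form has no memory reference
  // Adjust for extra leading operands some forms carry.
  return MemIdx + llvm::X86II::getOperandBias(Desc);
}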
2015 Jun 08
2
CentOS 6.6 doesn't install Glusterfs-server?
Hi, I am a newbie to CentOS. I am having trouble installing glusterfs-server on CentOS 6.6, as presented below. Sorry for the inconvenience; the error message is translated from the Japanese output of CentOS 6.6. Is there any solution? [root at fs2 ~]# yum -y install glusterfs-server ....plugin:fastestmirror, priorities, refresh-packagekit, security
2009 Jul 08
0
[LLVMdev] Internal compiler error in SelectionDAGBuild.cpp
Thanks for the bug report. The attached file isn't helpful for reproducing the problem. I don't have all the header files that are included, so I can't just try to run it through my version of llvm-gcc and see what happens. At a minimum, please attach the preprocessed source file along with the complete llvm-gcc command line that you used to compile it. That would
2015 Aug 25
2
[LLVMdev] TableGen Register Class not matching for MI in 3.6
> On Aug 24, 2015, at 4:46 PM, Ryan Taylor <ryta1203 at gmail.com> wrote: > > Here is the snippet that matters: > > void > InstrEmitter::AddRegisterOperand(MachineInstrBuilder &MIB, > SDValue Op, > unsigned IIOpNum, > const MCInstrDesc *II, >
2015 Aug 25
4
[LLVMdev] TableGen Register Class not matching for MI in 3.6
Hi Ryan, > On Aug 24, 2015, at 6:49 PM, Ryan Taylor <ryta1203 at gmail.com> wrote: > > Quentin, > > I apologize for the spamming here, but in getVR (where VReg is assigned an RC), it calls: > > const TargetRegisterClass *RC = TLI->getRegClassFor(Op.getSimpleValueType()); > VReg = MRI->createVirtualRegister(RC); > > My question is why it is using the
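The two quoted lines pick the default class for the value type; the per-instruction class that TableGen declared is available through a different query. A sketch, assuming TargetInstrInfo::getRegClass as found in recent in-tree code, reusing the thread's variable names:

// Sketch: ask for the register class the instruction's TableGen entry
// declares for operand IIOpNum, instead of the value-type default.
const llvm::TargetRegisterClass *OpRC =
    TII->getRegClass(*II, IIOpNum, TRI, *MF);
unsigned VReg = MRI->createVirtualRegister(OpRC);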
2015 Aug 25
2
[LLVMdev] TableGen Register Class not matching for MI in 3.6
Quentin, This is the issue. Somewhere prior to the constrainRegClass, it's assigning the GPRBase subclass of GPR to the MOV instruction, so it can't constrain it to Base and hence has to add the COPY. Now I just need to find out why it is ignoring the TableGen-defined GPRBase for the MOV MI in favor of its subclass GPR. Thanks. On Mon, Aug 24, 2015 at 8:34 PM, Ryan Taylor
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Hi list, recently I've noted a strange behaviour of my gluster storage: sometimes while executing a simple command like "gluster volume status vm-images-repo", as a response I get "Another transaction is in progress for vm-images-repo. Please try again after sometime.". This situation does not resolve itself simply by waiting; I have to restart glusterd on the node that
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Attached are the requested logs for all three nodes. Thanks, Paolo On 20/07/2017 11:38, Atin Mukherjee wrote: > Please share the cmd_history.log file from all the storage nodes. > > On Thu, Jul 20, 2017 at 2:34 PM, Paolo Margara > <paolo.margara at polito.it <mailto:paolo.margara at polito.it>> wrote: > > Hi list, > > recently I've
2015 Aug 24
2
[LLVMdev] TableGen Register Class not matching for MI in 3.6
> On Aug 24, 2015, at 1:30 PM, Ryan Taylor <ryta1203 at gmail.com> wrote: > > I'm trying to do something like this: > > // Dst = NewVReg's reg class > // *II = MCInstrDesc > // IIOpNum = II Operand Num > > if (TRI->getCommonSubClass(DstRC, TRI->getRegClass(II->OpInfo[IIOpNum].RegClass)) == DstRC) > MRI->setRegClass(VReg, DstRC); >
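For comparison, MachineRegisterInfo::constrainRegClass already performs the common-subclass computation hand-rolled above; a sketch of leaning on it instead, where the variable names are assumptions carried over from the snippet:

// Sketch: constrainRegClass returns the common subclass on success and
// null when VReg cannot be constrained, in which case the instruction
// emitter falls back to inserting a COPY.
const llvm::TargetRegisterClass *OpRC =
    TRI->getRegClass(II->OpInfo[IIOpNum].RegClass);
if (!MRI->constrainRegClass(VReg, OpRC)) {
  // no common subclass: a COPY into a fresh vreg of OpRC is required
}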
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
OK, on my nagios instance I've disabled the gluster status check on all nodes except one; I'll check if this is enough. Thanks, Paolo On 20/07/2017 13:50, Atin Mukherjee wrote: > So from the cmd_history.logs across all the nodes it's evident that > multiple commands on the same volume are run simultaneously, which can > result in transaction collisions, and you can
2017 Jul 26
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Technically, if only one node is pumping all these status commands, you shouldn't get into this situation. Can you please help me with the latest cmd_history & glusterd log files from all the nodes? On Wed, Jul 26, 2017 at 1:41 PM, Paolo Margara <paolo.margara at polito.it> wrote: > Hi Atin, > > I initially disabled the gluster status check on all nodes except one on
2017 Jul 20
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Please share the cmd_history.log file from all the storage nodes. On Thu, Jul 20, 2017 at 2:34 PM, Paolo Margara <paolo.margara at polito.it> wrote: > Hi list, > > recently I've noted a strange behaviour of my gluster storage: sometimes > while executing a simple command like "gluster volume status > vm-images-repo", as a response I get "Another transaction
2017 Jul 20
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
So from the cmd_history.logs across all the nodes it's evident that multiple commands on the same volume are run simultaneously, which can result in transaction collisions, and you can end up with one command succeeding and the others failing. Ideally, if you are running the volume status command for monitoring, it's suggested to run it from only one node. On Thu, Jul 20, 2017 at 3:54 PM, Paolo
2017 Jul 26
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Hi Atin, I initially disabled the gluster status check on all nodes except one on my nagios instance, as you recommended, but this issue happened again. So I disabled it on every node, but the error still occurs; currently only oVirt is monitoring gluster. I cannot modify this behaviour in the oVirt GUI; is there anything I could do from the gluster perspective to solve this