Displaying 20 results from an estimated 182 matches for "0xde".
2014 Mar 19
2
[LLVMdev] load bytecode from string for jiting problem
...if (BufPtr != BufEnd)
  errs() << "BP != BE ok\n";
if (BufPtr[0] == 'B')
  errs() << "B ok\n";
if (BufPtr[1] == 'C')
  errs() << "C ok\n";
if (BufPtr[2] == 0xc0)
  errs() << "0xc0 ok\n";
if (BufPtr[3] == 0xde)
  errs() << "0xde ok\n";
return BufPtr != BufEnd &&
       BufPtr[0] == 'B' &&
       BufPtr[1] == 'C' &&
       BufPtr[2] == 0xc0 &&
       BufPtr[3] == 0xde;
}
Second, I change ParseBitcodeInto like this...
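For context, the function being instrumented above is the raw-bitcode magic
check. A standalone sketch of the same test (illustrative, not the LLVM
source; note the pointer must be unsigned, since comparing a signed char
against 0xc0 or 0xde is always false):

  #include <cstddef>

  // Raw LLVM bitcode begins with the four magic bytes 'B', 'C', 0xc0, 0xde.
  bool looksLikeRawBitcode(const unsigned char *buf, size_t size) {
    return size >= 4 && buf[0] == 'B' && buf[1] == 'C' &&
           buf[2] == 0xc0 && buf[3] == 0xde;
  }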
2014 Mar 13
2
[LLVMdev] load bytecode from string for jiting problem
...l/ReaderWriter_8h_source.html#l00067)
if (sr.str()[0] == 'B')
std::cout << "B ok\n";
if (sr.str()[1] == 'C')
std::cout << "C ok\n";
if (sr.str()[2] == (char) 0xc0)
std::cout << "0xc0 ok\n";
if (sr.str()[3] == (char) 0xde)
std::cout << "0xde ok\n";
3) I try to parse the gv by
MemoryBuffer* mbjit = MemoryBuffer::getMemBuffer (sr.str());
LLVMContext& context = getGlobalContext();
ErrorOr<Module*> ModuleOrErr = parseBitcodeFile (mbjit, context);
if (error_code EC = ModuleOrErr.get...
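Taken together, the calls quoted in this thread fit into a single helper
roughly like this. This is only a sketch assembled from the excerpts (LLVM
3.4-era API as used in the thread; loadModuleFromString, the bitcode
argument, and the "jit-module" buffer name are made-up):

  #include "llvm/Bitcode/ReaderWriter.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Module.h"
  #include "llvm/Support/MemoryBuffer.h"
  #include <string>

  llvm::Module *loadModuleFromString(const std::string &bitcode) {
    // getMemBuffer() wraps the string without copying, so the string must
    // outlive the buffer; getMemBufferCopy() (used later in the thread)
    // takes its own copy instead.
    llvm::MemoryBuffer *mb = llvm::MemoryBuffer::getMemBuffer(
        bitcode, "jit-module", /*RequiresNullTerminator=*/false);
    llvm::LLVMContext &context = llvm::getGlobalContext();
    llvm::ErrorOr<llvm::Module *> ModuleOrErr =
        llvm::parseBitcodeFile(mb, context);
    if (llvm::error_code EC = ModuleOrErr.getError())
      return 0; // e.g. the buffer did not start with 'B', 'C', 0xc0, 0xde
    return ModuleOrErr.get();
  }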
2014 Mar 19
2
[LLVMdev] load bytecode from string for jiting problem
...'B')
>   errs() << "B ok\n";
> if (BufPtr[1] == 'C')
>   errs() << "C ok\n";
> if (BufPtr[2] == 0xc0)
>   errs() << "0xc0 ok\n";
> if (BufPtr[3] == 0xde)
>   errs() << "0xde ok\n";
>
> return BufPtr != BufEnd &&
>        BufPtr[0] == 'B' &&
>        BufPtr[1] == 'C' &&
>        BufPtr[2] == 0xc0 &&
>        BufPtr[3] == 0xde;
>...
2014 Mar 20
2
[LLVMdev] load bytecode from string for jiting problem
...== 'C')
  std::cout << "C ok\n";
if (sr.str()[2] == (char) 0xc0)
  std::cout << "0xc0 ok\n";
if (sr.str()[3] == (char) 0xde)
  std::cout << "0xde ok\n";
llvm::MemoryBuffer* mbjit = llvm::MemoryBuffer::getMemBufferCopy(sr);
llvm::ErrorOr<llvm::Module*> ModuleOrErr = llvm::parseBitcodeFile(mbjit, context);
if (llvm::error_code EC = ModuleOrErr.getError())...
2014 Mar 20
2
[LLVMdev] load bytecode from string for jiting problem
Hello Willy,
Here is the dump from one of my bitcode files:
0000000 42 43 c0 de 21 0c 00 00 25 05 00 00 0b 82 20 00
As expected, 0x42 (= B), 0x43 (= C), 0xc0, and 0xde are in the correct order.
In your case, the first byte is read as 37 (= 0x25). I wonder why? When you
check the bytes yourself, you get the expected results. When the same bytes
are read from the Stream object, you get a different result (maybe garbage).
I would suggest that you put a watchpoint on mbjit->g...
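A quick way to confirm what the parser actually sees is to print the
buffer's first bytes directly, mirroring the hex dump above. A small
debugging sketch (dumpMagic is a made-up helper, not from the thread):

  #include "llvm/Support/Format.h"
  #include "llvm/Support/MemoryBuffer.h"
  #include "llvm/Support/raw_ostream.h"

  void dumpMagic(const llvm::MemoryBuffer *mb) {
    const unsigned char *p =
        reinterpret_cast<const unsigned char *>(mb->getBufferStart());
    // For valid bitcode this prints: 42 43 c0 de
    for (size_t i = 0; i < 4 && i < mb->getBufferSize(); ++i)
      llvm::errs() << llvm::format("%02x ", p[i]);
    llvm::errs() << "\n";
  }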
2018 Jan 05
4
Centos 6 2.6.32-696.18.7.el6.x86_64 does not boot in Xen PV mode
...AMD AuthenticAMD
(early) Centaur CentaurHauls
(early) 1 multicall(s) failed: cpu 0
(early) Pid: 0, comm: swapper Not tainted 2.6.32-696.18.7.el6.x86_64 #1
(early) Call Trace:
(early) [<ffffffff81004843>] ? xen_mc_flush+0x1c3/0x250
(early) [<ffffffff81006b9e>] ? xen_extend_mmu_update+0xde/0x1b0
(early) [<ffffffff81006fcd>] ? xen_set_pmd_hyper+0x9d/0xc0
(early) [<ffffffff81c5e8ac>] ? early_ioremap_init+0x98/0x133
(early) [<ffffffff81c45221>] ? setup_arch+0x40/0xca6
(early) [<ffffffff8107e0ee>] ? vprintk_default+0xe/0x10
(early) [<ffffffff8154b0cd>]...
2018 Jan 05
0
Centos 6 2.6.32-696.18.7.el6.x86_64 does not boot in Xen PV mode
...AMD AuthenticAMD
(early) Centaur CentaurHauls
(early) 1 multicall(s) failed: cpu 0
(early) Pid: 0, comm: swapper Not tainted 2.6.32-696.18.7.el6.x86_64 #1
(early) Call Trace:
(early) [<ffffffff81004843>] ? xen_mc_flush+0x1c3/0x250
(early) [<ffffffff81006b9e>] ? xen_extend_mmu_update+0xde/0x1b0
(early) [<ffffffff81006fcd>] ? xen_set_pmd_hyper+0x9d/0xc0
(early) [<ffffffff81c5e8ac>] ? early_ioremap_init+0x98/0x133
(early) [<ffffffff81c45221>] ? setup_arch+0x40/0xca6
(early) [<ffffffff8107e0ee>] ? vprintk_default+0xe/0x10
(early) [<ffffffff8154b0cd>]...
2009 Nov 14
2
[LLVMdev] Very slow performance of lli on x86
...d1, 0xd2, 0xd4 },
{ 0xd1, 0xd2, 0xd3, 0xd5 },
{ 0xd2, 0xd3, 0xd4, 0xd6 },
{ 0xd3, 0xd4, 0xd5, 0xd7 },
{ 0xd4, 0xd5, 0xd6, 0xd8 },
{ 0xd5, 0xd6, 0xd7, 0xd9 },
{ 0xd6, 0xd7, 0xd8, 0xda },
{ 0xd7, 0xd8, 0xd9, 0xdb },
{ 0xd8, 0xd9, 0xda, 0xdc },
{ 0xd9, 0xda, 0xdb, 0xdd },
{ 0xda, 0xdb, 0xdc, 0xde },
{ 0xdb, 0xdc, 0xdd, 0xdf },
{ 0xdc, 0xdd, 0xde, 0xe0 },
{ 0xdd, 0xde, 0xdf, 0xe1 },
{ 0xde, 0xdf, 0xe0, 0xe2 },
{ 0xdf, 0xe0, 0xe1, 0xe3 },
{ 0xe0, 0xe1, 0xe2, 0xe4 },
{ 0xe1, 0xe2, 0xe3, 0xe5 },
{ 0xe2, 0xe3, 0xe4, 0xe6 },
{ 0xe3, 0xe4, 0xe5, 0xe7 },
{ 0xe4, 0xe5, 0xe6, 0xe8 },
{ 0xe...
2018 Jan 06
0
Centos 6 2.6.32-696.18.7.el6.x86_64 does not boot in Xen PV mode
...rly) Centaur CentaurHauls
>(early) 1 multicall(s) failed: cpu 0
>(early) Pid: 0, comm: swapper Not tainted 2.6.32-696.18.7.el6.x86_64 #1
>(early) Call Trace:
>(early) [<ffffffff81004843>] ? xen_mc_flush+0x1c3/0x250
>(early) [<ffffffff81006b9e>] ? xen_extend_mmu_update+0xde/0x1b0
>(early) [<ffffffff81006fcd>] ? xen_set_pmd_hyper+0x9d/0xc0
>(early) [<ffffffff81c5e8ac>] ? early_ioremap_init+0x98/0x133
>(early) [<ffffffff81c45221>] ? setup_arch+0x40/0xca6
>(early) [<ffffffff8107e0ee>] ? vprintk_default+0xe/0x10
>(early) [<ff...
2004 Jun 25
2
panic() panic() panic()
...7.3 and the systems still panic()ed.
If I don't start the zaptel driver, they don't panic. If I start the
zaptel driver but don't start asterisk, they still panic. I'm at a
loss as to what to try next.
A typical Call Trace from the panic message looks like:
wait_on_irq [kernel] 0xde
__global_cli [kernel] 0x62
flush_to_ldisc [kernel] 0x126
__run_task_queue [kernel] 0x61
context_thread [kernel] 0x13b
context_thread [kernel] 0x0
context_thread [kernel] 0x0
kernel_thread_helper 0x5
Any ideas? Thanks...
2009 Nov 14
0
[LLVMdev] Very slow performance of lli on x86
...d1, 0xd2, 0xd4 },
{ 0xd1, 0xd2, 0xd3, 0xd5 },
{ 0xd2, 0xd3, 0xd4, 0xd6 },
{ 0xd3, 0xd4, 0xd5, 0xd7 },
{ 0xd4, 0xd5, 0xd6, 0xd8 },
{ 0xd5, 0xd6, 0xd7, 0xd9 },
{ 0xd6, 0xd7, 0xd8, 0xda },
{ 0xd7, 0xd8, 0xd9, 0xdb },
{ 0xd8, 0xd9, 0xda, 0xdc },
{ 0xd9, 0xda, 0xdb, 0xdd },
{ 0xda, 0xdb, 0xdc, 0xde },
{ 0xdb, 0xdc, 0xdd, 0xdf },
{ 0xdc, 0xdd, 0xde, 0xe0 },
{ 0xdd, 0xde, 0xdf, 0xe1 },
{ 0xde, 0xdf, 0xe0, 0xe2 },
{ 0xdf, 0xe0, 0xe1, 0xe3 },
{ 0xe0, 0xe1, 0xe2, 0xe4 },
{ 0xe1, 0xe2, 0xe3, 0xe5 },
{ 0xe2, 0xe3, 0xe4, 0xe6 },
{ 0xe3, 0xe4, 0xe5, 0xe7 },
{ 0xe4, 0xe5, 0xe6, 0xe8 },
{ 0xe...
2013 Jun 20
3
[PATCH] virtio-pci: fix leaks of msix_affinity_masks
...;ffffffff813fdb3d>] vp_try_to_find_vqs+0x25d/0x810
[<ffffffff813fe171>] vp_find_vqs+0x81/0xb0
[<ffffffffa00d2a05>] init_vqs+0x85/0x120 [virtio_balloon]
[<ffffffffa00d2c29>] virtballoon_probe+0xf9/0x1a0 [virtio_balloon]
[<ffffffff813fb61e>] virtio_dev_probe+0xde/0x140
[<ffffffff814452b8>] driver_probe_device+0x98/0x3a0
[<ffffffff8144566b>] __driver_attach+0xab/0xb0
[<ffffffff814432f4>] bus_for_each_dev+0x94/0xb0
[<ffffffff81444f4e>] driver_attach+0x1e/0x20
[<ffffffff81444910>] bus_add_driver+0x200/0x280...
2013 Jun 20
3
[PATCH] virtio-pci: fix leaks of msix_affinity_masks
...;ffffffff813fdb3d>] vp_try_to_find_vqs+0x25d/0x810
[<ffffffff813fe171>] vp_find_vqs+0x81/0xb0
[<ffffffffa00d2a05>] init_vqs+0x85/0x120 [virtio_balloon]
[<ffffffffa00d2c29>] virtballoon_probe+0xf9/0x1a0 [virtio_balloon]
[<ffffffff813fb61e>] virtio_dev_probe+0xde/0x140
[<ffffffff814452b8>] driver_probe_device+0x98/0x3a0
[<ffffffff8144566b>] __driver_attach+0xab/0xb0
[<ffffffff814432f4>] bus_for_each_dev+0x94/0xb0
[<ffffffff81444f4e>] driver_attach+0x1e/0x20
[<ffffffff81444910>] bus_add_driver+0x200/0x280...
2013 Jun 19
2
[PATCH] virtio-pci: fix leaks of msix_affinity_masks
...;ffffffff813fdb3d>] vp_try_to_find_vqs+0x25d/0x810
[<ffffffff813fe171>] vp_find_vqs+0x81/0xb0
[<ffffffffa00d2a05>] init_vqs+0x85/0x120 [virtio_balloon]
[<ffffffffa00d2c29>] virtballoon_probe+0xf9/0x1a0 [virtio_balloon]
[<ffffffff813fb61e>] virtio_dev_probe+0xde/0x140
[<ffffffff814452b8>] driver_probe_device+0x98/0x3a0
[<ffffffff8144566b>] __driver_attach+0xab/0xb0
[<ffffffff814432f4>] bus_for_each_dev+0x94/0xb0
[<ffffffff81444f4e>] driver_attach+0x1e/0x20
[<ffffffff81444910>] bus_add_driver+0x200/0x280...
2013 Jun 19
2
[PATCH] virtio-pci: fix leaks of msix_affinity_masks
...;ffffffff813fdb3d>] vp_try_to_find_vqs+0x25d/0x810
[<ffffffff813fe171>] vp_find_vqs+0x81/0xb0
[<ffffffffa00d2a05>] init_vqs+0x85/0x120 [virtio_balloon]
[<ffffffffa00d2c29>] virtballoon_probe+0xf9/0x1a0 [virtio_balloon]
[<ffffffff813fb61e>] virtio_dev_probe+0xde/0x140
[<ffffffff814452b8>] driver_probe_device+0x98/0x3a0
[<ffffffff8144566b>] __driver_attach+0xab/0xb0
[<ffffffff814432f4>] bus_for_each_dev+0x94/0xb0
[<ffffffff81444f4e>] driver_attach+0x1e/0x20
[<ffffffff81444910>] bus_add_driver+0x200/0x280...
2010 Apr 29
2
Hardware error or ocfs2 error?
...Apr 29 11:01:18 node06 kernel: [2569440.616345] [<ffffffff812ee253>] ? schedule_timeout+0x2e/0xdd
Apr 29 11:01:18 node06 kernel: [2569440.616362] [<ffffffff8118d99a>] ? vsnprintf+0x40a/0x449
Apr 29 11:01:18 node06 kernel: [2569440.616378] [<ffffffff812ee118>] ? wait_for_common+0xde/0x14f
Apr 29 11:01:18 node06 kernel: [2569440.616396] [<ffffffff8104a188>] ? default_wake_function+0x0/0x9
Apr 29 11:01:18 node06 kernel: [2569440.616421] [<ffffffffa0fbac46>] ? __ocfs2_cluster_lock+0x8a4/0x8c5 [ocfs2]
Apr 29 11:01:18 node06 kernel: [2569440.616445] [<ffffffff812e...
2018 Mar 26
0
murmurhash3 test failures on big-endian systems
...{ "", 0, 0, { 0, }, { 0, } },
+ { "", 0, 0x1,
+ { 0x51, 0x4E, 0x28, 0xB7, },
+ { 0x51, 0x4E, 0x28, 0xB7, } },
+ { "", 0, 0xFFFFFFFF,
+ { 0x81, 0xF1, 0x6F, 0x39, },
+ { 0x81, 0xF1, 0x6F, 0x39, } },
+ { "\0\0\0\0", 4, 0,
+ { 0x23, 0x62, 0xF9, 0xDE, },
+ { 0x23, 0x62, 0xF9, 0xDE, } },
+ { "aaaa", 4, 0x9747b28c,
+ { 0x5A, 0x97, 0x80, 0x8A, },
+ { 0x5A, 0x97, 0x80, 0x8A, } },
+ { "aaa", 3, 0x9747b28c,
+ { 0x28, 0x3E, 0x01, 0x30, },
+ { 0x28, 0x3E, 0x01, 0x30, } },
+ { "aa", 2, 0x9747b28c,
+ { 0...
2003 Nov 16
1
Bug in 2.6.0-9
...t;c016ab26>] ext3_dirty_inode+0x4c/0x5f
[<c0156c20>] __mark_inode_dirty+0x24/0xce
[<c01527d2>] inode_setattr+0x118/0x123
[<c016a951>] ext3_setattr+0xc9/0x104
[<c0152903>] notify_change+0xda/0x143
[<c013c6b9>] sys_utime+0xec/0x110
[<c013e5ad>] __fput+0xc2/0xde
[<c013d251>] filp_close+0x5a/0x63
[<c013d2b2>] sys_close+0x58/0x7a
[<c0108807>] syscall_call+0x7/0xb
Code: 0f 0b 8f 06 52 3d 22 c0 83 c4 14 85 f6 75 1f b8 00 e0 ff ff
<6>note: wwwoffled[9533] exited with preempt_count 1
bad: scheduling while atomic!
Call Trace:
[<c...
2005 Jan 14
1
xen-unstable dom0/1 smp schedule while atomic
...resched+0x5/0x2f
<snip>
scheduling while atomic
[schedule+1682/1696] schedule+0x692/0x6a0
[__change_page_attr+203/624] __change_page_attr+0xcb/0x270
[sys_sched_yield+89/112] sys_sched_yield+0x59/0x70
[coredump_wait+63/176] coredump_wait+0x3f/0xb0
[do_coredump+222/471] do_coredump+0xde/0x1d7
[__dequeue_signal+296/432] __dequeue_signal+0x128/0x1b0
[__dequeue_signal+245/432] __dequeue_signal+0xf5/0x1b0
[dequeue_signal+53/160] dequeue_signal+0x35/0xa0
[get_signal_to_deliver+599/848] get_signal_to_deliver+0x257/0x350
[do_signal+103/288] do_signal+0x67/0x120
[dnotify_paren...
2018 Mar 26
2
murmurhash3 test failures on big-endian systems
Hi Aki,
On 15:55 Mon 26 Mar, Aki Tuomi wrote:
> On 26.03.2018 15:49, Apollon Oikonomopoulos wrote:
> > Hi,
> >
> > The dovecot 2.3.0.1 Debian package currently fails to build on all
> > big-endian architectures[1], due to murmurhash3 tests failing. The
> > relevant output from e.g. s390x is:
> >
> > test-murmurhash3.c:22: Assert(#8) failed: