search for: 0xfful

Displaying 9 results from an estimated 9 matches for "0xfful".

2004 Aug 06
0
Re: does installed lib support _int()s ?
...2A3 Then you can define compile-time MACROS for

#define SPEEX_GET_FULL_VERSION        (speex_lib_version())
#define SPEEX_GET_MAJOR_MINOR_VERSION (speex_lib_version() >> 24)
#define SPEEX_GET_MAJOR_VERSION       (speex_lib_version() >> 16)
#define SPEEX_GET_RELEASE_LEVEL       (speex_lib_version() & 0xFFUL)
#define SPEEX_GET_FULL_MINOR_VERSION  (speex_lib_version() & 0xffffUL)
#define SPEEX_GET_MINOR_VERSION       ((speex_lib_version() >> 8) & 0xFFUL)

2) A *RUNTIME* call into the library which returns the text-version string of the library in use. speex_lib_version_string() perhaps? =MB= ---...
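For illustration only (not from the thread): a minimal C sketch of how a packed 32-bit version word decodes with shifts and masks. The layout assumed here (major in the top byte, minor in the next, release level in the low byte) is an assumption for the example, not the actual Speex encoding.

#include <stdio.h>

/* Hypothetical packed version 0xMMmm00rr: major 0xMM, minor 0xmm, release 0xrr. */
static unsigned long fake_lib_version(void) { return 0x01020003UL; }

int main(void)
{
    unsigned long v = fake_lib_version();
    unsigned major   = (unsigned)((v >> 24) & 0xFFUL);
    unsigned minor   = (unsigned)((v >> 16) & 0xFFUL);
    unsigned release = (unsigned)(v & 0xFFUL);

    printf("%u.%u release %u\n", major, minor, release);  /* prints "1.2 release 3" */
    return 0;
}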
2004 Aug 06
4
Re: does installed lib support _int()s ?
Hi, Right now, I'm thinking of adding a speex_lib_ctl() call that would support SPEEX_GET_VERSION (and return a string) or SPEEX_GET_MAJOR_VERSION and SPEEX_GET_MINOR_VERSION (and return ints). I'm open to other suggestions though. If there's anything you'd like to see in the API for 1.2, say it now. ...and no, I won't add a speex_do_all_the_work_for_me() call :)
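For illustration only (not from the thread): a sketch of how a caller might use such a control call. The speex_lib_ctl(request, ptr) shape and the request names follow what later shipped in <speex/speex.h>, but treat the details here as assumptions.

#include <stdio.h>
#include <speex/speex.h>

int main(void)
{
    const char *ver = NULL;
    int major = 0, minor = 0;

    /* Each request fills in the object that ptr points to. */
    speex_lib_ctl(SPEEX_LIB_GET_VERSION_STRING, &ver);
    speex_lib_ctl(SPEEX_LIB_GET_MAJOR_VERSION, &major);
    speex_lib_ctl(SPEEX_LIB_GET_MINOR_VERSION, &minor);

    printf("libspeex %s (%d.%d)\n", ver, major, minor);
    return 0;
}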
2016 Dec 08
0
[PATCH 2/2] x86, paravirt: Fix bool return type for PVOP_CALL
...irt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -508,6 +508,18 @@ int paravirt_disable_iospace(void);
 #define PVOP_TEST_NULL(op)	((void)op)
 #endif
 
+#define PVOP_RETMASK(rettype)					\
+	({	unsigned long __mask = ~0UL;			\
+		switch (sizeof(rettype)) {			\
+		case 1: __mask =       0xffUL; break;		\
+		case 2: __mask =     0xffffUL; break;		\
+		case 4: __mask = 0xffffffffUL; break;		\
+		default: break;					\
+		}						\
+		__mask;						\
+	})
+
+
 #define ____PVOP_CALL(rettype, op, clbr, call_clbr, extra_clbr,	\
		      pre, post, ...)				\
	({							\
@@ -535,7 +547...
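Not part of the patch: a small user-space sketch of the masking idea, i.e. truncating a raw register-width value to the width of the declared return type so stale high bytes cannot leak into a bool/char result. The RETMASK helper below is made up to mirror PVOP_RETMASK (it uses the GCC statement-expression extension, as the kernel macro does).

#include <stdio.h>

#define RETMASK(rettype)                                \
    ({  unsigned long __mask = ~0UL;                    \
        switch (sizeof(rettype)) {                      \
        case 1: __mask =       0xffUL; break;           \
        case 2: __mask =     0xffffUL; break;           \
        case 4: __mask = 0xffffffffUL; break;           \
        default: break;                                 \
        }                                               \
        __mask;                                         \
    })

int main(void)
{
    /* Pretend register value after a call: the low byte is 0 ("false"),
     * but the upper bytes still hold stale data. */
    unsigned long raw = 0x12345600UL;

    int wrong = (raw != 0);                        /* 1: stale bytes leak in         */
    int right = ((raw & RETMASK(_Bool)) != 0);     /* 0: only the low byte is tested */

    printf("wrong=%d right=%d mask=%#lx\n", wrong, right, RETMASK(_Bool));
    return 0;
}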
2020 Jul 01
0
[PATCH v3 2/5] mm/hmm: add hmm_mapping order
...(BITS_PER_LONG - 2),
 	HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3),
+	HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 8),
 
 	/* Input flags */
 	HMM_PFN_REQ_FAULT = HMM_PFN_VALID,
 	HMM_PFN_REQ_WRITE = HMM_PFN_WRITE,
 
-	HMM_PFN_FLAGS = HMM_PFN_VALID | HMM_PFN_WRITE | HMM_PFN_ERROR,
+	HMM_PFN_FLAGS = 0xFFUL << HMM_PFN_ORDER_SHIFT,
 };
 
 /*
@@ -61,6 +62,25 @@ static inline struct page *hmm_pfn_to_page(unsigned long hmm_pfn)
 	return pfn_to_page(hmm_pfn & ~HMM_PFN_FLAGS);
 }
 
+/*
+ * hmm_pfn_to_map_order() - return the CPU mapping size order
+ *
+ * The hmm_pfn entry returned by hmm_range_fa...
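Not from the diff itself: a sketch of how a driver loop might consume the new order field. The hmm_pfn_to_map_order() helper is the one the hunk above starts to introduce; the PMD-order comparison and the surrounding driver logic are assumptions here.

/* Sketch: walk an hmm_pfn array and pick a device PTE size per entry. */
static void map_range_sketch(const unsigned long *hmm_pfns, unsigned long npages)
{
	unsigned long i;

	for (i = 0; i < npages; i++) {
		unsigned int order = hmm_pfn_to_map_order(hmm_pfns[i]);

		if (order >= PMD_SHIFT - PAGE_SHIFT) {
			/* The CPU maps this range with (at least) a PMD,
			 * so a huge device MMU entry is safe here. */
		} else {
			/* Fall back to a 4K device PTE. */
		}
	}
}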
2016 Dec 08
3
[PATCH 0/2] Fix paravirt fail
Two patches that cure fallout from commit: 3cded4179481 ("x86/paravirt: Optimize native pv_lock_ops.vcpu_is_preempted()")
2013 Mar 21
27
[PATCH 0/4] xen/arm: guest SMP support
Hi all, this small patch series implements guest SMP support for ARM, using the ARM PSCI interface for secondary cpu bringup. Stefano Stabellini (4):
  xen/arm: basic PSCI support, implement cpu_on
  xen/arm: support for guest SGI
  xen/arm: support vcpu_op hypercalls
  xen: move VCPUOP_register_vcpu_info to common code
xen/arch/arm/domain.c | 66 ++++++++++++++++++++++++
2020 Jul 01
8
[PATCH v3 0/5] mm/hmm/nouveau: add PMD system memory mapping
The goal for this series is to introduce the hmm_pfn_to_map_order() function. This allows a device driver to know that a given 4K PFN is actually mapped by the CPU using a larger CPU page table entry, and therefore the device driver can safely map system memory using larger device MMU PTEs. The series is based on 5.8.0-rc3 and is intended for Jason Gunthorpe's hmm tree. These were
2020 Aug 19
39
a saner API for allocating DMA addressable pages
Hi all, this series replaces the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs with a separate new dma_alloc_pages API, which is available on all platforms. In addition to cleaning up the convoluted code path, this ensures that other drivers that have asked for better support for non-coherent DMA to pages without incurring bounce buffering can finally be properly supported. I'm still a
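As an illustration only (not from the series): a sketch of how a driver might use a dma_alloc_pages()-style allocator, assuming the signature that eventually landed in include/linux/dma-mapping.h; error handling and the dma_sync_*() calls around actual device access are omitted.

/* Sketch: allocate pages the device can address; the caller owns cache maintenance. */
static void *alloc_noncoherent_buffer(struct device *dev, size_t size,
				      dma_addr_t *dma_handle)
{
	struct page *page;

	page = dma_alloc_pages(dev, size, dma_handle, DMA_BIDIRECTIONAL, GFP_KERNEL);
	if (!page)
		return NULL;

	/* Map to a kernel address; bracket each device access with
	 * dma_sync_single_for_{cpu,device}() since the memory may be non-coherent. */
	return page_address(page);
}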