Displaying 20 results from an estimated 55 matches for "kenrel".
2004 Mar 11
2
kernel 2.6.4 patch for fixing a warning in smbfs on high gid/uid
Hello,
When I compiled the latest Linux 2.6.4 kernel source with GCC 3.3.2 on
Linux/x86, I got a few warnings about variables being compared against
constants where the range of the variable makes the expression always
constant.
The comparison comes from the code for the high-uid/gid scheme.
Attached you will find a diff which eliminates this warning
by introducing an
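A minimal, hypothetical illustration of the warning class described above (the type, limit and names are made up, not the actual smbfs code): when a narrow unsigned variable is compared against a bound its range can never exceed, GCC reports the comparison as always true/false.

/* Hypothetical sketch of the warning described above; the 16-bit uid type
 * and the limit are illustrative, not the real smbfs code. */
#include <stdint.h>
#include <stdio.h>

#define HIGH_UID_LIMIT 65535u          /* assumed cut-off for the "high uid" scheme */

int main(void)
{
	uint16_t uid = 1000;           /* a 16-bit uid can never exceed 65535 */
	uint32_t uid32;

	/* GCC (e.g. 3.3 with -W) warns here: "comparison is always false
	 * due to limited range of data type". */
	if (uid > HIGH_UID_LIMIT)
		printf("high uid\n");

	/* Widening the variable (or dropping the redundant check)
	 * silences the warning. */
	uid32 = uid;
	if (uid32 > HIGH_UID_LIMIT)
		printf("high uid\n");

	return 0;
}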
2012 Jan 24
1
Booting custom kernels in CentOS 6.0
Hi,
I'm using the procedure described in "Building custom kernels" for CentOS 5.0.
Except for one step, where 25 lines from line 638 of the spec file are commented out, I did everything.
I got RPM files in ~/rpmbuild/RPMS.
After installing the kernel-firmware, kernel, kernel-headers and kernel-devel packages, it creates an OS entry in /boot/grub/menu.lst following the "default 0" directi...
2013 Feb 01
2
dom0's layout on physical memory?
Hello, all
I thought dom0 was laid out linearly in physical memory starting at 0, but
this message,
(XEN) *** LOADING DOMAIN 0 ***
(XEN) Xen kernel: 64-bit, lsb, compat32
(XEN) Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x1d87000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN) Dom0 alloc.: 0000000c14000000->0000000c18000000 (498514 pages to be allocated)
shows the address range
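For reference, a quick sanity check of such a line (a standalone sketch; in this log format the parenthesised "(498514 pages to be allocated)" appears to count pages allocated beyond the printed range, so the two numbers are not expected to match):

/* Standalone sketch: convert a Xen "Dom0 alloc." address range into a
 * size and a 4 KiB page count, using the values quoted above. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t start = 0xc14000000ULL;   /* from the excerpt above */
	uint64_t end   = 0xc18000000ULL;
	uint64_t bytes = end - start;

	printf("range: %llu MiB, %llu pages of 4 KiB\n",
	       (unsigned long long)(bytes >> 20),
	       (unsigned long long)(bytes >> 12));
	return 0;
}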
2019 Mar 08
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
Hello Jason,
On Fri, Mar 08, 2019 at 04:50:36PM +0800, Jason Wang wrote:
> Just to make sure I understand here. For boosting through huge TLB, do
> you mean we can do that in the future (e.g. by mapping more userspace
> pages to the kernel) or it can be done by this series (only about three 4K
> pages were vmapped per virtqueue)?
When I answered about the advantages of mmu notifier and I mentioned
guaranteed 2m/gigapages where available, I overlooked the detail you
were using vmap instead of kmap. So with vmap you're actuall...
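A rough, illustrative sketch of the approach under discussion (pinning a handful of userspace pages and mapping them contiguously with vmap()); this is not the actual vhost patch: it uses the newer pin_user_pages_fast() helper, which postdates this 2019 series, and trims error handling.

/* Illustrative only: pin a few userspace pages and vmap() them so the
 * kernel can reach the virtqueue metadata through one contiguous kernel
 * virtual range. Not the vhost code itself. */
#include <linux/mm.h>
#include <linux/vmalloc.h>

void *map_vq_metadata(unsigned long uaddr, unsigned int npages,
		      struct page **pages)
{
	int pinned = pin_user_pages_fast(uaddr & PAGE_MASK, npages,
					 FOLL_WRITE, pages);

	if (pinned != npages) {
		if (pinned > 0)
			unpin_user_pages(pages, pinned);
		return NULL;
	}

	/*
	 * vmap() stitches the (possibly discontiguous) 4 KiB pages into
	 * one kernel virtual range. As noted above, this gives no huge
	 * TLB benefit, unlike reaching an already THP-backed mapping
	 * through kmap()/the direct map.
	 */
	return vmap(pages, npages, VM_MAP, PAGE_KERNEL);
}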
2007 Jun 01
2
lguest problem on boot of guest kernel
Hi !
Kernel 2.6.21 (kernel.org)
Patch lguest-2.6.21-254.patch
Distro Slackware 11.0
GCC 3.4.6
GLIBC 2.3.6
HW model name : AMD Duron(tm) processor
Module Size Used by
tun 7680 0
lg 54600 0
just started playing with lguest - patching,...
2019 Mar 11
4
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...:
> > Hello Jason,
> >
> > On Fri, Mar 08, 2019 at 04:50:36PM +0800, Jason Wang wrote:
> > > Just to make sure I understand here. For boosting through huge TLB, do
> > > you mean we can do that in the future (e.g. by mapping more userspace
> > > pages to the kernel) or it can be done by this series (only about three 4K
> > > pages were vmapped per virtqueue)?
> > When I answered about the advantages of mmu notifier and I mentioned
> > guaranteed 2m/gigapages where available, I overlooked the detail you
> > were using vmap instead of...
2009 May 28
5
[PATCH] tools/stubdom: get rid of hardcoded paths
Hi!
The attached patch makes xen-tools and stubdom-dm independent
of hardcoded paths. It is now possible to install into /usr/local or any
other non-default directory and use it out of the box.
This allows us to have different Xen versions in different directories and
simplifies packaging for distributions.
It also finds 'hvmloader' and
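A generic illustration of the idea (the helper, environment variable and default shown here are assumptions, not the patch itself): resolve the install prefix at run time with a compile-time fallback, instead of baking a fixed path into the code.

/* Hypothetical sketch of prefix resolution instead of a hardcoded path;
 * the XEN_PREFIX variable and the default are illustrative only. */
#include <stdio.h>
#include <stdlib.h>

#ifndef DEFAULT_PREFIX
#define DEFAULT_PREFIX "/usr"          /* normally set by the build system */
#endif

static void firmware_path(char *buf, size_t len, const char *name)
{
	const char *prefix = getenv("XEN_PREFIX");   /* illustrative override */

	if (!prefix || !*prefix)
		prefix = DEFAULT_PREFIX;
	snprintf(buf, len, "%s/lib/xen/boot/%s", prefix, name);
}

int main(void)
{
	char path[256];

	firmware_path(path, sizeof(path), "hvmloader");
	printf("%s\n", path);          /* e.g. /usr/lib/xen/boot/hvmloader */
	return 0;
}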
2019 Oct 23
2
[PATCH v2] vhost: introduce mdev based hardware backend
...>
> Still MQ as an example, there's no way (or no need) for parent to know that
> vhost-mdev does not support MQ.
The mdev is an MDEV_CLASS_ID_VHOST mdev device. When the parent
is being developed, it should know the currently supported features
of vhost-mdev.
> And this allows old kernels to work with new
> parent drivers.
The new drivers should provide things like VIRTIO_MDEV_F_VERSION_1
to be compatible with the old kernels. When VIRTIO_MDEV_F_VERSION_1
is provided/negotiated, the behaviours should be consistent.
>
> So basically we have three choices here:
>
>...
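A schematic of the compatibility argument (a generic sketch; the bit positions follow virtio conventions but are not the actual vhost-mdev definitions): the parent offers a feature bitmap, an old kernel acknowledges only the bits it knows about, and behaviour falls back to the VERSION_1 baseline.

/* Generic sketch of virtio-style feature negotiation; names and bit
 * numbers are illustrative, not the vhost-mdev code. */
#include <stdint.h>
#include <stdio.h>

#define F_VERSION_1  (1ULL << 32)      /* baseline every driver must understand */
#define F_MQ         (1ULL << 22)      /* optional extra, e.g. multiqueue */

int main(void)
{
	uint64_t device_features = F_VERSION_1 | F_MQ;  /* offered by a new parent */
	uint64_t driver_known    = F_VERSION_1;         /* an old kernel knows only this */

	/* The driver acks only the bits it understands; unknown bits stay off. */
	uint64_t negotiated = device_features & driver_known;

	if (!(negotiated & F_VERSION_1)) {
		fprintf(stderr, "no common baseline\n");
		return 1;
	}
	printf("MQ %s\n", (negotiated & F_MQ) ? "enabled" : "disabled");
	return 0;
}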
2015 May 01
1
kernel-debuginfo
Hi,
Even though I am not running a centosplus kernel, yum wants to install
the kernel-debuginfo for it.
# yum install --disablerepo=\* --enablerepo=base-debuginfo kernel-debuginfo
Loaded plugins: fastestmirror, refresh-packagekit
Setting up Install Process
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package kernel-debuginfo.x86_64
2023 Aug 05
1
[PATCH] drm/nouveau/disp: Revert a NULL check inside nouveau_connector_get_modes
The original commit adding that check tried to protect the kernel against
a potential invalid NULL pointer access.
However, we call nouveau_connector_detect_depth once without a native_mode
set on purpose for non-LVDS connectors, and this broke DP support in a few
cases.
Cc: Olaf Skibbe <news at kravcenko.com>
Cc: Lyude Paul <lyude at redhat.com>
Clos...
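A paraphrased sketch of the pattern being reverted (simplified stand-in types, not the literal nouveau source): the NULL guard also short-circuited the non-LVDS path, which is meant to run without a native_mode.

/* Paraphrased illustration only; struct layout and logic are simplified
 * stand-ins, not the nouveau driver source. */
struct mode;

struct connector {
	int is_lvds;                 /* LVDS/eDP panels carry a native mode */
	struct mode *native_mode;    /* may legitimately be NULL for DP */
};

void detect_depth(struct connector *conn)
{
#ifdef REVERTED_CHECK
	if (!conn->native_mode)      /* the reverted guard: it also skipped */
		return;              /* the non-LVDS path below             */
#endif
	if (!conn->is_lvds) {
		/* non-LVDS (e.g. DP) path: intentionally runs without a
		 * native_mode, deriving the depth from other information */
		return;
	}

	/* Only the LVDS path actually dereferences native_mode, and it is
	 * reached only after the connector-type check above. */
	/* ... use conn->native_mode ... */
}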
2006 Apr 04
1
CentOS 4.3 i586 install option
Johnny Hughes wrote in thread
RE: [CentOS] install CentOS using an external USB cdrom
Changing Subject to CentOS 4.3 i586 install option
The last few posts on the thread "install CentOS using an external USB cdrom"
should have been titled "serial console install" or similar
> instead of
> linux your_options_here
> use
> i586 your_options_here
> (that is w/ the 4.3
2001 Jul 31
0
any news about rsync and acl's?
If we do rsync with an nfsv4-style transport, then ACLs will be transported
as attributes of a file, abstracted into a POSIX/NT common format. For
them to be transported successfully, both machines would have to
support ACLs either in the kernel or through some kind of vfs
that stores them in a database. Also, ACLs are defined in terms
of usernames so they'll have to have a common username database
or a way of mapping from one to the other.
Since ACLs are transferred into the native filesystem scheme on the
destination, data that c...
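A purely illustrative sketch of what such a transport-neutral ACL entry could look like (rsync defines no such struct in this thread; the names and fields are assumptions): principals travel by name and are mapped to local ids on the receiving side.

/* Illustrative only: a transport-neutral ACL entry of the kind described
 * above; rsync's real wire format is not shown here. */
#include <stdint.h>

enum acl_tag { ACL_TAG_USER, ACL_TAG_GROUP, ACL_TAG_OTHER };

struct xfer_acl_entry {
	enum acl_tag tag;        /* who the entry applies to */
	char         name[64];   /* principal carried by name, mapped on arrival */
	uint16_t     perms;      /* abstract read/write/execute style bits */
};

/* The receiver resolves 'name' against its own user database (or a
 * mapping table) and then re-encodes the entry in the destination
 * filesystem's native ACL scheme, as the message above describes. */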
2010 Jan 20
1
How to debug Ubuntu 8.04 LTS guest crash during install?
Hello:
I am using kvm on a CentOS 5.4 server.
I am trying to install the TurnkeyLinux Core appliance
found here: http://www.turnkeylinux.org/core
I downloaded the ISO file from the web site.
Then, I used this command to install it:
virt-install -n tkl-core -r 512 --vcpus=1 --check-cpu --os-type=linux
--os-variant=ubuntuhardy -v --accelerate
-c /tmp/turnkey-core-2009.10-hardy-x86.iso
-f
2010 Sep 18
1
+ init-add-sys-wrapperh.patch added to -mm tree
...ely bogus, since allowing
> normal kernel code to issue random syscalls had never been an intention.
Well, AFAICT kernels have pretty much always done direct syscalls in some
places. Originally this was actually going through the user space
syscall entry path, which I removed a few years ago for kernel users.
> IOW, this is a userland code that had been subjected to trivial modifications
> to run in kernel, just before the execve() of init. The only reason why
> we do not simply turn that into a userland binary and execve() that instead
> is that we don't want to complicate kbui...
2019 Mar 11
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...3/9 3:48, Andrea Arcangeli wrote:
> Hello Jason,
>
> On Fri, Mar 08, 2019 at 04:50:36PM +0800, Jason Wang wrote:
>> Just to make sure I understand here. For boosting through huge TLB, do
>> you mean we can do that in the future (e.g. by mapping more userspace
>> pages to the kernel) or it can be done by this series (only about three 4K
>> pages were vmapped per virtqueue)?
> When I answered about the advantages of mmu notifier and I mentioned
> guaranteed 2m/gigapages where available, I overlooked the detail you
> were using vmap instead of kmap. So with vmap y...
2019 Mar 12
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...Hello Jason,
>>>
>>> On Fri, Mar 08, 2019 at 04:50:36PM +0800, Jason Wang wrote:
>>>> Just to make sure I understand here. For boosting through huge TLB, do
>>>> you mean we can do that in the future (e.g. by mapping more userspace
>>>> pages to the kernel) or it can be done by this series (only about three 4K
>>>> pages were vmapped per virtqueue)?
>>> When I answered about the advantages of mmu notifier and I mentioned
>>> guaranteed 2m/gigapages where available, I overlooked the detail you
>>> were using vmap...