Displaying 20 results from an estimated 5000 matches similar to: "[newbie] read row from file into vector"
2023 Dec 04
1
Unable to add the CRAN apt repository
Thanks! "jammy" made it work.
For some reason, lsb_release -cs is returning "victoria" rather than "jammy",
and
$> sudo apt update
Hit:1 http://archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 http://packages.linuxmint.com victoria InRelease
Get:3 http://archive.ubuntu.com/ubuntu jammy-updates InRelease [119 kB]
Hit:4 http://packages.linuxmint.com victoria
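For anyone hitting the same problem, here is a small R sketch of the
workaround: instead of trusting lsb_release -cs (which reports the Mint
codename), look up the upstream Ubuntu codename and build the repository
line from it. This assumes Mint exposes an UBUNTU_CODENAME= field in
/etc/os-release, which is worth verifying on your release.
## Sketch only: derive the upstream Ubuntu codename from within R,
## assuming /etc/os-release carries an UBUNTU_CODENAME= line.
os_release <- readLines("/etc/os-release")
entry      <- grep("^UBUNTU_CODENAME=", os_release, value = TRUE)
codename   <- sub("^UBUNTU_CODENAME=", "", entry)     # e.g. "jammy"
cat(sprintf("deb https://cloud.r-project.org/bin/linux/ubuntu %s-cran40/\n",
            codename))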
2023 Dec 04
1
Unable to add the CRAN apt repository
On Mon, 04 Dec 2023 13:41:47 -0500
Steve Gutreuter <sgutreuter at gmail.com> wrote:
> $> sudo /usr/bin/add-apt-repository "deb
> https://cloud.r-project.org/bin/linux/ubuntu $(lsb_release
> -cs)-cran40/"
Looks like `lsb_release -cs` returns a Mint codename for you.
Thankfully, since we know that Linux Mint 21 is based on Ubuntu 22.04
"Jammy Jellyfish", it
2023 Dec 04
2
Unable to add the CRAN apt repository
I just upgraded from Linux Mint 20 to 21 and am no longer able to add the CRAN
Ubuntu repository to my list of repositories. I am getting:
$> sudo /usr/bin/add-apt-repository "deb
https://cloud.r-project.org/bin/linux/ubuntu $(lsb_release -cs)-cran40/"
$> sudo apt update
Hit:1 http://security.ubuntu.com/ubuntu jammy-security InRelease
Hit:2 http://archive.ubuntu.com/ubuntu jammy
2014 Aug 01
1
Update R on a new Linux Mint Maya 13 + rJava and XLConnect
Folks,
I was able to get R installed using:
apt-get install r-base-core
R version 2.14.1 (2011-12-22)
Copyright (C) 2011 The R Foundation for Statistical Computing
ISBN 3-900051-07-0
Platform: i686-pc-linux-gnu (32-bit)
* What is the best way to upgrade to the latest version of R?
* What is the best way to install rJava and XLConnect?
Thanks for your time,
Best,
KW
PS. I know some of you
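Not from this thread, but a hedged sketch of the usual route for the second
question: rJava generally needs a system JDK plus `sudo R CMD javareconf`
run in a shell first; after that, both packages can normally be installed
from CRAN like any other package. The spreadsheet file name below is made
up for illustration.
## Assumes a JDK is installed and `sudo R CMD javareconf` has already
## been run so that R can find it.
install.packages(c("rJava", "XLConnect"))
library(XLConnect)                            # loads rJava as a dependency
wb <- loadWorkbook("report.xlsx", create = TRUE)   # hypothetical file
createSheet(wb, "iris")
writeWorksheet(wb, head(iris), sheet = "iris")
saveWorkbook(wb)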
2007 May 15
1
Efficiently reading random lines from a large file
I need to read two different random lines at a time from a large
ASCII file (120 x 296976) containing space delimited 0-1 entries.
The following code does the job and is reasonably fast for my needs:
lineNumber = sample(120, 2)
line1 = scan(filename, what = "integer", skip=lineNumber[1]-1, nlines=1)
line2 = scan(filename, what = "integer",
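A self-contained variant of that snippet (the file name is a placeholder):
note that scan() takes the storage mode from the value of `what`, so
integer() rather than the string "integer" is what actually reads the
0/1 fields as integers.
## Read two distinct random lines from a 120-line, space-delimited file.
filename   <- "data.txt"                  # placeholder path
lineNumber <- sample(120, 2)              # two distinct line numbers
line1 <- scan(filename, what = integer(), skip = lineNumber[1] - 1,
              nlines = 1, quiet = TRUE)
line2 <- scan(filename, what = integer(), skip = lineNumber[2] - 1,
              nlines = 1, quiet = TRUE)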
2011 Aug 12
11
[net-next RFC PATCH 0/7] multiqueue support for tun/tap
As multi-queue NICs are commonly used in high-end servers,
the current single-queue tap cannot satisfy the
requirement of scaling guest network performance as the
number of vcpus increases. So the following series
implements multi-queue support in tun/tap.
In order to take advantage of this, a multi-queue capable
driver and qemu are also needed. I just rebased the latest
version of
2016 Jun 15
7
[PATCH net-next V2] tun: introduce tx skb ring
We used to queue tx packets in sk_receive_queue, which is less
efficient since it requires spinlocks to synchronize between the
producer and the consumer.
This patch tries to address this by:
- introduce a new mode which is only enabled with IFF_TX_ARRAY
set and switches from sk_receive_queue to a fixed-size skb
array with 256 entries in this mode.
- introduce a new proto_ops peek_len which was
2015 Nov 06
2
corrupt PACKAGES.gz?
Is it just me, or did a corrupt PACKAGES.gz file get installed in the
bin/windows/contrib/3.2 directory of CRAN mirrors recently? gzfile()
complains about it and Cygwin's gzip cannot decompress it. I tried the
following
repos <- "https://cran.rstudio.com"
v <- "3.2"
pkgs.gz <- paste(sep="/", repos, "bin/windows/contrib", v,
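A rough way to reproduce that check (an assumed continuation of the
truncated snippet above, not the poster's exact code): fetch the file in
binary mode and try to read it back through gzfile().
repos   <- "https://cran.rstudio.com"
v       <- "3.2"
pkgs.gz <- paste(sep = "/", repos, "bin/windows/contrib", v, "PACKAGES.gz")
tmp     <- tempfile(fileext = ".gz")
download.file(pkgs.gz, tmp, mode = "wb")   # binary mode matters on Windows
con <- gzfile(tmp, open = "rt")
readLines(con, n = 5)                      # an error here suggests a corrupt file
close(con)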
2016 Jun 30
9
[PATCH net-next V3 0/6] switch to use tx skb array in tun
Hi all:
This series tries to switch to use skb array in tun. This is used to
eliminate the spinlock contention between producer and consumer. The
conversion was straightforward: just introduce a tx skb array and use
it instead of sk_receive_queue.
A minor issue is to keep the tx_queue_len behaviour, since tun used to
use it for the length of sk_receive_queue. This is done through:
- add the
2016 Jun 30
10
[PATCH net-next V4 0/6] switch to use tx skb array in tun
Hi all:
This series tries to switch to use skb array in tun. This is used to
eliminate the spinlock contention between producer and consumer. The
conversion was straightforward: just introduce a tx skb array and use
it instead of sk_receive_queue.
A minor issue is to keep the tx_queue_len behaviour, since tun used to
use it for the length of sk_receive_queue. This is done through:
- add the
2017 Jan 06
5
[PATCH V4 net-next 0/3] vhost_net tx batching
Hi:
This series tries to implement tx batching support for vhost. This was
done by using MSG_MORE as a hint for the underlying socket. The backend
(e.g. tap) can then batch the packets temporarily in a list and
submit them all once the number of batched packets exceeds a limit.
Tests show an obvious improvement on guest pktgen over
mlx4 (noqueue) on the host:
Mpps -+%
2009 Apr 02
7
[Lguest] [PATCH 4/5] lguest: use KVM hypercalls
Fri, 27 03 2009 at 10:22 +1030, Rusty Russell wrote:
> From: Matias Zabaljauregui <zabaljauregui at gmail.com>
>
> Impact: cleanup
>
> This patch allows us to use KVM hypercalls
Something has broken in relation to this change. I'm not sure whether it is
this change itself or one that follows, but I get the following error when
using lguest:
lguest: unhandled trap 6 at 0x418726
2009 Apr 16
1
[1/2] tun: Only free a netdev when all tun descriptors are closed
On Thu, Apr 16, 2009 at 01:08:18AM -0000, Herbert Xu wrote:
> On Wed, Apr 15, 2009 at 10:38:34PM +0800, Herbert Xu wrote:
> >
> > So how about this? We replace the dev destructor with our own that
> > doesn't immediately call free_netdev. We only call free_netdev once
> > all tun fd's attached to the device have been closed.
>
> Here's the patch.