Displaying 20 results from an estimated 15188 matches for "duplicable".

2018 May 12 (2 replies): Latest CentOS does not boot, Proliant ML330 G6
> On 11.5.2018 at 22.52, m.roth at 5-cent.us wrote:
>
> Jari Fredriksson wrote:
>> Hello all.
>>
>> I just upgraded to the latest and tried to reboot: kernel panic and dead
>> as a brick.
>>
>> Luckily GRUB still works and booting to the next option in the boot menu
>> succeeds.
>>
>> How can this be? This OS is assumed to be

2012 Jul 23 (1 reply): duplicated() variation that goes both ways to capture all duplicates
Dear all
The trouble with the current duplicated() function in R is that it can
report duplicates while searching fromFirst _or_ fromLast, but not
both ways. Often users will want to identify and extract all the
copies of the item that has duplicates, not only the duplicates
themselves.
To take the example from the man page:
> data(iris)
> iris[duplicated(iris), ] ##duplicates while
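A minimal sketch of the usual workaround (not from the thread itself): run
duplicated() in both directions and OR the results, so every copy is flagged,
including the first occurrence.

x <- c(9, 7, 9, 3, 7)
# TRUE for every element that has at least one copy anywhere:
all_copies <- duplicated(x) | duplicated(x, fromLast = TRUE)
x[all_copies]
# [1] 9 7 9 7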

2009 Jun 12 (6 replies): Duplicate packets when using aggregate datalinks on bge
I opened a bug report earlier today but it doesn't seem to have been
added to the bugs database. I'm posting here in case one of the
Crossbow developers might see it and confirm this behavior.
Description
Duplicate packets are generated whenever an aggregate is introduced
into the network configuration. We've ruled out switch ports and
physical bge interfaces as

2011 Apr 08 (5 replies): duplicates() function
I need a function which is similar to duplicated(), but instead of
returning TRUE/FALSE, returns indices of which element was duplicated.
That is,
> x <- c(9,7,9,3,7)
> duplicated(x)
[1] FALSE FALSE TRUE FALSE TRUE
> duplicates(x)
[1] NA NA 1 NA 2
(so that I know that element 3 is a duplicate of element 1, and element
5 is a duplicate of element 2, whereas the others were
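One possible implementation (a sketch; only the name duplicates() comes from
the post) uses match() to find each element's first occurrence:

duplicates <- function(x) {
  first <- match(x, x)               # index of the first occurrence of each value
  out <- rep(NA_integer_, length(x))
  dup <- duplicated(x)               # TRUE only for repeats, not first occurrences
  out[dup] <- first[dup]
  out
}
duplicates(c(9, 7, 9, 3, 7))
# [1] NA NA  1 NA  2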

2002 Apr 22 (10 replies): How To Fix Duplication Block Error?
Hi there,
I am very new to Linux and ran into the filesystem problem described
below. I tried to use 'fsck' and searched for support documents, but
failed. Your hints and help are very much appreciated.
Millions of thanks,
Annie
The error message is (sorry for its length, I just want to make it clear):
====================================================================

2011 Jan 20 (6 replies): Identify duplicate numbers and to increase a value
Hi everybody.
I want to identify duplicate numbers and add an extra 0.01 for each
occurrence of a duplicated value.
Example:
x = c(1, 2, 3, 5, 6, 2, 8, 9, 2, 2)
I want to do this:
1, 2 + 0.01, 3, 5, 6, 2 + 0.02, 8, 9, 2 + 0.03, 2 + 0.04
I am trying to get something like this:
1, 2.01, 3, 5, 6, 2.02, 8, 9, 2.03, 2.04
Actually, I only know how to identify the duplicated numbers:
rbind(x, duplicated(x) |
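A sketch of one way to get that result (assuming the increment should grow
with each occurrence of a duplicated value, as in the example): number the
occurrences with ave() and add 0.01 per occurrence, leaving unique values
untouched.

x <- c(1, 2, 3, 5, 6, 2, 8, 9, 2, 2)
occ <- ave(x, x, FUN = seq_along)                      # 1st, 2nd, 3rd ... occurrence
dup <- duplicated(x) | duplicated(x, fromLast = TRUE)  # TRUE for any duplicated value
x + ifelse(dup, occ * 0.01, 0)
# [1] 1.00 2.01 3.00 5.00 6.00 2.02 8.00 9.00 2.03 2.04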

2010 Jun 08 (2 replies): duplicated() and unique() problems
Hi everybody
I have found something (for me at least) strange with duplicated(). I will
first provide a replicable example of a certain kind of behaviour that I
find odd and then give a sample of unexpected results from my own data. I
hope someone can help me understand this.
Consider the following
# this works as expected
ex=sample(1:20, replace=TRUE)
ex
duplicated(ex)
ex=sort(ex)
ex

2018 May 13 (0 replies): Latest CentOS does not boot, Proliant ML330 G6
> On 12.5.2018 at 11.39, Jari Fredriksson <jarif at iki.fi> wrote:
>
>
>
>> On 11.5.2018 at 22.52, m.roth at 5-cent.us wrote:
>>
>> Jari Fredriksson wrote:
>>> Hello all.
>>>
>>> I just upgraded to the latest and tried to reboot: kernel panic and dead
>>> as a brick.
>>>
>>> Luckily GRUB still

2018 Apr 23 (3 replies): dovecot sieve duplicates detection
On 23/04/18 14:18, Stephan Bosch wrote:
>
>
> On 11-4-2018 at 23:58, André Rodier wrote:
>> Hello,
>>
>> I have tested the sieve duplicate script with success so far, but I have
>> a question.
>
> Sieve duplicate script? You mean the Sieve duplicate extension (RFC 7352)?
>
>> I would like to know if the "duplicate" sieve flag in

2012 Aug 03 (3 replies): all duplicated wanted
Hi,
Has anyone been able to figure out how to print all duplicated observations?
I have a dataset, with patients ID, and other lab records.
Some patients have multiple lab records, but duplicated() on the ID only
shows me the duplicates, not the original observations.
How can I print both the original one and the duplicates?
Thanks
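A sketch of the standard answer (the data frame and column names here are
hypothetical, made up for illustration): select every row whose ID appears
more than once, originals included.

# hypothetical data: two lab records for patient 101
df <- data.frame(ID  = c(101, 102, 101, 103),
                 lab = c(5.1, 4.8, 5.4, 6.0))
# every row whose ID occurs more than once, original included:
df[df$ID %in% df$ID[duplicated(df$ID)], ]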

2009 Mar 30 (2 replies): which rows are duplicates?
I would like to know which rows are duplicates of each other, not
simply that a row is a duplicate of another row. In the following
example rows 1 and 3 are duplicates.
> x <- c(1,3,1)
> y <- c(2,4,2)
> z <- c(3,4,3)
> data <- data.frame(x,y,z)
x y z
1 1 2 3
2 3 4 4
3 1 2 3
I can't figure out how to get R to tell me that observation 1 and 3
are the same.
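One sketch of an answer: build a key string per row and match() each row to
the first identical one, so rows sharing a group number are copies of each
other.

x <- c(1, 3, 1); y <- c(2, 4, 2); z <- c(3, 4, 3)
data <- data.frame(x, y, z)
key <- do.call(paste, c(data, sep = "\r"))  # one string per row
group <- match(key, key)                    # first row with the same content
group
# [1] 1 2 1    -> rows 1 and 3 are the same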

2016 Oct 27 (8 replies): (RFC) Encoding code duplication factor in discriminator
Motivation:
Many optimizations duplicate code. E.g. loop unroller duplicates the loop
body, GVN duplicates computation, etc. The duplicated code will share the
same debug info with the original code. For SamplePGO, the debug info is
used to present the profile. Code duplication will affect profile accuracy.
Taking loop unrolling for example:
#1 foo();
#2 for (i = 0; i < N; i++) {
#3 bar();

2012 Aug 13 (5 replies): How can I get the Ids with Duplicated key and corresponding Ids with original key?
In the following example, Id 4 is a duplicate of Id 1. I want both Ids
(the duplicate and the Id it duplicates). Can anyone help?
df <- data.frame(
"Publication" = c(1, 2, 3, 1, 4, 5, 2, 3),
"Reference" = c("a", "b", "c", "a", "d", "e", "b", "c"),
"Id"= c(1, 2, 3, 4,

2016 Oct 27 (2 replies): (RFC) Encoding code duplication factor in discriminator
The impact on debug_line is actually not small. I have only implemented
part 1 (encoding the duplication factor) for loop unrolling and loop
vectorization. The debug_line size overhead for an "-O2 -g1" binary of
the SPEC CPU C/C++ benchmarks:
433.milc 23.59%
444.namd 6.25%
447.dealII 8.43%
450.soplex 2.41%
453.povray 5.40%
470.lbm 0.00%
482.sphinx3 7.10%
400.perlbench 2.77%
401.bzip2 9.62%
403.gcc

2011 Feb 28 (3 replies): Problems using unique function and !duplicated
Hi, I am trying to remove observations that are duplicated on two or more
variables in a small R data.frame, reproducing what a SAS Proc Sort with
Nodupkey does, for those familiar with SAS.
Here's my example data :
test <- read.csv("test.csv", sep=",", as.is=TRUE)
> test
date var1 var2 num1 num2
1 28/01/11 a 1 213 71
2
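A minimal sketch of a Proc Sort Nodupkey equivalent, assuming the key
variables are var1 and var2 as in the sample data: sort, then drop rows whose
key duplicates an earlier one.

test_sorted <- test[order(test$var1, test$var2), ]
test_nodup  <- test_sorted[!duplicated(test_sorted[c("var1", "var2")]), ]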

2019 Jul 15 (1 reply): Sieve problem with duplicate and fileinto in the same set of rules
Hi there,
on my mail server (Postfix, Dovecot 2.2.27 on Debian 9) I have automatic
forwarding (with sender_bcc_maps in Postfix) for all the emails sent via
SMTP from the same server; these are then put in the Sent folder by a
sieve rule.
This way, however, when a user sends an e-mail to himself, both copies
end up in the Sent folder, which is not good.
To resolve, I tried using

2016 Oct 27 (0 replies): (RFC) Encoding code duplication factor in discriminator
Do you have an estimate of the debug_line size increase? I guess it will be
small.
David
On Thu, Oct 27, 2016 at 11:39 AM, Dehao Chen <dehao at google.com> wrote:
> Motivation:
> Many optimizations duplicate code. E.g. loop unroller duplicates the loop
> body, GVN duplicates computation, etc. The duplicated code will share the
> same debug info with the original code. For

2018 Apr 11 (2 replies): dovecot sieve duplicates detection
Hello,
I have tested the sieve duplicate script with success so far, but I have
a question.
I would like to know if the "duplicate" sieve flag in Dovecot is global
to all folders, or specific to one folder only.
For instance, if I copy an email from one folder to another, and I have
a discard action on duplicate email, will this action (in this case,
discard) be applied or not?
If

2012 Jul 25 (4 replies): Simple question on finding duplicates
I'm trying to find duplicate values in a column of a data frame. For
example, dataframe (a) below has two 3's. I would like to mark each value of
each row as either not being a duplicate of the one before (0), or as a
duplicate (1) - for example, as in dataframe (b). In SPSS, I would simply
compare each value to its "lagged" value, but I can't figure out
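A sketch of the lagged comparison in R (dataframes (a) and (b) are not shown
above, so the vector here is hypothetical):

x <- c(1, 2, 3, 3, 5)
# 1 when a value equals the one directly before it, else 0
# (head(x, -1) is the "lagged" copy of the column):
c(0L, as.integer(x[-1] == head(x, -1)))
# [1] 0 0 0 1 0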