similar to: Performance Degradation when copying >1500 files to mac

Displaying 20 results from an estimated 2000 matches similar to: "Performance Degradation when copying >1500 files to mac"

2020 May 06
2
Nodes in CTDB Cluster don't release recovery lock
Hello all, First of all, apologies if this isn't the right location for this question, I couldn't find a CTDB specific mailing list or IRC so I figured the general one would be appropriate. Please let me know if this question is better placed elsewhere. I'm trying to test clustered samba and have a two node CTDB setup (Following the guide here:
2020 Sep 29
2
[CTDB] "use mmap = no" Causes winbind to fail
Hello all, I'm currently running samba on debian buster, version 2:4.9.5+dfsg-5, with lustre 2.13.0. The problem I'm experiencing is pretty straightforward: if I set "use mmap = no" in my global config (as suggested by the wiki), winbind fails to start and the cluster grinds to a halt. Here are the error messages from systemd: Sep 28 22:14:17 tenzin systemd[1]: Starting Samba
2020 May 07
1
Nodes in CTDB Cluster don't release recovery lock
Hello all, Thanks for the input. I opened up the firewall ports and tested connectivity with ctdb's ping, to no avail. I did, however, fix the problem. I must have missed the section of the guide outlining the importance of the nodes file: it seems the issue was that machine A's nodes file was in reverse order compared to B's. After rectifying that, the cluster came up without issue,
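The fix above hinges on a constraint worth spelling out: CTDB assigns node numbers by *position* in the nodes file, so every node must list the private addresses in the same order. A minimal sketch in Python, purely for illustration (the IPs below are hypothetical):

```python
# CTDB derives each node's number from its line position in /etc/ctdb/nodes,
# so the file must be identical -- same addresses, same order -- on every node.
nodes_machine_a = ["192.168.1.10", "192.168.1.11"]
nodes_machine_b = ["192.168.1.11", "192.168.1.10"]  # reversed -> broken cluster

def consistent(*node_lists):
    """True only if every node sees the same ordered list."""
    first = node_lists[0]
    return all(lst == first for lst in node_lists)

print(consistent(nodes_machine_a, nodes_machine_b))  # False
```

In practice the check is just diffing the nodes file across machines; the sketch only makes the ordering requirement explicit.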
2020 Aug 12
5
Using SSSD + AD with Samba seems to require Winbind be running
Hi all, Configuration information right off the bat: Debian Buster 10.5 and Samba 2:4.9.5+dfsg-5+deb10u1. Testparm is at the bottom I'm running into some interesting behavior on a server I've configured to use SSSD to bind to the AD domain. I've successfully bound using "net ads" and can get tickets and so on, and have samba configured to use kerberos through sssd.
2019 Sep 10
2
Calling a LAPACK subroutine from R
Hello R-helpers! I am trying to call a LAPACK subroutine directly from my R code using .Fortran(), but R cannot find the symbol name. How can I register/load the appropriate library? > ### AR(1) Precision matrix > n <- 4L > phi <- 0.64 > AB <- matrix(0, 2, n) > AB[1, ] <- c(1, rep(1 + phi^2, n-2), 1) > AB[2, -n] <- -phi > round(AB, 3) [,1] [,2] [,3]
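For reference, the banded storage AB built in the snippet packs the diagonals of the n = 4 AR(1) precision matrix, which written out in full (up to the innovation-variance scale factor, which the snippet omits) is:

```latex
Q = \begin{pmatrix}
1 & -\phi & 0 & 0 \\
-\phi & 1+\phi^2 & -\phi & 0 \\
0 & -\phi & 1+\phi^2 & -\phi \\
0 & 0 & -\phi & 1
\end{pmatrix},
\qquad \phi = 0.64 .
```

The first row of AB holds the main diagonal (1, 1+φ², …, 1+φ², 1) and the second row holds the sub/super-diagonal −φ, which is the symmetric-banded layout LAPACK routines such as the Cholesky factorizer for banded matrices expect.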
2020 Apr 28
2
mclapply returns NULLs on MacOS when running GAM
Dear R-devel, I am experiencing issues with running GAM models using mclapply, it fails to return any values if the data input becomes large. For example here the code runs fine with a df of 100 rows, but fails at 1000. library(mgcv) library(parallel) > df <- data.frame( + x = 1:100, + y = 1:100 + ) > > mclapply(1:2, function(i, df) { + fit <- gam(y ~ s(x, bs =
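For comparison, the same pattern sketched in Python (a hedged stand-in: the gam fit is replaced by a trivial computation, and all names are invented). The thread's eventual diagnosis is that fork()-based parallelism misbehaves under RStudio on macOS; Python's multiprocessing exposes the analogous choice between fork and spawn start methods:

```python
from multiprocessing import get_context

def fit(i, n=1000):
    """Stand-in for the per-task gam() fit: returns a trivial 'coefficient'."""
    xs = range(n)
    return sum(xs) / n

if __name__ == "__main__":
    # "spawn" starts fresh workers instead of fork()ing the parent, which
    # sidesteps the class of problem the thread attributes to forking
    # inside a threaded/GUI host process such as RStudio.
    with get_context("spawn").Pool(2) as pool:
        results = pool.starmap(fit, [(i,) for i in range(2)])
    assert all(r is not None for r in results)  # no silent NULL-style losses
```

mclapply has no spawn equivalent (it is fork-only), which is why the thread's workaround is to run outside RStudio or use a cluster-based backend instead.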
2019 Aug 04
1
iconv: embedded nulls when converting to UTF-16
R-devel community: I have encountered some unexpected behavior using iconv, which may be the source of errors I am getting when connecting to a UTF-16-encoded SQL Server database. A simple example is below. When researching this problem, I found r-devel reports of the same problem in threads from June 2010 and February 2016, and that bug #16738 was posted to Bugzilla as a result. However, I
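The "embedded nulls" are inherent to the encoding rather than an iconv bug: UTF-16 represents every ASCII character in two bytes, one of which is 0x00. A small Python illustration of the byte layout (Python instead of R, purely to show the bytes):

```python
# UTF-16 (little-endian, no BOM) encodes each ASCII character as <byte, 0x00>.
s = "SQL"
encoded = s.encode("utf-16-le")
print(encoded)  # b'S\x00Q\x00L\x00'

# Anything that treats this buffer as a NUL-terminated C string truncates
# at the first zero byte -- a plausible source of the reported errors.
truncated = encoded.split(b"\x00", 1)[0]
print(truncated)  # b'S'
```

So a UTF-16 buffer can only be passed through interfaces that carry an explicit length, never through C-string APIs.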
2004 May 13
3
EXT3 performance on Large (multi-TeraByte) RAID
Has anyone experienced a significant degradation in ext3 performance when using it on a multi-terabyte RAID? As part of an experimental setup, I hooked up three 300GB drives and made an ext3 RAID5 out of them, using the entire space on each drive, and started throwing in a large number of files in the 3KB to 50KB size range. Then, I deleted the RAID and created a new one, but this time, I used
2020 Apr 28
2
mclapply returns NULLs on MacOS when running GAM
Yes, I am running on RStudio 1.2.5033. I was also running this code without error on Ubuntu in RStudio. Checking again on the terminal, it does indeed work fine even with large data.frames. Any idea as to what interaction between RStudio and mclapply causes this? Thanks, Shian On 28 Apr 2020, at 7:29 pm, Simon Urbanek <simon.urbanek at R-project.org>
2019 Sep 11
4
Fw: Calling a LAPACK subroutine from R
Sorry for cross-posting, but I realized my question might be more appropriate for r-devel... Thank you, Giovanni ________________________________________ From: R-help <r-help-bounces at r-project.org> on behalf of Giovanni Petris <gpetris at uark.edu> Sent: Tuesday, September 10, 2019 16:44 To: r-help at r-project.org Subject: [R] Calling a LAPACK subroutine from R Hello R-helpers!
2005 May 20
1
Degradation model
Dear list, I have a degradation model: dX/dt = -I(X(t)) * ( k1*X(t) )/( X(t)+k2 ), where X(t) is the concentration at time t, and k1 and k2 are parameters that I want to estimate. I(X) is a known inhibitor function. My question is whether this is implemented or easily computed in any R package. I have searched the archives but without luck. Any help or comments on this would be appreciated, Klaus
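Absent a ready-made package, the model is straightforward to integrate numerically. A sketch in Python rather than R (the inhibitor function and every parameter value below are invented for illustration; the poster's real I(X) and data are not in the message):

```python
def inhibitor(x):
    """Hypothetical inhibitor function I(X); the poster's real one is not given."""
    return 1.0 / (1.0 + 0.1 * x)

def simulate(x0=10.0, k1=1.0, k2=2.0, dt=0.01, steps=1000):
    """Forward-Euler integration of dX/dt = -I(X) * k1*X / (X + k2)."""
    x = x0
    for _ in range(steps):
        x += dt * (-inhibitor(x) * (k1 * x) / (x + k2))
    return x

print(simulate())  # concentration after t = 10: decays from 10 toward 0
```

For parameter estimation, the same right-hand side can be handed to an ODE solver inside a least-squares objective, which is the usual route in R via deSolve plus optim or nls.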
2017 May 22
2
network performance degradation in virtio_net in 4.12-rc
Hi, I see severe network performance degradation with kernels 4.12-rc1 and 4.12-rc2 in the virtio network driver. The download rate drops to about 100kB/s. I bisected it, and it is caused by patch d85b758f72b05a774045545f24d70980e3e9aac4 ("virtio_net: fix support for small rings"). When I revert this patch, the problem goes away. The host is Debian Jessie with kernel 4.4.62,
2009 Jul 06
1
Performance degradation on multi-processor system
Hi, We are seeing performance degradation when running the same R script in multiple instances of R on a multi-processor system. We are a bit surprised by this because we figured that each instance of R is running in its own processor, and therefore running a second, third or fourth instance should not affect the performance of the first instance. Here's a test script that exhibits this
2005 Jul 26
0
Call quality degradation after time
Thanks for the reply, Adam. If this is the case, it would seem to me (because the degradation happens only after a period of time, and quite suddenly) that the issue lies with Digium's implementation of g729. As an interesting note, I had the same problems using ulaw -> ulaw over the local network (from internal phone to internal phone) with a much shorter period of 'good
2006 Mar 20
0
print server degradation
Has anybody managed to run 1,000 print queues on a Linux+Samba+CUPS production server without degradation? Thanks for any reply, Bruno Gomes Pessanha
2014 Mar 11
0
VGA passthrough with Xen 4.3 and xl toolstack - performance degradation resolved?
Hello, Hope you can help. A while ago users noted performance degradation or dom0 stability issues when shutting down an HVM guest that uses VGA passthrough (e.g. Windows 7) and booting the guest up again. A workaround was to eject the graphics card within Windows before shutting down the guest. This process is described here: http://blog.ktz.me/?p=219. I tried to follow those instructions, but
2020 Feb 26
0
Quality degradation with 1.3.1 when using FEC
Hi, I noticed that in some scenarios, Opus 1.2.1 produces better quality than 1.3.1 does. In the use case here, I'm enabling FEC and "transcode" signals from telephony networks (PCMU, 8kHz sampling) to VoIP (48kHz here). In this case, Opus always produced some leakage/ringing above 4kHz but for 1.3.1, these artifacts became worse. The small script below can be used to demonstrate
2020 Feb 21
0
Quality degradation with 1.3.1 when using FEC
Hi, I noticed that in some scenarios, Opus 1.2.1 produces better quality than 1.3.1 does. In the use case here, I'm enabling FEC and "transcode" signals from telephony networks (PCMU, 8kHz sampling) to VoIP (48kHz here). In this case, Opus always produced some leakage/ringing above 4kHz but for 1.3.1, these artifacts became worse. The small script below can be used to demonstrate