search for: 110k

Displaying 20 results from an estimated 45 matches for "110k".

2024 Dec 11
1
Cores hang when calling mcapply
...llo Thomas, Consider that the primary bottleneck may be tied to memory usage and the complexity of pivoting extremely large datasets into wide formats with tens of thousands of unique values per column. Such large column expansions inherently stress both memory and CPU, and splitting into 110k separate data frames before pivoting and combining them again is likely causing resource overhead and system instability. Perhaps evaluate whether the presence/absence transformation can be done in a more memory-efficient manner without pivoting all at once. Since you are dealing with extremely large...
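A minimal sketch of the suggestion above: compute presence/absence in one grouped pass instead of splitting into 110k frames and pivoting each. The thread itself uses R; this illustration uses pandas, and the data and column names are invented for the example:

```python
import pandas as pd

# Invented long-format data: one row per (ID_Key, value) observation.
df = pd.DataFrame({
    "ID_Key": ["a", "a", "b", "c", "c", "c"],
    "column1": ["x", "y", "x", "y", "z", "y"],
})

# One aggregation pass builds the presence/absence matrix directly,
# avoiding a per-key split followed by a wide pivot.
presence = (pd.crosstab(df["ID_Key"], df["column1"]) > 0).astype(int)
```

crosstab performs a single grouped count; thresholding it at zero yields the 0/1 presence matrix without ever holding 110k intermediate frames in memory.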
2009 Jul 30
1
sieve filtering setup
I'm looking at implementing sieve in my environment. Software is: dovecot-1.2-sieve revision 1022:3c9a22c28156 dovecot-1.2 revision 9269:a303bb82c1c9 AIX 5.3 with sendmail mta using prescribed deliver lda. I have a few questions. I'll have 110k sieve files(1 for each user). Does sieve read the file each time a new message is accepted by sendmail? Are there any measurements on cpu load for sieve filters? Thanks, Jonathan -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pk...
2018 Nov 02
2
[PATCH 0/1] vhost: add vhost_blk driver
...> 5 755k 697k 136k 128k 737k 693k 579k
> 6 887k 808k 131k 120k 830k 782k 640k
> 7 1004k 926k 126k 131k 926k 863k 693k
> 8 1099k 1015k 117k 115k 1001k 931k 712k
> 9 1194k 1119k 115k 111k 1055k 991k 711k
> 10 1278k 1207k 109k 114k 1130k 1046k 695k
> 11 1345k 1280k 110k 108k 1119k 1091k 663k
> 12 1411k 1356k 104k 106k 1201k 1142k 629k
> 13 1466k 1423k 106k 106k 1260k 1170k 607k
> 14 1517k 1486k 103k 106k 1296k 1179k 589k
> 15 1552k 1543k 102k 102k 1322k 1191k 571k
> 16 1480k 1506k 101k 102k 1346k 1202k 566k
>
> Vitaly Mayatskikh (1):
> A...
2015 Feb 03
2
Very slow disk I/O
On Mon, Feb 2, 2015 at 11:37 PM, Jatin Davey <jashokda at cisco.com> wrote: > > I will test and get the I/O speed results with the following and see what > works best with the given workload: > > Create 5 volumes each with 150 GB in size for the 5 VMs that i will be > running on the server > Create 1 volume with 600GB in size for the 5 VMs that i will be running on >
2018 Sep 04
3
authentication performance with 4.7.6 -> 4.7.8 upgrade (was: Re: gencache.tdb size and cache flush)
On Wed, 2018-08-29 at 15:36 +0200, Peter Eriksson via samba wrote: > For what it’s worth you are not alone in seeing similar problems with Samba and gencache. > > Our site has some 110K users (university with staff & students (including former ones), and currently around 2000 active (SMB) clients connecting to 5 different Samba servers (around 400-500 clients per server). When we previously just let things “run” gencache.tdb would grow forever and authentication login performa...
2018 Nov 05
2
[PATCH 0/1] vhost: add vhost_blk driver
...> 5 755k 697k 136k 128k 737k 693k 579k
> 6 887k 808k 131k 120k 830k 782k 640k
> 7 1004k 926k 126k 131k 926k 863k 693k
> 8 1099k 1015k 117k 115k 1001k 931k 712k
> 9 1194k 1119k 115k 111k 1055k 991k 711k
> 10 1278k 1207k 109k 114k 1130k 1046k 695k
> 11 1345k 1280k 110k 108k 1119k 1091k 663k
> 12 1411k 1356k 104k 106k 1201k 1142k 629k
> 13 1466k 1423k 106k 106k 1260k 1170k 607k
> 14 1517k 1486k 103k 106k 1296k 1179k 589k
> 15 1552k 1543k 102k 102k 1322k 1191k 571k
> 16 1480k 1506k 101k 102k 1346k 1202k 566k
>
> Vitaly Mayatskikh (1):
> A...
2024 Dec 11
1
Cores hang when calling mcapply
...> > Consider that the primary bottleneck may be tied to memory usage and the complexity of pivoting extremely large datasets into wide formats with tens of thousands of unique values per column. Extremely large expansions of columns inherently stress both memory and CPU, and splitting into 110k separate data frames before pivoting and combining them again is likely causing resource overhead and system instability. > > Perhaps, evaluate if the presence/absence transformation can be done in a more memory-efficient manner without pivoting all at once. Since you are dealing with extre...
2024 Dec 11
2
Cores hang when calling mcapply
...) return(df) }

sum_group_function <- function(df) {
  df <- df |>
    group_by(ID_Key) |>
    summarise(across(c(starts_with("column1_name_"), starts_with("column2_name_"),), ~ sum(.x, na.rm = TRUE))) |>
    ungroup()
  return(df)
}

and splitting up the data into a list of 110k individual dataframes based on Key_ID

temp <- open_dataset(
  sources = input_files,
  format = 'csv',
  unify_schema = TRUE,
  col_types = schema(
    "ID_Key" = string(),
    "column1" = string(),
    "column1" = string()
  )
) |> a...
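The grouped sum above can usually be computed without the 110k-way split: stream the input in chunks and fold partial per-group sums. A hedged sketch in pandas (the original code is R with arrow; the CSV contents and column names here are invented):

```python
import io
import pandas as pd

# Invented CSV input; the thread reads many CSV files via arrow::open_dataset.
csv = io.StringIO("ID_Key,column1\na,1\nb,2\na,3\nc,4\nb,5\n")

# Stream in chunks and accumulate per-group partial sums, instead of
# materializing one data frame per ID_Key before aggregating.
totals = None
for chunk in pd.read_csv(csv, chunksize=2):
    part = chunk.groupby("ID_Key")["column1"].sum()
    totals = part if totals is None else totals.add(part, fill_value=0)

totals = totals.astype(int)
```

Each chunk contributes only its own partial sums, so peak memory is bounded by the chunk size plus one row per distinct ID_Key, regardless of how many keys there are.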
2018 Aug 29
6
gencache.tdb size and cache flush
Hi all, I have a midsize AD domain with some 50k users but only 100 workstations joined. Sometimes I find server CPU throttling at 100%. In order to let it drop and have smooth performance I delete the cache:
systemctl stop samba
net cache flush
systemctl start samba
First of all, is a samba stop needed to flush the cache? Even if cache flush does the job to restore performance, I am clueless
2006 Aug 04
11
Assertion raised during zfs share?
....org matches Solaris 10 U2, then it looks like it's associated with a popen of /usr/sbin/share. Can anyone shed any light on this? Thanks, -- Jim C
# zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
SYS           83K   163M  30.5K  /SYS
export       110K  72.8G  25.5K  /export
export/home 24.5K  72.8G  24.5K  /export/home
# zpool list
NAME    SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
SYS    195M   90K   195M   0%  ONLINE   -
export  74G  114K  74.0G   0%  ONLINE   -
# z...
2018 Aug 29
2
gencache.tdb size and cache flush
On Wed, Aug 29, 2018 at 03:36:23PM +0200, Peter Eriksson via samba wrote: > For what it’s worth you are not alone in seeing similar problems with Samba and gencache. > > Our site has some 110K users (university with staff & students (including former ones), and currently around 2000 active (SMB) clients connecting to 5 different Samba servers (around 400-500 clients per server). When we previously just let things “run” gencache.tdb would grow forever and authentication login performa...
2024 Dec 11
1
Cores hang when calling mcapply
...> > Consider that the primary bottleneck may be tied to memory usage and the complexity of pivoting extremely large datasets into wide formats with tens of thousands of unique values per column. Extremely large expansions of columns inherently stress both memory and CPU, and splitting into 110k separate data frames before pivoting and combining them again is likely causing resource overhead and system instability. > > > > Perhaps, evaluate if the presence/absence transformation can be done in a more memory-efficient manner without pivoting all at once. Since you are dealing...
2006 Dec 29
3
production-izing a popular site
Lets say you have a site that is serving, oh, around 100k unique visitors a day (plain ole' browser requests) - plus probably ~ 40k uniques to feeds. Assume this site is 90% read like most of the web, so the traffic looks like your typical news or portal site. There are two web boxes behind a hardware load balancer, each doing apache 2.2.3 -> mongrel_proxy_balancer -> mongrel
2009 Apr 16
2
Weird performance problem
...0 0| 0 6016k| 181k 176k| 0 0 |2169 5996
missed 50 ticks
4 2 91 1 0 0| 28k 8744k| 216k 214k| 0 0 |2159 5438
missed 37 ticks
1 1 98 0 0 0| 0 2632k| 93k 91k| 0 0 | 983 1381
missed 34 ticks
1 1 98 1 0 0| 0 5624k| 113k 110k| 0 0 |1569 2643
missed 52 ticks
1 1 98 1 0 0| 0 2432k| 29k 28k| 0 0 | 679 647
missed 12 ticks
0 0 100 0 0 0| 0 0 | 60B 374B| 0 0 | 13 15
2 3 94 0 0 0| 0 1872k| 209k 210k| 0 0 |1375 3590
missed 30 ticks...
2018 Nov 05
0
[PATCH 0/1] vhost: parallel virtqueue handling
...al
> # virtio-blk
> # vhost-blk
>
> 1 171k 148k 195k
> 2 328k 249k 349k
> 3 479k 179k 501k
> 4 622k 143k 620k
> 5 755k 136k 737k
> 6 887k 131k 830k
> 7 1004k 126k 926k
> 8 1099k 117k 1001k
> 9 1194k 115k 1055k
> 10 1278k 109k 1130k
> 11 1345k 110k 1119k
> 12 1411k 104k 1201k
> 13 1466k 106k 1260k
> 14 1517k 103k 1296k
> 15 1552k 102k 1322k
> 16 1480k 101k 1346k
>
> Vitaly Mayatskikh (1):
> vhost: add per-vq worker thread
>
> drivers/vhost/vhost.c | 123 +++++++++++++++++++++++++++++++----------- > drive...
2015 Feb 03
0
Very slow disk I/O
Lol - spinning disks? Really? SSD is down to like 50cents a gig. And they have 1TB disks... slow disks = you get what you deserve... welcome to 2015. Autolacing shoes, self drying jackets, hoverboards - oh, yeah, and 110k IOPS 1TB SamSung Pro 850 SSD Drives for $449 on NewEgg. dumbass -----Original Message----- From: centos-bounces at centos.org [mailto:centos-bounces at centos.org] On Behalf Of Les Mikesell Sent: Tuesday, February 03, 2015 12:42 AM To: CentOS mailing list Subject: Re: [CentOS] Very slow disk I/O...
2018 Aug 29
0
gencache.tdb size and cache flush
For what it’s worth you are not alone in seeing similar problems with Samba and gencache. Our site has some 110K users (university with staff & students (including former ones), and currently around 2000 active (SMB) clients connecting to 5 different Samba servers (around 400-500 clients per server). When we previously just let things “run” gencache.tdb would grow forever and authentication login performa...