Displaying 20 results from an estimated 20 matches for "8cores".
2020 May 12
4
CentOS7 and NFS
Hi,
I need some help with NFSv4 setup/tuning. I have a dedicated nfs server
(2 x E5-2620 8cores/16 threads each, 64GB RAM, 1x10Gb ethernet and 16x
8TB HDD) used by two servers and a small cluster (400 cores). All the
servers are running CentOS 7, the cluster is running CentOS 6.
From time to time on the server I get:
kernel: NFSD: client xxx.xxx.xxx.xxx testing state ID with
incorrect cl...
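A minimal diagnostic sketch related to the question above (not from the thread; it assumes the standard Linux NFS-server proc files /proc/fs/nfsd/threads and /proc/net/rpc/nfsd are present, and that the thread count is set via RPCNFSDCOUNT in /etc/sysconfig/nfs, as is usual on CentOS 7):

#!/usr/bin/env python3
# Hypothetical helper: print the current nfsd thread count and the kernel's
# NFS-server counters (the same data nfsstat reads) as a starting point
# before deciding whether to raise RPCNFSDCOUNT on the server.
from pathlib import Path

THREADS = Path("/proc/fs/nfsd/threads")   # number of running nfsd threads
RPC_STATS = Path("/proc/net/rpc/nfsd")    # per-operation server counters

def main() -> None:
    if not THREADS.exists():
        print("nfsd does not appear to be running on this host")
        return
    print(f"nfsd threads currently running: {THREADS.read_text().strip()}")
    # 'rc' = reply cache hits/misses, 'io' = bytes read/written, 'th' = thread stats
    for line in RPC_STATS.read_text().splitlines():
        if line.startswith(("rc ", "io ", "th ")):
            print(line)

if __name__ == "__main__":
    main()

The sketch only gathers numbers and changes nothing on the server; whether more threads actually help depends on what those counters show under load.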
2010 Oct 19
5
max utilization
hi,
--> I have an 8-core CPU, but my XenServer is using only one CPU. How can I
make use of all CPU cores?
--> The machine has 16G of RAM but domain0 has very little memory. Is there
any way to increase the memory for domain0?
[root@xenserver-DZONGRI ~]# cat /proc/meminfo
MemTotal: 574464 kB
MemFree: 111776 kB...
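For the domain0 questions above, a small sketch (not from the thread; it assumes only the standard /proc files inside domain0) that reports what domain0 actually sees, which is what the meminfo output reflects. On Xen, dom0 memory is typically capped with the dom0_mem= hypervisor boot parameter, so these numbers will not match the 16G of host RAM:

#!/usr/bin/env python3
# Hypothetical check: report the CPUs and memory visible inside domain0.
# These reflect hypervisor settings (e.g. the dom0_mem= boot parameter),
# not the physical host's 8 cores / 16G RAM.
import os

def meminfo_kb(key: str) -> int:
    """Return a value from /proc/meminfo (in kB) for the given key."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(key + ":"):
                return int(line.split()[1])
    raise KeyError(key)

if __name__ == "__main__":
    print(f"CPUs visible to domain0: {os.cpu_count()}")
    print(f"MemTotal: {meminfo_kb('MemTotal')} kB")
    print(f"MemFree:  {meminfo_kb('MemFree')} kB")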
2020 Jun 01
3
CentOS7 and NFS
On 13/05/2020 at 02:13, Orion Poplawski wrote:
> On 5/12/20 2:46 AM, Patrick Bégou wrote:
>> Hi,
>>
>> I need some help with NFSv4 setup/tuning. I have a dedicated nfs server
>> (2 x E5-2620 8cores/16 threads each, 64GB RAM, 1x10Gb ethernet and 16x
>> 8TB HDD) used by two servers and a small cluster (400 cores). All the
>> servers are running CentOS 7, the cluster is running CentOS 6.
>>
>> From time to time on the server I get:
>>
>> kernel: NFSD: client x...
2020 May 12
2
CentOS7 and NFS
On 12/05/2020 at 16:10, James Pearson wrote:
> Patrick Bégou wrote:
>>
>> Hi,
>>
>> I need some help with NFSv4 setup/tuning. I have a dedicated nfs server
>> (2 x E5-2620 8cores/16 threads each, 64GB RAM, 1x10Gb ethernet and 16x
>> 8TB HDD) used by two servers and a small cluster (400 cores). All the
>> servers are running CentOS 7, the cluster is running CentOS 6.
>>
>> From time to time on the server I get:
>>
>> kernel: NFSD: client x...
2020 May 13
2
CentOS7 and NFS
.../05/2020 at 07:32, Simon Matter via CentOS wrote:
>> On 12/05/2020 at 16:10, James Pearson wrote:
>>> Patrick Bégou wrote:
>>>> Hi,
>>>>
>>>> I need some help with NFSv4 setup/tuning. I have a dedicated nfs server
>>>> (2 x E5-2620 8cores/16 threads each, 64GB RAM, 1x10Gb ethernet and 16x
>>>> 8TB HDD) used by two servers and a small cluster (400 cores). All the
>>>> servers are running CentOS 7, the cluster is running CentOS 6.
>>>>
>>>> From time to time on the server I get:
>>>>...
2014 Jul 01
2
[PATCH v3 0/2] block: virtio-blk: support multi vq per virtio-blk
...d, iodepth=64, bs=4K, jobs=N) is run inside VM to
>> verify the improvement.
>>
>> I just created a small quad-core VM and ran fio inside the VM, and
>> num_queues of the virtio-blk device is set to 2, but it looks like the
>> improvement is still obvious. The host is a 2-socket, 8cores(16threads)
>> server.
>>
>> 1), about scalability
>> - jobs = 2, throughput: +33%
>> - jobs = 4, throughput: +100%
>>
>> 2), about top throughput: +39%
>>
>> So in my test, even for a quad-core VM, if the virtqueue number
>> is increased from...
2014 Jul 01
2
[PATCH v3 0/2] block: virtio-blk: support multi vq per virtio-blk
...d, iodepth=64, bs=4K, jobs=N) is run inside VM to
>> verify the improvement.
>>
>> I just created a small quad-core VM and ran fio inside the VM, and
>> num_queues of the virtio-blk device is set to 2, but it looks like the
>> improvement is still obvious. The host is a 2-socket, 8cores(16threads)
>> server.
>>
>> 1), about scalability
>> - jobs = 2, throughput: +33%
>> - jobs = 4, throughput: +100%
>>
>> 2), about top throughput: +39%
>>
>> So in my test, even for a quad-core VM, if the virtqueue number
>> is increased from...
2020 May 15
2
CentOS7 and NFS
...:
>>>> On 12/05/2020 at 16:10, James Pearson wrote:
>>>>> Patrick Bégou wrote:
>>>>>> Hi,
>>>>>>
>>>>>> I need some help with NFSv4 setup/tuning. I have a dedicated nfs server
>>>>>> (2 x E5-2620 8cores/16 threads each, 64GB RAM, 1x10Gb ethernet and 16x
>>>>>> 8TB HDD) used by two servers and a small cluster (400 cores). All the
>>>>>> servers are running CentOS 7, the cluster is running CentOS 6.
>>>>>>
>>>>>> From time to time on...
2020 Jul 09
1
CentOS7 and NFS
...Bégou wrote:
>> On 13/05/2020 at 02:13, Orion Poplawski wrote:
>>> On 5/12/20 2:46 AM, Patrick Bégou wrote:
>>>> Hi,
>>>>
>>>> I need some help with NFSv4 setup/tuning. I have a dedicated nfs
>>>> server
>>>> (2 x E5-2620 8cores/16 threads each, 64GB RAM, 1x10Gb ethernet and
>>>> 16x
>>>> 8TB HDD) used by two servers and a small cluster (400 cores). All the
>>>> servers are running CentOS 7, the cluster is running CentOS 6.
>>>>
>>>> From time to time on the server I ge...
2020 May 12
0
CentOS7 and NFS
Patrick Bégou wrote:
>
> Hi,
>
> I need some help with NFSv4 setup/tuning. I have a dedicated nfs server
> (2 x E5-2620 8cores/16 threads each, 64GB RAM, 1x10Gb ethernet and 16x
> 8TB HDD) used by two servers and a small cluster (400 cores). All the
> servers are running CentOS 7, the cluster is running CentOS 6.
>
> From time to time on the server I get:
>
> kernel: NFSD: client xxx.xxx.xxx.xxx testing...
2020 May 13
0
CentOS7 and NFS
> On 12/05/2020 at 16:10, James Pearson wrote:
>> Patrick Bégou wrote:
>>>
>>> Hi,
>>>
>>> I need some help with NFSv4 setup/tuning. I have a dedicated nfs server
>>> (2 x E5-2620 8cores/16 threads each, 64GB RAM, 1x10Gb ethernet and 16x
>>> 8TB HDD) used by two servers and a small cluster (400 cores). All the
>>> servers are running CentOS 7, the cluster is running CentOS 6.
>>>
>>> From time to time on the server I get:
>>>
>>> ...
2020 May 13
0
CentOS7 and NFS
On 5/12/20 2:46 AM, Patrick Bégou wrote:
> Hi,
>
> I need some help with NFSv4 setup/tuning. I have a dedicated nfs server
> (2 x E5-2620 8cores/16 threads each, 64GB RAM, 1x10Gb ethernet and 16x
> 8TB HDD) used by two servers and a small cluster (400 cores). All the
> servers are running CentOS 7, the cluster is running CentOS 6.
>
> From time to time on the server I get:
>
> kernel: NFSD: client xxx.xxx.xxx.xxx testing...
2020 Jul 02
0
CentOS7 and NFS
On 6/1/20 3:08 AM, Patrick Bégou wrote:
> On 13/05/2020 at 02:13, Orion Poplawski wrote:
>> On 5/12/20 2:46 AM, Patrick Bégou wrote:
>>> Hi,
>>>
>>> I need some help with NFSv4 setup/tuning. I have a dedicated nfs server
>>> (2 x E5-2620 8cores/16 threads each, 64GB RAM, 1x10Gb ethernet and 16x
>>> 8TB HDD) used by two servers and a small cluster (400 cores). All the
>>> servers are running CentOS 7, the cluster is running CentOS 6.
>>>
>>> From time to time on the server I get:
>>>
>>> ...
2020 May 15
0
CentOS7 and NFS
...Matter via CentOS wrote:
>>> On 12/05/2020 at 16:10, James Pearson wrote:
>>>> Patrick Bégou wrote:
>>>>> Hi,
>>>>>
>>>>> I need some help with NFSv4 setup/tuning. I have a dedicated nfs server
>>>>> (2 x E5-2620 8cores/16 threads each, 64GB RAM, 1x10Gb ethernet and 16x
>>>>> 8TB HDD) used by two servers and a small cluster (400 cores). All the
>>>>> servers are running CentOS 7, the cluster is running CentOS 6.
>>>>>
>>>>> From time to time on the server I get...
2014 Jun 26
0
[PATCH v3 0/2] block: virtio-blk: support multi vq per virtio-blk
...Fio(libaio, randread, iodepth=64, bs=4K, jobs=N) is run inside VM to
> verify the improvement.
>
> I just created a small quad-core VM and ran fio inside the VM, and
> num_queues of the virtio-blk device is set to 2, but it looks like the
> improvement is still obvious. The host is a 2-socket, 8cores(16threads)
> server.
>
> 1), about scalability
> - jobs = 2, throughput: +33%
> - jobs = 4, throughput: +100%
>
> 2), about top throughput: +39%
>
> So in my test, even for a quad-core VM, if the virtqueue number
> is increased from 1 to 2, both scalability and performanc...
2014 Jul 01
0
[PATCH v3 0/2] block: virtio-blk: support multi vq per virtio-blk
..., jobs=N) is run inside VM to
>>> verify the improvement.
>>>
>>> I just created a small quad-core VM and ran fio inside the VM, and
>>> num_queues of the virtio-blk device is set to 2, but it looks like the
>>> improvement is still obvious. The host is a 2-socket, 8cores(16threads)
>>> server.
>>>
>>> 1), about scalability
>>> - jobs = 2, throughput: +33%
>>> - jobs = 4, throughput: +100%
>>>
>>> 2), about top throughput: +39%
>>>
>>> So in my test, even for a quad-core VM, if the virtque...
2014 Jun 26
6
[PATCH v3 0/2] block: virtio-blk: support multi vq per virtio-blk
...epends on x-data-plane.
Fio(libaio, randread, iodepth=64, bs=4K, jobs=N) is run inside VM to
verify the improvement.
I just created a small quad-core VM and ran fio inside the VM, and
num_queues of the virtio-blk device is set to 2, but it looks like the
improvement is still obvious. The host is a 2-socket, 8cores(16threads)
server.
1), about scalability
- jobs = 2, throughput: +33%
- jobs = 4, throughput: +100%
2), about top throughput: +39%
So in my test, even for a quad-core VM, if the virtqueue number
is increased from 1 to 2, both scalability and performance can
get improved a lot.
In above qemu implem...
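The percentages quoted in this report are relative throughput gains of the multi-virtqueue configuration over the single-virtqueue baseline. A small sketch of that arithmetic (the IOPS figures below are made up for illustration; the post does not give raw numbers):

# Hypothetical arithmetic sketch: "+33%" style figures are computed as
# (multi_vq - single_vq) / single_vq, expressed as a percentage.
def improvement(single_vq_iops: float, multi_vq_iops: float) -> float:
    """Relative throughput gain of the multi-virtqueue run over the baseline."""
    return (multi_vq_iops - single_vq_iops) / single_vq_iops * 100.0

if __name__ == "__main__":
    # Illustrative values only, chosen to reproduce the quoted ratios.
    runs = {"jobs=2": (60_000, 80_000), "jobs=4": (60_000, 120_000)}
    for label, (base, multi) in runs.items():
        print(f"{label}: {improvement(base, multi):+.0f}%")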
2014 Jun 26
6
[PATCH v3 0/2] block: virtio-blk: support multi vq per virtio-blk
...epends on x-data-plane.
Fio(libaio, randread, iodepth=64, bs=4K, jobs=N) is run inside VM to
verify the improvement.
I just created a small quad-core VM and ran fio inside the VM, and
num_queues of the virtio-blk device is set to 2, but it looks like the
improvement is still obvious. The host is a 2-socket, 8cores(16threads)
server.
1), about scalability
- jobs = 2, throughput: +33%
- jobs = 4, throughput: +100%
2), about top throughput: +39%
So in my test, even for a quad-core VM, if the virtqueue number
is increased from 1 to 2, both scalability and performance can
get improved a lot.
In above qemu implem...
2020 May 16
0
CentOS7 and NFS
...On 12/05/2020 at 16:10, James Pearson wrote:
>>>>>> Patrick Bégou wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> I need some help with NFSv4 setup/tuning. I have a dedicated nfs server
>>>>>>> (2 x E5-2620 8cores/16 threads each, 64GB RAM, 1x10Gb ethernet and 16x
>>>>>>> 8TB HDD) used by two servers and a small cluster (400 cores). All the
>>>>>>> servers are running CentOS 7, the cluster is running CentOS 6.
>>>>>>>
>>>>>>>...
2017 Nov 10
0
Replication oddities - different sizes between replicated nodes
...ary host.
Both machines are located at Hetzner in Germany. HostA is in DC6 and HostB is in DC12. They are not situated next to each other, but there are low-latency links between them. Both machines are connected to gbit uplinks and are not very highly
loaded. The machines consist of 2x2TB disks (mirrored with ZFS) and 8cores, 32GB RAM.
There were warnings when doing the sync manually that the mailbox had changed in between and the sync should be reissued, after which the mailboxes kept growing.
Previously I reported two messages here which might be the foundation of the same issue:
https://dovecot.org/list/dovecot/2016-July/10487...
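When replicated mailboxes end up with different sizes on the two hosts, one quick cross-check is to compare apparent file sizes rather than raw disk usage, since ZFS compression and block allocation can make du-style numbers differ even when the mail data matches. A hypothetical helper for that comparison (the /var/vmail default below is an assumption, not taken from the post):

# Hypothetical helper: total the apparent on-disk size of a mail directory
# tree so HostA and HostB can be compared after replication.
import os
import sys

def tree_size_bytes(root: str) -> int:
    """Sum the apparent size of all regular files below root."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                total += os.path.getsize(path)
            except OSError:
                pass  # a message may have been expunged mid-walk
    return total

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "/var/vmail"  # assumed path
    print(f"{root}: {tree_size_bytes(root) / 1024 / 1024:.1f} MiB")

Run it with the same mailbox path on both hosts and compare the totals; a large gap in apparent size points at genuinely diverging mail data rather than filesystem-level differences.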