Displaying 2 results from an estimated 3 matches for "801218".
2007 May 31
1
[PATCH 1/3] lguest: speed up PARAVIRT_LAZY_FLUSH handling
...Time to exec client once: 1076218 (1066203 - 1085937)
Time for one fork/exit/wait: 1193125 (574750 - 1197750)
Time for two PTE updates: 10844 (10659 - 20703)
After:
Time for one context switch via pipe: 6745 (6521 - 13966)
Time for one Copy-on-Write fault: 44734 (11468 - 91988)
Time to exec client once: 815984 (801218 - 878218)
Time for one fork/exit/wait: 1023250 (397687 - 1030375)
Time for two PTE updates: 6699 (6475 - 9279)
(Native for comparison):
Time for one context switch via pipe: 4031 (3212 - 4146)
Time for one Copy-on-Write fault: 4402 (4388 - 4426)
Time to exec client once: 343859 (336859 - 349484)
T...
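
The biggest relative win above is on the PTE-update microbenchmark (10844 down to 6699): that is the kind of improvement you get from queueing hypercalls while the guest is inside a paravirt lazy section and draining the queue once on exit, rather than trapping to the host per update. Below is a minimal C sketch of that batching idea; all names (hcall, hcall_queue, lg_issue, lg_queue_hcall, lg_flush_hcalls) are illustrative, not lguest's actual internals.

/* Sketch of hypercall batching under paravirt lazy mode. */

#define HCALL_QUEUE_LEN 64

struct hcall {
        unsigned long call, arg1, arg2, arg3;
};

static struct hcall hcall_queue[HCALL_QUEUE_LEN];
static unsigned int hcall_count;
static int lazy_mode;   /* nonzero between lazy-mode enter/leave */

static void lg_issue(const struct hcall *h)
{
        /* Stub: in a real guest this would trap to the host. */
        (void)h;
}

static void lg_flush_hcalls(void)
{
        unsigned int i;

        for (i = 0; i < hcall_count; i++)
                lg_issue(&hcall_queue[i]);
        hcall_count = 0;
}

static void lg_queue_hcall(unsigned long call, unsigned long arg1,
                           unsigned long arg2, unsigned long arg3)
{
        struct hcall h = { call, arg1, arg2, arg3 };

        if (!lazy_mode) {
                lg_issue(&h);           /* not batching: go now */
                return;
        }
        if (hcall_count == HCALL_QUEUE_LEN)
                lg_flush_hcalls();      /* queue full: drain first */
        hcall_queue[hcall_count++] = h;
}

/* Leaving lazy mode calls lg_flush_hcalls(), so two queued PTE
 * updates cost one guest/host transition instead of two. */
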
2013 Apr 29
1
Replicated and Non Replicated Bricks on Same Partition
Gluster-Users,
We currently have a 30 node Gluster Distributed-Replicate 15 x 2 
filesystem.  Each node has a ~20TB xfs filesystem mounted to /data and 
the bricks live on /data/brick.  We have been very happy with this 
setup, but are now collecting more data that doesn't need to be 
replicated because it can be easily regenerated.  Most of the data lives 
on our replicated volume and is...
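
The snippet is cut off, but the subject line indicates the question: can unreplicated bricks share a partition with replicated ones? One common layout is to give a second, distribute-only volume its own brick directory on the same /data partition. A sketch of that setup, where the volume name "scratch", the directory "brick-scratch", and the node01..node30 hostnames are all placeholders:

# On every node, make a separate brick directory on the
# existing /data partition for the unreplicated data:
mkdir -p /data/brick-scratch

# From any one node, create a plain distribute volume (no
# "replica" option) across all 30 bricks, then start it:
gluster volume create scratch transport tcp \
    node{01..30}:/data/brick-scratch
gluster volume start scratch

One consequence of sharing the partition: both volumes draw on the same XFS free space, so the usage reported for /data reflects the combined footprint of the replicated and unreplicated bricks.
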