search for: 35g

Displaying 15 results from an estimated 15 matches for "35g".

2016 Jan 22
4
LVM mirror database to ramdisk
...rd with an Intel S1400FP4, so the host went from a 4 core Xeon with 32G RAM to a 6 core Xeon with 48G RAM, max 96G. The drives are SSD. I was recently asked to move an InterBase server from Windows 7 to Windows Server. The database is 30G. I'm speculating that if I put the database on a 35G virtual disk and mirror it to a 35G RAM disk, the speed of database access might improve. I use local LVM for my virtual disks with DRBD on top to mirror the disk to a backup server. If I change grub.conf to increase RAM disk size and increase host RAM, I could create a 35G RAM disk. I'...
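The mirroring idea described in that post can be sketched with stock LVM commands. This is a minimal, hypothetical sketch, not taken from the thread: it assumes a volume group named vg0 holding a 35G logical volume dbdisk, and it uses the brd kernel module to create the RAM disk rather than the grub.conf ramdisk_size change the poster mentions.

    # Load a ~35G RAM block device; brd's rd_size is in KiB (35 * 1024 * 1024 = 36700160)
    modprobe brd rd_nr=1 rd_size=36700160
    # Make the RAM disk an LVM physical volume and add it to the (assumed) volume group
    pvcreate /dev/ram0
    vgextend vg0 /dev/ram0
    # Add a second mirror leg on the RAM disk; --mirrorlog core keeps the mirror log
    # in memory so no third device is needed
    lvconvert --type mirror -m1 --mirrorlog core vg0/dbdisk /dev/ram0

Whether reads actually favor the RAM leg depends on the read policy of the LVM version in use, which is exactly the open question in the thread.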
2010 Oct 21
2
Bug? Mount and fstab
...b1 /gluster ext3 defaults 0 1 /etc/glusterfs/glusterfs.vol /pifs/ glusterfs defaults 0 0 [root at vm-container-0-0 ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/sda1 16G 2.6G 12G 18% / /dev/sda5 883G 35G 803G 5% /state/partition1 /dev/sda2 3.8G 121M 3.5G 4% /var tmpfs 7.7G 0 7.7G 0% /dev/shm /dev/sdb1 917G 200M 871G 1% /gluster none 7.7G 104K 7.7G 1% /var/lib/xenstored glusterfs#/etc/glusterfs/glusterfs.vol...
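For reference, a minimal fstab sketch matching the two entries quoted in that snippet. The device, volfile path, and mount points come from the snippet; the _netdev option on the glusterfs line is an assumption here (a common way to defer a network filesystem mount until networking is up), not something stated in the thread.

    /dev/sdb1                     /gluster  ext3       defaults          0 1
    /etc/glusterfs/glusterfs.vol  /pifs/    glusterfs  defaults,_netdev  0 0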
2016 Jan 22
0
LVM mirror database to ramdisk
...o the host went from a 4 core Xeon with 32G RAM to a 6 core > Xeon with 48G RAM, max 96G. The drives are SSD. > > I was recently asked to move an InterBase server from Windows 7 to > Windows Server. The database is 30G. > > I'm speculating that if I put the database on a 35G virtual disk and > mirror it to a 35G RAM disk, the speed of database access might improve. > > I use local LVM for my virtual disks with DRBD on top to mirror the > disk to a backup server. > > If I change grub.conf to increase RAM disk size and increase host RAM, > I coul...
2016 Jan 22
2
LVM mirror database to ramdisk
...Xeon with 32G RAM to a 6 core > > Xeon with 48G RAM, max 96G. The drives are SSD. > > > > I was recently asked to move an InterBase server from Windows 7 to > > Windows Server. The database is 30G. > > > > I'm speculating that if I put the database on a 35G virtual disk and > > mirror it to a 35G RAM disk, the speed of database access might improve. > > > > I use local LVM for my virtual disks with DRBD on top to mirror the > > disk to a backup server. > > > > If I change grub.conf to increase RAM disk size and...
2016 Jan 22
2
LVM mirror database to ramdisk
...ght help with that a bit. With this older version, I'd be hoping that the next available disk would handle each request. If the physical disk takes longer to deal with the writes, the RAM disk might be the one that is available most of the time. I'd much prefer a method of pre-filling a 35G cache but I saw a reference to creating a disk mirror in RAM and decided to explore it.
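A hedged alternative to the mirror approach for "pre-filling" a cache, not suggested in the thread itself, is simply to warm the Linux page cache by reading the backing store once. The path below is hypothetical and assumes a file-backed disk image rather than the LVM volumes the poster uses.

    # Read the disk image once so its blocks end up in the page cache
    dd if=/var/lib/libvirt/images/dbdisk.img of=/dev/null bs=1M
    # Or use vmtouch to load the file and then report how much of it is resident
    vmtouch -t /var/lib/libvirt/images/dbdisk.img
    vmtouch -v /var/lib/libvirt/images/dbdisk.img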
2016 Jan 23
0
LVM mirror database to ramdisk
...the host went from a 4 core Xeon with 32G RAM to a 6 core > Xeon with 48G RAM, max 96G. The drives are SSD. > > I was recently asked to move an InterBase server from Windows 7 to > Windows Server. The database is 30G. > > I'm speculating that if I put the database on a 35G virtual disk and > mirror it to a 35G RAM disk, the speed of database access might improve. If that were running under Linux rather than Windows I'd suggest just giving that extra 35GB to its kernel and letting its normal caching keep everything in RAM. Whether Windows (7 or Server) would b...
2019 Oct 17
2
[RFC] Propeller: A frame work for Post Link Optimizations
...s or m:ss): 1:33.74 Maximum resident set size (kbytes): 14824444 93 seconds and ~14G of RAM version 2 : Elapsed (wall clock) time (h:mm:ss or m:ss): 1:21.90 Maximum resident set size (kbytes): 14511912 similar 91 secs and ~14G Now, coming back to the bug in the Makefile, we originally reported ~35G. That is *wrong* since the clang binary used to measure bolt overheads was built with basic block labels. Our *sincere apologies* for this; it showed BOLT as consuming more memory than it actually does for clang. We double-checked BOLT numbers with the internal benchmark search2 for sanity and that i...
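The elapsed-time and maximum-RSS lines quoted there match the verbose output format of GNU time, so the figures were presumably gathered with something along these lines; the compiler invocation is a placeholder, not the actual benchmark command from the thread.

    # /usr/bin/time -v prints "Elapsed (wall clock) time (h:mm:ss or m:ss)" and
    # "Maximum resident set size (kbytes)" for the wrapped command
    /usr/bin/time -v clang -O2 -c big_file.c -o /dev/null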
2019 Oct 14
2
[RFC] Propeller: A frame work for Post Link Optimizations
...ngADistributedBuildSystemAtGoogleScale.pdf, a distributed build system at Google scale is shown where 5 million binary and test builds are performed every day on several thousand machines, each with a limit of 12G of memory per process and a 15-minute time-out on tests. Memory overheads of 35G (clang) are well above these thresholds. We have developed Propeller, which, like ThinLTO, can be used to obtain performance gains similar to BOLT's in such environments. Thanks Sri On Fri, Oct 11, 2019 at 11:25 AM Xinliang David Li via llvm-dev < llvm-dev at lists.llvm.org> wrote: > >...
2016 Jan 22
0
LVM mirror database to ramdisk
.... With this older version, I'd be hoping that the next > available disk would handle each request. If the physical disk takes > longer to deal with the writes, the RAM disk might be the one that is > available most of the time. > > I'd much prefer a method of pre-filling a 35G cache but I saw a > reference to creating a disk mirror in RAM and decided to explore it. > Can you post the results of your test when you get it working?
2019 Oct 18
3
[RFC] Propeller: A frame work for Post Link Optimizations
...and ~14G of RAM > > > > version 2 : > > Elapsed (wall clock) time (h:mm:ss or m:ss): 1:21.90 > > Maximum resident set size (kbytes): 14511912 > > > > similar 91 secs and ~14G > > > > Now, coming back to the bug in the Makefile, we originally reported ~35G. > That is *wrong* since the clang binary used to measure bolt overheads was > built with basic block labels. Our *sincere apologies* for this; it > showed BOLT as consuming more memory than it actually does for clang. We > double-checked BOLT numbers with the internal benchmark search2 f...
2016 Jan 22
2
LVM mirror database to ramdisk
..., I'd be hoping that the next > > available disk would handle each request. If the physical disk takes > > longer to deal with the writes, the RAM disk might be the one that is > > available most of the time. > > > > I'd much prefer a method of pre-filling a 35G cache but I saw a > > reference to creating a disk mirror in RAM and decided to explore it. > > > > Can you post the results of your test when you get it working? Absolutely, I'll share my real world results. I'm happy that I'm not the only person interested in th...
2019 Oct 22
2
[RFC] Propeller: A frame work for Post Link Optimizations
...and ~14G of RAM > > > > version 2 : > > Elapsed (wall clock) time (h:mm:ss or m:ss): 1:21.90 > > Maximum resident set size (kbytes): 14511912 > > > > similar 91 secs and ~14G > > > > Now, coming back to the bug in the Makefile, we originally reported ~35G. > That is *wrong* since the clang binary used to measure bolt overheads was > built with basic block labels. Our *sincere apologies* for this; it > showed BOLT as consuming more memory than it actually does for clang. We > double-checked BOLT numbers with the internal benchmark search2 f...
2019 Oct 11
2
[RFC] Propeller: A frame work for Post Link Optimizations
Is there large value from deferring the block ordering to link time? That is, does the block layout algorithm need to consider global layout issues when deciding which blocks to put together and which to relegate to the far-away part of the code? Or, could the propeller-optimized compile step instead split each function into only 2 pieces -- one containing an "optimally-ordered" set of
2003 Apr 26
2
Duplicating Hard Drive Problem
...e student that I am, I was told to 'make sure that it works.' After investigating it, I noticed a problem with two of the hard drives on nodes 14 and 16. On most of the nodes, a 'df -hT' will give you the following: Filesystem Type Size Used Avail Use% Mounted On /dev/hda3 ext3 35G 1.7G 31G 6% / /dev/hda1 ext3 99M 14M 79M 15% /boot none tmpfs 250M 0 250M 0% /dev/shm master:/home nfs 660GB 33M 626G 1% /home which is as it should be. Note that the /home directory on all the nodes is nfs'ed over to our RAID array. However, on the faulty nodes, the first line of 'd...
2006 Mar 23
17
Poor performance on NFS-exported ZFS volumes
...- ds-store/rs/test available 35.0G - ds-store/rs/test referenced 149K - ds-store/rs/test compressratio 1.00x - ds-store/rs/test mounted yes - ds-store/rs/test quota 35G local ds-store/rs/test reservation none default ds-store/rs/test recordsize 128K default ds-store/rs/test mountpoint /ds-store/rs/test default ds-store/rs/test sharenfs rw=hercules,root=hercules local ds...
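The flattened listing above is zfs get output for the test dataset; a minimal sketch of setting and re-checking the 35G quota shown there, using the dataset name taken from the snippet:

    # Set a 35G quota on the dataset, then inspect the properties of interest
    zfs set quota=35G ds-store/rs/test
    zfs get quota,reservation,recordsize,sharenfs ds-store/rs/test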