search for: 102mb

Displaying 18 results from an estimated 18 matches for "102mb".

2006 Aug 21
4
Making DOS/Win9x HD Image for memdisk
Hi list! I'm trying to make an HD image which can boot from memdisk over PXE. So this is what I did: - Created a new VM [VMware] - Installed DOS on a 100MB HD from a floppy - Booted Linux over PXE [DamnSmallLinux] - In a terminal: dd if=/dev/hda of=dos.img This image [dos.img] isn't working! Memdisk loads the image and boots the HD! The error message is something like this: I/O Device
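For reference, a minimal sketch of the PXELINUX entry that would boot such an image through memdisk; the label is a placeholder and the dos.img filename is taken from the post, so this is an assumed config rather than the poster's actual one:

    # pxelinux.cfg/default -- boot the raw HD image through memdisk
    LABEL dos
      KERNEL memdisk
      INITRD dos.img
      APPEND harddisk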
2013 Apr 04
1
Freenas domU network performance issue
...lly achieve better performance than dom0. 3. As a comparison, here is an iperf test between the Linux domU and dom0: dom0 => lin domU ~10300Mbps, lin domU => dom0 ~13500Mbps. 4. File transfer from an external host to the NAS (PS: here the bottleneck is the network rather than the disk): NFS: 50MB/s, FTP: 102MB/s. I'm not quite sure how much performance to expect from NFS, but it appears to be a bottleneck. 5. File transfer from dom0 to the NAS: copying to the NFS mount served by the NAS domU can cause a hang, almost 100% reproducible. Typically the hang is limited to the NAS and dom0 processes that accesses t...
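A minimal sketch of the kind of iperf measurement quoted above, assuming the classic iperf2 client/server pair (hostnames are placeholders, not from the post):

    # on the receiving side (e.g. dom0)
    iperf -s
    # on the sending side (e.g. the Linux domU); reports throughput in Mbits/sec
    iperf -c dom0.example.com -t 30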
2009 Oct 19
0
EON ZFS Storage 0.59.4 based on snv_124 released!
...snv_124
* eon-0.594-124-64-cifs.iso * MD5: 4bda930d1abc08666bf2f576b5dd006c * Size: ~89MB * Released: Monday 19-October-2009
EON 64-bit x86 Samba ISO image version 0.59.4 based on snv_124
* eon-0.594-124-64-smb.iso * MD5: 80af8b288194377f13706572f7b174b3 * Size: ~102MB * Released: Monday 19-October-2009
EON 32-bit x86 CIFS ISO image version 0.59.4 based on snv_124
* eon-0.594-124-32-cifs.iso * MD5: dcc6f8cb35719950a6d4320aa5925d22 * Size: ~56MB * Released: Monday 19-October-2009
EON 32-bit x86 Samba ISO image version 0.59.4 based...
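For completeness, a downloaded image would typically be checked against the published checksum like this (using the 64-bit Samba ISO name and the hash quoted above; the exact checksum tool on the local system may differ):

    md5sum eon-0.594-124-64-smb.iso
    # expected: 80af8b288194377f13706572f7b174b3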
2020 Mar 31
1
Ways to make "smbd" use less memory?
...61 0.0 0.1 199468 153156 - I 11:51 0:01.39 | |-- /liu/sbin/smbd --daemon --configfile=/liu/etc/samba/smb.conf Looking at the memory allocation output (procstat -v on a FreeBSD machine, a test server with not much activity) on a master "smbd" with VSZ 160MB and RSS 117MB, it looks like 102MB of it is allocated memory (the rest is shared libraries), spread out as:
Size      Allocations
4096      8868    (4K * 8868 = ~36MB)
8192      1
16384     1
32768     1
45056     1
49152     2
73728     1
135168    1
143360    1
180224    1
270336    1
544768    1
2093056   16      (2MB * 16 = 32...
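As a rough illustration of the arithmetic above, the per-size totals can be summed with a generic awk one-liner, assuming the Size and Allocations columns have been saved to a plain two-column file; this is a sketch, not a command from the post:

    # total = sum(size * allocations), printed in MB
    awk '{ total += $1 * $2 } END { printf "%.1f MB\n", total / 1048576 }' allocations.txt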
2005 Oct 02
1
Size of jpegs/pngs
Dear all I have trouble with setting the size for jpegs and pngs. I need to save a dendrogram of 1000 words into a jpeg or png file. On one of my computers, the following works just fine: bb<-agnes(aa, method="ward") jpeg("C:/Temp/test.txt", width=17000, height=2000) plot(bb) dev.off() On my main computer, however, this doesn't work: >
2007 Mar 16
2
re: o2hb_do_disk_heartbeat:963 ERROR: Device "sdb1" another node is heartbeating in our slot!
Folks, I'm trying to wrap my head around something that happened in our environment. Basically, we noticed the error in /var/log/messages with no other errors. "Mar 16 13:38:02 dbo3 kernel: (3712,3):o2hb_do_disk_heartbeat:963 ERROR: Device "sdb1": another node is heartbeating in our slot!" Usually there are a
2008 Jan 06
4
Increasing throughput on xen bridges
Hi all, I have a RHEL 5.1 Xen server with two RHEL 3 ES HVM guests installed. Both RHEL 3 guests use an internal Xen bridge (xenbr1) which isn't bound to any physical host interface. On this bridge, throughput is very poor, only 2.5 Mb/s. How can I increase this throughput? Many thanks. -- CL Martinez carlopmart {at} gmail {d0t} com
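One commonly suggested first step for poor throughput on Xen bridges is to disable TX checksum offload on the virtual interfaces; a hedged sketch, with example interface names that are not taken from the post:

    # in dom0, on the guest's backend vif (name is an example)
    ethtool -K vif1.0 tx off
    # inside the guest, on its virtual NIC
    ethtool -K eth0 tx off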
2017 Jul 06
2
Very slow performance on Sharded GlusterFS
...dd if=/dev/zero of=/mnt/ddfile4 bs=1G count=2 2+0 records in 2+0 records out 2147483648 bytes (2.1 GB, 2.0 GiB) copied, 24.7695 s, 86.7 MB/s I see improvements (from 70-75MB to 90-100MB per second) after the eager-lock off setting. Also, I am monitoring the bandwidth between the two nodes; I see up to 102MB/s. Is there anything I can do to optimize more? Or is this the last stop? Note: I deleted all files again, reformatted, then re-created the volume with shard and mounted it. Tried with 16MB, 32MB and 512MB shard sizes. Results are equal. Thanks, Gencer. From: Krutika Dhananjay [mailto:kdhananj...
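For reference, the shard-size experiments mentioned here correspond to the features.shard-block-size volume option; a minimal sketch with a placeholder volume name, not the poster's actual volume:

    # enable sharding and set the shard size before writing data to the volume
    gluster volume set testvol features.shard on
    gluster volume set testvol features.shard-block-size 64MB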
2017 Jul 06
0
Very slow performance on Sharded GlusterFS
...dd if=/dev/zero of=/mnt/ddfile4 bs=1G count=2 2+0 records in 2+0 records out 2147483648 bytes (2.1 GB, 2.0 GiB) copied, 24.7695 s, 86.7 MB/s I see improvements (from 70-75MB to 90-100MB per second) after the eager-lock off setting. Also, I am monitoring the bandwidth between the two nodes; I see up to 102MB/s. Is there anything I can do to optimize more? Or is this the last stop? Note: I deleted all files again, reformatted, then re-created the volume with shard and mounted it. Tried with 16MB, 32MB and 512MB shard sizes. Results are equal. Thanks, Gencer. From: Krutika Dhananjay [mailto:kdhananj...
2002 Feb 05
3
Doubt in Rsync !!
Sir, How do I run the rsync server on the remote machine? I want to mirror a set of directories from one machine to another machine on the network. I used the following command: /usr/sbin/rsync -vv --delete --recursive --times --perms --update source directory destinationmachine:destination directory. I got the error: Permission denied. How can I tackle this error? Thanks, laks
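A minimal sketch of running rsync in daemon mode on the remote machine; the module name, path, and options are placeholders, not taken from the post:

    # /etc/rsyncd.conf on the remote machine
    [backup]
        path = /data/backup
        read only = false
        uid = nobody
        gid = nobody

    # start the daemon on the remote machine, then sync against the module
    rsync --daemon
    rsync -av --delete /local/dir/ destinationmachine::backup/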
2017 Jul 06
2
Very slow performance on Sharded GlusterFS
...dd if=/dev/zero of=/mnt/ddfile4 bs=1G count=2 2+0 records in 2+0 records out 2147483648 bytes (2.1 GB, 2.0 GiB) copied, 24.7695 s, 86.7 MB/s I see improvements (from 70-75MB to 90-100MB per second) after the eager-lock off setting. Also, I am monitoring the bandwidth between the two nodes; I see up to 102MB/s. Is there anything I can do to optimize more? Or is this the last stop? Note: I deleted all files again, reformatted, then re-created the volume with shard and mounted it. Tried with 16MB, 32MB and 512MB shard sizes. Results are equal. Thanks, Gencer. From: Krutika Dhananjay [mailto:kdhananj...
2007 Oct 19
0
HVM Migration issues
.../s, dirtied 59Mb/s 18117 pages 2: sent 17470, skipped 632, delta 6237ms, dom0 37%, target 68%, sent 91Mb/s, dirtied 5Mb/s 1038 pages 3: sent 995, skipped 28, delta 211ms, dom0 32%, target 39%, sent 154Mb/s, dirtied 92Mb/s 594 pages 4: sent 546, skipped 27, delta 175ms, dom0 35%, target 83%, sent 102Mb/s, dirtied 9Mb/s 53 pages 5: sent 49, skipped 5, delta 48ms, dom0 8%, target 37%, sent 33Mb/s, dirtied 9Mb/s 14 pages 6: sent 9, skipped 5, Start last iteration094) Saving memory pages: iter 6 0% [2007-10-19 18:02:56 4489] DEBUG (__init__:1094) suspend [2007-10-19 18:02:56 4489] DEBUG (__init__...
2017 Jul 10
0
Very slow performance on Sharded GlusterFS
...dd if=/dev/zero of=/mnt/ddfile4 bs=1G count=2 2+0 records in 2+0 records out 2147483648 bytes (2.1 GB, 2.0 GiB) copied, 24.7695 s, 86.7 MB/s I see improvements (from 70-75MB to 90-100MB per second) after the eager-lock off setting. Also, I am monitoring the bandwidth between the two nodes; I see up to 102MB/s. Is there anything I can do to optimize more? Or is this the last stop? Note: I deleted all files again, reformatted, then re-created the volume with shard and mounted it. Tried with 16MB, 32MB and 512MB shard sizes. Results are equal. Thanks, Gencer. From: Krutika Dhananjay [mailto:kdhananj...
2017 Jul 12
1
Very slow performance on Sharded GlusterFS
...> 2+0 records in > > 2+0 records out > > 2147483648 bytes (2.1 GB, 2.0 GiB) copied, 24.7695 s, 86.7 MB/s > > > > I see improvements (from 70-75MB to 90-100MB per second) after the eager-lock > off setting. Also, I am monitoring the bandwidth between the two nodes. I see up > to 102MB/s. > > > > Is there anything I can do to optimize more? Or is this the last stop? > > > > Note: I deleted all files again, reformatted, then re-created the volume with > shard and mounted it. Tried with 16MB, 32MB and 512MB shard sizes. Results > are equal. > > > > Than...
2007 Dec 28
7
Xen and networking.
I have a beefy machine (Intel dual quad-core, 16GB memory, 2 x GigE). I have loaded RHEL5.1-xen on the hardware and have created two logical systems: 4 CPUs, 7.5 GB memory, 1 x GigE. Following RHEL guidelines, I have it set up so that eth0->xenbr0 and eth1->xenbr1. Each of the two RHEL5.1 guests uses one of the interfaces, and this is verified at the switch by seeing the unique MAC addresses.
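A quick way to confirm which physical interface each bridge is actually bound to is to list the bridges from dom0; a minimal sketch:

    # show xenbr0/xenbr1 and the interfaces enslaved to each
    brctl show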
2017 Jul 06
0
Very slow performance on Sharded GlusterFS
What if you disable eager lock and run your test again on the sharded configuration, along with the profile output? # gluster volume set <VOL> cluster.eager-lock off -Krutika On Tue, Jul 4, 2017 at 9:03 PM, Krutika Dhananjay <kdhananj at redhat.com> wrote: > Thanks. I think reusing the same volume was the cause of lack of IO > distribution. > The latest profile output
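The profile output referred to in this thread is gathered with GlusterFS's built-in volume profiling; a short sketch with a placeholder volume name:

    gluster volume profile testvol start
    # ... run the dd workload ...
    gluster volume profile testvol info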
2017 Jul 04
2
Very slow performance on Sharded GlusterFS
Thanks. I think reusing the same volume was the cause of the lack of IO distribution. The latest profile output looks much more realistic and in line with what I would expect. Let me analyse the numbers a bit and get back. -Krutika On Tue, Jul 4, 2017 at 12:55 PM, <gencer at gencgiyen.com> wrote: > Hi Krutika, > > > > Thank you so much for your reply. Let me answer all: > >
2008 Feb 15
38
Performance with Sun StorageTek 2540
Under Solaris 10 on a 4 core Sun Ultra 40 with 20GB RAM, I am setting up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives and connected via load-shared 4Gbit FC links. This week I have tried many different configurations, using firmware managed RAID, ZFS managed RAID, and with the controller cache enabled or disabled. My objective is to obtain the best single-file write performance.
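For single-file sequential write throughput with ZFS-managed RAID, a common layout is striped mirrors; a hedged sketch with example device names, not the poster's actual configuration:

    # six 2-way mirrors striped together from the twelve drives
    zpool create tank \
      mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
      mirror c1t4d0 c1t5d0 mirror c1t6d0 c1t7d0 \
      mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0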