search for: 65mb

Displaying 20 results from an estimated 35 matches for "65mb".

2004 Feb 22
2
Crashed filesystem - directory recovery
...y, the filesystem was on one of the infamous DTLA-3070xx drives - and the drive decided to fail at the worst moment it possibly could, trashing the filesystem fairly well. The situation is as follows: I used dd_rescue to create an image of what is left of the filesystem, but I ended up with some 65MB of 'holes' in the image. Among the 'holes' is the sector that hosts a directory, /home/weyrmount/MOO (indeed, on the original drive, trying to CD into that gives IO Error) That directory contained three files, plus an 'arch' directory. Now, while I understand that recove...
2010 Aug 05
7
Search large XML file -- REXML slower than a slug, regex instantaneous
Got a question that hopefully someone can answer: I am working on functionality to match on certain nodes of a largish (65mb) XML file. I implemented this with REXML and it was 2 minutes and counting before I killed the process. After this, I just opened the console, loaded the file into a string, and did a regex search for my data -- the result was almost instantaneous. The question is, if I can get away with it, am I b...
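The trade-off being weighed here -- a full tree parse versus a flat regex scan -- can be sketched roughly in Python; the tag and attribute names ("record", "id") are invented for illustration, since the thread does not show the actual schema.

    import re
    import xml.etree.ElementTree as ET

    # Tree parse: the whole document is built in memory before any node
    # can be matched, which is where the minutes go on a 65MB file.
    def find_by_parse(path, wanted_id):
        root = ET.parse(path).getroot()
        return [el for el in root.iter("record") if el.get("id") == wanted_id]

    # Flat regex scan: treats the file as one big string. Fast, but only
    # safe while the markup is regular enough that the pattern cannot
    # match across element boundaries.
    def find_by_regex(path, wanted_id):
        pattern = re.compile(r'<record id="%s".*?</record>' % re.escape(wanted_id), re.S)
        with open(path, encoding="utf-8") as f:
            return pattern.findall(f.read())

A middle ground is a streaming parse (ET.iterparse), which keeps memory flat while still yielding real elements.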
2010 Feb 07
2
Client link utilization
Hello everybody! This is probably going to be a classic question, but I cannot find a decent answer on the net. I have a Samba server set up and the following things work flawlessly: - iperf shows 92% link utilization - FTP/SCP/HTTP transfers work in the 10MB/s range. However, when I mount the Samba share with a Linux client (mount.cifs), the link utilization cannot get past roughly 33%. Transfer speeds constantly
1999 Nov 03
1
2.0.6pre3 and FreeBSD 3.3-RELEASE
2.0.6pre3 compiles and installs under FreeBSD 3.2-RELEASE; however, throughput from an NT 4 SP6 workstation with domain security and only TCP_NODELAY as a socket option is clocking around 200K/sec for large files (65MB). Hardware is PPro200, single CPU kernel (never have gotten 2.0.5 or later to work SMP), Adaptec 2944UW/3 Quantum 9GB Atlas II wide diff drives (clock 9MB/sec using bonnie), Intel Pro100B, full duplex 100Mbit switch between client (PII/400 3Com905B, 128MB RAM) and server. The behavior I'm see...
2002 Nov 29
3
Samba + Clipper
...is the problem > he doesn't know anything about linux * network bandwidth is the problem [100 and 10 Mbit/s] > maybe ... * server is the problem [ Compaq ML330G2 : PIII 1GHz, 256, 18GB SCSI, 100Mbit/s only file server for 33 clients ] > I don't believe that ... Our largest DBF is 65MB and the largest NSX is 18MB. I think that is big and that it is the problem, but the system developer says it isn't. I don't want to go back to NT4, where the Clipper system also crashed. To sum up: where can I find information about Samba and Clipper systems? Thank you for reading about my problem. S...
2006 Apr 19
2
Dropped frames streaming video to samba
...etup here with a samba server in front of a fibrechannel array. It's a pretty vanilla samba setup exporting an xfs filesystem. The box is an opteron 265 with 4G RAM and a QLogic QLA2312 HBA running SLES9. We can dd an 8G file to a share from a windows workstation in just over 2 minutes (about 65MB/s) and dd back in about 4 minutes. Local disk performance in the machine gives about 195MB/s to disk and 120MB/s from disk. The problem comes when trying to capture video using Adobe Premiere 6.5 and write straight to the share - we get a *lot* of dropped frames (somewhere in the region of 25%+)....
2013 Oct 25
1
GlusterFS 3.4 Fuse client Performace
Dear GlusterFS Engineer, I have a question about whether my glusterfs server and FUSE client perform properly on the specification below. It can write only *65MB*/s through the FUSE client to 1 glusterfs server (1 brick and no replica for 1 volume) - Network bandwidth is enough for now; I've checked it with iftop - However, it can write *120MB*/s when I mount NFS on the same volume. Could anyone check whether the glusterfs server and FUSE client perform properly? Detail...
2015 Jul 17
0
[Bug 3099] Please parallelize filesystem scan
...he read rate of the HDDs as 'transfer' bandwidth, because this is the speed at which we can verify that the data is the same on source and target. The sequential approach, as it is now, reduces the initial check to half the HDD read rate, so transferring unchanged files will only yield about 65MB/s in my case, which is slower than simple copying. Is the patch you proposed some years ago something I can apply and try on a current rsync version? If not, could you update it to the 3.1.x version so I can benchmark the parallel checksumming in my situation? Best Regards Rainer -- You are...
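The halving described above comes from checksumming the source copy and the target copy one after the other; done concurrently, each side is limited only by its own disk. A rough Python sketch of that idea follows (paths are placeholders, and MD5 merely stands in for whatever strong checksum rsync is configured to use; this is not the patch from the bug).

    import hashlib
    from concurrent.futures import ThreadPoolExecutor

    def file_digest(path, chunk=1 << 20):
        # Stream the file through the hash in 1 MiB chunks so memory stays flat.
        h = hashlib.md5()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    def files_match(src, dst):
        # Hash source and target in parallel instead of sequentially, so the
        # verification runs at the full read rate of each disk.
        with ThreadPoolExecutor(max_workers=2) as pool:
            a = pool.submit(file_digest, src)
            b = pool.submit(file_digest, dst)
            return a.result() == b.result()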
2011 May 26
2
libvirt boots from Knoppix-CD but not Debian-CD
Debian Squeeze, libvirt 0.8.3, qemu-kvm 0.15.5 After configuring the cd image, the BIOS screen shows (tested with virsh, virt-install and virt-manager): Starting SeaBIOS (version 0.5.1-20110523_174945-brahms) Booting from CD-Rom 65MB medium found (or whatever the size happens to be) Boot failed: Could not read from CDROM (code 000c) ... the cd images (debian6-cd1, debian6-netinst, debian6-buisnesscard) have the correct md5sum and can be mounted; they simply do not boot. On the other hand, a recent Knoppix ISO works well. Tested countless times...
2008 Feb 28
4
Gluster / DRBD Anyone using either?
Anyone using either Glusterfs or DRBD in their mail setup? How is performance, manageability? Problems? Tips? Ed W
2017 Sep 06
2
Slow performance of gluster volume
...ead: off >> performance.quick-read: off >> transport.address-family: inet >> performance.readdir-ahead: on >> nfs.disable: on >> nfs.export-volumes: on >> >> >> I observed that when testing with dd if=/dev/zero of=testfile bs=1G >> count=1 I get 65MB/s on the vms gluster volume (and the network traffic >> between the servers reaches ~ 500Mbps), while when testing with dd >> if=/dev/zero of=testfile bs=1G count=1 *oflag=direct *I get a consistent >> 10MB/s and the network traffic hardly reaching 100Mbps. >> >> Any o...
2012 Apr 17
2
[LLVMdev] InstCombine adds bit masks, confuses self, others
.../CINT2000/255_vortex/255_vortex               1.814    2.044   +12.7%   +52mB
SingleSource/Benchmarks/Shootout-C++/heapsort    1.871    2.132   +13.9%   +57mB
SingleSource/Benchmarks/Shootout-C++/ary3        1.087    1.264   +16.3%   +65mB
MultiSource/Benchmarks/SciMark2-C/scimark2      27.491   23.596   -14.2%   -66mB
MultiSource/Benchmarks/Olden/bisort/bisort       0.360    0.428   +19.0%   +75mB
MultiSource/Benchmarks/Olden/bh/bh               1.074    1....
2023 Aug 19
1
does Xapian::Enquire hold an MVCC revision?
...t libstdc++ > > you noted. > > I suppose for an mbox export you may not be too bothered about order (or > are happy to have the raw order be that in which messages were added), > in which case we only need to track the docid, so that could be just 4 > bytes per result, which is ~65MB. Right, that's great news as we creep towards 50 or 100 million docs. > Incidentally, if you don't mind the export order and only have single term > queries, you can just use a PostingIterator to get a stream of document > ids matching a particular term (in the order documents were add...
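A rough sketch of that PostingIterator approach, assuming the Xapian Python bindings are installed and that their Database.postlist() wrapper is available (the database path and term below are placeholders):

    from array import array
    import xapian  # assumption: the xapian Python bindings are available

    DB_PATH = "/path/to/db"  # placeholder
    TERM = "XMOO"            # placeholder single term

    db = xapian.Database(DB_PATH)

    # Stream matching docids in the order documents were added, without
    # building an Enquire/MSet, and keep them in a compact typed array.
    docids = array("I", (item.docid for item in db.postlist(TERM)))

    # At roughly 4 bytes per docid, ~17 million results is about 65 MiB.
    print(len(docids) * docids.itemsize / 2**20, "MiB")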
2012 Apr 16
0
[LLVMdev] InstCombine adds bit masks, confuses self, others
On Tue, Apr 17, 2012 at 12:23 AM, Jakob Stoklund Olesen <stoklund at 2pi.dk> wrote: > I am not sure how best to fix this. If possible, InstCombine's > canonicalization shouldn't hide arithmetic progressions behind bit masks. The entire concept of cleverly converting arithmetic to bit masks seems like the perfect domain for DAGCombine instead of InstCombine: 1) We know the
2017 Sep 05
3
Slow performance of gluster volume
...32 performance.stat-prefetch: on performance.io-cache: off performance.read-ahead: off performance.quick-read: off transport.address-family: inet performance.readdir-ahead: on nfs.disable: on nfs.export-volumes: on I observed that when testing with dd if=/dev/zero of=testfile bs=1G count=1 I get 65MB/s on the vms gluster volume (and the network traffic between the servers reaches ~ 500Mbps), while when testing with dd if=/dev/zero of=testfile bs=1G count=1 *oflag=direct *I get a consistent 10MB/s and the network traffic hardly reaching 100Mbps. Any other things one can do? On Tue, Sep 5, 2017...
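One common explanation for the gap between those two dd runs is the page cache: without oflag=direct the 1GB write can be absorbed by client-side caching and acknowledged before it reaches the bricks, while oflag=direct forces each block out before write() returns. A minimal Python sketch of the same two modes (Linux only; file names are placeholders):

    import mmap
    import os

    BLOCK = 1 << 20  # 1 MiB, a multiple of the 4 KiB alignment O_DIRECT needs

    # Buffered write: lands in the page cache first, so short tests tend to
    # report cache speed rather than the speed of the brick or the network.
    with open("testfile-buffered", "wb") as f:
        f.write(b"\0" * BLOCK)

    # Direct write: O_DIRECT bypasses the page cache; the buffer must be
    # page-aligned, which an anonymous mmap provides.
    buf = mmap.mmap(-1, BLOCK)
    fd = os.open("testfile-direct", os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
    try:
        os.write(fd, buf)
    finally:
        os.close(fd)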
2017 Sep 06
0
Slow performance of gluster volume
...ick-read: off >>> transport.address-family: inet >>> performance.readdir-ahead: on >>> nfs.disable: on >>> nfs.export-volumes: on >>> >>> >>> I observed that when testing with dd if=/dev/zero of=testfile bs=1G >>> count=1 I get 65MB/s on the vms gluster volume (and the network traffic >>> between the servers reaches ~ 500Mbps), while when testing with dd >>> if=/dev/zero of=testfile bs=1G count=1 *oflag=direct *I get a >>> consistent 10MB/s and the network traffic hardly reaching 100Mbps. >>>...
2017 Sep 06
2
Slow performance of gluster volume
...ransport.address-family: inet >>>> performance.readdir-ahead: on >>>> nfs.disable: on >>>> nfs.export-volumes: on >>>> >>>> >>>> I observed that when testing with dd if=/dev/zero of=testfile bs=1G >>>> count=1 I get 65MB/s on the vms gluster volume (and the network traffic >>>> between the servers reaches ~ 500Mbps), while when testing with dd >>>> if=/dev/zero of=testfile bs=1G count=1 *oflag=direct *I get a >>>> consistent 10MB/s and the network traffic hardly reaching 100Mbps....
2017 Sep 10
2
Slow performance of gluster volume
...ache: off > performance.read-ahead: off > performance.quick-read: off > transport.address-family: inet > performance.readdir-ahead: on > nfs.disable: on > nfs.export-volumes: on > > > I observed that when testing with dd if=/dev/zero of=testfile bs=1G count=1 I > get 65MB/s on the vms gluster volume (and the network traffic between the > servers reaches ~ 500Mbps), while when testing with dd if=/dev/zero > of=testfile bs=1G count=1 oflag=direct I get a consistent 10MB/s and the > network traffic hardly reaching 100Mbps. > > Any other things one can d...
2017 Sep 08
0
Slow performance of gluster volume
...ransport.address-family: inet >>>> performance.readdir-ahead: on >>>> nfs.disable: on >>>> nfs.export-volumes: on >>>> >>>> >>>> I observed that when testing with dd if=/dev/zero of=testfile bs=1G >>>> count=1 I get 65MB/s on the vms gluster volume (and the network traffic >>>> between the servers reaches ~ 500Mbps), while when testing with dd >>>> if=/dev/zero of=testfile bs=1G count=1 *oflag=direct *I get a >>>> consistent 10MB/s and the network traffic hardly reaching 100Mbps....
2015 May 21
26
[Bug 90567] New: Display freeze when starting League of Legends (Wine)
https://bugs.freedesktop.org/show_bug.cgi?id=90567
  Bug ID: 90567
  Summary: Display freeze when starting League of Legends (Wine)
  Product: xorg
  Version: 7.6 (2010.12)
  Hardware: Other
  OS: All
  Status: NEW
  Severity: normal
  Priority: medium
  Component: Driver/nouveau
  Assignee: