search for: 700gb

Displaying 18 results from an estimated 18 matches for "700gb".

2010 May 21
2
fsck.ocfs2 using huge amount of memory?
...ironment. We use 3Par SANs and their snap clone options. The current production system we snap clone from is EL4 U5 with ocfs2 1.2.9; the new servers have ocfs2 1.4.3 installed. Part of the refresh process is to run fsck.ocfs2 on the volume to recover, but right now, as I am trying to run it on our 700GB volume, it shows a virtual memory size of 21.9GB and a resident size of 10GB, and it is killing the machine with swapping (24GB physical memory). Can anyone explain what is going on? Ulf.
2011 Jun 30
14
700GB gone?
I have a 1.5TB disk that has several partitions. One of them is 900GB. Now I can only see 300GB. Where is the rest? Is there a command I can run to reach the rest of the data? Will scrub help? -- This message posted from opensolaris.org
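A first step for a question like this - a hedged sketch, assuming the pool is imported; the mountpoint and dataset names are placeholders - is to compare what the pool and each dataset report, since snapshots and reservations can quietly hold space:
    $ zpool list                  # raw pool capacity and allocation
    $ zfs list -o space           # per-dataset split: data vs. snapshots vs. children
    $ zfs list -t snapshot        # snapshots still referencing freed blocks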
2012 Oct 01
5
s3 as mysql directory
...lp with an idea I have. I've set up a bacula backup system on an AWS volume. Bacula stores a LOT of information in its mysql database (in my setup; you can also use postgres or sqlite if you choose). Since I've started doing this I notice that the mysql data directory has swelled to over 700GB! That's quite a lot, and it's eating up valuable disk space. So I had an idea. What about using the FUSE-based s3fs to mount an S3 bucket on the local filesystem and use that as your mysql data dir? In other words, mount your s3 bucket on /var/lib/mysql. I used this article to set up the s3f...
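A minimal sketch of the mount itself, assuming an s3fs-fuse install; the bucket name "bacula-db" and the credentials are placeholders. Note that S3's consistency and locking semantics make a live mysql datadir on s3fs risky:
    $ echo 'ACCESS_KEY_ID:SECRET_ACCESS_KEY' > ~/.passwd-s3fs   # placeholder credentials
    $ chmod 600 ~/.passwd-s3fs
    $ s3fs bacula-db /var/lib/mysql -o passwd_file=~/.passwd-s3fs -o use_cache=/tmp/s3fs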
2011 Jul 11
4
extremely slow syncing on btrfs with 2.6.39.1
I've been monitoring the lists for a while now but didn't see this problem mentioned in particular: I've got a fairly standard desktop system at home, 700gb WD drive, nothing special, with 2 btrfs filesystems and some snapshots. The system runs for days, and I've noticed unusual disk activity the other evening - turns out that it's taking forever to sync(). $ uname -r 2.6.39.1 $ grep btrfs /proc/mounts /dev/root / btrfs rw,relatime 0...
2011 Oct 08
1
CentOS 6.0 CR mdadm-3.2.2 breaks Intel BIOS RAID
...There are two RAIDs on this one controller... a RAID1 which still functions and a RAID5 which is the one that is unable to be seen. I don't know what IMSM is for, but the only thing strange about that array is that it is 2.7TB, so the BIOS configured it as two separate arrays, one as 2TB and one as 700GB, but it was showing up to CentOS as a single volume. I downgraded to 3.2.1, ran mdadm again, and bam... it works: # mdadm --detail --scan ARRAY /dev/md0 metadata=imsm UUID=734f79cf:22200a5a:73be2b52:3388006b ARRAY /dev/md126 metadata=imsm UUID=3d135942:f0fad0b0:33255f78:29c3f50a ARRAY /dev/md127 c...
2006 Oct 15
1
Proper partition/LVM growth after RAID migration
Hi. This topic is perhaps not for this list, but I'm running CentOS 4.4 and it seems that a lot of people here use 3Ware and RAID volumes. I did a RAID migration on a 3Ware 9590SE-12, so that an exported disk grew from 700GB to 1400GB. The exported disk is managed by LVM. The problem now is that I don't really know what to do to let LVM and my logical volume make use of this new disk size, and probably future disk size growth. I initially imagined that I could let the physical partition grow and then the LV...
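The usual sequence after the controller grows the exported disk - a sketch, assuming the PV sits directly on the disk (a PV inside a partition needs the partition grown first); the device, VG, and LV names are placeholders:
    $ pvresize /dev/sdb                     # let LVM see the new physical size
    $ lvextend -l +100%FREE /dev/vg0/data   # grow the LV into the new free extents
    $ resize2fs /dev/vg0/data               # grow the filesystem (on CentOS 4 the online-resize tool was ext2online)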
2019 Jul 15
2
broken mdboxes
...dovecot 2.2.22. User mailboxes are mdbox format and I did an rsync from the old to the new. My screw up however was that my rsync was not run with --delete, and had to be run multiple times over the course of a few weeks due to bandwidth disparities and the sheer size of the mail spool approaching 700GB (source server had 10mbps. Should have used sneakernet instead). The bottom line however is that I wound up having many extra m.* files in each user storage dir that were not actually present on the source when the final, final sync was made and I went live with the migration server. I know this i...
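The stray files could have been spotted before go-live with a --delete dry run, which reports destination files that no longer exist on the source without touching anything - a sketch with placeholder paths:
    $ rsync -a -n --delete --itemize-changes /var/mail/spool/ new-server:/var/mail/spool/
    # lines starting with "*deleting" are files that exist only on the destination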
2011 Apr 19
1
Linux RHEL 5.2 hangs for 1.5 hrs while fsck'ing the OCFS2 file system
Hi there, A month ago we ran into the fsck issue while rebooting one of the Oracle RAC nodes running on Linux RHEL 5.2. It was hanging for 1.5 hours. During the reboot, the OS portion went fine, then it activated the data volumes in all data VGs with [OK]. Then it displayed the message: Checking filesystems - and it took 1.5 hrs, then it finished the reboot. Last weekend we rebooted the same box and
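If the check is driven by /etc/fstab, one common way to keep a boot from blocking on a long cluster-filesystem fsck - a sketch with a placeholder device and mountpoint - is to set the sixth fstab field (fs_passno) to 0, so the boot-time fsck skips the volume and the check can be run by hand during a maintenance window instead:
    # /etc/fstab - the trailing "0 0" disables dump and boot-time fsck for this volume
    /dev/mapper/ocfs2data  /u01/oradata  ocfs2  _netdev,defaults  0 0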
2004 Dec 19
1
very big rsync only worked partially what are size limitations?
Hi, I am very grateful for rsync!!! Two days ago I started a backup of a 630 GB directory on a 700GB raid which has 13,945 subdirectories, to another server. It copied over 525GB worth of data, and I see that there are 12,627 directories on the destination. The command I used was rsync -a -e ssh /big/dir/ 192.168.1.2:/big/dir. I thought maybe it ran out of memory or something - yes there a...
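rsync of that era built the entire file list in memory up front, which may well be the limit with a tree that size; since rsync only copies what is missing or changed, the usual recovery is simply to re-run the same command and compare the statistics - a sketch reusing the command from the post:
    $ rsync -a -e ssh --stats /big/dir/ 192.168.1.2:/big/dir
    # "Number of files" in the stats should match a recount on each side:
    $ find /big/dir -type d | wc -l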
2006 Apr 04
2
rsync not removing files that are out of date
I'm running a Red Hat Enterprise 3 machine with ~700GB of storage, backing up to a similar machine using rsync backups running 6 times a day on various directories in /home/. The initial rsync copied roughly 150GB. Everything appears to be backing up correctly, so no loss of data, which is good :) The problem is that the backup server is not removing data...
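If the destination is meant to mirror the source, the backup command needs --delete - a sketch with placeholder paths:
    $ rsync -a --delete /home/ backup-host:/backups/home/
    # without --delete, files removed from the source accumulate on the destination forever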
2011 Jul 13
0
How to call rsync client so that is detects that server has gone away?
...he client is rsync version 3.0.4, protocol version 30, natively compiled on Debian 6, kernel 2.6.32.5-amd64. The server is a QNAP NAS device running rsync version 3.0.6, protocol version 30. (I can change the client version if I need to, but not the version on the QNAP device.) I am trying to rsync ~700gb to the QNAP device, hopefully once per day. There are ~40,000,000 files, many of them rsync-snapshot hard-links. It takes a long time, so I'm using LVM2 snapshots to get static views of the data partitions. I need to preserve the links or the destination size will multiply by a factor of 6 or s...
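rsync has timeouts that make a vanished peer fail fast instead of hanging - a sketch with illustrative values and placeholder paths. --timeout aborts after the given seconds of I/O silence; --contimeout (rsync >= 3.0.0, daemon connections only) bounds the initial connect:
    $ rsync -aH --timeout=300 /data/ qnap:/share/backup/                     # -H preserves the hard-links
    $ rsync -aH --timeout=300 --contimeout=60 /data/ rsync://qnap/backup/    # daemon-mode variant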
2019 Jul 15
0
broken mdboxes
...however was that my rsync was not run with --delete, and had to be run multiple times over the course of a few weeks due to bandwidth disparities and the sheer size of the mail spool approaching 700GB (source server had 10mbps. Should have used sneakernet instead). The bottom line however is that I wound up having many extra m.* files in each user storage dir that were not actually present on...
2019 Jul 15
1
broken mdboxes
...as that my rsync was not run with --delete, and had to be run multiple times over the course of a few weeks due to bandwidth disparities and the sheer size of the mail spool approaching 700GB (source server had 10mbps. Should have used sneakernet instead). The bottom line however is that I wound up having many extra m.* files in each user storage dir that were not actually p...
2017 Feb 01
2
virt-p2v migration
...only the successfully migrated sda and associated files, but I suspect the second as well. Most of the rest of the log can be seen at: http://theninthdimension.blogspot.co.uk/2017/02/virt-p2v-error.html The conversion host has plenty of disk space in the migration path: the image of sdb is about 700GB, and it has 5TB free. I can always successfully migrate just sda. I can't just choose sdb as it needs an OS disk included. Details of the hosts being migrated: RHEL 5 x86_64 Details of the conversion server: Fedora 23 with virt-v2v-1.32.10-1.fc23.x86_64, libvirt-1.2.18.4-1.fc23.x86_64 The boo...
2011 Jul 11
3
Feature request, or HowTo? State-full resume rsync transfer
I am looking to do state-full resume of rsync transfers. My network environment is an unreliable and slow satellite infrastructure, and the files I need to send are approaching 10 gigs in size. In this network environment, links often cannot be maintained for more than a few minutes at a time. In this environment, bandwidth is at a premium, which is why rsync was chosen as ideal for the
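The closest built-in mechanism is --partial-dir, which keeps interrupted files for the next run to resume, wrapped in a retry loop - a sketch with placeholder paths and timings (--append-verify is an alternative for files that only ever grow):
    $ until rsync -a --partial-dir=.rsync-partial --timeout=120 /outgoing/ remote:/incoming/; do
    >     sleep 60    # link dropped: wait, then retry and resume from the kept partial files
    > done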
2010 Jul 08
5
No space left on device on not full filesystem
Hello, We are running lustre 1.8.1 and have hit a "No space left on device" error when uploading 500 GB of small files (less than 100 KB each). The problem seems to depend on the number of files. If we remove one file, we can create one new file, even GB-sized; but if we haven't removed something, we can't create even a very little file, for example using touch
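"No space left on device" with free blocks but very many small files usually points at inode (on Lustre, MDT object) exhaustion - a hedged check, with a placeholder mountpoint:
    $ df -i /mnt/lustre        # inode usage as the client sees it
    $ lfs df -i /mnt/lustre    # per-MDT/OST inode usage; a full MDT blocks all file creation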
2008 Nov 26
8
disk space issues...any help is greatly appreciated
Hi all, Please pardon my newbie-ness on this issue... I've a / partition which is full (quite suddenly, actually) and I'm not sure how to fix this. I've searched for unneeded logs, etc. in /var/log and /tmp to no avail. The system is CentOS 5.2 and is not connected to the internet; it serves as a local LAN server running stock stuff... sendmail, dovecot, apache... nothing strange or
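A sketch for hunting the space down while staying on the / filesystem only:
    $ du -xm --max-depth=1 / | sort -rn | head   # largest top-level dirs in MB; -x stays on this fs
    $ lsof +L1                                   # deleted-but-still-open files holding their space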
2007 Feb 12
17
NFS/ZFS performance problems - txg_wait_open() deadlocks?
Hi. System is snv_56 sun4u sparc SUNW,Sun-Fire-V440, zil_disable=1 We see many operation on nfs clients to that server really slow (like 90 seconds for unlink()). It''s not a problem with network, there''s also plenty oc CPU available. Storage isn''t saturated either. First strange thing - normally on that server nfsd has about 1500-2500 number of threads. I did