similar to: 700GB gone?

Displaying 20 results from an estimated 130 matches similar to: "700GB gone?"

2011 May 17 (3 replies): Reboots when importing old rpool
I have a fresh install of Solaris 11 Express on a new SSD. I have inserted the old hard disk, and tried to import it, with: # zpool import -f <long id number> Old_rpool but the computer reboots. Why is that? On my old hard disk, I have 10-20 BE, starting with OpenSolaris 2009.06 and upgraded to b134 up to snv_151a. I also have a WinXP entry in GRUB. This hard disk is partitioned, with a
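
(Not part of the thread, but a common way to poke at a pool like this without letting it take the box down again is to list it first and then import it read-only under an alternate root, assuming the release supports read-only import; the pool name is taken from the post.)

  # Show pools that are visible for import, without importing anything
  zpool import

  # Force-import read-only, mounted under /a instead of over the live system
  zpool import -f -o readonly=on -R /a Old_rpool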

2016 Apr 18 (3 replies): Suddenly increased my hard disk
Hi Geleem, Please have a look at my result below. My system shows the following:

df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       909G  576G  287G  67% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
/dev/sda1       3.9G  160M  3.5G   5% /boot
/dev/sdb1       916G  382G  488G  44% /bkhdd

First hard disk: /dev/sda. Second hard disk: /dev/sdb. The problem is in the first hard
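
(Not part of the original message, but a sketch of how one might chase where the extra usage on / actually lives; assumes the GNU coreutils shipped with CentOS.)

  # Usage per top-level directory on the root filesystem only (-x stays on one
  # filesystem), largest first
  du -xh --max-depth=1 / 2>/dev/null | sort -rh | head -20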

2009 Jul 10 (5 replies): Slow Resilvering Performance
I know this topic has been discussed many times... but what the hell makes zpool resilvering so slow? I'm running OpenSolaris 2009.06. I have had a large number of problematic disks due to a bad production batch, leading me to resilver quite a few times, progressively replacing each disk as it dies (and now preemptively removing disks). My complaint is that resilvering ends up
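
(Not from the thread; a minimal way to watch what the resilver is actually doing on an OpenSolaris-era box. The pool name is a placeholder.)

  # Resilver progress, scan rate and estimated time remaining
  zpool status -v mypool

  # Per-device throughput and service times while the resilver runs
  iostat -xn 5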

2016 Apr 07 (1 reply): Suddenly increased my hard disk
Hi John, Thank you. My system shows the following:

df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       909G  576G  287G  67% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
/dev/sda1       3.9G  160M  3.5G   5% /boot
/dev/sdb1       916G  382G  488G  44% /bkhdd

First hard disk: /dev/sda. Second hard disk: /dev/sdb. The problem is that the first hard disk shows the above size but my email

2016 Jul 12 (3 replies): Broken output for fdisk -l
Hi, There was some problem with our system so I re-installed the server with CentOS 7. Now, when I try to run the 'fdisk -l' command, it returns broken output. It throws this error: "fdisk: cannot open /dev/sdc: Input/output error". There are valid /dev/sdd and /dev/sde devices which are mounted and accessible, but somehow /dev/sdc is having a problem and
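
(Not in the original post, but the usual next step when one device returns an Input/output error while its neighbours are fine; smartctl comes from the smartmontools package.)

  # Kernel-side errors logged for the suspect disk
  dmesg | grep -i sdc

  # Overall SMART health plus the full attribute and error logs
  smartctl -H -a /dev/sdc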

2012 Mar 26 (2 replies): One disk speed problem [SOLVED], and a question on hdparm
I believe I've posted before about one of the speed issues we were having, of backups taking many, many hours that should *not* take that long. My manager and I finally nailed it down to the h/d itself. Identical boxes, and he tried a backup of one system which took under two hours, while the same regular one ran nearly six. I'd been googling on and off for weeks, and this morning, ran
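
(A sketch of the hdparm numbers that usually settle this sort of "identical boxes, very different speed" question; the device name is an example.)

  # Cached (-T) and buffered (-t) read timings; run a few times on a quiet box
  hdparm -tT /dev/sda

  # Check whether DMA mode or write-cache settings differ between the two machines
  hdparm -I /dev/sda | grep -iE 'udma|write cache'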

2011 Sep 12 (2 replies): Approximate size of CentOS mirror
Hey all, Roughly, how much disk space would I need on my server to mirror the entire ISO collection and the repository files. Also, how would I tell my server to only mirror CentOS 5 and 6? -- If you have any questions, please do not hesitate to contact me on +61 478 241 896. Regards, Christopher Hawker
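
(Not from the thread: a rough sketch of limiting a mirror to the 5.x and 6.x trees with rsync; the upstream host and module are placeholders, pick a mirror from the official list that offers rsync.)

  # Pull only top-level directories starting with 5 or 6; everything else at the
  # top level is excluded. -H preserves the hardlinks mirrors use to save space.
  rsync -avSH --delete \
        --include='/5*' --include='/6*' --exclude='/*' \
        rsync://mirror.example.org/centos/ /srv/mirror/centos/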

2016 Apr 07 (2 replies): Suddenly increased my hard disk
Hi John, Ashish, Still no luck. I have tried your commands in the root folder. It shows a max size of 384 only in the home directory, but if I try df -h it shows 579. Is there any way to find the recycle bin folder? On Thu, Apr 7, 2016 at 2:16 PM, Ashish Yadav <gwalashish at gmail.com> wrote: > Hi Chandran, > > > On Thu, Apr 7, 2016 at 10:38 AM, Chandran Manikandan <tech2mani at

2018 May 08 (1 reply): mount failing client to gluster cluster.
Hi, On a debian 9 client,
========
root at kvm01:/var/lib/libvirt# dpkg -l glusterfs-client
8><---
ii  glusterfs-client  3.8.8-1  amd64  clustered file-system (client package)
root at kvm01:/var/lib/libvirt#
=======
I am trying to do a mount to a Centos 7 gluster setup,
=======
[root at glustep1 libvirt]# rpm -q glusterfs
glusterfs-4.0.2-1.el7.x86_64
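
(Not part of the post; a sketch of mounting the volume with an explicit client log file so the failure reason is easy to find. The volume name is a guess and the server name is from the post; with a 3.8 client against a 4.0 server, the log is likely to show an op-version/handshake complaint.)

  # Mount with a dedicated log file; read it if the mount fails
  mount -t glusterfs -o log-file=/var/log/glusterfs/kvm01-gluster.log \
        glustep1:/gv0 /mnt/gluster

  tail -n 50 /var/log/glusterfs/kvm01-gluster.log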

2018 May 21 (2 replies): split brain? but where?
Hi, I seem to have a split brain issue, but I cannot figure out where this is and what it is; can someone help me please, I can't find what to fix here.
==========
root at salt-001:~# salt gluster* cmd.run 'df -h'
glusterp2.graywitch.co.nz:
    Filesystem                   Size  Used Avail Use% Mounted on
    /dev/mapper/centos-root
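
(Not in the message; gluster can usually name the offending entries itself, which answers the "where". The volume name gv0 is taken from later messages in this listing.)

  # Entries the self-heal daemon has flagged as split-brain
  gluster volume heal gv0 info split-brain

  # The wider heal backlog, including entries listed only by gfid
  gluster volume heal gv0 info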

2013 Jul 04 (3 replies): odd inconsistency with nfs
I'm having an interesting/odd problem with nfs (I think). We recently (Monday/Tuesday) upgraded our file server from an ancient redhat 7.3 system to a shiny new centos 6.4 system. We don't see any issues between the other centos boxes, but things get a bit weird when we start mounting on the old solaris clients. The initial symptom was that the 'tab complete' wasn't
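
(Not from the post, but a quick way to see what the old Solaris clients actually negotiated against the new CentOS server, since version or protocol differences are a common source of this kind of weirdness.)

  # On a Solaris client: NFS version, transport and options in effect per mount
  nfsstat -m

  # On the CentOS 6 server: which NFS versions are registered with the portmapper
  rpcinfo -p | grep nfs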

2018 May 22 (2 replies): split brain? but where?
Hi, Which version of gluster are you using? You can find which file that is using the following command: find <brickpath> -samefile <brickpath>/.glusterfs/<first two bits of gfid>/<next 2 bits of gfid>/<full gfid> Please provide the getfattr output of the file which is in split brain. The steps to recover from split-brain can be found here,
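
(A concrete form of the two commands being asked for, using the brick path and gfid that appear elsewhere in this listing; adjust to your own brick.)

  # Resolve the gfid file back to the real file on the brick
  find /bricks/brick1/gv0 -samefile \
       /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693

  # Show the changelog xattrs gluster uses to decide split-brain
  getfattr -d -m . -e hex <path-returned-by-find>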

2018 May 21 (0 replies): split brain? but where?
How do I find what "eafb8799-4e7a-4264-9213-26997c5a4693" is? https://docs.gluster.org/en/v3/Troubleshooting/gfid-to-path/ On May 21, 2018 3:22:01 PM PDT, Thing <thing.thing at gmail.com> wrote: >Hi, > >I seem to have a split brain issue, but I cannot figure out where this >is >and what it is, can someone help me pls, I cant find what to fix here. >
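
(The linked page boils down to something like the following; the server and volume names are placeholders. It resolves a bare gfid to a path through a special aux-gfid mount.)

  # Mount the volume with gfid access enabled
  mount -t glusterfs -o aux-gfid-mount glusterp2:/gv0 /mnt/gfid

  # Ask gluster which path(s) that gfid corresponds to
  getfattr -n trusted.glusterfs.pathinfo -e text \
           /mnt/gfid/.gfid/eafb8799-4e7a-4264-9213-26997c5a4693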

2016 Apr 07 (7 replies): Suddenly increased my hard disk
Hi All, I have a CentOS 6.5 32-bit machine running. This machine is running qmailtoaster packages and the mailbox size is 385 GB. If I run the df -h command it shows 385 GB out of 1 TB. I ran the same command today and it suddenly shows 576 GB out of 1 TB. I didn't update any bulk file and mail transactions are not very high. How do I check this issue and fix it? How do I find out, and why suddenly
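
(Not from the thread; when df jumps while the mailboxes have not grown, space held by deleted-but-still-open files is a usual suspect and is easy to check.)

  # Open files whose on-disk link count is zero, i.e. deleted but still using space
  lsof +L1

  # Cross-check: usage per top-level directory versus what df reports
  du -xsh /* 2>/dev/null | sort -rh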

2023 Mar 16 (1 reply): How to configure?
OOM is just a matter of time. Today mem use is up to 177G/187 and:
# ps aux|grep glfsheal|wc -l
551
(well, one is actually the grep process, so "only" 550 glfsheal processes). I'll take the last 5:
root 3266352 0.5 0.0 600292 93044 ? Sl 06:55 0:07 /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root 3267220 0.7 0.0 600292 91964 ?
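
(A side note on the counting itself: pgrep avoids the "one is actually the grep" correction.)

  # Count glfsheal helpers without matching the grep process itself
  pgrep -c -f glfsheal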

2018 May 22 (0 replies): split brain? but where?
I tried this already.
8><---
[root at glusterp2 fb]# find /bricks/brick1/gv0 -samefile /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
/bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
[root at glusterp2 fb]#
8><---
gluster 4
Centos 7.4
8><---
df -h
[root at glusterp2 fb]# df -h
Filesystem

2010 May 21 (2 replies): fsck.ocfs2 using huge amount of memory?
We are setting up 2 new EL5 U4 machines to replace our current database servers running our demo environment. We use 3Par SANs and their snap clone options. The current production system we snap clone from is EL4 U5 with ocfs2 1.2.9; the new servers have ocfs2 1.4.3 installed. Part of the refresh process is to run fsck.ocfs2 on the volume to recover, but right now as I am trying to run it on our
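
(For reference, the kind of invocation being described; the device path is a placeholder, -f forces a full check even if the volume looks clean and -y answers yes to all repairs.)

  # Forced, non-interactive check of the snap-cloned ocfs2 volume
  fsck.ocfs2 -fy /dev/mapper/demo_db_vol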

2023 Mar 16 (1 reply): How to configure?
Can you restart the glusterd service (first check that it was not modified to kill the bricks)? Best Regards, Strahil Nikolov On Thu, Mar 16, 2023 at 8:26, Diego Zuccato <diego.zuccato at unibo.it> wrote: OOM is just a matter of time. Today mem use is up to 177G/187 and: # ps aux|grep glfsheal|wc -l 551 (well, one is actually the grep process, so "only" 550 glfsheal
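
(What that suggestion looks like on a systemd box; restarting the management daemon normally leaves brick and client processes running, but check the unit first as the reply says.)

  # Confirm the unit has not been customised (e.g. a stop action that kills bricks)
  systemctl cat glusterd

  # Then restart only the management daemon
  systemctl restart glusterd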

2018 May 22 (1 reply): split brain? but where?
I tried looking for a file of the same size and the gfid doesn't show up.
8><---
[root at glusterp2 fb]# pwd
/bricks/brick1/gv0/.glusterfs/ea/fb
[root at glusterp2 fb]# ls -al
total 3130892
drwx------. 2 root root         64 May 22 13:01 .
drwx------. 4 root root         24 May  8 14:27 ..
-rw-------. 1 root root 3294887936 May  4 11:07 eafb8799-4e7a-4264-9213-26997c5a4693
-rw-r--r--. 1 root

2012 Oct 01 (5 replies): s3 as mysql directory
Hello list, I am soliciting opinion here, as opposed to technical help, with an idea I have. I've set up a Bacula backup system on an AWS volume. Bacula stores a LOT of information in its MySQL database (in my setup; you can also use Postgres or SQLite if you choose). Since I've started doing this I notice that the MySQL data directory has swelled to over 700GB! That's quite a lot and
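
(Purely to make the idea concrete: mounting a bucket with s3fs-fuse and pointing the catalog database's datadir at it. The bucket, mount point and option values are placeholders, and whether MySQL tolerates the latency and consistency semantics is exactly the opinion being solicited.)

  # Mount the bucket (credentials in /etc/passwd-s3fs, local cache under /tmp)
  s3fs bacula-catalog /srv/mysql-s3 -o passwd_file=/etc/passwd-s3fs -o use_cache=/tmp/s3fs

  # Point mysqld at it in my.cnf:
  #   [mysqld]
  #   datadir = /srv/mysql-s3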