search for: 117g

Displaying 20 results from an estimated 25 matches for "117g".

2020 Sep 26
7
Using CentOS 7 to attempt recovery of failed disk
I have a disk that is flagging errors, and I'm attempting to rescue the data. I tried dd first - it gets about 117G into the 320G disk and then stops incrementing the save image. Now I'm trying ddrescue and it also stops at about the same point. Thoughts on how to continue past that point? Thanks, Jerry
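For anyone hitting the same wall, a minimal GNU ddrescue sketch for working past a bad region might look like the following (device and image names are hypothetical; the mapfile is what lets ddrescue record what it has already read, skip ahead on errors, and resume on a later run):

    # first pass: read the easy areas, skip bad spots quickly, log progress to a mapfile
    ddrescue -d -n /dev/sdX disk.img rescue.map
    # second pass: go back and retry the remaining bad areas a few times
    ddrescue -d -r3 /dev/sdX disk.img rescue.map

Re-running either command with the same mapfile continues from where the previous run stopped rather than starting over.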
2013 Nov 22
1
FreeBSD 10-BETA3 - zfs clone of zvol snapshot is not created
...oing something wrong, does ZFS not support this, or is there a bug, given that the zvol clone does not show up under /dev/zvol after being created from another zvol's snapshot?
# zfs list -t all | grep local
local                           136G  76.8G   144K  none
local/home                      117G  76.8G   117G  /home
local/vm                       18.4G  76.8G   144K  none
local/vm/vbox_pcbsd_10         5.35G  76.8G  5.35G  -
local/vm/vbox_windows_7        10.8G  76.8G  9.86G  -
local/vm/vbox_windows_7@clean   940M      -  8.12G  -
local/vm/vbox_wi...
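For reference, the sequence under discussion would normally be something like the sketch below (the clone name is hypothetical; the snapshot name is the one shown in the listing above):

    # the snapshot already exists in the listing (local/vm/vbox_windows_7@clean)
    zfs snapshot local/vm/vbox_windows_7@clean
    # clone the zvol from that snapshot
    zfs clone local/vm/vbox_windows_7@clean local/vm/vbox_windows_7_clone
    # on FreeBSD the clone is expected to appear as a device node under /dev/zvol
    ls /dev/zvol/local/vm/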
2020 Sep 26
1
Using CentOS 7 to attempt recovery of failed disk
...probably nothing there to read. On Sat, Sep 26, 2020 at 1:41 PM Jerry Geis <jerry.geis at gmail.com> wrote: > Hello > > I did try the "dd conv=noerror ?" > The ddrescue doesn't stop - it just doesn't "continue" past a certain > point. Somewhere around the 117G mark it just doesn't go past that. > (Same with dd: it gets to 117G and just doesn't continue.) > I have let the dd run all night - it did not go past the 117G mark. > > Thanks for any suggestions. > > Jerry > _______________________________________________ > CentOS mailing list > C...
2020 Sep 26
0
Using CentOS 7 to attempt recovery of failed disk
Hello I did try the "dd conv=noerror ?". The ddrescue doesn't stop - it just doesn't "continue" past a certain point. Somewhere around the 117G mark it just doesn't go past that (same with dd: it gets to 117G and just doesn't continue). I have let the dd run all night - it did not go past the 117G mark. Thanks for any suggestions. Jerry
2020 Sep 27
2
Using CentOS 7 to attempt recovery of failed disk
...i Galtsev <galtsev at kicp.uchicago.edu> wrote: > > > > On Sep 26, 2020, at 8:05 AM, Jerry Geis <jerry.geis at gmail.com> wrote: > > > > I have a disk that is flagging errors, attempting to rescue the data. > > > > I tried dd first - it gets about 117G of 320G disk and stops incrementing > > the save image any more. > > did you try > > dd conv=noerror ? > > this flag makes dd not stop on input error. Whatever is irrecoverable is irrecoverable, but this way you will get stuff > beyond the failure point. You need conv=n...
2015 Apr 18
2
Dovecot 2.2.16: disappearing messages, mismatched summaries, duplicated messages, excessive full re-downloads
...These really shouldn't be happening.. > One possibility is that there is a scrub/verify routine running that is checking the actual size vs. reported size of messages, and perhaps that routine doesn't know about ZFS compression:
zroot/ezjail  used           117G   -
zroot/ezjail  compressratio  1.25x  -
zroot/ezjail  compression    lz4    local
zroot/ezjail  logicalused    137G   -
and is seeing something anomalous and trying to "fix" that, triggering a...
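The property names quoted above are standard ZFS ones; a minimal sketch of pulling the compressed and logical sizes side by side (dataset name taken from the quote) would be:

    # compare on-disk (compressed) usage with the logical, uncompressed size
    zfs get used,logicalused,compressratio,compression zroot/ezjail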
2023 Mar 15
1
How to configure?
...nonymous-inode: off
diagnostics.brick-sys-log-level: CRITICAL
features.scrub-freq: monthly
cluster.data-self-heal: on
cluster.brick-multiplex: on
cluster.daemon-log-level: ERROR
-8<--
htop reports that memory usage is up to 143G, there are 602 tasks and 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 threads on clustor01 and 126G/45 tasks/1574 threads on clustor02. I see quite a lot (284!) of glfsheal processes running on clustor00 (a "gluster v heal cluster_data info summary" has been running on clustor02 since yesterday, still no output). Shouldn't it be just one per bri...
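As an illustration of the checks being described (the volume name comes from the quoted command; these are standard gluster and procps invocations):

    # heal backlog summary for the volume
    gluster volume heal cluster_data info summary
    # count how many glfsheal helper processes are currently running
    pgrep -c glfsheal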
2020 Sep 26
0
Using CentOS 7 to attempt recovery of failed disk
> On Sep 26, 2020, at 8:05 AM, Jerry Geis <jerry.geis at gmail.com> wrote: > > I have a disk that is flagging errors, attempting to rescue the data. > > I tried dd first - it gets about 117G of 320G disk and stops incrementing > the save image any more. Did you try dd conv=noerror ? This flag makes dd not stop on input error. Whatever is irrecoverable is irrecoverable, but this way you will get the stuff beyond the failure point. Valeri > > Now I'm trying ddrescue and it als...
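As a concrete illustration of that suggestion, a dd invocation along these lines is commonly used on failing disks (device and image names are hypothetical; conv=sync pads unreadable blocks so offsets in the image stay aligned):

    # keep reading past errors; pad bad blocks so the image layout is preserved
    # status=progress needs a reasonably recent GNU coreutils; drop it if dd rejects it
    dd if=/dev/sdX of=disk.img bs=64K conv=noerror,sync iflag=fullblock status=progress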
2020 Sep 27
0
Using CentOS 7 to attempt recovery of failed disk
...chicago.edu> wrote: > > > > > > > On Sep 26, 2020, at 8:05 AM, Jerry Geis <jerry.geis at gmail.com> wrote: > > > > > > I have a disk that is flagging errors, attempting to rescue the data. > > > > > > I tried dd first - it gets about 117G of 320G disk and stops > incrementing > > > the save image any more. > > > > did you try > > > > dd conv=noerror ? > > > > this flag makes dd not stop on input error. Whatever is irrecoverable is > irrecoverable, but this way you will get stuff >...
2015 Apr 18
0
Dovecot 2.2.16: disappearing messages, mismatched summaries, duplicated messages, excessive full re-downloads
...7215.host,S=5389,W=5442:2,S)
> One possibility is that there is a scrub/verify routine running that is checking the actual size vs. reported size of messages, and perhaps that routine doesn't know about ZFS compression:
> zroot/ezjail  used           117G   -
> zroot/ezjail  compressratio  1.25x  -
> zroot/ezjail  compression    lz4    local
> zroot/ezjail  logicalused    137G   -
> and is seeing something anomalous and trying to "fix"...
2008 Apr 01
2
strange error in df -h
Hi All, I just saw this in output from df -h:
# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  131G  4.6G  120G   4% /
/dev/sdc1                        271G  141G  117G  55% /home
/dev/sdd1                        271G  3.9G  253G   2% /home/admin
/dev/sda1                         99M   20M   74M  22% /boot
tmpfs                            442M     0  442M   0% /dev/shm
/dev/hda                          11M   11M     0 100% /media/TestCD
df: `status': No such file or directory
df: `status': No such f...
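A hedged pointer for errors of this shape: it is worth checking whether the mount tables still contain an entry (here apparently named `status') whose target no longer exists, which df would then fail to stat. A minimal sketch:

    # look for a stale or phantom mount entry that df might be tripping over
    grep status /proc/mounts
    grep status /etc/mtab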
2023 Mar 16
1
How to configure?
...features.scrub-freq: monthly > cluster.data-self-heal: on > cluster.brick-multiplex: on > cluster.daemon-log-level: ERROR > -8<-- > > htop reports that memory usage is up to 143G, there are 602 tasks and > 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 threads on > clustor01 and 126G/45 tasks/1574 threads on clustor02. > I see quite a lot (284!) of glfsheal processes running on clustor00 (a > "gluster v heal cluster_data info summary" is running on clustor02 > since > yesterday, still no...
2023 Mar 16
1
How to configure?
...L > features.scrub-freq: monthly > cluster.data-self-heal: on > cluster.brick-multiplex: on > cluster.daemon-log-level: ERROR > -8<-- > > htop reports that memory usage is up to 143G, there are 602 tasks and > 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 threads on > clustor01 and 126G/45 tasks/1574 threads on clustor02. > I see quite a lot (284!) of glfsheal processes running on clustor00 (a > "gluster v heal cluster_data info summary" is running on clustor02 > since > yesterday, still no outpu...
2023 Mar 15
1
How to configure?
Do you use brick multiplexing? Best Regards, Strahil Nikolov On Tue, Mar 14, 2023 at 16:44, Diego Zuccato <diego.zuccato at unibo.it> wrote: Hello all. Our Gluster 9.6 cluster is showing increasing problems. Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 cores dual thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 [12TB]), configured in replica 3
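For the question above, a sketch of how the setting could be checked, assuming a recent Gluster where brick multiplexing is a cluster-wide option (the heal command quoted elsewhere in this thread suggests the volume is cluster_data):

    # show the current value of the cluster-wide brick multiplexing option
    gluster volume get all cluster.brick-multiplex
    # it is documented as being toggled cluster-wide, e.g.:
    # gluster volume set all cluster.brick-multiplex on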
2023 Mar 21
1
How to configure?
...al: on > > cluster.brick-multiplex: on > > cluster.daemon-log-level: ERROR > > -8<-- > > > > htop reports that memory usage is up to 143G, there are 602 > tasks and > > 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 > threads on > > clustor01 and 126G/45 tasks/1574 threads on clustor02. > > I see quite a lot (284!) of glfsheal processes running on > clustor00 (a > > "gluster v heal cluster_data info summary" is running on clustor02 ...
2017 May 18
2
[R] R-3.4.0 fails test
...igabit Ethernet Controller driver: e1000 v: 7.3.21-k8-NAPI port: d240 bus-ID: 00:08.0 IF: eth0 state: up speed: 1000 Mbps duplex: full mac: <filter>
Drives: HDD Total Size: 131.7GB (56.5% used) ID-1: /dev/sda model: VBOX_HARDDISK size: 131.7GB
Partition: ID-1: / size: 117G used: 66G (60%) fs: ext4 dev: /dev/sda1 ID-2: swap-1 size: 4.16GB used: 0.00GB (0%) fs: swap dev: /dev/sda5
RAID: No RAID devices: /proc/mdstat, md_mod kernel module present
Sensors: None detected - is lm-sensors installed and configured?
Info: Processes: 256 Uptime: 17:01 Me...
2023 Mar 21
1
How to configure?
...> > cluster.daemon-log-level: ERROR > > > -8<-- > > > > > > htop reports that memory usage is up to 143G, there are 602 > > tasks and > > > 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 > > threads on > > > clustor01 and 126G/45 tasks/1574 threads on clustor02. > > > I see quite a lot (284!) of glfsheal processes running on > > clustor00 (a > > > "gluster v heal cluster...
2023 Mar 21
1
How to configure?
...> > cluster.daemon-log-level: ERROR > > > -8<-- > > > > > > htop reports that memory usage is up to 143G, there are 602 > > tasks and > > > 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 > > threads on > > > clustor01 and 126G/45 tasks/1574 threads on clustor02. > > > I see quite a lot (284!) of glfsheal processes running on > > clustor00 (a > > > "gluster v heal cluster...
2023 Mar 24
1
How to configure?
...> > > -8<-- > > > > > > > > htop reports that memory usage is up to 143G, > there are 602 > > > tasks and > > > > 5232 threads (~20 running) on clustor00, 117G/49 > tasks/1565 > > > threads on > > > > clustor01 and 126G/45 tasks/1574 threads on > clustor02. > > > > I see quite a lot (284!) of glfsheal processes > running on > > > cl...