search for: 19g

Displaying 20 results from an estimated 45 matches for "19g".

2024 Oct 19
2
How many disks can fail before a catastrophic failure occurs?
...with this number of disks on each side:

pve01:~# df | grep disco
/dev/sdd          1.0T  9.4G 1015G   1% /disco1TB-0
/dev/sdh          1.0T  9.3G 1015G   1% /disco1TB-3
/dev/sde          1.0T  9.5G 1015G   1% /disco1TB-1
/dev/sdf          1.0T  9.4G 1015G   1% /disco1TB-2
/dev/sdg          2.0T   19G  2.0T   1% /disco2TB-1
/dev/sdc          2.0T   19G  2.0T   1% /disco2TB-0
/dev/sdj          1.0T  9.2G 1015G   1% /disco1TB-4

I have a Type: Distributed-Replicate gluster. So my question is: how many disks can be in a failed state before I lose data? Thanks in advance --- Gilberto Nunes...
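For a Distributed-Replicate volume, fault tolerance is per replica set rather than per volume: each replica set can lose all but one of its bricks before that set's data becomes unavailable. A minimal sketch of how to check the layout (the volume name VMS is a placeholder; the commands themselves are standard gluster CLI):

    gluster volume info VMS
    # Look for a line such as "Number of Bricks: 7 x 2 = 14".
    # "7 x 2" means 7 replica sets of 2 bricks each: any one brick per
    # pair may fail, but losing both bricks of the same pair makes that
    # subvolume's data unavailable.
    gluster volume status VMS    # shows which bricks are currently online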
2018 May 22
0
split brain? but where?
...ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
[root at glusterp2 fb]#
8><---

gluster 4
CentOS 7.4

8><---
df -h
[root at glusterp2 fb]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   19G  3.4G   16G  19% /
devtmpfs                 3.8G     0  3.8G   0% /dev
tmpfs                    3.8G   12K  3.8G   1% /dev/shm
tmpfs                    3.8G  9.0M  3.8G   1% /run
tmpfs...
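When a split-brain is suspected but the affected path is unknown, gluster can list the entries it considers split-brained directly; a minimal sketch, with VOLNAME as a placeholder for the volume name:

    gluster volume heal VOLNAME info split-brain
    # Lists, per brick, the paths/GFIDs gluster considers split-brained.
    gluster volume heal VOLNAME info
    # Broader view: everything still pending heal, not only split-brain.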
2009 Oct 22
3
what else is missing in 5.4?
[root at alan centos]# du -sh 5.*
19G     5.3
14G     5.4
-- "Don't eat anything you've ever seen advertised on TV" - Michael Pollan, author of "In Defense of Food"
2018 May 22
2
split brain? but where?
...>==========
> > root at salt-001:~# salt gluster* cmd.run 'df -h'
> > glusterp2.graywitch.co.nz:
> >     Filesystem               Size  Used Avail Use% Mounted on
> >     /dev/mapper/centos-root   19G  3.4G   16G  19% /
> >     devtmpfs                 3.8G     0  3.8G   0% /dev
> >     tmpfs                    3.8G   12K  3.8G   1% /dev/shm
> >     tmpfs...
2018 May 22
1
split brain? but where?
...lusterp2 fb]#
> 8><---
>
> gluster 4
> CentOS 7.4
>
> 8><---
> df -h
> [root at glusterp2 fb]# df -h
> Filesystem               Size  Used Avail Use% Mounted on
> /dev/mapper/centos-root   19G  3.4G   16G  19% /
> devtmpfs                 3.8G     0  3.8G   0% /dev
> tmpfs                    3.8G   12K  3.8G   1% /dev/shm
> tmpfs                    3.8G  9.0M  3.8G   1...
2024 Oct 20
1
How many disks can fail before a catastrophic failure occurs?
...s with this number of disks on each side:

pve01:~# df | grep disco
/dev/sdd          1.0T  9.4G 1015G   1% /disco1TB-0
/dev/sdh          1.0T  9.3G 1015G   1% /disco1TB-3
/dev/sde          1.0T  9.5G 1015G   1% /disco1TB-1
/dev/sdf          1.0T  9.4G 1015G   1% /disco1TB-2
/dev/sdg          2.0T   19G  2.0T   1% /disco2TB-1
/dev/sdc          2.0T   19G  2.0T   1% /disco2TB-0
/dev/sdj          1.0T  9.2G 1015G   1% /disco1TB-4

I have a Type: Distributed-Replicate gluster. So my question is: how many disks can be in a failed state before I lose data? Thanks in advance --- Gilberto Nunes Ferr...
2024 Oct 21
1
How many disks can fail before a catastrophic failure occurs?
...side:
>
> pve01:~# df | grep disco
> /dev/sdd          1.0T  9.4G 1015G   1% /disco1TB-0
> /dev/sdh          1.0T  9.3G 1015G   1% /disco1TB-3
> /dev/sde          1.0T  9.5G 1015G   1% /disco1TB-1
> /dev/sdf          1.0T  9.4G 1015G   1% /disco1TB-2
> /dev/sdg          2.0T   19G  2.0T   1% /disco2TB-1
> /dev/sdc          2.0T   19G  2.0T   1% /disco2TB-0
> /dev/sdj          1.0T  9.2G 1015G   1% /disco1TB-4
>
> I have a Type: Distributed-Replicate gluster
> So my question is: how many disks can be in a failed state before I lose data?
>
> T...
2006 May 11
5
Issue with hard links, please help!
Hello, Sometimes when creating hard links to the rsync destination directory, it seems as if the new directory (created by the cp -al command) ends up with all the data. This causes a problem: if the rsync destination directory held 21 GB, then after the cp -al command it appears to hold only 8 MB, and the rsync source directory then determines that it now requires 21.98 GB to update the
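What is described above is usually du's hard-link accounting rather than data actually moving: within a single invocation, du counts each set of hard-linked blocks only once and charges them to the first path it scans. A minimal sketch that reproduces the effect (paths are illustrative):

    mkdir -p src
    dd if=/dev/zero of=src/big bs=1M count=100   # create a 100 MB file
    cp -al src dst          # hard-link copy: no new data blocks allocated
    du -sh src dst          # blocks are charged to src; dst looks nearly empty
    du -sh dst              # scanned alone, dst shows the full 100 MB again
    ls -li src/big dst/big  # same inode number, link count 2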
2018 May 21
2
split brain? but where?
...ne help me pls, I can't find what to fix here.
==========
root at salt-001:~# salt gluster* cmd.run 'df -h'
glusterp2.graywitch.co.nz:
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/centos-root   19G  3.4G   16G  19% /
    devtmpfs                 3.8G     0  3.8G   0% /dev
    tmpfs                    3.8G   12K  3.8G   1% /dev/shm
    tmpfs                    3.8G  9.1M  3.8G   1% /run
    tmpfs...
2014 Dec 07
1
Permission issues
...noauto  0       0
UUID=875f1e47-9bf8-4d25-b629-fb777bb183b7       /disk2          ext4    user_xattr,acl,barrier=1        1 1

root at samba4:~# df -h
Filesystem                                               Size  Used Avail Use% Mounted on
rootfs                                                    19G  2.0G   16G  11% /
udev                                                      10M     0   10M   0% /dev
tmpfs                                                    396M  188K  396M   1% /run
/dev/disk/by-uuid/eda48b2a-5977-4651-a735-807ca9056802    19G  2.0G   16G  11% /
tmpfs...
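If the permission issues come down to the ACL/xattr options not actually being active on /disk2, it is worth confirming that the live mount matches the fstab entry above; a minimal sketch using standard tools:

    mount | grep /disk2                       # options the kernel actually applied
    mount -o remount,user_xattr,acl /disk2    # re-apply options without unmounting
    getfacl /disk2                            # confirm ACLs are readable on the mount point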
2018 May 21
0
split brain? but where?
...what to fix here.
>
> ==========
> root at salt-001:~# salt gluster* cmd.run 'df -h'
> glusterp2.graywitch.co.nz:
>     Filesystem               Size  Used Avail Use% Mounted on
>     /dev/mapper/centos-root   19G  3.4G   16G  19% /
>     devtmpfs                 3.8G     0  3.8G   0% /dev
>     tmpfs                    3.8G   12K  3.8G   1% /dev/shm
>     tmpfs                    3.8G  9.1M  3.8...
2011 Jun 30
1
LOAD GNU/LINUX DEBIAN SYSTEM FROM ISO IMAGE. BUT NOT INSTALL
...ation I have an old GNU/Linux Debian server with the following disk configuration:

/dev/hda1  ext3   8.9G  4.6G  3.9G  55% /
tmpfs      tmpfs  252M     0  252M   0% /lib/init/rw
udev       tmpfs   10M  668K  9.4M   7% /dev
tmpfs      tmpfs  252M     0  252M   0% /dev/shm
/dev/hdb1  ext4    19G  7.7G  9.8G  44% /home

I want to port this physical machine to a VM on the Xen server, but I need to know whether it is possible to create an ISO image from the hda disk of this old server and load it directly into a VM with Xen. Can this work? Can anyone guide me through this task and port the old physical server to...
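Strictly speaking, what Xen boots as a file-backed guest disk is a raw block image rather than an ISO; a minimal sketch of capturing and attaching the old disk (paths and the guest config line are illustrative):

    # From a rescue boot, so the filesystems are quiescent, image the whole disk:
    dd if=/dev/hda of=/mnt/backup/old-server.img bs=4M conv=noerror,sync
    # Attach the raw image to a Xen guest as a file-backed disk, e.g. in an
    # xm-style guest configuration:
    #   disk = [ 'file:/var/lib/xen/images/old-server.img,hda,w' ]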
2015 Aug 04
2
451 4.3.0 Temporary internal failure
Hi, OS: Debian GNU/Linux 7

df -hT
Filesystem Type      Size  Used Avail Use% Mounted on
/dev/vzfs  reiserfs   30G   12G   19G  38% /

/etc/fstab
proc  /proc     proc    defaults           0 0
none  /dev/pts  devpts  rw,gid=5,mode=620  0 0
none  /run/shm  tmpfs   defaults           0 0

If someone knows an option to change the tmp directory in dovecot.conf, it would be very helpful. I can't find it. I can't in...
2019 Feb 13
4
/boot partition running out of space randomly. Please help!
...mpfs                               2.8G     0  2.8G   0% /dev
tmpfs                               2.8G     0  2.8G   0% /dev/shm
tmpfs                               2.8G  8.5M  2.8G   1% /run
tmpfs                               2.8G     0  2.8G   0% /sys/fs/cgroup
/dev/mapper/VolGroup00-LogVolRoot    30G   19G   12G  63% /
/dev/sda2                           594M  594M     0 100% /boot
/dev/sda1                           238M  9.7M  229M   5% /boot/efi
/dev/mapper/VolGroup00-LogVolHome   3.3G  415M  2.9G  13% /home
tmpfs                               565M     0  565M   0% /run/user/54321
tmpfs...
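A /boot partition at 100% is most often old kernels accumulating with each update; assuming an EL7-style system (which the VolGroup00 LVM naming above suggests), a minimal cleanup sketch:

    rpm -q kernel                            # list installed kernel packages
    package-cleanup --oldkernels --count=2   # from yum-utils: remove all but the 2 newest
    # To cap retained kernels going forward, set installonly_limit=2 in /etc/yum.conf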
2024 Feb 16
1
Graceful shutdown doesn't stop all Gluster processes
...ut not to unmount the external gluster volume mounted on the server. However, running stop-all-gluster-processes.sh unmounted the dc.local:/docker_config volume from the server.

/dev/mapper/tier1data     6.1T  4.7T  1.4T  78% /opt/tier1data/brick
dc.local:/docker_config   100G   81G   19G  82% /opt/docker_config

Do you think stop-all-gluster-processes.sh should unmount the fuse mount? Thanks, Anant
________________________________
From: Gluster-users <gluster-users-bounces at gluster.org> on behalf of Strahil Nikolov <hunter86_bg at yahoo.com>
Sent: 09 February 2024 5:...
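Since stop-all-gluster-processes.sh kills every gluster process on the node, including fuse client processes, it helps to know in advance which fuse mounts would be affected; a minimal sketch:

    findmnt -t fuse.glusterfs   # list all GlusterFS fuse mounts on this node
    # Anything listed here will drop when its client process is killed.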
2009 May 04
2
FW: Oracle 9204 installation on linux x86-64 on ocfs
...9nj3el21, s602749nj3el20
[root at s602749nj3el19 bin]# df -kh
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   2.0G  368M  1.6G  20% /
/dev/sda1                          61M   24M   35M  41% /boot
/dev/mapper/VolGroup00-LogVol05    19G   11G  7.6G  58% /data01
none                              7.9G     0  7.9G   0% /dev/shm
/dev/mapper/VolGroup00-LogVol04   2.0G   53M  1.9G   3% /tmp
/dev/mapper/VolGroup00-LogVol02   3.0G  1.8G  1.1G  64% /usr
/dev/mapper/VolGroup00-LogVol03   2.0G...
2024 Feb 16
2
Graceful shutdown doesn't stop all Gluster processes
...ut not to unmount the external gluster volume mounted on the server. However, running stop-all-gluster-processes.sh unmounted the dc.local:/docker_config volume from the server.

/dev/mapper/tier1data     6.1T  4.7T  1.4T  78% /opt/tier1data/brick
dc.local:/docker_config   100G   81G   19G  82% /opt/docker_config

Do you think stop-all-gluster-processes.sh should unmount the fuse mount? Thanks, Anant
________________________________
From: Gluster-users <gluster-users-bounces at gluster.org> on behalf of Strahil Nikolov <hun...
2024 Feb 16
1
Graceful shutdown doesn't stop all Gluster processes
...ot to unmount the external gluster volume mounted on the server. However, running stop-all-gluster-processes.sh unmounted the dc.local:/docker_config volume from the server.

/dev/mapper/tier1data     6.1T  4.7T  1.4T  78% /opt/tier1data/brick
dc.local:/docker_config   100G   81G   19G  82% /opt/docker_config

Do you think stop-all-gluster-processes.sh should unmount the fuse mount? Thanks, Anant
From: Gluster-users <gluster-users-bounces at gluster.org> on behalf of Strahil Nikolov <hunter86_bg at yahoo.com>
Sent: 09 February 2024 5:23 AM
To:...
2015 Aug 04
0
451 4.3.0 Temporary internal failure
...ich LDA/LMTP temporarily stores incoming mails >128 kB.
#mail_temp_dir = /tmp

Best, Urban

On 04.08.2015 at 12:36, Nutsch wrote:
> Hi,
>
> OS: Debian GNU/Linux 7
>
> df -hT
> Filesystem Type      Size  Used Avail Use% Mounted on
> /dev/vzfs  reiserfs   30G   12G   19G  38% /
>
> /etc/fstab
> proc  /proc     proc    defaults           0 0
> none  /dev/pts  devpts  rw,gid=5,mode=620  0 0
> none  /run/shm  tmpfs   defaults           0 0
>
> If someone knows an option to change the tmp directory in dovecot.conf, it would be very helpf...
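The reply above points at Dovecot's mail_temp_dir setting; a minimal sketch of applying it in dovecot.conf (the target path is illustrative):

    # /etc/dovecot/dovecot.conf
    # Directory in which LDA/LMTP temporarily stores incoming mails >128 kB;
    # point it at a filesystem with enough free space:
    mail_temp_dir = /var/tmp
    # then: doveadm reload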
2007 Apr 05
0
(open iscsi) initiator crashes
...Linux
> linux:/mnt # df -h
> Filesystem  Size  Used Avail Use% Mounted on
> /dev/sda3   9.9G  7.4G  2.0G  80% /
> udev        257M  204K  256M   1% /dev
> /dev/sda4    22G  6.9G   14G  35% /home
> /dev/sdb1    20G  173M   19G   1% /mnt
> linux:/mnt # ls -la
> total 24
> drwxr-xr-x  3 root root  4096 Apr  3 11:36 ./
> drwxr-xr-x 22 root root  4096 Apr  3 10:44 ../
> drwx------  2 root root 16384 Mar 30 16:27 lost+found/
> linux:/mnt # logger dk teststart
> linux:/mnt # while :
> >...