Displaying 10 results from an estimated 10 matches for "235m".
2018 May 22
0
split brain? but where?
...47G   38M   47G   1% /home
/dev/mapper/centos-var_lib                                9.4G  178M  9.2G   2% /var/lib
/dev/mapper/vg--gluster--prod--1--2-gluster--prod--1--2   932G  263G  669G  29% /bricks/brick1
/dev/sda1                                                 950M  235M  715M  25% /boot
8><---
So the output isn't helping...
On 23 May 2018 at 00:29, Karthik Subrahmanya <ksubrahm at redhat.com> wrote:
> Hi,
>
> Which version of gluster are you using?
>
> You can find which file that is using the following command
> find...
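The command from Karthik's reply is cut off in this excerpt; for reference, a minimal sketch of the usual split-brain lookup, assuming the volume name gv0 and brick path /bricks/brick1 seen in the thread, with a purely hypothetical GFID:

# List entries reported to be in split brain on the volume
gluster volume heal gv0 info split-brain

# If only a <gfid:...> entry is shown, map it back to a file path by
# finding the hard link of its .glusterfs entry on the brick
# (the GFID below is a placeholder)
find /bricks/brick1 -samefile \
    /bricks/brick1/.glusterfs/0a/fb/0afb5715-xxxx-xxxx-xxxx-xxxxxxxxxxxx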
2018 May 22
2
split brain? but where?
...> > /dev/mapper/centos-var_lib                                9.4G  178M  9.2G   2% /var/lib
> > /dev/mapper/vg--gluster--prod--1--2-gluster--prod--1--2   932G  264G  668G  29% /bricks/brick1
> > /dev/sda1                                                 950M  235M  715M  25% /boot
> > tmpfs                                                     771M   12K  771M   1% /run/user/42
> > glusterp2:gv0/glusterp2/images                            932G  273G  659G  30% /var/lib/libvirt/images
> > glusterp2:gv0...
2018 May 22
1
split brain? but where?
...47G   38M   47G   1% /home
> /dev/mapper/centos-var_lib                                9.4G  178M  9.2G   2% /var/lib
> /dev/mapper/vg--gluster--prod--1--2-gluster--prod--1--2   932G  263G  669G  29% /bricks/brick1
> /dev/sda1                                                 950M  235M  715M  25% /boot
> 8><---
>
> So the output isn't helping...
>
> On 23 May 2018 at 00:29, Karthik Subrahmanya <ksubrahm at redhat.com> wrote:
>
>> Hi,
>>
>> Which version of gluster are you using?
>...
2019 Oct 12
0
qemu on centos 8 with nvme disk
I have CentOS 8 installed solely on one nvme drive and it works fine and
relatively quickly.
/dev/nvme0n1p4          218G   50G  168G  23% /
/dev/nvme0n1p2          2.0G  235M  1.6G  13% /boot
/dev/nvme0n1p1          200M  6.8M  194M   4% /boot/efi
You might want to partition the device (p3 is swap)
Alan
On 13/10/2019 10:38, Jerry Geis wrote:
> Hi All - I use qemu on my centOS 7.7 box that has a software RAID of 2 SSD
> disks.
>
> I installed an nVME drive...
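Following Alan's partitioning suggestion, a rough sketch of laying the NVMe device out along the lines of his p1/p2/p3/p4 scheme (EFI, /boot, swap, /) before installing; the sizes are illustrative only and the commands wipe whatever is on the device:

# WARNING: destroys existing data on /dev/nvme0n1 - example layout only
parted -s /dev/nvme0n1 mklabel gpt
parted -s /dev/nvme0n1 mkpart ESP fat32 1MiB 201MiB             # p1 -> /boot/efi
parted -s /dev/nvme0n1 set 1 esp on
parted -s /dev/nvme0n1 mkpart boot xfs 201MiB 2249MiB           # p2 -> /boot
parted -s /dev/nvme0n1 mkpart swap linux-swap 2249MiB 10GiB     # p3 -> swap
parted -s /dev/nvme0n1 mkpart root xfs 10GiB 100%               # p4 -> /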
2018 May 21
2
split brain? but where?
...112G   33M  112G   1% /data1
/dev/mapper/centos-var_lib                                9.4G  178M  9.2G   2% /var/lib
/dev/mapper/vg--gluster--prod--1--2-gluster--prod--1--2   932G  264G  668G  29% /bricks/brick1
/dev/sda1                                                 950M  235M  715M  25% /boot
tmpfs                                                     771M   12K  771M   1% /run/user/42
glusterp2:gv0/glusterp2/images                            932G  273G  659G  30% /var/lib/libvirt/images
glusterp2:gv0                                             932G  273G  659G  30%...
2018 May 21
0
split brain? but where?
...33M  112G   1% /data1
> /dev/mapper/centos-var_lib                                9.4G  178M  9.2G   2% /var/lib
> /dev/mapper/vg--gluster--prod--1--2-gluster--prod--1--2   932G  264G  668G  29% /bricks/brick1
> /dev/sda1                                                 950M  235M  715M  25% /boot
> tmpfs                                                     771M   12K  771M   1% /run/user/42
> glusterp2:gv0/glusterp2/images                            932G  273G  659G  30% /var/lib/libvirt/images
> glusterp2:gv0...
2013 Jan 26
4
Write failure on distributed volume with free space available
...0866302 s, 194 MB/s
Filesystem Size Used Avail Use% Mounted on
192.168.192.5:/test 291M 219M 73M 76% /mnt/gluster1
1+0 records in
1+0 records out
16777216 bytes (17 MB) copied, 0.0898677 s, 187 MB/s
Filesystem Size Used Avail Use% Mounted on
192.168.192.5:/test 291M 235M 57M 81% /mnt/gluster1
dd: opening `16_18': No space left on device
Filesystem Size Used Avail Use% Mounted on
192.168.192.5:/test 291M 235M 57M 81% /mnt/gluster1
1+0 records in
1+0 records out
16777216 bytes (17 MB) copied, 0.126375 s, 133 MB/s
Filesystem Size U...
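Judging from the record counts (16777216 bytes per file) and file names like 16_18, the test behind this output was presumably something like the following sketch; the mount point /mnt/gluster1 comes from the excerpt, while the loop bounds are guesses:

# Write 16 MB files onto the distributed volume and check free space
# after each one, until a write fails with "No space left on device"
for i in $(seq 1 20); do
    dd if=/dev/zero of=/mnt/gluster1/16_$i bs=16M count=1
    df -h /mnt/gluster1
done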
2019 Oct 12
7
qemu on centos 8 with nvme disk
Hi All - I use qemu on my centOS 7.7 box that has a software RAID of 2 SSD
disks.
I installed an nVME drive in the computer also. I tried to install CentOS 8
on it
(the physical /dev/nvme0n1, with -hda /dev/nvme0n1 as the disk).
The process started installing but is really "slow" - I was expecting with
the nvme device it would be much quicker.
Is there something I am missing how to
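One thing often worth checking in this situation is whether the guest disk is attached through the legacy -hda (IDE emulation) path rather than virtio; a hedged sketch of the installer invocation with a virtio disk instead (memory size, ISO file name, and cache setting are placeholders, not taken from the thread):

# Boot the CentOS 8 installer with the NVMe device as a virtio disk
qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 \
    -drive file=/dev/nvme0n1,format=raw,if=virtio,cache=none \
    -cdrom CentOS-8-x86_64-dvd1.iso -boot d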
2006 Aug 24
5
unaccounted for daily growth in ZFS disk space usage
We finally flipped the switch on one of our ZFS-based servers, with
approximately 1TB of 2.8TB (3 stripes of 950GB or so, each of which is
a RAID5 volume on the adaptec card). We have snapshots every 4 hours
for the first few days. If you add up the snapshot references it
appears somewhat high versus daily use (mostly mail boxes, spam, etc
changing), but say an aggregate of no more than 400+MB a
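A minimal sketch of the space-accounting commands that usually answer this kind of question (the dataset name tank/mail is a placeholder, and the usedby* properties are only available on reasonably recent ZFS releases):

# Space unique to each snapshot, listed per snapshot
zfs list -r -t snapshot -o name,used,referenced tank/mail

# How the dataset's "used" total breaks down
zfs get usedbydataset,usedbysnapshots,usedbychildren,usedbyrefreservation tank/mail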
2010 Apr 02
6
L2ARC & Workingset Size
...erations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
hyb 2.52G 925G 91 5 985K 210K
c10t0d0 2.52G 925G 91 5 985K 210K
cache - - - - - -
c9t0d0 235M 7.22G 0 10 0 802K
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
hyb 2.52G 925G 73 15 590K 671K...
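For reference, the "cache" line in that listing is an L2ARC device attached to the pool; a minimal sketch of adding one and watching it warm up, using the pool and device names from the excerpt:

# Attach c9t0d0 to pool 'hyb' as an L2ARC cache device
zpool add hyb cache c9t0d0

# Per-device capacity, operations and bandwidth, refreshed every 5 seconds
zpool iostat -v hyb 5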