Displaying 11 results from an estimated 11 matches for "917g".
2011 Dec 31 · 1 · problem with missing bricks
Gluster-user folks,
I'm trying to use gluster in a way that may be considered an unusual use
case for gluster. Feel free to let me know if you think what I'm doing
is dumb. It just feels very comfortable doing this with gluster.
I have been using gluster in other, more orthodox configurations, for
several years.
I have a single system with 45 inexpensive sata drives - it's a
2010 Oct 21 · 2 · Bug? Mount and fstab
...-0 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 16G 2.6G 12G 18% /
/dev/sda5 883G 35G 803G 5% /state/partition1
/dev/sda2 3.8G 121M 3.5G 4% /var
tmpfs 7.7G 0 7.7G 0% /dev/shm
/dev/sdb1 917G 200M 871G 1% /gluster
none 7.7G 104K 7.7G 1% /var/lib/xenstored
glusterfs#/etc/glusterfs/glusterfs.vol
2.7T 600M 2.6T 1% /pifs
[root at vm-container-0-0 ~]# mount -a
[root at vm-container-0-0 ~]# mount -a
[root at vm-container-0-0 ~]# mount -a
[roo...
2015 Feb 22 · 5 · unable to umount
Hi,
on an EL5 XEN DOM0 system I have following volume
$ df -h /srv
Filesystem Size Used Avail Use% Mounted on
/dev/sdc1 917G 858G 60G 94% /srv
that partition was used by virtual machines but they were all halted.
service xendomains stop
$ xm list
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 3000 2 r----- 695.1
$ service xend st...
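Before retrying the umount, it can help to confirm what the kernel still thinks is mounted there. A small sketch (the helper name is mine, not from the thread; it only reads /proc/mounts):

```shell
#!/bin/sh
# Hypothetical helper: check whether a path is really a current mount
# point and show which device backs it, by scanning /proc/mounts.
mount_info() {
    mp="$1"
    awk -v mp="$mp" '$2 == mp { print $1 " mounted on " $2; found = 1 }
                     END { if (!found) print mp " not in /proc/mounts" }' /proc/mounts
}
mount_info /srv
# If umount still reports "device is busy", fuser -vm /srv
# (or lsof +f -- /srv) lists the processes holding files open there.
```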
2011 Feb 15 · 6 · HVM domU doesn't start
...count=921600
I can't remember very well, but I think the issue is that I created an
image with dd bigger than the hard disk size. If I do that, the dd
command doesn't report anything; let's see that.
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdd1  917G 1,7G  869G   1% /vserver/images/domains/mahone
Let's create an image of 900GB; it should report errors, but NO errors
are reported!
# dd if=/dev/zero of=/vserver/images/domains/mahone/mahone.img bs=1M
count=921600
921600+0 records in
921600+0 records out
966367641600 bytes (966 GB) cop...
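dd keeps writing until the filesystem itself runs out of room, so a size check up front avoids the surprise. A minimal pre-check sketch (the helper name is an assumption, not from the post; it uses portable `df -P` output in 1M blocks):

```shell
#!/bin/sh
# Hypothetical pre-check: refuse to run dd when the target filesystem
# cannot hold the requested image size.
space_check() {
    target_dir="$1"; need_mb="$2"
    # Column 4 of the POSIX df -P data line is "Available".
    avail_mb=$(df -Pm "$target_dir" | awk 'NR == 2 { print $4 }')
    if [ "$avail_mb" -lt "$need_mb" ]; then
        echo "not enough space: need ${need_mb}M, have ${avail_mb}M"
        return 1
    fi
    echo "ok: ${avail_mb}M available for a ${need_mb}M image"
}
# The post's image was bs=1M count=921600 (900 GB); demo with a tiny size:
space_check /tmp 1
```

Run against the post's target with `space_check /vserver/images/domains/mahone 921600` before the dd, the 869G of free space would have failed the check.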
2016 Aug 11 · 5 · Software RAID and GRUB on CentOS 7
Hi,
When I perform a software RAID 1 or RAID 5 installation on a LAN server
with several hard disks, I wonder if GRUB already gets installed on each
individual MBR, or if I have to do that manually. On CentOS 5.x and 6.x,
this had to be done like this:
# grub
grub> device (hd0) /dev/sda
grub> device (hd1) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> root (hd1,0)
grub>
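On CentOS 7 the legacy grub shell above is gone; the usual equivalent (a hedged sketch, and the disk names are examples, not from the thread) is to run grub2-install once per RAID member so the box can still boot if one drive dies:

```shell
#!/bin/sh
# Sketch for CentOS 7 / grub2. Adjust the disk list to your actual
# array members; 'echo' makes this a dry run -- remove it to install.
for disk in /dev/sda /dev/sdb; do
    echo grub2-install "$disk"
done
```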
2016 Jul 12 · 3 · Broken output for fdisk -l
...1.5T 234M 1.5T 1% /share
/dev/mapper/centos-home 1.8T 5.5G 1.7T 1% /home
/dev/sda1 497M 158M 340M 32% /boot
tmpfs 13G 32K 13G 1% /run/user/1000
tmpfs 13G 36K 13G 1% /run/user/0
/dev/sde 917G 625G 247G 72% /media/usb
[root at localhost ~]# fdisk -l /dev/sdc
fdisk: cannot open /dev/sdc: Input/output error
[root at localhost ~]#
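An Input/output error from fdisk usually means the kernel can no longer read the device at all, not that the partition table is merely damaged. A small triage sketch (the helper name and messages are mine, not from the thread):

```shell
#!/bin/sh
# Hypothetical triage helper: distinguish "device node gone" from
# "device present but unreadable" (the fdisk I/O error case).
triage() {
    dev="$1"
    if [ ! -b "$dev" ]; then
        echo "$dev: no such block device"
    elif dd if="$dev" of=/dev/null bs=512 count=1 2>/dev/null; then
        echo "$dev: first sector readable"
    else
        echo "$dev: read of sector 0 failed; check dmesg and SMART status"
    fi
}
triage /dev/sdc
```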
------------------------------
Regards
Hersh
2015 Feb 22 · 0 · unable to umount
On 22/02/15 14:19, Leon Fauster wrote:
> Hi,
>
> on an EL5 XEN DOM0 system I have following volume
>
> $ df -h /srv
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdc1  917G 858G  60G  94% /srv
>
> that partition was used by virtual machines but they were all
> halted.
>
> service xendomains stop
>
> $ xm list
> Name     ID Mem(MiB) VCPUs State Time(s)
> Domain-0  0 3000...
2016 Aug 11 · 0 · Software RAID and GRUB on CentOS 7
...(& maintains) that setup automatically. I
got that recommendation from a mailing list ages ago, can't remember
where, sorry. $0.02, no more, no less ....
[root at Q6600:/etc, Thu Aug 11, 08:25 AM] 1018 # df -h
Filesystem Type Size Used Avail Use% Mounted on
/dev/md1 ext4 917G 8.0G 863G 1% /
tmpfs tmpfs 4.0G 0 4.0G 0% /dev/shm
/dev/md0 ext4 186M 60M 117M 34% /boot
/dev/md3 ext4 1.8T 1.4T 333G 81% /home
[root at Q6600:/etc, Thu Aug 11, 08:26 AM] 1019 # uname -a
Linux Q6600 2.6.35.14-106.fc14.x86_64 #1 SMP Wed Nov 23 13:07:52 UTC...
2017 Oct 03 · 0 · multipath
...Type Size Used Avail Use% Mounted on
/dev/sdc2 ext4 31G 26G 4.0G 87% /
tmpfs tmpfs 16G 92K 16G 1% /dev/shm
/dev/sdc1 ext4 969M 127M 793M 14% /boot
/dev/sdc6 ext4 673G 242G 398G 38% /data01
/dev/mapper/mpathjp1 ext4 917G 196G 676G 23% /data02
/dev/sdc5 ext4 182G 169G 3.9G 98% /home
/dev/mapper/mpathep1 ext4 13T 11T 1005G 92% /SAN101
/dev/mapper/mpathep2 ext4 13T 5.0T 7.0T 42% /SAN102
/dev/mapper/mpathep3 ext4 13T 4.9T 7.1T 42% /SAN103
/dev/mapper/mpathep4 ext4 13T 8.2T 3.8T...
2015 Feb 18 · 5 · CentOS 7: software RAID 5 array with 4 disks and no spares?
...226G 1,1G 213G 1% /
devtmpfs 1,4G 0 1,4G 0% /dev
tmpfs 1,4G 0 1,4G 0% /dev/shm
tmpfs 1,4G 8,5M 1,4G 1% /run
tmpfs 1,4G 0 1,4G 0% /sys/fs/cgroup
/dev/md125 194M 80M 101M 45% /boot
/dev/sde1 917G 88M 871G 1% /mnt
The root partition (/dev/md127) only shows 226 G of space. So where has
everything gone?
[root at nestor:~] # cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md125 : active raid1 sdc2[2] sdd2[3] sdb2[1] sda2[0]
204736 blocks super 1.0 [4/4] [UUUU]...
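The missing space follows from the RAID arithmetic: a RAID 5 array of n equal members yields (n - 1) members of usable capacity, with one member's worth going to parity, and "no spares" means all four disks are active. A quick sanity check of that rule (plain shell arithmetic; the sizes are illustrative, not the poster's exact layout):

```shell
#!/bin/sh
# Rule of thumb: RAID5 usable space = (members - 1) * per-member size.
raid5_usable() {
    members="$1"; per_member_gb="$2"
    echo $(( (members - 1) * per_member_gb ))
}
raid5_usable 4 917    # four whole 917G disks -> 2751G usable
```

A 226G root on a 4-member RAID 5 would be consistent with root partitions of roughly 75G per disk, with the rest of each disk in other partitions.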