Displaying 20 results from an estimated 2000 matches similar to: "disk partitioning - I'm missing something simple, I think"
2008 Nov 26
8
disk space issues...any help is greatly appreciated
Hi all,
Please pardon my newbie-ness on this issue... I have a / partition which
is full (quite suddenly, actually) and I'm not sure how to fix this.
I've searched for unneeded logs, etc. in /var/log and /tmp to no avail.
The system is CentOS 5.2 and is not connected to the internet, serves as
a local LAN server running stock stuff...sendmail, dovecot,
apache..nothing strange or
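A minimal sketch of the usual way to find out what is actually eating the space (run as root; the lsof check only applies if lsof is installed):

du -xk --max-depth=1 / 2>/dev/null | sort -rn | head -20   # biggest directories on the / filesystem only
lsof +L1 | head                                            # deleted-but-still-open files that still hold space

If du accounts for much less than df reports, a process is usually holding a deleted file (often a log) open, and restarting that service releases the space.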
2008 Apr 01
2
strange error in df -h
Hi All,
I just saw this in output from df -h:
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      131G  4.6G  120G   4% /
/dev/sdc1             271G  141G  117G  55% /home
/dev/sdd1             271G  3.9G  253G   2% /home/admin
/dev/sda1              99M   20M   74M  22% /boot
tmpfs                 442M     0  442M   0% /dev/shm
2011 Jul 15
1
Strange Behavior using FUSE client
I've recently set up a distributed/replicated cluster and have had an issue
with seeing the directories on the cluster. Also, df -h only shows data
from one of the three bricks.
The strange behavior doesn't end there. If I log into the 'primary' server
as root, then do an ls on the client, the directories appear. However, df -h
is still incorrect.
I'm not sure exactly
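For a distributed/replicated volume of that era, the usual first checks are whether all peers are healthy and whether the client really mounted the GlusterFS volume rather than a single brick; a rough sketch (the volume name is an assumption):

gluster peer status                 # are all three servers connected?
gluster volume info myvol           # brick layout, replica count, volume options
mount | grep -i gluster             # confirm the client mount is the FUSE volume, not one brick's filesystem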
2007 Nov 29
1
RAID, LVM, extra disks...
Hi,
This is my current config:
/dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot
/dev/md1 -> 36 GB -> sda2 + sdd2 -> form VolGroup00 with md2
/dev/md2 -> 18 GB -> sdb1 + sde1 -> form VolGroup00 with md1
sda,sdd -> 36 GB 10k SCSI HDDs
sdb,sde -> 18 GB 10k SCSI HDDs
I have added two 36 GB 10K SCSI drives to it; they are detected as sdc and
sdf.
What should I do if I
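One common way to fold two new matching disks into this layout is to mirror them as a further md device, turn that into a PV, and extend VolGroup00; a sketch only, assuming sdc1/sdf1 have been partitioned as Linux raid autodetect:

mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1
pvcreate /dev/md3
vgextend VolGroup00 /dev/md3
lvextend -L +20G /dev/VolGroup00/LogVol00   # then grow whichever LV needs the space...
resize2fs /dev/VolGroup00/LogVol00          # ...and the filesystem inside it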
2017 Sep 28
1
upgrade to 3.12.1 from 3.10: df returns wrong numbers
Hi,
When I upgraded my cluster, df started returning some odd numbers for my
legacy volumes.
For volumes created after the upgrade, df works just fine.
I have been researching since Monday and have not found any reference to
this symptom.
"vm-images" is the old legacy volume, "test" is the new one.
[root@st-srv-03 ~]# (df -h|grep bricks;ssh st-srv-02 'df -h|grep
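One way to compare what glusterd thinks each brick provides against what the client sees (volume name from the post; the brick path is illustrative):

gluster volume status vm-images detail | grep -E 'Brick|Disk Space'   # per-brick capacity as glusterd reports it
df -h /bricks/vm-images                                               # local view on each server, for comparison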
2008 Feb 13
6
pvmove speed
Are there any ways to improve/manage the speed of pvmove? The man page doesn't show any documented switches for priority scheduling.
iostat shows the system way underutilized even though the LV whose PEs are being migrated is continuously (if slowly) being written to.
Thanks!
jlc
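pvmove doesn't expose a documented throttle, as the post notes; the common workarounds are to lower the I/O priority of the running pvmove process or simply to watch its progress while it copies extent by extent. A sketch (device names are illustrative):

ionice -c3 -p "$(pgrep -x pvmove | head -1)"   # drop an already-running pvmove to the idle I/O class
pvmove -i 10 /dev/sdb1 /dev/sdc1               # -i only prints progress every 10 seconds; it is not a throttle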
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote:
> On 29/01/19 15:03, mark wrote:
>
>> I've no idea what happened, but the box I was working on last week has
>> a *second* bad drive. Actually, I'm starting to wonder about that
>> particular hot-swap bay.
>>
>> Anyway, mdadm --detail shows /dev/sdb1 as removed. I've added /dev/sdi1...
>> but see both /dev/sdh1 and
2013 Aug 11
2
(un)mounting takes a long time
Hello!
I'm using Arch Linux with kernel Linux horus 3.10.5-1-ARCH #1 SMP PREEMPT.
Mounting and unmounting take a long time:
# time mount -v /mnt/Archiv
mount: /dev/sde1 mounted on /mnt/Archiv.
mount -v /mnt/Archiv 0,00s user 0,16s system 1% cpu 9,493 total
# sync && time umount -v /mnt/Archiv
umount: /mnt/Archiv (/dev/sdd1) unmounted
umount -v /mnt/Archiv 0,00s user
2006 Aug 04
3
OCFS2 and ASM Question
Ok guys & gals here is the scenario:
1.) Host RHEL 4 U3 2.6.9-34.0.2.EL
2.) OCFS2 latest version
3.) Successfully formatted & mounted OCFS2 filesystems on 2 nodes
/dev/sdb1 /u02/oradata/usdev/voting
/dev/sdc1 /u02/oradata/usdev/data01
/dev/sdd1 /u02/oradata/usdev/data02
/dev/sde1 /u02/oradata/usdev/data03
4.) Downloaded & installed ASMLib 2.0 on both nodes
5.) Ran
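The post breaks off at step 5; with ASMLib of that era, the remaining steps usually look roughly like this (the disk label and /dev/sdf1 are illustrative, not taken from the post):

/etc/init.d/oracleasm configure                    # on both nodes: enable the driver, set owner/group
/etc/init.d/oracleasm createdisk DATA01 /dev/sdf1  # label a shared device on one node
/etc/init.d/oracleasm scandisks                    # on the other node, pick up the new label
/etc/init.d/oracleasm listdisks                    # both nodes should now list DATA01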
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote:
> On 29/01/19 18:47, mark wrote:
>> Alessandro Baggi wrote:
>>> On 29/01/19 15:03, mark wrote:
>>>
>>>> I've no idea what happened, but the box I was working on last week
>>>> has a *second* bad drive. Actually, I'm starting to wonder about
>>>> that particular hot-swap bay.
>>>>
2019 Jan 30
4
C7, mdadm issues
On 01/30/19 03:45, Alessandro Baggi wrote:
> On 29/01/19 20:42, mark wrote:
>> Alessandro Baggi wrote:
>>> On 29/01/19 18:47, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 15:03, mark wrote:
>>>>>
>>>>>> I've no idea what happened, but the box I was working on last week
2011 Apr 09
16
wrong values in "df" and "btrfs filesystem df"
Hello, linux-btrfs,
First I create an array of 2 disks with
mkfs.btrfs -d raid0 -m raid1 /dev/sdb1 /dev/sdd1
and mount it at /srv/MM.
Then I fill it with about 1.6 TB.
And then I add /dev/sde1 via
btrfs device add /dev/sde1 /srv/MM
btrfs filesystem balance /srv/MM
(it ran for about 20 hours)
Then I work on it, copy some new files, delete some old files - all
works well. Only
df
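With raid0 data and raid1 metadata, plain df and btrfs's own accounting rarely agree exactly, so the per-device and per-profile views are usually more informative than df alone; a sketch using the mount point from the post:

btrfs filesystem show            # per-device allocation, including the newly added /dev/sde1
btrfs filesystem df /srv/MM      # data/metadata/system usage broken down by profile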
2013 May 13
7
Remove a materially failed device from a Btrfs "single-raid" using partitions
Hello,
I am on Ubuntu Server 13.04 with Linux 3.8.
I've created a "single-raid" using /dev/sd{a,b,c,d}{1,3}. One of my hard
drives has failed; I mean it's physically dead.
:~$ sudo btrfs filesystem show
Label: none uuid: 40886f51-8c9b-4be1-8721-83bf5653d2a0
Total devices 5 FS bytes used 226.90GB
devid 4 size 37.27GB used 31.01GB path /dev/sdd1
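On a kernel of that vintage, the usual sequence is to mount the filesystem degraded and then tell btrfs to drop the member that is physically gone; a sketch with an assumed surviving device and mount point (whether data that lived only on the dead disk can be recovered depends on the profiles in use):

mount -o degraded /dev/sda1 /mnt     # mount from any surviving member
btrfs device delete missing /mnt     # remove the dead device from the filesystem
btrfs filesystem show                # confirm the device count afterwards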
2019 Jan 30
2
C7, mdadm issues
Alessandro Baggi wrote:
> On 30/01/19 14:02, mark wrote:
>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>> On 29/01/19 20:42, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 18:47, mark wrote:
>>>>>> Alessandro Baggi wrote:
>>>>>>> On 29/01/19 15:03, mark wrote:
2019 Jan 22
2
C7 and mdadm
A user's system had a hard drive failure over the weekend. Linux RAID 6. I
identified the drive and brought the system down (8 drives, and I didn't know
the s/n of the bad one, nor why it was where it was in the box rather than where
I started looking...). Brought it up, RAID not working. I finally found that
I had to do an mdadm --stop /dev/md0, then I could do an assemble, then I
could add the new
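Spelled out, the sequence described is roughly (the replacement partition name is illustrative):

mdadm --stop /dev/md0                 # release the half-assembled array
mdadm --assemble /dev/md0 --scan      # reassemble from the surviving members
mdadm /dev/md0 --add /dev/sdh1        # add the replacement; the rebuild starts immediately
cat /proc/mdstat                      # watch the resync progress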
2008 Mar 16
8
Uninstalling a hard drive in a CentOS 5.1 box
Hi Guys,
I'm fairly new to Linux and I'm trying to uninstall a hard drive from my CentOS 5.1 box running KDE. When I built the PC, I installed two 500 GB Maxtors in the tower, then I installed CentOS. Now I've decided that I want to remove the slave drive and use it as an external backup drive - I am mounting it in one of those external drive cases with a built-in fan.
When I
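A rough sketch of the software side of removing the drive before pulling it (device and mount point are assumptions, since the post doesn't show them):

umount /mnt/backup                  # unmount the slave drive's filesystem
grep -n '/mnt/backup' /etc/fstab    # find, then comment out, its fstab entry so boot doesn't wait on it
shutdown -h now                     # power off before physically removing the drive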
2015 Jun 25
0
LVM hatred, was Re: /boot on a separate partition?
On 06/25/2015 01:20 PM, Chris Adams wrote:
> ...It's basically a way to assemble one arbitrary set of block devices
> and then divide them into another arbitrary set of block devices, but
> now separate from the underlying physical structure.
> Regular partitions have various limitations (one big one on Linux
> being that modifying the partition table of a disk with in-use
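The "assemble and re-divide" idea in command form, as a minimal sketch with made-up names:

pvcreate /dev/sdb1 /dev/sdc1          # mark the underlying block devices
vgcreate data /dev/sdb1 /dev/sdc1     # pool them into one volume group
lvcreate -L 100G -n archive data      # carve out a new logical "partition"
lvextend -L +50G /dev/data/archive    # grow it later without touching any partition table
                                      # (the filesystem inside still needs its own resize)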
2019 Jan 29
2
C7, mdadm issues
I've no idea what happened, but the box I was working on last week has a
*second* bad drive. Actually, I'm starting to wonder about that
particular hot-swap bay.
Anyway, mdadm --detail shows /dev/sdb1 as removed. I've added /dev/sdi1... but
see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find a reliable
way to make either one active.
Actually, I would have expected the linux
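When a freshly added disk just sits there as a spare, the usual checks are whether the array is really degraded and whether a stale superblock on the other "spare" is confusing mdadm; a sketch (the md device name is an assumption, the partitions are from the post, and zeroing a superblock is destructive, so this is only an illustration):

mdadm --detail /dev/md0               # how many active vs. spare members, and is it degraded?
mdadm --examine /dev/sdh1 /dev/sdi1   # inspect each member's superblock

mdadm /dev/md0 --remove /dev/sdh1     # if one carries stale metadata: remove it,
mdadm --zero-superblock /dev/sdh1     # wipe the old superblock,
mdadm /dev/md0 --add /dev/sdh1        # and add it back so the rebuild can pick it up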
2019 Jan 30
1
C7, mdadm issues
Alessandro Baggi wrote:
> On 30/01/19 16:33, mark wrote:
>
>> Alessandro Baggi wrote:
>>
>>> On 30/01/19 14:02, mark wrote:
>>>
>>>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>>>
>>>>> On 29/01/19 20:42, mark wrote:
>>>>>
>>>>>> Alessandro Baggi wrote:
2012 Jan 17
2
Transition to CentOS - RAID HELP!
Hi Folks,
I've inherited an old RH7 system that I'd like to upgrade to
CentOS 6.1 by means of wiping it clean and doing a fresh install.
However, the system has a software RAID setup that I wish to keep
untouched, as it has data on it that I must keep. Or at the very least, TRY
to keep. If all else fails, then so be it and I'll just recreate the
thing. I do plan on backing up
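Before wiping anything, it is worth recording how the existing arrays are put together so they can be reassembled rather than recreated after the fresh install; a sketch:

cat /proc/mdstat                # which md devices exist and which members they use
mdadm --detail --scan           # ARRAY lines worth copying somewhere off the box
mdadm --examine /dev/sd?1       # per-member metadata, also worth saving

mdadm --assemble --scan         # after the reinstall, this usually picks the arrays up again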