Displaying 20 results from an estimated 10000 matches similar to: "dfree command..."
2018 May 02
2
dfree command...
On Wed, May 02, 2018 at 10:00:02PM +0200, Robert S. Irrgang via samba wrote:
> Nobody has any idea?
Use debug level 10 and log statements in your script to ensure
it's being invoked and returning values.
Without knowing details this is impossible to debug.
> On 30.04.2018 at 19:40, Robert S. Irrgang via samba wrote:
> > Hello,
> >
> > I've a little problem with
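For reference, a minimal sketch of the kind of setup being discussed (the script path, share directory and log file are illustrative assumptions, not details from this thread): smbd passes the queried directory as the first argument and expects the script to print total and available space, in 1024-byte blocks, on a single line. Logging every invocation makes it easy to confirm from outside Samba whether the script is ever run at all.

#!/bin/sh
# /usr/local/bin/dfree.sh -- hypothetical dfree script, for debugging only
# Log every call so invocations can be correlated with the smbd log.
echo "$(date) dfree called with: $*" >> /tmp/dfree.log
# Print "<total blocks> <available blocks>" (1024-byte blocks) for the queried path.
df -k -P "${1:-.}" | awk 'NR==2 { print $2, $4 }'

and in smb.conf:

[global]
        dfree command = /usr/local/bin/dfree.sh

The script has to be executable by the user the connection runs as, or smbd cannot invoke it at all.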
2018 May 04
0
dfree command...
I've set the debug level to 10.
The only place where dfree appears in the log is this:
[2018/05/04 04:53:53.397520, 3, pid=13784, effective(0, 0), real(0, 0)]
../source3/param/loadparm.c:2668(lp_do_section)
Processing section "[global]"
...
doing parameter read raw = no
doing parameter write raw = no
doing parameter write cache size = 262144
doing
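A quick way to confirm whether the parameter was picked up at all, independent of the debug log, is testparm with all parameters shown; running the script by hand as an unprivileged user also rules out permission problems. (The script path and share directory below are the hypothetical ones from the sketch above, not values from this thread.)

testparm -s -v 2>/dev/null | grep -i dfree
sudo -u nobody /usr/local/bin/dfree.sh /srv/share   # should print two numbers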
2002 Aug 25
2
2 root disks sdb1,sdc1; if set "root=/dev/sdc1", mtab lies saying sdb1 is root!?
I have 2 SCSI disks, each with a RH 7.3 ext3 root filesystem: /dev/sdb1 and /dev/sdc1.
/dev/sda1 is an old RH4.2 root filesystem. (sdb1 was created as an image of sdc1
using dd.)
I have no problem booting from a SYSLINUX 1.52 floppy with SYSLINUX.CFG
containing "append initrd=initrd.img root=/dev/sdb1".
When I change SYSLINUX.CFG to:
"append initrd=initrd.img root=/dev/sdc1".
2018 May 04
1
dfree command...
On Fri, May 04, 2018 at 05:10:47AM +0200, Robert S. Irrgang via samba wrote:
> I've set the debug level to 10.
> The only place where dfree appears in the log is this:
>
> [2018/05/04 04:53:53.397520, 3, pid=13784, effective(0, 0), real(0, 0)]
> ../source3/param/loadparm.c:2668(lp_do_section)
> Processing section "[global]"
> ...
> doing
2015 Jun 10
2
[PATCH] New API: btrfs_replace_start
Signed-off-by: Pino Tsao <caoj.fnst@cn.fujitsu.com>
---
daemon/btrfs.c | 40 +++++++++++++++++++++++++++++++++++++++
generator/actions.ml | 19 +++++++++++++++++++
tests/btrfs/test-btrfs-devices.sh | 8 ++++++++
3 files changed, 67 insertions(+)
diff --git a/daemon/btrfs.c b/daemon/btrfs.c
index 39392f7..acc300d 100644
--- a/daemon/btrfs.c
+++
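For context, the operation this new API appears to wrap is btrfs device replacement; a rough sketch of the plain btrfs-progs equivalent, assuming /dev/sdb is being replaced by /dev/sdc in a filesystem mounted at /mnt/data (all three names are made up):

btrfs replace start /dev/sdb /dev/sdc /mnt/data
btrfs replace status /mnt/data     # poll until the replace is reported finished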
2017 Sep 28
1
upgrade to 3.12.1 from 3.10: df returns wrong numbers
Hi,
When I upgraded my cluster, df started returning some odd numbers for my
legacy volumes.
For volumes created after the upgrade, df works just fine.
I have been researching since Monday and have not found any reference to
this symptom.
"vm-images" is the old legacy volume, "test" is the new one.
[root@st-srv-03 ~]# (df -h|grep bricks;ssh st-srv-02 'df -h|grep
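A useful first step when the mount's df looks wrong is to compare it with what the bricks and gluster itself report; a sketch reusing the hostnames and volume name quoted above (the mount point is a placeholder):

for h in st-srv-02 st-srv-03; do
    ssh "$h" 'df -h | grep bricks'       # raw capacity of each brick
done
df -h /path/to/gluster/mount             # what the client mount reports
gluster volume status vm-images detail   # gluster's own view of brick sizes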
2015 Jun 12
2
Re: [PATCH] New API: btrfs_replace_start
On 2015-06-12 17:12, Pino Toscano wrote:
> On Friday 12 June 2015 10:58:34 Pino Tsao wrote:
>> Hi,
>>
>> On 2015-06-11 17:43, Pino Toscano wrote:
>>> Hi,
>>>
>>> On Wednesday 10 June 2015 17:54:18 Pino Tsao wrote:
>>>> Signed-off-by: Pino Tsao <caoj.fnst@cn.fujitsu.com>
>>>> ---
>>>> daemon/btrfs.c
2015 Jun 12
2
Re: [PATCH] New API: btrfs_replace_start
Hi,
On 2015-06-11 17:43, Pino Toscano wrote:
> Hi,
>
> On Wednesday 10 June 2015 17:54:18 Pino Tsao wrote:
>> Signed-off-by: Pino Tsao <caoj.fnst@cn.fujitsu.com>
>> ---
>> daemon/btrfs.c | 40 +++++++++++++++++++++++++++++++++++++++
>> generator/actions.ml | 19 +++++++++++++++++++
>> tests/btrfs/test-btrfs-devices.sh |
2007 Apr 28
1
Problems with RAID0 array on new server
Hello,
I recently installed CentOS 5 on a new server with a single SCSI
disk. After the installation, I added 2 additional disks that were
once the components of a RAID0 array on another server.
I get some errors and am unable to start the array
the following is an extract from dmesg output:
md: Autodetecting RAID arrays.
md: could not open unknown-block(8,17).
md: could not open
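unknown-block(8,17) is /dev/sdb1, so the members are at least being probed; when autodetection fails like this, examining the superblocks and assembling by hand usually shows whether the transplanted members are still usable. A sketch, assuming the two added disks came up as /dev/sdb and /dev/sdc (the post does not say which names they got):

mdadm --examine /dev/sdb1 /dev/sdc1            # do the partitions still carry RAID superblocks?
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1  # try to assemble the old RAID0 by hand
cat /proc/mdstat                               # confirm whether the array came up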
2015 Feb 28
9
Looking for a life-save LVM Guru
Dear All,
I am in desperate need of LVM data rescue for my server.
I have a VG called vg_hosting consisting of 4 PVs, each contained in a
separate hard drive (/dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1).
The LV lv_home was created to use all the space of the 4 PVs.
Right now, the third hard drive is damaged, and therefore the third PV
(/dev/sdc1) cannot be accessed anymore. I would like
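For anyone in a similar spot, the usual first moves are to confirm what LVM metadata survives and then try partial activation, before changing anything on disk; a cautious sketch under the assumption that the metadata on the three surviving PVs is intact (this is not the resolution from the thread):

pvs; vgs; lvs                        # confirm what LVM can still see
vgchange -ay --partial vg_hosting    # activate the VG with one PV missing, read what you can
# last resort, only after everything readable has been copied off:
# vgreduce --removemissing vg_hosting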
2009 May 01
1
Rosewill RSV-S8 Storage Enclosure Support
I'm trying to get the RSV-S8 working with Citrix XenServer 5 update 3 (which
I believe runs CentOS 5.something).
I have the Rosewill card that comes with it installed (sil3132-based).
The system sees the card and all my drives.
I can fdisk the drives and create the partitions, but I am unable to
set up software RAID or create filesystems. I keep getting
errors saying that
2013 Nov 19
2
virsh and multi source-dev
Hi,
I'm using LVM-based storage pools, and I'm wondering
whether there is a way to specify several source-dev options on the command line
to create a volume group spread over several devices.
One device (/dev/sdc1) is fine:
* virsh pool-define-as --name lvmpool --type logical --source-dev /dev/sdc1 --source-name vg --target /dev/vg
I would like something like this (but sadly it doesn't work):
? virsh
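If pool-define-as will not take more than one --source-dev, the pool XML does allow several <device> elements, so one way around it is to define the pool from an XML file instead; a sketch reusing the names from the command above (lvmpool, vg, /dev/vg) plus a hypothetical second device /dev/sdd1:

cat > lvmpool.xml <<'EOF'
<pool type='logical'>
  <name>lvmpool</name>
  <source>
    <device path='/dev/sdc1'/>
    <device path='/dev/sdd1'/>   <!-- hypothetical second device -->
    <name>vg</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/vg</path>
  </target>
</pool>
EOF
virsh pool-define lvmpool.xml
virsh pool-build lvmpool     # creates the PVs and the volume group
virsh pool-start lvmpool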
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> I applied the workaround for this bug and now df shows the right size:
>
That is good to hear.
> [root@stor1 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
> /dev/sdc1
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
I applied the workaround for this bug and now df shows the right size:
[root@stor1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
/dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
stor1data:/volumedisk0
101T 3,3T 97T 4% /volumedisk0
stor1data:/volumedisk1
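For a distributed volume the size shown at the mount should be roughly the sum of its bricks, so a quick sanity check is to compare gluster's per-brick view with the fuse mount (volume and mount names taken from the output above; the grep pattern is approximate):

gluster volume status volumedisk0 detail | grep -E 'Brick|Disk Space'
df -h /volumedisk0        # should be close to the sum of the brick sizes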
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
My initial setup was composed of 2 similar nodes: stor1data and stor2data.
A month ago I expanded both volumes with a new node: stor3data (2 bricks
per volume).
Of course, to add the new peer with its bricks I then ran the 'balance
force' operation. This task finished successfully (you can see the info below),
and the number of files on the 3 nodes was very similar.
For volumedisk1 I
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> My initial setup was composed of 2 similar nodes: stor1data and stor2data.
> A month ago I expanded both volumes with a new node: stor3data (2 bricks
> per volume).
> Of course, to add the new peer with its bricks I then ran the 'balance
> force' operation.
2015 Feb 28
1
Looking for a life-save LVM Guru
Dear James,
Thank you for being quick to help.
Yes, I could see all of them:
# vgs
# lvs
# pvs
Regards,
Khem
On Sat, February 28, 2015 7:37 am, James A. Peltier wrote:
>
>
> ----- Original Message -----
> | Dear All,
> |
> | I am in desperate need of LVM data rescue for my server.
> | I have a VG called vg_hosting consisting of 4 PVs, each contained in a
> | separate
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
Below is the output for both volumes:
[root@stor1t ~]# gluster volume rebalance volumedisk1 status
    Node   Rebalanced-files   size   scanned   failures   skipped   status   run time in h:m:s
  ---------   -----------   -----------   -----------   -----------   -----------   ------------   --------------
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message.
Below is the output for both volumes:
[root@stor1t ~]# gluster volume rebalance volumedisk1 status
    Node   Rebalanced-files   size   scanned   failures   skipped   status   run time in h:m:s
  ---------   -----------   -----------   -----------   -----------   -----------   ------------   --------------
2013 May 13
7
Remove a materially failed device from a Btrfs "single-raid" using partitions
Hello,
I am on Ubuntu Server 13.04 with Linux 3.8.
I've created a "single-raid" using /dev/sd{a,b,c,d}{1,3}. One of my hard
drives has failed; I mean it's physically dead.
:~$ sudo btrfs filesystem show
Label: none uuid: 40886f51-8c9b-4be1-8721-83bf5653d2a0
Total devices 5 FS bytes used 226.90GB
devid 4 size 37.27GB used 31.01GB path /dev/sdd1
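With a dead member the usual sequence is to mount degraded and then drop the missing device, bearing in mind that with the single data profile anything that lived only on the dead disk is gone; a sketch assuming the filesystem gets mounted at /mnt (the mount point is not given in the post):

mount -o degraded /dev/sda1 /mnt     # any surviving member device will do
btrfs device delete missing /mnt     # remove the dead device from the filesystem
btrfs filesystem show                # confirm the device count went down
# the delete can fail if data chunks existed only on the dead disk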