Displaying 20 results from an estimated 200 matches similar to: "MDS read-only"
2008 Dec 23
1
device error without a checksum error?
I have a system running in a VM with a root pool. The root pool
occasionally shows a fairly stern warning. This warning comes with no
checksum errors.
bash-3.00# zpool status -vx
pool: rpool
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore
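A minimal follow-up sketch, assuming the usual ZFS workflow (the pool name rpool is taken from the output above; nothing here is specific to this poster's setup):
zpool status -v rpool     # the -v listing names the files with unrecoverable errors
# restore or delete each listed file from backup, then reset the error counters
zpool clear rpool
zpool scrub rpool         # a clean scrub confirms no further corruption is found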
2007 Oct 15
3
Trying to recover data off SATA-to-SCSI external 2TB ARRAY
Originally the array was attached to an HP server via a Smart Array
controller (which I didn't set up; I just inherited the problem).
This controller no longer recognizes the array even though the front panel
of the array indicates it's intact.
I then took the array and plugged it into my CentOS server, and it
recognized it ...
cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun:
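A hedged sketch of the next checks on the CentOS side; /dev/sdb is a hypothetical device name, substitute whatever the kernel actually assigned:
dmesg | grep -i sd | tail                     # confirm the kernel attached a block device for the array
fdisk -l /dev/sdb                             # hypothetical name; inspect the partition table
dd if=/dev/sdb of=/dev/null bs=1M count=100   # quick read test before attempting any recovery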
2007 May 15
2
Clear corrupted data
Hey,
I'm currently running on Nexenta alpha 6 and I have some corrupted data in a
pool.
The output from sudo zpool status -v data is:
pool: data
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire
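For reference, a sketch of the usual sequence for clearing such errors, assuming the listed files can be deleted or restored from backup (pool name data from the post):
zpool status -v data      # note every file reported as corrupted
# delete or restore each of those files, then:
zpool clear data
zpool scrub data          # the corrupted-file list is refreshed once the scrub completes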
2011 Apr 01
15
Zpool resize
Hi,
A LUN is connected to Solaris 10u9 from a NetApp FAS2020A over iSCSI. I'm
changing the LUN size on the NetApp, and Solaris format sees the new value,
but the zpool still has the old value.
I tried zpool export and zpool import, but it didn't resolve my problem.
bash-3.00# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0d1 <DEFAULT cyl 6523 alt 2 hd 255 sec 63>
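A possible approach, hedged because it depends on the zpool version shipped with 10u9: let the pool expand onto the grown LUN via autoexpand or an explicit online -e. The pool name tank is hypothetical; c0d1 is the device shown by format above.
zpool set autoexpand=on tank      # tank is a hypothetical pool name
zpool online -e tank c0d1         # ask ZFS to expand onto the grown device
zpool list tank                   # the SIZE column should now reflect the new LUN size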
2009 Jun 16
3
Adding zvols to a DomU
I'm trying to add extra zvols to a Solaris 10 DomU on an sv_113 Dom0.
I can use
virsh attach-disk <name> <zvol> hdb --device phy
to attach the zvol as c0d1. Replacing hdb with hdd gives me c1d1, but that
is it. Being able to attach several more zvols would be nice, but even
being able to get at c1d0 would be useful.
Am I missing something, or can I only attach to hda/hdb/hdd?
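One thing that may be worth trying (an assumption, not a confirmed fix): attaching through xm with an explicit non-IDE frontend name, since the hd* names map to a small fixed set of IDE slots. The domain, zvol, and target names below are hypothetical.
xm block-attach mydomu phy:/dev/zvol/dsk/tank/vol3 xvdc w
# or the virsh equivalent with a non-IDE target name:
virsh attach-disk mydomu /dev/zvol/dsk/tank/vol3 xvdc --driver phy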
2008 Feb 12
1
LDISKFS-fs warnings on MDS lustre 1.6.4.2
Hi Folks,
We can see these messages on our MDS:
Feb 12 12:46:08 mds01 kernel: LDISKFS-fs warning (device dm-0):
empty_dir: bad directory (dir #31452569) - no '.' or '..'
Feb 12 12:46:08 mds01 kernel: LDISKFS-fs warning (device dm-0):
ldiskfs_rmdir: empty directory has too many links (3)
It seems to indicate that we have a bad (corrupted) directory. Do you have
any idea how to
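These warnings usually call for a filesystem check of the MDT backing device. A hedged sketch, assuming Lustre is stopped on mds01 and the Lustre-patched e2fsprogs is installed (dm-0 is the device named in the log):
e2fsck -fn /dev/dm-0    # read-only pass first: report what would be fixed
e2fsck -fp /dev/dm-0    # then repair (or use -fy), ideally after a device-level backup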
2013 Oct 12
2
Warning: appears to have a negative number of dependencies
Hello
After adding some package lists to a Puppet class,
I get lots of warning messages: "appears to have a negative number of
dependencies".
My Puppet master and agent versions are 3.3.1.
Here is the log output:
[root@gpu022 ~]# puppet agent --test
Info: Retrieving plugin
Info: Loading facts in /var/lib/puppet/lib/facter/puppet_vardir.rb
Info: Loading facts in
2007 May 14
37
Lots of overhead with ZFS - what am I doing wrong?
I was simply trying to test the bandwidth that Solaris/ZFS (Nevada b63) can deliver from a drive, and doing this:
dd if=(raw disk) of=/dev/null gives me around 80 MB/s, while dd if=(file on ZFS) of=/dev/null gives me only 35 MB/s!?
I am getting basically the same result whether it is a single ZFS drive, a mirror, or a stripe (I am testing with two Seagate 7200.10 320G drives hanging off the same interface
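A hedged sketch of a slightly more controlled comparison; the device and file names are hypothetical, and the file should be larger than RAM (or freshly created) so the ARC cannot serve it from cache:
dd if=/dev/rdsk/c1t0d0s0 of=/dev/null bs=1024k count=4096   # raw-device baseline
dd if=/tank/bigtestfile of=/dev/null bs=1024k               # the same amount of data read through ZFS
zpool iostat -v tank 1                                      # watch what the disks actually do during the run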
2007 Oct 14
1
odd behavior from zpool replace.
I've got a little zpool with a naughty raidz vdev that won't take a
replacement that, as far as I can tell, should be adequate.
A history: this could well be some bizarro edge case, as the pool doesn't
have the cleanest lineage. Initial creation happened on NexentaCP inside
VMware on Linux. I had given the virtual machine raw device access to 4
500 GB drives and 1 ~200 GB
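For reference, a sketch of the usual replace invocation with hypothetical pool/device names; -f only forces past the "device in use" check, it does not help if ZFS considers the new device too small:
zpool replace tank c2t3d0 c2t5d0
zpool replace -f tank c2t3d0 c2t5d0   # only after confirming the new device really is free
zpool status tank                     # follow the resilver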
2013 Feb 12
2
Lost folders after changing MDS
OK, so our old MDS had hardware issues, so I configured a new MGS/MDS on a VM (this is a backup Lustre filesystem, and I wanted to separate the MGS/MDS from the OSS of the previous setup), and then did this:
For example:
mount -t ldiskfs /dev/old /mnt/ost_old
mount -t ldiskfs /dev/new /mnt/ost_new
rsync -aSv /mnt/ost_old/ /mnt/ost_new
# note trailing slash on ost_old/
If you are unable to connect both
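One detail worth checking in a migration like this (hedged, since the full procedure is cut off above): rsync -aSv does not copy extended attributes, and the MDT keeps Lustre metadata such as striping information in EAs, so a copy made without them can later show up as lost or empty directories. A sketch of an attribute-preserving copy, assuming an rsync built with xattr support:
mount -t ldiskfs /dev/old /mnt/ost_old
mount -t ldiskfs /dev/new /mnt/ost_new
rsync -aSvX /mnt/ost_old/ /mnt/ost_new    # -X/--xattrs carries the Lustre EAs across
# alternatively, capture and restore the EAs explicitly:
cd /mnt/ost_old && getfattr -R -d -m '.*' -e hex -P . > /tmp/ea.bak
cd /mnt/ost_new && setfattr --restore=/tmp/ea.bak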
2008 Dec 24
6
Bug when using /dev/cciss/c0d2 as mdt/ost
I am trying to build lustre-1.6.6 against the pre-patched kernel downloaded
from Sun.
But, as written in the Operations Manual, it creates RPMs for
2.6.18-92.1.10.el5_lustrecustom. Is there a way to ask it not to append
"custom" as the extraversion?
The running kernel is 2.6.18-92.1.10.el5_lustre.1.6.6smp.
--
Regards--
Rishi Pathak
National PARAM Supercomputing Facility
Center for Development of Advanced
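Purely as an assumption, not anything from the manual: two places worth inspecting for where the "custom" suffix gets attached are the EXTRAVERSION line in the pre-patched kernel's top-level Makefile and the kernel's own rpm-packaging script. The source path below is hypothetical.
cd /usr/src/linux-2.6.18-92.1.10.el5_lustre   # hypothetical path to the pre-patched source
grep '^EXTRAVERSION' Makefile                 # version suffix that ends up in the build
grep -n custom scripts/package/mkspec         # the kernel's rpm spec generator, one place "custom" can come from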
2006 Oct 18
0
Is there a way to expand a formatted ocfs2 partition without losing the data on it?
Hello
I have the feeling this may not be the right forum for the following
question, but I'd like to try it here anyway.
This is the case:
I had a RAID5 array of 3x72GB HDDs in a shared external disk enclosure
(HP MSA500), totaling about 145.6 GB of space.
I needed to increase the available space, so I added a 4th 72GB HDD,
and using HP ACU I expanded my existing RAID5 array; I now have about 218.5
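As a hedged sketch only (it depends on the ocfs2-tools version, and an offline resize means unmounting on every cluster node): once the array and the underlying partition/LUN have been enlarged, tunefs.ocfs2 can grow the filesystem to fill the device. The device name below is hypothetical, and a verified backup should come first.
umount /mnt/ocfs2vol                 # on every node in the cluster
tunefs.ocfs2 -S /dev/cciss/c0d1p1    # hypothetical device; grow the filesystem to the new device size
fsck.ocfs2 -n /dev/cciss/c0d1p1      # read-only check before remounting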
2007 Feb 10
16
How to backup a slice ? - newbie
... though I have tried, read, and typed for the last 4 hours; still no clue.
Please, can anyone give a clear idea on how this works:
Get the content of c0d1s1 to c0d0s7 ?
c0d1s1 is pool home and active; c0d0s7 is not active.
I have followed the suggestion on
http://www.opensolaris.org/os/community/zfs/demos/zfs_demo.pdf
% sudo zfs snapshot home@backup
% zfs list
NAME USED AVAIL REFER
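Continuing that demo in the same spirit, a hedged sketch of getting the snapshot's contents onto the other slice with zfs send/receive (the pool name backup is hypothetical; creating it destroys whatever is on c0d0s7):
zpool create backup c0d0s7            # hypothetical pool on the target slice
zfs send home@backup | zfs receive backup/home
zfs list -r backup                    # verify the copy arrived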
2015 Feb 19
3
iostat a partition
Hey guys,
I need to use iostat to diagnose a disk latency problem we think we may be
having.
So if I have this disk partition:
[root@uszmpdblp010la mysql]# df -h /mysql
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/MysqlVG-MysqlVol
9.9G 1.1G 8.4G 11% /mysql
And I want to correlate that to the output of fdisk -l, so that I can feed
the disk
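A hedged sketch of the mapping step: resolve the LVM volume down to its physical disk(s), then point iostat at that disk (the sdb name is hypothetical):
lvs -o +devices MysqlVG               # shows which PV/disk MysqlVol sits on
pvs                                   # PV-to-disk overview
iostat -x sdb 5                       # extended stats (await, svctm, %util) for the underlying disk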
2006 Oct 05
0
Crash when doing rm -rf
Not a really good subject, I know, but that's kind of what happened.
I'm trying to build a backup-solution server: Windows users use OSCAR (which uses rsync) to sync their files to a folder, and when that completes it takes a snapshot. It had worked before, but then I turned on the -R switch to rsync, and when I then removed the folder with rm -rf it crashed. I didn't save what
2008 Feb 05
2
obdfilter/datafs-OST0000/recovery_status
I'm evaluating Lustre. I'm trying what I think is a basic/simple
Ethernet config, with the MDT and OST on the same node. Can someone tell
me if the following (a ~150-second recovery occurring when a small 190 GB
OST is re-mounted) is expected behavior, or if I'm missing something?
I thought I would send this and continue with the eval while awaiting a
response.
I'm using
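For what it's worth, a sketch of watching the recovery from the proc file named in the subject (path as on Lustre 1.6; field names may vary slightly between versions):
cat /proc/fs/lustre/obdfilter/datafs-OST0000/recovery_status
# status, connected_clients vs. completed_clients, and time_remaining show whether
# the OST is simply waiting out the recovery window for clients that never reconnect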
2010 Jul 07
0
How to evict a dead client?
Dear everyone,
We are stuck with a problem where the OSS keeps trying to connect to one dead client (or one whose IP address has changed) until we reboot that client. From the OSS log messages, we get information like the following:
Jul 7 14:45:07 com01 kernel: Lustre: 12180:0:(socklnd_cb.c:915:ksocknal_launch_packet()) No usable routes to 12345-202.
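A hedged sketch of evicting the stale export by hand on the OSS, as 1.8-era Lustre allows; the OST name and client NID below are hypothetical, and the exact parameter is worth confirming against the manual for your version:
lctl set_param obdfilter.lustre-OST0000.evict_client=nid:192.168.1.50@tcp
# equivalent proc interface on older servers:
echo "nid:192.168.1.50@tcp" > /proc/fs/lustre/obdfilter/lustre-OST0000/evict_client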
2006 Jul 13
7
system unresponsive after issuing a zpool attach
Today I attempted to upgrade to S10_U2 and migrate some mirrored UFS SVM
partitions to ZFS.
I used Live Upgrade to migrate from U1 to U2 and that went without a
hitch on my SunBlade 2000. And the initial conversion of one side of the
UFS mirrors to a ZFS pool and subsequent data migration went fine.
However, when I attempted to attach the second side mirrors as a mirror
of the ZFS pool, all
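For context, a sketch of the same operation with hypothetical names, plus the commands normally used to watch the resilver that starts right after the attach (heavy resilver I/O is one plausible cause of the sluggishness):
zpool attach tank c1t0d0s7 c1t1d0s7   # hypothetical pool/device names
zpool status tank                     # shows resilver progress and an estimated completion time
iostat -xn 5                          # confirm whether the disks are saturated during the resilver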
2005 Nov 19
11
ZFS related panic!
> My current zfs setup looks like this:
> > homepool              3.63G  34.1G     8K  /homepool
> > homepool/db           61.6M  34.1G  8.50K  /var/db
> > homepool/db/pgsql     61.5M  34.1G  61.5M  /var/db/pgsql
> > homepool/home         3.57G  34.1G  10.0K  /users
> > homepool/home/carrie     8K  34.1G     8K  /users/carrie
> >
2010 Sep 30
1
ldiskfs-ext4 interoperability question
Our current Lustre servers run version 1.8.1.1 with the regular ldiskfs.
We are looking to expand our Lustre file system with new servers/storage, and to upgrade all the Lustre servers to 1.8.4 at the same time. We
would like to make use of ldiskfs-ext4 on the new servers in order to use larger OSTs.
I just want to confirm the following facts:
1. Is it possible to run different versions