similar to: Input/output error when running `ls` and `cd` on directories

Displaying 20 results from an estimated 100 matches similar to: "Input/output error when running `ls` and `cd` on directories"

2011 Feb 04
1
3.1.2 Debian - client_rpc_notify "failed to get the port number for remote subvolume"
I have glusterfs 3.1.2 running on Debian. I'm able to start the volume and mount it via mount -t glusterfs, and I can see everything. I am still seeing the following error in /var/log/glusterfs/nfs.log:

[2011-02-04 13:09:16.404851] E [client-handshake.c:1079:client_query_portmap_cbk] bhl-volume-client-98: failed to get the port number for remote subvolume
[2011-02-04 13:09:16.404909] I
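The usual first step for this symptom is to confirm that every brick actually registered a port. A minimal sketch, assuming the volume name bhl-volume taken from the log line above; the init-script name varies by packaging:

    # Hedged sketch: check that all bricks of the volume are defined and
    # that a brick process is listening on each host.
    gluster volume info bhl-volume
    netstat -ltnp | grep glusterfs
    # If a brick never registered its port, restarting the management
    # daemon on that host is a common first step (script name may differ):
    /etc/init.d/glusterd restart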
2010 Apr 30
1
gluster-volgen - syntax for mirroring/distributing across 6 nodes
NOTE: posted this to gluster-devel when I meant to post it to gluster-users

01 | 02 mirrored --|
03 | 04 mirrored --| distributed
05 | 06 mirrored --|

1) Would this command work for that? glusterfs-volgen --name repstore1 --raid 1 clustr-01:/mnt/data01 clustr-02:/mnt/data01 --raid 1 clustr-03:/mnt/data01 clustr-04:/mnt/data01 --raid 1 clustr-05:/mnt/data01 clustr-06:/mnt/data01 So the
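For reference, a hedged sketch of the invocation the diagram seems to call for. As I understand volgen, a single --raid 1 pairs consecutive bricks into mirrors and distributes across the resulting pairs; the flag behavior is an assumption, untested against this version:

    # Hedged sketch, not verified against this volgen release:
    glusterfs-volgen --name repstore1 --raid 1 \
        clustr-01:/mnt/data01 clustr-02:/mnt/data01 \
        clustr-03:/mnt/data01 clustr-04:/mnt/data01 \
        clustr-05:/mnt/data01 clustr-06:/mnt/data01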
2010 May 04
1
Posix warning : Access to ... is crossing device
I have a distributed/replicated setup with Glusterfs 3.0.2, that I'm testing on 4 servers, each with access to /mnt/gluster (which consists of all directories /mnt/data01 - data24) on each server. I'm using configs I built from volgen, but every time I access a file (via an 'ls -l') for the first time, I get all of these messages in my logs on each server: [2010-05-04 10:50:30] W
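That warning is the posix translator noticing that a path under the export lives on a different device. A hedged way to see where the boundary is, using the paths from the post (the %d field is the device ID):

    # Hedged check: a subdirectory whose device ID differs from the export
    # root sits on another filesystem, which is what triggers the warning.
    stat -c '%d  %n' /mnt/gluster
    stat -c '%d  %n' /mnt/gluster/*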
2011 Sep 12
0
cannot access /mnt/glusterfs: Stale NFS file handle
I've mounted my glusterfs share as I always do: mount -t glusterfs `hostname`:/bhl-volume /mnt/glusterfs and I can see it in df: # df -h | tail -n1 clustr-01:/bhl-volume 90T 51T 39T 57% /mnt/glusterfs but I can't change into it, or access any of the files in it: # ls -al /mnt/glusterfs ls: cannot access /mnt/glusterfs: Stale NFS file handle Any idea what could be causing
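A stale handle on a FUSE mount usually clears after a lazy unmount and remount. A minimal sketch reusing the mount command from the post; the client log name is assumed to follow the usual mountpoint-derived convention:

    umount -l /mnt/glusterfs
    mount -t glusterfs `hostname`:/bhl-volume /mnt/glusterfs
    # If it recurs, the client log is the place to look (name assumed):
    tail -n 50 /var/log/glusterfs/mnt-glusterfs.log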
2002 Jul 15
1
How to setup Winbindd:
Thanks for any information and your time!!! I have been working on getting my samba 2.2.5 server to work with my 2K domain in (native mode). Setup is on a RH 7.3 system with two NICs, one on an Internet network, the other for the LAN. What I need is to get the XP/2K/4.0 systems to see the samba shares and use them based on the users and groups that are on the domain. This is a 2K AD
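A minimal smb.conf sketch for Samba 2.2-era domain membership with winbind; DOMAIN and the ID ranges are placeholders, not values from the post:

    [global]
        workgroup = DOMAIN
        security = domain
        password server = *
        winbind uid = 10000-20000
        winbind gid = 10000-20000
        winbind separator = +
        winbind enum users = yes
        winbind enum groups = yes

Then join the domain and start the daemon (2.2-style join; the account name is assumed):

    smbpasswd -j DOMAIN -U Administrator
    /etc/init.d/winbind start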
2019 Nov 19
2
RFC: Moving toward Discord and Discourse for LLVM's discussions
David, I'm glad you mentioned Discord's T&Cs. I'm not generally concerned about these kinds of things, but Discord's seem particularly aggressive. Particularly the phrase "perpetual, nonexclusive, transferable, royalty-free, sublicensable, and worldwide license" is... a lot. Since LLVM is a permissively licensed project I assume many of our contributors care about
2011 Sep 17
1
Video capture on CentOS (6)
Hello, I need to do some analog video capture and I was wondering what the status of this is in CentOS 6. The last information I could find was here (obviously for CentOS 5): http://lists.centos.org/pipermail/centos/2009-September/082521.html Could anybody recommend a not-too-expensive video capture card (PCI, USB, FireWire, ...) which would be well supported (drivers easily available in base,
2002 Jul 12
0
Winbind and Samba:
Thanks for any information and your time!!! I have been working on getting my samba 2.2.5 server to work with my 2K domain in (native mode). Setup is on a RH 7.3 system with two NICs, one on an Internet network, the other for the LAN. What I need is to get the XP/2K/4.0 systems to see the samba shares and use them based on the users and groups that are on the domain. This is a 2K AD
2006 May 14
16
Lustre cluster file system and Xen 3
Hi, I am setting up a Xen 3 environment that has a file backend and 2 application servers with live migration between the 2 application servers.

    ---------    ---------
    | app 1 |    | app 2 |
    ---------    ---------
        \           /
         \         /
          \       /
      ----------------
      | file backend |
      ----------------

I am planning on using the Lustre cluster file system on the file backend. Are there
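Live migration needs both app servers to see the same storage path. A hedged domU config sketch under that assumption; the Lustre mountpoint and file names are hypothetical:

    # /etc/xen/domu1.cfg (sketch; paths are placeholders)
    name   = "domu1"
    memory = 512
    disk   = ['file:/mnt/lustre/domu1.img,xvda,w']
    # xend on both hosts must also allow relocation, e.g. in
    # /etc/xen/xend-config.sxp:
    #   (xend-relocation-server yes)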
2015 Feb 02
3
Fileserver Failover with AD and Gluster
On 02.02.2015 at 13:30, Sven Schwedas wrote:
> On 2015-02-02 12:56, Lars Hanke wrote:
>> I currently plan to move my storage to Gluster. One of the
>> anticipated advantages is to have Gluster replicate data among
>> physical nodes, i.e. if one node dies the file service can live
>> on.
>>
>> AD for
2006 Apr 09
1
Table creation failed
Hello, I come to you because I have something that I don't understand: I'm using udev on a Debian sid with a 2.6.15.1 kernel. I created a degraded RAID at /dev/md0, and when I tried doing mkfs.ext3 /dev/md0 I got:

mke2fs 1.39-WIP (29-Mar-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
4643968 inodes, 9277344 blocks
463867 blocks (5.00%)
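Before re-running mkfs, it is worth confirming the array state; a hedged sketch of the usual checks:

    cat /proc/mdstat            # is md0 active, and are all members in?
    mdadm --detail /dev/md0     # member disks and sync status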
2008 Jun 27
8
Boot from OCFS2
Dear List, I'm thinking about using Xen in production in our datacenter; I'm still testing around with it. Now I have some questions, just for basic understanding. We have for example this environment:

2 Nodes
1 SCSI pool server (connected via SCSI to both nodes)

Now I want to build a "cluster", so I would like to make this:

    Node 1 -> Primary -|
                       | --> domU
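For two nodes sharing the SCSI pool, OCFS2 needs an O2CB cluster definition on both. A minimal /etc/ocfs2/cluster.conf sketch; names and addresses are placeholders:

    cluster:
            node_count = 2
            name = xenpool

    node:
            ip_port = 7777
            ip_address = 192.168.0.1
            number = 0
            name = node1
            cluster = xenpool

    node:
            ip_port = 7777
            ip_address = 192.168.0.2
            number = 1
            name = node2
            cluster = xenpool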
2015 Feb 02
1
Fileserver Failover with AD and Gluster
Hi Lars, I have written a Howto in German for CTDB with GlusterFS, BUT there is still a problem. If you try to set the filesystem permissions via Windows it is not working. You can't delete any of the permissions. If you want, I can send it to you. I am also writing a Howto for Samba CTDB with a Corosync, Pacemaker and OCFS2 cluster. If you try it
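For context, the core CTDB pieces are small; a hedged sketch with placeholder addresses:

    # /etc/ctdb/nodes -- private address of every cluster node:
    10.0.0.1
    10.0.0.2
    # /etc/ctdb/public_addresses -- floating IPs clients connect to:
    192.168.1.100/24 eth0
    # smb.conf addition so Samba uses the clustered TDBs:
    [global]
        clustering = yes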
2010 Apr 09
1
[Gluster-devel] Gluster health/status
Gluster devs, I found the message below in the archives. glfs-health.sh is not included in the v3.0.3 sources - is there any plan to add this to the "extras" directory? What's its status? Ian

== snip ==
Raghavendra G  Mon, 22 Feb 2010 20:20:33 -0800
Hi all, Here is some work related to Health monitoring. glfs-health.sh is a shell script to check the health of glusterfs.
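Not the glfs-health.sh discussed above (it isn't in the v3.0.3 tree); just a hedged one-liner in the same spirit, treating a mount that answers stat within a timeout as healthy:

    # Hedged sketch: mountpoint is assumed; adjust path and timeout.
    if timeout 10 stat -t /mnt/glusterfs >/dev/null 2>&1; then
        echo "glusterfs mount OK"
    else
        echo "glusterfs mount unhealthy"
    fi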
2015 Feb 02
2
Fileserver Failover with AD and Gluster
I currently plan to move my storage to Gluster. One of the anticipated advantages is to have Gluster replicate data among physical nodes, i.e. if one node dies the file service can live on. AD for authentication also replicates nicely on distinct physical nodes. So the remaining single point of failure is the Samba file service. Is there something more intelligent than: if not \\serverA\share
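One "more intelligent" option is a Samba MSDFS referral, which hands clients both targets and lets them fail over themselves. A hedged sketch; server and path names are placeholders:

    # smb.conf on the DFS root server:
    [global]
        host msdfs = yes
    [dfs]
        path = /export/dfsroot
        msdfs root = yes
    # Inside the DFS root, a referral is just a specially named symlink:
    ln -s 'msdfs:serverA\share,serverB\share' /export/dfsroot/share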
2007 Mar 01
1
whoops, corrupted my filesystem
Hi all- I corrupted my filesystem by not doing a RTFM first... I got an automated email that the process monitoring the SMART data from my hard drive had detected a bad sector. Not thinking (or RTFMing), I did an fsck on my partition, which is the main partition. Now it appears that I've ruined the superblock. I am running Fedora Core 6. I am booting off the Fedora Core 6 Rescue CD in
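The standard recovery for a ruined primary superblock is to point e2fsck at one of the backups. A hedged sketch; /dev/sda2 stands in for the damaged partition:

    mke2fs -n /dev/sda2       # -n only prints; lists backup superblock locations
    e2fsck -b 32768 /dev/sda2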
2020 May 22
2
Clients, not always connecting since about 2.4.1, 2.4.2.
Hi Philip, I'll do more testing and logging over the weekend and see how I go. At the moment I have 2.4.4 Win32 running on port 9000 @ http://radioinvercargill.nz:9000/ I'm not sure how good this will be from overseas as the queue is only about 8 seconds, burst about 4 seconds. It's adequate for xDSL/Fibre and 4G mobile over here and is on a 450Mbps upstream fibre line. I've
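For reference, queue and burst are plain icecast.xml limits; a hedged sketch that roughly matches the ~8 s queue / ~4 s burst described above for a 128 kbps stream (the values are illustrative, not taken from the poster's config):

    <limits>
        <queue-size>131072</queue-size>   <!-- ~8 s at 16 KB/s -->
        <burst-size>65536</burst-size>    <!-- ~4 s at 16 KB/s -->
    </limits>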
2005 Feb 07
2
mke2fs options for very large filesystems
Wow, it takes a really long time to make a 2TB ext2fs. Are there better-than-default options that could be used for a large filesystem?

mke2fs 1.34 (25-Jul-2003)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
244203520 inodes, 488382016 blocks
24419100 blocks (5.00%) reserved for the super user
First data block=0
14905 block groups
32768 blocks per group,
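On a volume of mostly large files, much of the mkfs time goes into writing inode tables, so raising bytes-per-inode cuts both time and overhead. A hedged sketch; the device and the 1 MiB-per-inode figure are illustrative:

    # -i sets bytes per inode (fewer inodes = faster mkfs, less overhead)
    mke2fs -i 1048576 -O sparse_super,dir_index /dev/sdb1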
2013 Aug 30
2
Strange fsck.ext3 behavior - infinite loop
Greetings! Need your help, fellow penguins! Strange behavior with fsck.ext3: how do I remove a long orphaned inode list? After copying data over from one old RAID to another new RAID with rsync, the dump command would not complete because of filesystem errors on the new RAID. So I ran fsck.ext3 with the -y option and it would just run in an infinite loop restarting itself and then trying to correct
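When e2fsck keeps looping on the orphan list, retrying against a backup superblock sometimes breaks the cycle. A hedged sketch; /dev/md1 stands in for the new RAID:

    e2fsck -fy -b 32768 /dev/md1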
2006 Aug 23
2
question on mounting a partition that is in a disk image
How do I mount a partition that is in an image file? I have a file called centos.img that has 3 partitions in the file. I need to copy data to the third partition on that image file. I have seen things about a loopback device (which is fine) but then it talked about an offset parameter and I don't know what that is or, more importantly, what number to use. I hope I'm on the right track. How
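The offset is simply where the partition starts inside the image, in bytes: start sector times sector size (usually 512). A hedged sketch; the start sector shown is made up:

    fdisk -lu centos.img        # note the Start column for partition 3
    # e.g. if partition 3 starts at sector 1060290:
    mount -o loop,offset=$((1060290 * 512)) centos.img /mnt/part3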