Displaying 20 results from an estimated 6000 matches similar to: "dealing with mke2fs -T option"
2005 Feb 07
2
mke2fs options for very large filesystems
Wow, it takes a really long time to make a 2TB ext2fs. Are there
better-than-default options that could be used for a large filesystem?
mke2fs 1.34 (25-Jul-2003)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
244203520 inodes, 488382016 blocks
24419100 blocks (5.00%) reserved for the super user
First data block=0
14905 block groups
32768 blocks per group,
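Most of that mke2fs time goes into writing ~244 million inodes. If the filesystem will mostly hold large files, the usual lever is a bigger bytes-per-inode ratio. A minimal sketch (the `mke2fs` invocation is commented out because it is destructive; /dev/sdX is a placeholder):

```shell
# Default is one inode per 8 KiB of space; -T largefile uses one per 1 MiB,
# cutting the inode count (and the time spent writing inode tables) ~128x.
fs_blocks=488382016                 # 4 KiB blocks, from the mke2fs output above
fs_bytes=$((fs_blocks * 4096))
default_inodes=$((fs_bytes / 8192))
largefile_inodes=$((fs_bytes / 1048576))
echo "default: $default_inodes inodes, largefile: $largefile_inodes inodes"
# The actual invocation (destructive! /dev/sdX is a placeholder):
#   mke2fs -j -T largefile /dev/sdX
```

(mke2fs rounds the inode count up per block group, which is why its own report shows slightly more than the raw division.)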
2001 Dec 05
2
owner of a smbd thread
Hi,
I've set up a Samba server accessed by NTWS 4.
When I do a "ps aux" on the server, I can see a line
for each connected user with an "smbd -D" command.
The owners of these commands are mostly root, but
sometimes the users themselves.
How can I force the owner of the command, so that I
can get a list of connected users with a "ps
aux" and an
2005 May 19
1
mke2fs options for very large filesystems
>Yes, if you are creating larger files. By default mke2fs assumes the average
>file size is 8kB and allocates a corresponding number of inodes there. If,
>for example, you are storing lots of larger files there (digital photos, MP3s,
>etc.) that are in the MB range, you can use "-T largefile" or "-T largefile4"
>to specify an average file size of 1MB or 4MB
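The bytes-per-inode ratios behind those -T types can be checked with a little arithmetic (ratios as documented for mke2fs of that era: 8 KiB default, 1 MiB for largefile, 4 MiB for largefile4; the 100 GiB size is just an example):

```shell
# How many inodes a 100 GiB filesystem would get at each -T type's ratio.
for t in default:8192 largefile:1048576 largefile4:4194304; do
  name=${t%%:*}
  ratio=${t##*:}
  echo "$name: $((107374182400 / ratio)) inodes"
done
```

If the real average file size is larger than the ratio you pick, you run out of space first; if it is smaller, you run out of inodes first, so pick conservatively.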
2001 Dec 07
1
"smbd -D" gone wild ...
Hi,
I have a huge problem with Samba.
It's the PDC of the school LAN.
Clients are NTWS 4.
Samba has been up and running for 11 days, but for
the last 2 days it has forced me to reboot the whole
server every day to keep it working.
Let me explain: when doing a "ps aux" or "smbstatus", I
see A LOT of "smbd -D" processes started for the same
user, 10 times in about 2 minutes.
When I try to kill
2006 Nov 09
1
Ext3 - which blocksize for small files?
Hi,
I want to use an ext3 partition (~1TB) for mail storage, which means tons of
small files.
Does anyone have recommendations about block size, inodes, etc. for mkfs.ext3?
Thanks in advance,
David
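For a mail spool the risk runs the other way from the large-file case: you can exhaust inodes long before space. One plausible sketch, assuming typical messages of a few KiB (the `mkfs.ext3` line is commented out because it is destructive; /dev/sdX is a placeholder):

```shell
# More inodes (-i 2048: one per 2 KiB) and a smaller block size to waste less
# tail space per tiny file; dir_index speeds up huge maildir directories.
#   mkfs.ext3 -b 2048 -i 2048 -O dir_index /dev/sdX
# Sanity-check what -i 2048 means on a 1 TiB partition:
inodes=$((1099511627776 / 2048))
echo "$inodes inodes"
```

Half a billion inodes is a lot of inode-table overhead, so measure your real average message size before committing to a ratio.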
2001 Feb 23
7
Samba and VPN
Hi!
I have printers and files shared to all windoze platforms in my group. Now I
want to also access the samba box via a VPN. So, the configuration:
NT----PPTP---NT/RAS-----Linux 7
Any experience with this type of config? A quick test shows the samba host, but
when I try to access it, the network path is not found.
Thanks!
++Dirk
2001 Dec 18
1
mounting NFS or SMBFS ?
Hi,
I have users connecting to a samba server via WinNTWS
4.0 (home directories are /home/%u).
While the workstations have both Windows and Linux
installed, I want to allow users to connect to their
/home directory when logging into Linux.
Do you think it's better to automount the folders
using an NFS mount or an SMBFS mount?
What are the advantages and drawbacks of each method?
2018 Apr 17
5
Getting glusterfs to expand volume size to brick size
pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
3: option shared-brick-count 3
dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol
3: option shared-brick-count 3
dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol
3: option shared-brick-count 3
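Assuming this is the known shared-brick-count accounting bug (where the free-space calculation is divided by the count), the workaround usually suggested is to set the count to 1 in each generated volfile and restart glusterd. Demonstrated here on a scratch copy; on a real box the files live under /var/lib/glusterd/vols/<volname>/:

```shell
# Make a throwaway volfile with the bad value, then apply the same sed
# you would run against the real *.vol files.
tmp=$(mktemp -d)
printf 'option shared-brick-count 3\n' > "$tmp/brick.vol"
sed -i 's/shared-brick-count [0-9]*/shared-brick-count 1/' "$tmp/brick.vol"
cat "$tmp/brick.vol"
```

Note the files are regenerated by glusterd, so an upgrade that fixes the underlying bug is the durable solution; the sed is a stopgap.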
Sincerely,
Artem
--
2016 May 25
1
Slow RAID Check/high %iowait during check after updgrade from CentOS 6.5 -> CentOS 7.2
On 2016-05-25 19:13, Kelly Lesperance wrote:
> Hdparm didn't get far:
>
> [root at r1k1 ~] # hdparm -tT /dev/sda
>
> /dev/sda:
> Timing cached reads: Alarm clock
> [root at r1k1 ~] #
Hi Kelly,
Try running 'iostat -xdmc 1'. Look for a single drive that has
substantially greater await than ~10msec. If all the drives
except one are taking 6-8msec, but one is very
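The kind of filtering suggested above can be scripted. A sketch using canned sample lines (in real use you would pipe `iostat -xdmc 1` into the awk; the await column's position varies between sysstat versions, so check the header and adjust the field number):

```shell
# Flag any device whose await exceeds ~10 ms. The printf stands in for
# iostat output reduced to "device await" pairs.
out=$(printf 'sda 6.2\nsdb 7.1\nsdc 48.9\n' |
      awk '$2 > 10 { print $1, "await", $2, "ms <-- suspect" }')
echo "$out"
```

A single drive with await far above its siblings during a RAID check is the classic signature of a failing (or badly remapping) disk.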
2010 Sep 13
3
Proper procedure when device names have changed
I am running zfs-fuse on an Ubuntu 10.04 box. I have a dual mirrored pool:
mirror sdd sde mirror sdf sdg
Recently the device names shifted on my box, and the devices are now sdc, sdd, sde, and sdf. The pool is of course very unhappy that the mirrors are no longer matched up and one device is "missing". What is the proper procedure to deal with this?
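The common remedy (assuming zfs-fuse supports the same import flags as ZFS proper, which it generally does) is to export the pool and re-import it scanning the stable /dev/disk/by-id names, so future sdX reshuffles stop mattering. A command sketch; "tank" is a placeholder pool name:

```shell
# Export, then re-import using persistent device identifiers:
#   zpool export tank
#   zpool import -d /dev/disk/by-id tank
# Verify the pool now references by-id paths:
#   zpool status tank
```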
-brian
2018 Apr 17
0
Getting glusterfs to expand volume size to brick size
Ok, it looks like the same problem.
@Amar, this fix is supposed to be in 4.0.1. Is it possible to regenerate
the volfiles to fix this?
Regards,
Nithya
On 17 April 2018 at 09:57, Artem Russakovskii <archon810 at gmail.com> wrote:
> pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
> dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
> 3:
2018 Apr 16
2
Getting glusterfs to expand volume size to brick size
Hi Nithya,
I'm on Gluster 4.0.1.
I don't think the bricks were smaller before; if they were, they would have
been 20GB, since Linode's minimum is 20GB. In any case, I extended them to
25GB, resized with resize2fs as instructed, and have rebooted many times
since. Yet gluster refuses to see the full disk size.
Here's the status detail output:
gluster volume status dev_apkmirror_data detail
Status
2016 May 25
6
Slow RAID Check/high %iowait during check after updgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it.
We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs:
2x E5-2650
128 GB RAM
12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA
Dual port 10 GB NIC
The drives are configured as one large
2002 Feb 19
2
a good way of life
Thank you all for your answers.
One told me that in order to allow the students to
upload/download from their home directories, FTP or
something like that is a simple and easy choice.
As domain logon and roaming profiles are not needed, and
FTP can be logged, I'll use this.
Another question for our brilliant candidates:
How do I batch the creation of Samba accounts?
I mean, every time you
2016 May 27
2
Slow RAID Check/high %iowait during check after updgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one.
Here is an iostat example from a host within the same cluster, but without the RAID check running:
[root at r2k1 ~] # iostat -xdmc 1 10
Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
2018 Apr 14
2
Getting glusterfs to expand volume size to brick size
Hi,
I have a 3-brick replicate volume, but for some reason I can't get it to
expand to the size of the bricks. The bricks are 25GB, but even after
multiple gluster restarts and remounts, the volume is only about 8GB.
I believed I could always extend the bricks (we're using Linode block
storage, which allows extending block devices after they're created), and
gluster would see the
2006 Dec 01
1
maintain 6TB filesystem + fsck
I posted on the rhel list about properly creating a 6TB ext3 filesystem and
tuning it here: http://www.redhat.com/archives/nahant-list/2006-November/msg00239.html
i am reading lots of ext3 links like......
http://www.redhat.com/support/wpapers/redhat/ext3/
http://lists.centos.org/pipermail/centos/2005-September/052533.html
http://batleth.sapienti-sat.org/projects/FAQs/ext3-faq.html
............but
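One tuning step those ext3 references commonly recommend for a filesystem this size: disable the mount-count and time-based automatic fsck triggers, so a routine reboot doesn't stall for hours, and schedule manual checks instead. A command sketch; /dev/sdX is a placeholder:

```shell
# Turn off automatic forced checks (run scheduled manual fscks instead):
#   tune2fs -c 0 -i 0 /dev/sdX
# Verify the new check settings:
#   tune2fs -l /dev/sdX | grep -i check
```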
2007 May 02
2
Faster mkfs.ext3
I'm currently working with a testing system that involves running
mkfs.ext3 on some pretty large devices on a regular basis. This is
getting fairly painful, and I was wondering if there was some way to
speed this up. Understood that the end result might be a filesystem
that has less of a safety factor (say, fewer superblock backups) but
the tradeoff might be worth it in this case.
David
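Most of mkfs.ext3's wall-clock time goes into writing inode tables, so the biggest single win is creating fewer inodes. Rough numbers for a 2 TiB device, assuming 128-byte ext3 inodes (the invocation is commented out because it is destructive; /dev/sdX is a placeholder):

```shell
# Compare inode-table sizes at the default ratio vs -T largefile4.
size=$((2 * 1024 * 1024 * 1024 * 1024))   # 2 TiB in bytes
default_mib=$((size / 8192 * 128 / 1024 / 1024))
lf4_kib=$((size / 4194304 * 128 / 1024))
echo "default  (-i 8192):     $default_mib MiB of inode tables"
echo "largefile4 (-T largefile4): $lf4_kib KiB of inode tables"
# Sketch of the invocation (destructive!):
#   mkfs.ext3 -T largefile4 /dev/sdX
```

Fewer superblock backups (the sparse_super feature) help too, but sparse_super has long been the default; the inode count dominates.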
2010 Sep 26
1
hotplug Backup-hdd
Hi,
I have a system with:
/dev/sda - System Hard Drive
/dev/md0 - SoftwareRaid 5 for Data
with
/dev/sdb
/dev/sdc
/dev/sdd
Now i have one more in a removeable frame for Backup
/dev/sde
/dev/md0 is forwarded to an Samba-Domain for Data service in the network.
What's the best way to sync the data from /dev/md0 to /dev/sde?
Is hotplugging possible in such a setup? So when I plug in /dev/sde,
2018 Apr 17
1
Getting glusterfs to expand volume size to brick size
I just remembered that I didn't run
https://docs.gluster.org/en/v3/Upgrade-Guide/op_version/ for this test
volume/box like I did for the main production gluster, and one of these ops
(either the heal or the op-version bump) resolved the issue.
I'm now seeing:
pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol