similar to: "Subsystem 'sftp' already defined" error in openssh-9 when using Include

Displaying 20 results from an estimated 900 matches similar to: ""Subsystem 'sftp' already defined" error in openssh-9 when using Include"

2018 Apr 18
0
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Btw, I've now noticed at least 5 variations in toggling binary option values. Are they all interchangeable, or will using the wrong value not work in some cases?

yes/no
true/false
True/False
on/off
enable/disable

It's quite a confusing/inconsistent practice, especially given that many options will accept any value without erroring out/validation. Sincerely, Artem -- Founder, Android
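For anyone who wants to check this themselves, a minimal sketch (the volume name "myvol" and the choice of performance.flush-behind are placeholders; any boolean option should behave similarly): set the same option with two different spellings and read it back to see how glusterd stores it.

  # Set a boolean option with two different spellings of "off".
  gluster volume set myvol performance.flush-behind off
  gluster volume set myvol performance.flush-behind disable

  # Read the stored value back.
  gluster volume get myvol performance.flush-behind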
2018 Apr 17
0
Getting glusterfs to expand volume size to brick size
To clarify, I was on 3.13.2 previously, recently updated to 4.0.1, and the bug seems to persist in 4.0.1. Sincerely, Artem -- Founder, Android Police <http://www.androidpolice.com>, APK Mirror <http://www.apkmirror.com/>, Illogical Robot LLC beerpla.net | +ArtemRussakovskii <https://plus.google.com/+ArtemRussakovskii> | @ArtemR <http://twitter.com/ArtemR> On Mon, Apr
2018 Apr 18
1
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
On 04/18/2018 11:59 AM, Artem Russakovskii wrote:
> Btw, I've now noticed at least 5 variations in toggling binary option
> values. Are they all interchangeable, or will using the wrong value
> not work in some cases?
>
> yes/no
> true/false
> True/False
> on/off
> enable/disable
>
> It's quite a confusing/inconsistent practice, especially given that
2018 Apr 17
1
Getting glusterfs to expand volume size to brick size
That might be the reason. Perhaps the volfiles were not regenerated after upgrading to the version with the fix. There is a workaround detailed in [2] for the time being (you will need to copy the shell script into the correct directory for your Gluster release). [2] https://bugzilla.redhat.com/show_bug.cgi?id=1517260#c19 On 17 April 2018 at 09:58, Artem Russakovskii <archon810 at
2018 Apr 18
2
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Thanks for the link. Looking at the status of that doc, it isn't quite ready yet, and there's no mention of the option. Does it mean that whatever is ready now in 4.0.1 is incomplete but can be enabled via granular-entry-heal=on, and when it is complete, it'll become the default and the flag will simply go away? Is there any risk enabling the option now in 4.0.1? Sincerely, Artem
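If anyone wants to try it, a minimal sketch of enabling and inspecting the option (the volume name "myvol" is a placeholder; check the release notes for your version before enabling):

  # Enable granular entry self-heal on a replicate volume.
  gluster volume set myvol cluster.granular-entry-heal on

  # On some releases the equivalent toggle is exposed via the heal command:
  # gluster volume heal myvol granular-entry-heal enable

  # Confirm what the volume is actually using.
  gluster volume get myvol cluster.granular-entry-heal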
2018 Apr 17
1
Getting glusterfs to expand volume size to brick size
I just remembered that I didn't run https://docs.gluster.org/en/v3/Upgrade-Guide/op_version/ for this test volume/box like I did for the main production gluster, and one of these ops - either the heal or the op-version bump - resolved the issue. I'm now seeing:

pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
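For reference, a minimal sketch of the op-version steps the linked guide covers (the number below is illustrative only; take the real value from cluster.max-op-version or the guide for your release):

  # Current cluster op-version and the highest one the installed binaries support.
  gluster volume get all cluster.op-version
  gluster volume get all cluster.max-op-version

  # Bump the cluster op-version (example value only).
  gluster volume set all cluster.op-version 40000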
2018 Apr 18
2
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Hi Ravi,

Could you please expand on how these would help?

By forcing full here, we move the logic from the CPU to network, thus decreasing CPU utilization, is that right? This is assuming the CPU and disk utilization are caused by the differ and not by lstat and other calls or something.

> Option: cluster.data-self-heal-algorithm
> Default Value: (null)
> Description: Select between
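The Option / Default Value / Description block quoted above matches the format of gluster volume set help, so one way to pull up that entry yourself (a sketch; "myvol" is a placeholder volume name):

  # Show the built-in help text for the self-heal algorithm option.
  gluster volume set help | grep -A 3 'cluster.data-self-heal-algorithm'

  # Show what the volume currently resolves the option to.
  gluster volume get myvol cluster.data-self-heal-algorithm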
2018 Apr 17
0
Getting glusterfs to expand volume size to brick size
Ok, it looks like the same problem. @Amar, this fix is supposed to be in 4.0.1. Is it possible to regenerate the volfiles to fix this? Regards, Nithya

On 17 April 2018 at 09:57, Artem Russakovskii <archon810 at gmail.com> wrote:
> pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
> dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
> 3:
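Not the official fix referenced here, but a commonly suggested way to make glusterd rewrite the volfiles is to change any volume option and then reset it, since a successful volume set regenerates them (the option below is an arbitrary placeholder):

  # Touching any option causes glusterd to regenerate the volfiles.
  gluster volume set dev_apkmirror_data performance.readdir-ahead on
  gluster volume reset dev_apkmirror_data performance.readdir-ahead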
2018 Apr 18
0
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
On 04/18/2018 10:35 AM, Artem Russakovskii wrote:
> Hi Ravi,
>
> Could you please expand on how these would help?
>
> By forcing full here, we move the logic from the CPU to network, thus
> decreasing CPU utilization, is that right?

Yes, 'diff' employs the rchecksum FOP, which does a (sha256?) checksum that can consume CPU. So yes, it is sort of shifting the load from CPU
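A minimal sketch of actually switching the algorithm for a volume ("myvol" is a placeholder; resetting returns the option to its default so the heal daemon picks for itself):

  # Copy whole files during self-heal instead of computing checksummed diffs.
  gluster volume set myvol cluster.data-self-heal-algorithm full

  # Go back to the default behaviour.
  gluster volume reset myvol cluster.data-self-heal-algorithm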
2018 Apr 17
5
Getting glusterfs to expand volume size to brick size
pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
3: option shared-brick-count 3

dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol
3: option shared-brick-count 3

dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol
3: option shared-brick-count 3

Sincerely, Artem --
2018 Apr 18
3
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Following up here on a related issue that is very serious for us. I took down one of the 4 replicate gluster servers for maintenance today. There are 2 gluster volumes totaling about 600GB. Not that much data. After the server comes back online, it starts auto healing and pretty much all operations on gluster freeze for many minutes. For example, I was trying to run an ls -alrt in a folder with 7300
2023 Aug 02
1
"Subsystem 'sftp' already defined" error in openssh-9 when using Include
On Wed., Aug. 2, 2023 at 23:27, Artem Russakovskii <archon810 at gmail.com> wrote:
> For the last several releases (perhaps with the release of openssh 9?),
> upgrading each version of openssh started wiping the current sshd_config
> and replacing it with the default config, at least on OpenSUSE 15.4 via
> zypper/yast.

Where do you get your sshd from? The default
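For context on the subject line: sshd refuses to start when the sftp subsystem is declared twice, which is easy to hit once a restored or included config duplicates the distro default. A hedged sketch of the failure mode (file names and the sftp-server path are openSUSE-style examples, not necessarily what any package ships):

  # /etc/ssh/sshd_config (packaged default)
  Include /etc/ssh/sshd_config.d/*.conf
  Subsystem sftp /usr/lib/ssh/sftp-server

  # /etc/ssh/sshd_config.d/99-local.conf (admin's preserved settings)
  # This second definition triggers: Subsystem 'sftp' already defined
  Subsystem sftp /usr/lib/ssh/sftp-server

Running sshd -t after editing flags the duplicate before the daemon is restarted.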
2018 Apr 18
0
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
On 04/18/2018 10:14 AM, Artem Russakovskii wrote:
> Following up here on a related and very serious for us issue.
>
> I took down one of the 4 replicate gluster servers for maintenance
> today. There are 2 gluster volumes totaling about 600GB. Not that much
> data. After the server comes back online, it starts auto healing and
> pretty much all operations on gluster freeze for
2018 Apr 10
0
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Hi Vlad, I actually saw that post already and even asked a question 4 days ago (https://serverfault.com/questions/517775/glusterfs-direct-i-o-mode#comment1172497_540917). The accepted answer also seems to go against your suggestion to enable direct-io-mode, as it says it should be disabled for better performance when used just for file accesses. It'd be great if someone from the Gluster team
2018 Apr 05
2
[dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory selfheal failed: Unable to form layout for directory /
Hi, I noticed when I run gluster volume heal data info, the following message shows up in the log, along with other stuff:

[dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory selfheal failed: Unable to form layout for directory /

I'm seeing it on Gluster 4.0.1 and 3.13.2. Here's the full log after running heal info:
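For anyone trying to reproduce this, a small sketch of collecting the same information (log file names under /var/log/glusterfs/ vary by mount point and daemon, so the grep below just searches the whole directory):

  # Re-run the command that triggers the message.
  gluster volume heal data info

  # Find which log the dht selfheal warning lands in.
  grep -r 'dht_selfheal_directory' /var/log/glusterfs/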
2018 Apr 10
2
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
I wish I knew or was able to get a detailed description of those options myself. Here is direct-io-mode: https://serverfault.com/questions/517775/glusterfs-direct-i-o-mode Same as you, I ran tests on a large volume of files and found that the main delays are in attribute calls, ending up with those mount options to improve performance. I discovered those options through basically googling this user list with
2018 Apr 10
0
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Hi Vlad, I'm using only localhost: mounts. Can you please explain what effect each option has on the performance issues shown in my posts? "negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5" From what I remember, direct-io-mode=enable didn't make a difference in my tests, but I suppose I can try again. The explanations about
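For readers who want to test these one by one, a sketch of passing the same string on the command line (volume name "myvol" and mount point are placeholders; drop options individually to isolate their effect):

  # FUSE-mount a Gluster volume from the local server with the options above.
  mount -t glusterfs \
    -o negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5 \
    localhost:/myvol /mnt/myvol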
2018 Apr 03
3
cluster.readdir-optimize and disappearing files/dirs bug
Hi, As many of you know, gluster suffers from pretty bad performance issues when there are lots of files. One way to at least attempt to improve performance is setting cluster.readdir-optimize to on. However, based on my recent tests (using Gluster 3.13.2), as well as tests of many others (like http://lists.gluster.org/pipermail/gluster-devel/2016-November/051417.html), there's a bug that
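For reference, the toggle under test and its rollback (a sketch; "myvol" is a placeholder):

  # Enable the readdir optimization being discussed.
  gluster volume set myvol cluster.readdir-optimize on

  # If directory listings start missing entries, revert to the default.
  gluster volume reset myvol cluster.readdir-optimize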
2018 Apr 16
2
Getting glusterfs to expand volume size to brick size
Hi Nithya, I'm on Gluster 4.0.1. I don't think the bricks were smaller before - if they were, maybe 20GB because Linode's minimum is 20GB, then I extended them to 25GB, resized with resize2fs as instructed, and rebooted many times over since. Yet, gluster refuses to see the full disk size. Here's the status detail output:

gluster volume status dev_apkmirror_data detail

Status
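A quick way to compare what the filesystem reports for a brick against what Gluster reports (a sketch; the brick mount path below is a placeholder):

  # Size the kernel sees after resize2fs.
  df -h /mnt/pylon_block1

  # Size Gluster reports for the same brick.
  gluster volume status dev_apkmirror_data detail | grep -E 'Brick|Disk Space'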
2018 Apr 10
2
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
You definitely need mount options in /etc/fstab; use the ones from here: http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html I went on with using local mounts to achieve performance as well. Also, the 3.12 or 3.10 branches would be preferable for production.

On Fri, Apr 6, 2018 at 4:12 AM, Artem Russakovskii <archon810 at gmail.com> wrote:
> Hi again,
>
> I'd like to
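As a persistent counterpart to the mount command sketched earlier in this listing, an /etc/fstab entry could look like this (volume name and mount point are placeholders; the option values are the ones quoted in this thread, not a recommendation):

  localhost:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5  0 0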