Damien Miller
2021-Sep-22 07:19 UTC
Howto log multiple sftpd instances with their chroot shared via NFS
On Tue, 21 Sep 2021, Hildegard Meier wrote:

> OpenSSH 5.9p1 + 7.6p1
>
> syslog-ng 3.3.4 + 3.13.2
>
> Hello, having an Ubuntu server with sftpd running where /var/data/chroot/ is an NFS mount from a remote central NFS server,
> and each sftpd user's chroot home is /var/data/chroot/<username>/
> and every user has a log device /var/data/chroot/<username>/dev/log which I read in successfully with syslog-ng:
>
> source s_chroot_<username> { unix-stream("/var/data/chroot/<username>/dev/log" optional(yes) ); };
> destination d_sftp_<username> { file("/var/log/sftp/<username>.log"); };
> log { source(s_chroot_<username>); destination(d_sftp_<username>); };
>
> Now I have a second sftpd server in parallel, with the same user database, which also mounts /var/data/chroot/ via NFS and has the same syslog-ng config,
> so every user can log in on either server. This is for high availability. This works so far.
>
> What is not working is the sftpd logging: each sftp user's log is only available on one sftp server exclusively, namely the one where syslog-ng was started last,
> because, as I understand it, that syslog-ng takes the exclusive unix socket file lock for each user's /dev/log.
>
> So, if a user logs in on the first server, where syslog-ng was started last, the user's sftp activity is logged on the first server.
> But if the user logs in on the second server, its sftp activity is not logged, neither on the second nor on the first server.
>
> If syslog-ng is then restarted on the second server, the sftp user's activity is logged exclusively on the second server, and only for logins on the second server.
>
> How can I get the sftp user's activity logged on each sftp server when a user logs in to that server, while the user's home is shared on both servers via NFS?

Right now there is no solution for this inside OpenSSH. There have been some proposals for post-auth logging to be proxied via the privileged sshd monitor process but we haven't pursued them yet.

Maybe someone with more Linux/NFS wit could suggest an OS-side solution for you?

-d
David Newall
2021-Sep-22 09:18 UTC
Howto log multiple sftpd instances with their chroot shared via NFS
Hi Hildegard,

On Tue, 21 Sep 2021, Hildegard Meier wrote:

> Now I have a second sftpd server in parallel, with the same user
> database, which also mounts /var/data/chroot/ via NFS and has the same
> syslog-ng config, so every user can log in on either server. This is for
> high availability. This works so far.
>
> What is not working is the sftpd logging: each sftp user's log is
> only available on one sftp server exclusively, namely the one
> where syslog-ng was started last, because, as I understand it, that
> syslog-ng takes the exclusive unix socket file lock for each user's /dev/log.
>
> So, if a user logs in on the first server, where syslog-ng was started
> last, the user's sftp activity is logged on the first server.
> But if the user logs in on the second server, its sftp activity is
> not logged, neither on the second nor on the first server.

Forward the log entries on both machines to a log host?  E.g.

destination d_tcp { network("log_host" port(1999)); };

Regards,

David
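For illustration, a minimal, untested sketch of such a forwarding setup could look like the following. The host name "log_host", the port 1999 and the file layout on the log host are placeholders, and the network() driver assumes a reasonably recent syslog-ng:

# on each sftp server: also send what is read from the chroot sockets to the central log host
destination d_loghost { network("log_host" port(1999)); };
log { source(s_chroot_<username>); destination(d_loghost); };

# on the log host: accept the forwarded messages and write one file per sending host
source s_remote { network(ip(0.0.0.0) port(1999)); };
destination d_remote { file("/var/log/sftp/${HOST}.log"); };
log { source(s_remote); destination(d_remote); };

Splitting the forwarded stream into per-user files on the log host would still require filtering, since all messages arrive under the same program name (internal-sftp).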
Hildegard Meier
2021-Sep-28 15:00 UTC
Howto log multiple sftpd instances with their chroot shared via NFS
Hello all,

thank you all very much for your answers and suggestions. Since the discussion has been spread over several mails now, I would like to try to summarize everything in one mail again here and fill in the information that was missing before.

We have 800 (eight hundred) sftp customers; each sftp customer has the same simple local Linux account on both sftp servers (a simple entry in /etc/passwd and /etc/shadow etc.).

(Before, we had only one sftp server, but for higher availability we now want to run two or more sftp servers in parallel, accessed via a TCP (sftp) load balancer.)

Each customer account is in the group "sftp-cust", and only members of that group are allowed to log in via sftp. Each customer has its chrooted home dir in /var/data/chroot/<username>/.

Here is the relevant part of the sftpd config:
---------------------------------------------------------------
AllowGroups sftp-cust

Subsystem sftp internal-sftp -f LOCAL5 -l INFO

Match Group sftp-cust
    ChrootDirectory %h
    ForceCommand internal-sftp -u 0002 -f LOCAL5 -l INFO
    AllowTcpForwarding no
---------------------------------------------------------------

Each sftp customer's sftp activity needs to be available in a dedicated log file per customer, so that our support can look into it. Here is an example of such a log file:

Sep 28 15:54:25 myhostname internal-sftp[1618]: session opened for local user <username> from [1.2.3.4]
Sep 28 15:55:52 myhostname internal-sftp[27918]: remove name "/in/file.dat"
Sep 28 15:55:52 myhostname internal-sftp[27918]: sent status No such file
Sep 28 15:55:52 myhostname internal-sftp[27918]: open "/in/file.dat" flags WRITE,CREATE,TRUNCATE mode 0666
Sep 28 15:55:52 myhostname internal-sftp[27918]: close "/in/file.dat" bytes read 0 written 2966
Sep 28 15:55:52 myhostname internal-sftp[27918]: set "/in/file.dat" modtime 20210928-15:55:46

If one does not use the /dev/log in the chroot environment (that is, /var/data/chroot/<username>/dev/log as an absolute path), you get a global sftpd log (I think in /var/log/messages on the server, or something like that).

So one solution for the log problem would be to not use the chroot logging, but to parse the global sftpd log (which is available locally on both sftpd servers, separated for the local logins). But I think it is not trivial to make this reliable and robust, since we would need to parse the process ID (sftpd session), check which username that session belongs to, and write those log lines into the username-specific log file. Of course, process IDs are reused for different sessions, we have overlapping sessions etc. But if there is no other solution, then we need to try that. Maybe someone has already written such a session-tracking parser. Does somebody know of a log analyzer that can do this?

A problem would be, e.g., if the user never logs out and the log file rotates daily: after one day, the parser could not find any "session opened for local user xxx" log lines anymore.
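To illustrate the idea, here is an untested sketch of such a PID-tracking splitter. The output path and the exact message formats are assumptions based on the example log above, and log rotation and already-running sessions are exactly the hard cases it does not handle:

#!/usr/bin/env python3
# Untested sketch: split a combined internal-sftp log (read from stdin)
# into per-user files by remembering which PID belongs to which session.
# Output directory and message formats are assumptions, not a tested setup.
import re
import sys

line_re = re.compile(r'internal-sftp\[(\d+)\]: (.*)')
open_re = re.compile(r'session opened for local user (\S+) from')

pid_to_user = {}  # PID of an internal-sftp process -> username of its session

for line in sys.stdin:
    m = line_re.search(line)
    if not m:
        continue
    pid, msg = m.group(1), m.group(2)
    opened = open_re.match(msg)
    if opened:
        pid_to_user[pid] = opened.group(1)   # a new session starts here
    user = pid_to_user.get(pid)
    if user is None:
        continue                             # session start never seen (e.g. lost to rotation)
    with open('/var/log/app/sftp/%s.log' % user, 'a') as out:
        out.write(line)
    if msg.startswith('session closed'):
        del pid_to_user[pid]                 # PIDs get reused, so forget the mapping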
If we want to try to keep using the available per-session chroot logging, man (8) sftp-server says:

"For logging to work, sftp-server must be able to access /dev/log. Use of sftp-server in a chroot configuration therefore requires that syslogd(8) establish a logging socket inside the chroot directory."

As Peter has written here
https://lists.mindrot.org/pipermail/openssh-unix-dev/2021-September/039669.html
as I understand it, the name of the log device, /dev/log, is hard-coded in the system C library, so it cannot be changed (e.g. to log to a chrooted /dev/log_hostname1 on hostname1 and a chrooted /dev/log_hostname2 on hostname2).

With the syslog-ng we use, we only need to create the directory /var/data/chroot/<username>/dev for each user once, when we create a new account. We have the following syslog-ng config snippet file for each sftp user:

source s_chroot_<username> { unix-stream("/var/data/chroot/<username>/dev/log" optional(yes) ); };
destination d_sftp_<username> { file("/var/log/app/sftp/<username>.log"); };
log { source(s_chroot_<username>); destination(d_sftp_<username>); };

When starting syslog-ng, the unix stream file /var/data/chroot/<username>/dev/log is created automatically by syslog-ng. If you delete it, it gets recreated upon syslog-ng restart.

But the problem is that the last started syslog-ng acquires the lock for the NFS-shared /var/data/chroot/<username>/dev/log, so the other server cannot read it anymore, and so there is no sftp log for the sessions on the other server.

I guess this is because syslog-ng creates /dev/log in the user's chroot directories as a Unix stream socket (see https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.22/administration-guide/28#TOPIC-1209171). This also seems to be called an IPC socket (inter-process communication socket) or AF_UNIX socket:

"It is used in POSIX operating systems for inter-process communication. The correct standard POSIX term is POSIX Local IPC Sockets. Unix domain connections appear as byte streams, much like network connections, but all data _remains within the local computer_."

"It means that if you create a AF_UNIX socket on a NFS disk which is shared between two machines A and B, you cannot have a process on A writing data to the unix socket and a process on B reading data from that socket. The communication happens at kernel level, and you can only transfer data among processes sitting in the same kernel."

(source: https://stackoverflow.com/questions/783134/af-unix-domain-why-use-local-file-names-only )

Since we have 800 users, it would be impractical and unrobust to use user-specific (e.g. bind) mounts (e.g. 800 bind-over-mounts). To keep it simple, clear and coherent, all users' homes must be on the same single NFS share.

We need to stick with Ubuntu Linux, as we have established management processes only for this operating system.

I hope I did not forget some information that was missing before.

Thanks,
Hildegard
Hildegard Meier
2021-Oct-01 05:14 UTC
Howto log multiple sftpd instances with their chroot shared via NFS
Hello all,

first let me thank you all for your input and suggestions. I think I have now found a workaround for the problem.

> Since we have 800 users, it would be impractical and unrobust to use user-specific (e.g. bind) mounts (e.g. 800 bind-over-mounts). To keep it simple, clear and coherent, all users' homes must be on the same single NFS share.

It seems that I cannot uphold this requirement of not having user-specific bind-over-mounts (while the requirement that there is only exactly one NFS mount still holds).

My workaround is the following:

sudo mkdir /var/data/dev    # Create a directory under which user subdirectories are created

For every username <username> and the user's primary group <groupname>, do the following:

sudo mkdir /var/data/dev/<username>
sudo chmod 550 /var/data/dev/<username>            # This restrictive permission is a requirement, I think
sudo chgrp <groupname> /var/data/dev/<username>    # so the user can read the directory

So the new directory is exactly the same as the existing /var/data/chroot/<username>/dev directory (which is on the NFS mount /var/data/chroot/).

Then do

mount --bind /var/data/dev/<username> /var/data/chroot/<username>/dev

so /var/data/chroot/<username>/dev is now effectively local on the sftp server, no longer on the NFS mount.

Then change the syslog-ng config from

source s_chroot_<username> { unix-stream("/var/data/chroot/<username>/dev/log" optional(yes) ); };

to

source s_chroot_<username> { unix-stream("/var/data/dev/<username>/log" optional(yes) ); };

(This is not strictly needed, but I think it is nice to have syslog-ng now definitely reading only from a local file, guaranteed not from the NFS mount anymore.)

I have tested this successfully with one user so far - whether the user logs in on the one sftp server or the other, syslog-ng can now log the sftp session on the affected sftp server.

While it is still a mess to have 800 (or in the future maybe 2000 or 3000) bind mounts, which I would very much like to avoid, I think this workaround is acceptable because the impact is limited to the logging functionality and not the sftp service itself. That is, if there are problems with the bind mounts, the impact is only that there is no sftp session logging. Also, this is very simple, uniform and clear. And one has 100% reliable sftp session logging. No need to change anything else of the well-established production system.

I think I will create systemd unit files for every bind mount (I experienced systemd dependency cycles with bind mounts in /etc/fstab on Ubuntu 18 boot before anyway, so systemd units are needed anyway). Maybe one can define a systemd dependency so that when you unmount the NFS mount /var/data/chroot/, all the bind mounts under it are automatically unmounted first (sftpd needs to be stopped first of course, otherwise the mount would be busy), and so that the bind mounts are automatically mounted again after /var/data/chroot is mounted again. So there should not be much hassle with all those simple, uniform bind mounts (a sketch of such a unit follows below).

I think I will implement a Nagios monitoring check that verifies for every username that /var/data/chroot/<username>/dev is a mountpoint, so logging is assured.

According to https://serverfault.com/questions/102588/maximum-numbers-for-file-system-mounts-in-linux/927860#927860 the Linux kernel mount maximum is 100 000, so hopefully 2000 or 3000 will not be a problem.
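For the systemd approach mentioned above, an untested sketch of one such per-user bind-mount unit might look like this. The username "exampleuser" is a placeholder, and the unit file name has to be the systemd escape of the mount point:

# /etc/systemd/system/var-data-chroot-exampleuser-dev.mount  (untested sketch)
# The file name must match the mount point, e.g. the output of
#   systemd-escape -p --suffix=mount /var/data/chroot/exampleuser/dev

[Unit]
Description=Bind local /var/data/dev/exampleuser into the user's chroot
# Start only after the NFS share is mounted; an explicit stop of the NFS
# mount unit should also stop this bind mount.
Requires=var-data-chroot.mount
After=var-data-chroot.mount

[Mount]
What=/var/data/dev/exampleuser
Where=/var/data/chroot/exampleuser/dev
Type=none
Options=bind

[Install]
WantedBy=multi-user.target

One such unit file would have to be generated per user, e.g. by a small script looping over the members of the sftp-cust group.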
-- Possible alternative

I guess (not tested) an alternative could be to hardlink (so it needs to be on the same (NFS) file system) every /var/data/chroot/<username>/dev/log to e.g. /var/data/chroot/general_dev_log (which would need access permission for all users, of course) and do

mount --bind /var/data/general_dev_log /var/data/chroot/general_dev_log

so every sftp session would log to the one and only log device (unix socket), again using the local bind mount. So you could write all the sftp session logs of all users into one log file with syslog-ng.

Advantages:
- only one bind mount needed
- only one syslog-ng unix-stream configuration needed

Disadvantage: you need to parse and filter the one big session log to create user-specific log files which only contain the sessions for that user. I have no experience with such tools, and I think this filtering cannot be 100% reliable; there will always be some cases where the filter will fail and log entries will be missing.

Best regards