similar to: shadow_copy2

Displaying 20 results from an estimated 1000 matches similar to: "shadow_copy2"

2024 Dec 18
1
shadow_copy2
Hi Ralph, on 18.12.24 at 18:59, Ralph Boehme via samba wrote: > Hi Stefan > > On 12/18/24 5:40 PM, Stefan Kania via samba wrote: >> skania at cluster01:~$ ls -ld /glusterfs/admin-share/daten1/.snaps >> drwxr-xr-x 2 root root 4096 1. Jan 1970 /glusterfs/admin-share/ >> daten1/.snaps > > what are the permissions of *each* path component > > # ls -ld /
2024 Dec 18
1
shadow_copy2
Hi Stefan On 12/18/24 5:40 PM, Stefan Kania via samba wrote: > skania at cluster01:~$ ls -ld /glusterfs/admin-share/daten1/.snaps > drwxr-xr-x 2 root root 4096 1. Jan 1970 /glusterfs/admin-share/ > daten1/.snaps what are the permissions of *each* path component # ls -ld / # ls -ld /glusterfs # ls -ld /glusterfs/admin-share/ # ls -ld /glusterfs/admin-share/daten1/ -- SerNet Samba
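For illustration, a minimal sketch of checking every component in one go (the path is taken from the thread; the loop itself is not from the original mails). Each directory along the way needs at least x (traverse) permission for the connecting user:

    p=/glusterfs/admin-share/daten1/.snaps
    while [ "$p" != "/" ]; do
        ls -ld "$p"          # mode of this component; the user needs x here
        p=$(dirname "$p")    # step up one level
    done
    ls -ld /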
2024 Dec 18
1
shadow_copy2
On 12/18/24 7:16 PM, Stefan Kania via samba wrote: > I would say that's fine :-) hm... can you do: kania$ cd /glusterfs/admin-share/daten1/ ?
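A quick way to run that test without logging in as the user, assuming sudo is available (user name and path taken from the thread):

    sudo -u skania ls -la /glusterfs/admin-share/daten1/.snaps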
2012 Jan 03
7
Low performance
Hi! I do an rsync between 2 machines. The throughput is only 2 MByte/sec. Each machine is a Supermicro server with: 2 x 8-core Opteron 6128, 64 GByte of ECC RAM, 1 LSI MegaRAID SAS 9280-24i4e, 24 x 2 TByte SATA disks as a RAID6, 2 Intel Corporation 82599EB 10-Gigabit SFI/SFP+ network cards. Both run Ubuntu 11.04 64-bit. Both use rsync version 3.0.7, protocol version 30. There are no
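To narrow down whether a figure like 2 MByte/sec is the network, the disks, or rsync itself, one rough approach is to measure each layer separately (iperf, the scratch-file path, and the <receiver> placeholder are assumptions, not from the original mail):

    # raw network throughput between the two hosts
    iperf -s                          # on the receiver
    iperf -c <receiver> -t 30         # on the sender
    # raw sequential write speed of the RAID6
    dd if=/dev/zero of=/raid/testfile bs=1M count=4096 oflag=direct
    # rsync's own accounting for comparison
    rsync -a --stats /src/ <receiver>:/dst/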
2017 Jan 04
0
shadow_copy and glusterfs not working
On Tue, 2017-01-03 at 15:16 +0100, Stefan Kania via samba wrote: > Hello, > > we are trying to configure a CTDB-Cluster with Glusterfs. We are using > Samba 4.5 together with gluster 3.9. We set up a lvm2 thin-provisioned > volume to use gluster-snapshots. > Then we configured the first share without using shadow_copy2 and > everything was working fine. > > Then we
2023 May 22
2
vfs_shadow_copy2 cannot read/find snapshots
Hi Alexander # net conf delparm projects shadow:snapprefix does not change a thing. The error persists. (I killed my smb session before trying again). The log still says: [2023/05/22 15:23:23.324179, 1] ../../source3/modules/vfs_shadow_copy2.c:2222(shadow_copy2_get_shadow_copy_data) shadow_copy2_get_shadow_copy_data: SMB_VFS_NEXT_OPEN failed for
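One way to confirm the parameter really is gone from the registry-based configuration after the delparm (share name taken from the thread):

    net conf showshare projects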
2017 Jan 03
2
shadow_copy and glusterfs not working
Hello, we are trying to configure a CTDB-Cluster with Glusterfs. We are using Samba 4.5 together with gluster 3.9. We set up a lvm2 thin-provisioned volume to use gluster-snapshots. Then we configured the first share without using shadow_copy2 and everything was working fine. Then we added the shadow_copy2 parameters, when we did a "smbclient" we got the following message: root at
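For context, a typical shadow_copy2 share section for gluster snapshots exposed via the .snaps directory might look like the sketch below; the share name and format string are illustrative, not taken from the original mail:

    [daten]
        path = /glusterfs/admin-share/daten1
        vfs objects = shadow_copy2
        shadow:snapdir = .snaps
        shadow:format = snap_GMT-%Y.%m.%d-%H.%M.%S
        shadow:sort = desc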
2023 May 22
1
vfs_shadow_copy2 cannot read/find snapshots
Hi Sebastian, why are you using shadow:snapprefix if this is just 'snap'? Does it work using ONLY shadow:format = snap_GMT-%Y.%m.%d-%H.%M.%S ? If you use snapprefix, you also need to use shadow:delimiter (in your case this would be '_'). However, I never managed to get it working with snapprefix on my machines. Alexander > On Monday, May 22, 2023 at 2:52 PM, Sebastian Neustein via samba
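A sketch of the two variants being discussed, for snapshot names like snap_GMT-2023.05.22-14.00.00 (untested; as noted above, the snapprefix variant did not work for the poster either):

    # variant A: encode the whole name in the format string
    shadow:format = snap_GMT-%Y.%m.%d-%H.%M.%S

    # variant B: split the prefix out; per the advice above the delimiter would be "_"
    shadow:snapprefix = snap
    shadow:delimiter = _
    shadow:format = GMT-%Y.%m.%d-%H.%M.%S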
2010 Jul 05
1
question concerning VGAM
Hello everyone, using the VGAM package and the following code
library(VGAM)
bp1 <- vglm(cbind(daten$anzahl_b, daten$deckung_b) ~ ., binom2.rho, data=daten1)
summary(bp1)
coef(bp1, matrix=TRUE)
produced this error message: error in object$coefficients : $ operator not defined for this S4 class. I am a bit confused because some days ago this error message did not show up and
2023 May 22
1
vfs_shadow_copy2 cannot read/find snapshots
The gluster side looks like this:
root at B741:~# gluster volume get glvol_samba features.show-snapshot-directory
features.show-snapshot-directory        on
root at B741:~# gluster volume get glvol_samba features.uss
features.uss                            enable
I found an error when the samba client is mounting the gluster volume in the gluster logs
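For reference, those two volume options would be set like this if they were not already enabled (volume name taken from the thread):

    gluster volume set glvol_samba features.uss enable
    gluster volume set glvol_samba features.show-snapshot-directory on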
2023 May 22
3
vfs_shadow_copy2 cannot read/find snapshots
Hi I am trying to get shadow_copy2 to read gluster snapshots and provide the users with previous versions of their files. Here is my smb.conf:
[global]
        security = ADS
        workgroup = AD
        realm = AD.XXX.XX
        netbios name = A32X
        log file = /var/log/samba/%m
        log level = 1
        idmap config * : backend = tdb
        idmap config * : range =
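A quick way to see what the parser actually makes of a configuration like this is testparm; the share name below is hypothetical:

    testparm -s                              # dump the effective configuration
    testparm -s --section-name=projects      # or just one share's parameters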
2008 Oct 13
0
Re: using predict() or fitted() from a model with offset; unsolved, included reproducible code
Thanks for your reply Mark, but no, using predict on the new data.frame does not help here. I had first thought that the problem was due to the explanatory variable (age) and the offset one (date) being very similar (highly correlated; I am trying to tease their effects apart, and hoped offset would help in this since I know the relationship with age already). But this appears not to be the case.
2023 Jul 05
1
remove_me files building up
Hi Strahil, This is the output from the commands:
root at uk3-prod-gfs-arb-01:~# du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
2.2G    /data/glusterfs/gv1/brick1/brick/.glusterfs
24M     /data/glusterfs/gv1/brick1/brick/scalelite-recordings
16K     /data/glusterfs/gv1/brick1/brick/mytute
18M     /data/glusterfs/gv1/brick1/brick/.shard
0
2023 Jun 30
1
remove_me files building up
Hi, We're running a cluster with two data nodes and one arbiter, and have sharding enabled. We had an issue a while back where one of the servers crashed; we got the server back up and running and ensured that all healing entries cleared, and also increased the server spec (CPU/Mem) as this seemed to be the potential cause. Since then, however, we've seen some strange behaviour,
2023 Jul 04
1
remove_me files building up
Hi, Thanks for your response, please find the xfs_info for each brick on the arbiter below:
root at uk3-prod-gfs-arb-01:~# xfs_info /data/glusterfs/gv1/brick1
meta-data=/dev/sdc1              isize=512    agcount=31, agsize=131007 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =
2023 Jul 04
1
remove_me files building up
Thanks for the clarification. That behaviour is quite weird, as arbiter bricks should hold only metadata. What does the following show on host uk3-prod-gfs-arb-01:
du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
du -h -x -d 1 /data/glusterfs/gv1/brick3/brick
du -h -x -d 1 /data/glusterfs/gv1/brick2/brick
If indeed the shards are taking space, that is a really strange situation. From which version
2023 Jul 04
1
remove_me files building up
Hi Strahil, We're using gluster to act as a share for an application to temporarily process and store files, before they're then archived off overnight. The issue we're seeing isn't with the inodes running out of space, but the actual disk space on the arb server running low. This is the df -h output for the bricks on the arb server:
/dev/sdd1 15G 12G 3.3G 79%
2023 Jul 04
1
remove_me files building up
Hi Liam, I saw that your XFS uses 'imaxpct=25', which for an arbiter brick is a little bit low. If you have free space on the bricks, increase the maxpct to a bigger value, like: xfs_growfs -m 80 /path/to/brick That will set 80% of the filesystem for inodes, which you can verify with df -i /brick/path (compare before and after). This way you won't run out of inodes in the future. Of course, always
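Putting that suggestion together with the brick paths from earlier in the thread, the before/after check might look like this (a sketch, not from the original mail):

    df -i /data/glusterfs/gv1/brick1               # note IFree/IUse% before
    xfs_growfs -m 80 /data/glusterfs/gv1/brick1    # raise imaxpct to 80%
    df -i /data/glusterfs/gv1/brick1               # inode capacity should have grown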
2023 Jul 03
1
remove_me files building up
Hi, you mentioned that the arbiter bricks run out of inodes. Are you using XFS? Can you provide the xfs_info of each brick? Best Regards, Strahil Nikolov. On Sat, Jul 1, 2023 at 19:41, Liam Smith <liam.smith at ek.co> wrote: Hi, We're running a cluster with two data nodes and one arbiter, and have sharding enabled. We had an issue a while back where one of the servers
2019 Nov 14
6
get_share_mode_lock: get_static_share_mode_data failed: NT_STATUS_NO_MEMORY
Upgraded to Samba 4.11.2 and I've now also started seeing the message: get_share_mode_lock: get_static_share_mode_data failed: NT_STATUS_NO_MEMORY a lot. I modified the source in source3/locking/share_mode_lock.c a bit in order to print out the values of the service path, smb_fname & old_write_time when it fails, and it seems they are all NULL... [2019/11/14 14:24:23.358441, 0]
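Short of patching the source, one way to get more context at runtime is to raise the debug level of the locking class (standard smbcontrol usage; the log path is an assumption, not from the original mail):

    smbcontrol smbd debug "1 locking:10"    # keep the global level low, crank locking
    tail -f /var/log/samba/log.smbd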