Displaying 20 results from an estimated 1000 matches similar to: "shadow_copy2"
2024 Dec 18
1
shadow_copy2
Hi Ralph,
On 18.12.24 at 18:59, Ralph Boehme via samba wrote:
> Hi Stefan
>
> On 12/18/24 5:40 PM, Stefan Kania via samba wrote:
>> skania@cluster01:~$ ls -ld /glusterfs/admin-share/daten1/.snaps
>> drwxr-xr-x 2 root root 4096  1. Jan 1970 /glusterfs/admin-share/
>> daten1/.snaps
>
> what are the permissions of *each* path component
>
> # ls -ld /
2024 Dec 18
1
shadow_copy2
On 18.12.24 at 19:42, Ralph Boehme wrote:
> On 12/18/24 7:16 PM, Stefan Kania via samba wrote:
>> I would say that's fine :-)
>
> hm... can you
>
> kania$ cd /glusterfs/admin-share/daten1/
>
> ?
Yes :-)
skania@cluster01:~$ cd /glusterfs/admin-share/daten1/
skania@cluster01:/glusterfs/admin-share/daten1$ ls -l
total 1
-rwxrwx---+ 1 skania domain users
2024 Dec 18
1
shadow_copy2
Hi Stefan
On 12/18/24 5:40 PM, Stefan Kania via samba wrote:
> skania@cluster01:~$ ls -ld /glusterfs/admin-share/daten1/.snaps
> drwxr-xr-x 2 root root 4096  1. Jan 1970 /glusterfs/admin-share/
> daten1/.snaps
what are the permissions of *each* path component
# ls -ld /
# ls -ld /glusterfs
# ls -ld /glusterfs/admin-share/
# ls -ld /glusterfs/admin-share/daten1/
--
SerNet Samba
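Ralph's per-component check can also be scripted. A minimal sketch (the path is the one from the thread; the loop itself is my addition) — shadow_copy2 needs search (x) permission on every directory leading to the share:

```shell
# Walk from the share directory up to the root, listing each component;
# any component missing the x bit for the user breaks snapshot access.
p=/glusterfs/admin-share/daten1
while [ "$p" != "/" ]; do
    ls -ld "$p"
    p=$(dirname "$p")
done
ls -ld /
```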
2025 Jan 04
0
Users can't access snapshot
Hello,
I created a volume in LVM2 (thin) to use snapshots, so normal users can
recover files from the snapshots. I set some parameters for Samba and for
the snapshots so that normal users can access the snapshots and they
will be listed in Windows Explorer. Here is my volume info
------------------------
root@cluster01:~# gluster v info
Volume Name: gv1
Type: Replicate
Volume ID:
2024 Dec 18
1
shadow_copy2
On 12/18/24 7:16 PM, Stefan Kania via samba wrote:
> I would say that's fine :-)
hm... can you
kania$ cd /glusterfs/admin-share/daten1/
?
2024 Dec 20
1
smbclient and Kerberos authentication
Hi to all,
I'm just writing the next version of the German Samba book and I'm just
testing smbclient, so when I do:
---------------------
root@dc01:~# smbclient -L cluster
Password for [EXAMPLE\root]:
Anonymous login successful
Sharename Type Comment
--------- ---- -------
IPC$ IPC IPC Service (Samba
2024 Dec 20
1
smbclient and Kerberos authentication
On Fri, 20 Dec 2024 20:16:21 +0100
Stefan Kania via samba <samba at lists.samba.org> wrote:
> Hi to all,
>
> I'm just writing the next version of the German Samba book and I'm
> just testing smbclient, so when I do:
> ---------------------
> root@dc01:~# smbclient -L cluster
> Password for [EXAMPLE\root]:
> Anonymous login successful
>
>
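To actually test Kerberos instead of the anonymous fallback shown above, something along these lines should work — a hedged sketch: the principal is a placeholder, the realm is assumed to be configured in /etc/krb5.conf, and on Samba versions before 4.15 the option is spelled `-k` rather than `--use-kerberos`:

```shell
kinit administrator             # obtain a ticket first (principal is a guess)
smbclient -L cluster --use-kerberos=required
klist                           # confirm which ticket was used
```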
2012 Jan 03
7
Low performance
Hi!
I do an rsync between two machines. The throughput is only 2 MByte/s.
Each machine is a Supermicro server with
2 x 8 Core Opteron 6128
64 GByte of ECC RAM
1 LSI MegaRAID SAS 9280-24i4e
24 x 2TByte SATA Disks as a RAID6
2 Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network-cards
Both run Ubuntu 11.04 64Bit.
Both use rsync version 3.0.7 protocol version 30
There are no
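With hardware like that, 2 MByte/s points at something other than the disks. A hedged way to isolate the bottleneck is to test each path separately (host names and file paths are placeholders; iperf is assumed to be installed):

```shell
# 1. Raw network path, no rsync/ssh involved:
#      receiver:  iperf -s
#      sender:    iperf -c receiver-host
# 2. Sequential read speed of the source array:
dd if=/path/to/large/file of=/dev/null bs=1M count=1024
# 3. rsync with statistics, to see whether ssh encryption is the limit:
rsync -a --progress --stats /data/ receiver-host:/data/
```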
2017 Jan 04
0
shadow_copy and glusterfs not working
On Tue, 2017-01-03 at 15:16 +0100, Stefan Kania via samba wrote:
> Hello,
>
> we are trying to configure a CTDB-Cluster with Glusterfs. We are using
> Samba 4.5 together with gluster 3.9. We set up a lvm2 thin-provisioned
> volume to use gluster-snapshots.
> Then we configured the first share without using shadow_copy2 and
> everything was working fine.
>
> Then we
2023 May 22
2
vfs_shadow_copy2 cannot read/find snapshots
Hi Alexander
# net conf delparm projects shadow:snapprefix
does not change a thing. The error persists. (I killed my smb session
before trying again).
log still says:
[2023/05/22 15:23:23.324179,  1]
../../source3/modules/vfs_shadow_copy2.c:2222(shadow_copy2_get_shadow_copy_data)
  shadow_copy2_get_shadow_copy_data: SMB_VFS_NEXT_OPEN failed for
2017 Jan 03
2
shadow_copy and glusterfs not working
Hello,
we are trying to configure a CTDB-Cluster with Glusterfs. We are using
Samba 4.5 together with gluster 3.9. We set up a lvm2 thin-provisioned
volume to use gluster-snapshots.
Then we configured the first share without using shadow_copy2 and
everything was working fine.
Then we added the shadow_copy2 parameters, when we did a "smbclient" we
got the following message:
root@
2023 May 22
1
vfs_shadow_copy2 cannot read/find snapshots
Hi Sebastian,
why are you using shadow:snapprefix if this is just 'snap'?
Does it work using ONLY shadow:format = snap_GMT-%Y.%m.%d-%H.%M.%S ?
If you use snapprefix you also need to use shadow:delimiter (in your case this would be '_'). However, I never managed to get it working with snapprefix on my machines.
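Alexander's suggestion as an smb.conf fragment (the share name is illustrative; the format string is the one from the mail, and note the poster says he never got the snapprefix variant working):

```
[projects]
    vfs objects = shadow_copy2
    shadow:snapdir = .snaps
    # simplest variant: no snapprefix, the full name lives in the format
    shadow:format = snap_GMT-%Y.%m.%d-%H.%M.%S
    # variant with a prefix -- then a delimiter is required as well:
    # shadow:snapprefix = snap
    # shadow:delimiter = _
```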
Alexander
> On Monday, May 22, 2023 at 2:52 PM, Sebastian Neustein via samba
2023 May 22
1
vfs_shadow_copy2 cannot read/find snapshots
The gluster side looks like this:
root@B741:~# gluster volume get glvol_samba features.show-snapshot-directory
features.show-snapshot-directory          on
root@B741:~# gluster volume get glvol_samba features.uss
features.uss                              enable
I found an error in the gluster logs from when the samba client mounts
the gluster volume
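For reference, the two options queried above are set through the normal gluster CLI (volume name taken from the thread):

```shell
gluster volume set glvol_samba features.uss enable
gluster volume set glvol_samba features.show-snapshot-directory on
gluster volume get glvol_samba features.uss     # verify
```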
2010 Jul 05
1
question concerning VGAM
Hello everyone,
using the VGAM package and the following code
library(VGAM)
bp1 <- vglm(cbind(daten$anzahl_b, daten$deckung_b) ~ ., binom2.rho,
data=daten1)
summary(bp1)
coef(bp1, matrix=TRUE)
produced this error message:
error in object$coefficients : $ operator not defined for this S4 class
I am a bit confused because some days ago this error message did not show up
and
2023 May 22
3
vfs_shadow_copy2 cannot read/find snapshots
Hi
I am trying to get shadow_copy2 to read gluster snapshots and provide
the users with previous versions of their files.
Here is my smb.conf:
[global]
        security = ADS
        workgroup = AD
        realm = AD.XXX.XX
        netbios name = A32X
        log file = /var/log/samba/%m
        log level = 1
        idmap config * : backend = tdb
        idmap config * : range =
2023 Jul 05
1
remove_me files building up
Hi Strahil,
This is the output from the commands:
root@uk3-prod-gfs-arb-01:~# du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
2.2G /data/glusterfs/gv1/brick1/brick/.glusterfs
24M /data/glusterfs/gv1/brick1/brick/scalelite-recordings
16K /data/glusterfs/gv1/brick1/brick/mytute
18M /data/glusterfs/gv1/brick1/brick/.shard
0
2023 Jun 30
1
remove_me files building up
Hi,
We're running a cluster with two data nodes and one arbiter, and have sharding enabled.
We had an issue a while back where one of the servers crashed; we got the server back up and running and ensured that all healing entries cleared, and also increased the server spec (CPU/Mem) as this seemed to be the potential cause.
Since then however, we've seen some strange behaviour,
2023 Jul 04
1
remove_me files building up
Hi,
Thanks for your response, please find the xfs_info for each brick on the arbiter below:
root@uk3-prod-gfs-arb-01:~# xfs_info /data/glusterfs/gv1/brick1
meta-data=/dev/sdc1 isize=512 agcount=31, agsize=131007 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
=
2023 Jul 04
1
remove_me files building up
Thanks for the clarification.
That behaviour is quite weird, as arbiter bricks should hold only metadata.
What does the following show on host uk3-prod-gfs-arb-01:
du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
du -h -x -d 1 /data/glusterfs/gv1/brick3/brick
du -h -x -d 1 /data/glusterfs/gv1/brick2/brick
If indeed the shards are taking space - that is a really strange situation. From which version
2023 Jul 04
1
remove_me files building up
Hi Strahil,
We're using gluster to act as a share for an application to temporarily process and store files, before they're then archived off overnight.
The issue we're seeing isn't with the inodes running out of space, but the actual disk space on the arb server running low.
This is the df -h output for the bricks on the arb server:
/dev/sdd1 15G 12G 3.3G 79%