Displaying 20 results from an estimated 400 matches similar to: "Users can't access snapshot"
2024 Dec 18
1
shadow_copy2
Hello,
I'm lost :-(
I got a share:
------------------------
[global]
workgroup = example
netbios name = cluster
security = ads
realm = EXAMPLE.NET
idmap config *:range = 10000-19999
idmap config example:backend = rid
idmap config example:range = 1000000-1999999
map acl inherit = yes
winbind use default domain =
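The config above breaks off before the share definition; for reference, a minimal sketch of the share-side shadow_copy2 settings typically paired with GlusterFS user-serviceable snapshots (share name, path and snapshot-name format are assumptions, not taken from this post):
------------------------
[daten1]
   path = /glusterfs/admin-share/daten1
   read only = no
   vfs objects = shadow_copy2
   # .snaps is the hidden directory GlusterFS USS exposes inside the volume
   shadow:snapdir = .snaps
   # must match how the gluster snapshots are actually named
   shadow:format = snap_%Y-%m-%d_%H-%M-%S
   # .snaps appears in every directory of the share, not only at its root
   shadow:snapdirseverywhere = yes
------------------------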
2024 Dec 18
1
shadow_copy2
Hi Ralph,
On 18.12.24 at 18:59, Ralph Boehme via samba wrote:
> Hi Stefan
>
> On 12/18/24 5:40 PM, Stefan Kania via samba wrote:
>> skania at cluster01:~$ ls -ld /glusterfs/admin-share/daten1/.snaps
drwxr-xr-x 2 root root 4096  1. Jan 1970 /glusterfs/admin-share/
>> daten1/.snaps
>
> what are the permissions of *each* path component
>
> # ls -ld /
2024 Dec 18
1
shadow_copy2
On 18.12.24 at 19:42, Ralph Boehme wrote:
> On 12/18/24 7:16 PM, Stefan Kania via samba wrote:
>> I would say that's fine :-)
>
> hm... can you
>
> kania$ cd /glusterfs/admin-share/daten1/
>
> ?
Yes :-)
skania at cluster01:~$ cd /glusterfs/admin-share/daten1/
skania at cluster01:/glusterfs/admin-share/daten1$ ls -l
total 1
-rwxrwx---+ 1 skania domain users
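The trailing "+" in that listing means extended POSIX ACLs are set; a quick way to see exactly who is granted what on the directory and on the snapshot entry point (paths taken from the thread, output depends on the setup):

getfacl /glusterfs/admin-share/daten1
getfacl /glusterfs/admin-share/daten1/.snaps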
2024 Dec 18
1
shadow_copy2
Hi Stefan
On 12/18/24 5:40 PM, Stefan Kania via samba wrote:
> skania at cluster01:~$ ls -ld /glusterfs/admin-share/daten1/.snaps
> drwxr-xr-x 2 root root 4096  1. Jan 1970 /glusterfs/admin-share/
> daten1/.snaps
what are the permissions of *each* path component
# ls -ld /
# ls -ld /glusterfs
# ls -ld /glusterfs/admin-share/
# ls -ld /glusterfs/admin-share/daten1/
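Instead of typing the four commands by hand, every component of the path can be listed in one go; a small sketch (namei is part of util-linux, the loop is a plain-shell fallback; the path is the one from the thread):

# owner, group and mode of each path component in one listing
namei -l /glusterfs/admin-share/daten1/.snaps

# equivalent loop if namei is not available
p=/glusterfs/admin-share/daten1/.snaps
while [ "$p" != "/" ]; do ls -ld "$p"; p=$(dirname "$p"); done
ls -ld /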
--
SerNet Samba
2024 Dec 18
1
shadow_copy2
On 12/18/24 7:16 PM, Stefan Kania via samba wrote:
> I would say that's fine :-)
hm... can you
kania$ cd /glusterfs/admin-share/daten1/
?
2025 Jan 04
1
net offline domain join
Hi
I'm trying to use the offline domain join. As an example in the net
manpage shows, I tried it with:
root at cluster01:~# net offlinejoin provision -U administrator
domain=example.net machine_name=WINCLIENT11a dcname=dc01
savefile=winclient11a.txt
But all I got was:
ads_print_error: AD LDAP ERROR: 19 (Constraint violation): 0000202F:
samldb: spn[HOST/cluster.example.net] would cause a
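The (truncated) constraint violation usually means the SPN is already registered on another object; a hedged sketch for checking that on the DC before retrying the provision (the sam.ldb path and the account name cluster$ are assumptions, the latter based on "netbios name = cluster" above):

# which object already carries the conflicting SPN?
ldbsearch -H /var/lib/samba/private/sam.ldb \
   '(servicePrincipalName=HOST/cluster.example.net)' dn servicePrincipalName

# list the SPNs held by the suspected owner
samba-tool spn list 'cluster$'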
2017 Jan 04
0
shadow_copy and glusterfs not working
On Tue, 2017-01-03 at 15:16 +0100, Stefan Kania via samba wrote:
> Hello,
>
> we are trying to configure a CTDB-Cluster with Glusterfs. We are using
> Samba 4.5 together with gluster 3.9. We set up a lvm2 thin-provisioned
> volume to use gluster-snapshots.
> Then we configured the first share without using shadow_copy2 and
> everything was working fine.
>
> Then we
2025 Jan 04
1
net offline domain join
On 04.01.25 at 18:59, Stefan Kania via samba wrote:
> Hi
>
> I'm trying to use the offline domain join. As an example in the net
> manpage shows, I tried it with:
>
> root at cluster01:~# net offlinejoin provision -U administrator
> domain=example.net machine_name=WINCLIENT11a dcname=dc01
> savefile=winclient11a.txt
>
> But all I got was:
>
>
2017 Jan 03
2
shadow_copy and glusterfs not working
Hello,
we are trying to configure a CTDB-Cluster with Glusterfs. We are using
Samba 4.5 together with gluster 3.9. We set up a lvm2 thin-provisioned
volume to use gluster-snapshots.
Then we configured the first share without using shadow_copy2 and
everything was working fine.
Then we added the shadow_copy2 parameters; when we did a "smbclient" we
got the following message:
root at
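For shadow_copy2 to have anything to show, the gluster snapshots also have to be created, activated and exposed inside the volume as .snaps (user-serviceable snapshots); a minimal sketch with an assumed volume name gv0, since the post does not name the volume:

# snapshots require thin-provisioned LVM bricks, as set up above
gluster snapshot create snap1 gv0 no-timestamp
gluster snapshot activate snap1
gluster snapshot list

# expose activated snapshots inside the mounted volume as the hidden .snaps dir
gluster volume set gv0 features.uss enable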
2024 Dec 20
1
smbclient and Kerberos authentication
Hi to all,
I'm just writing the next version of the German Samba book, and I'm just
testing smbclient. So when I do:
---------------------
root at dc01:~# smbclient -L cluster
Password for [EXAMPLE\root]:
Anonymous login successful
        Sharename       Type      Comment
        ---------       ----      -------
        IPC$            IPC       IPC Service (Samba
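"Anonymous login successful" means the session was set up without any credentials; a sketch of forcing Kerberos with smbclient (the option spelling changed over time, so both forms are shown; the principal and FQDN are assumptions based on the realm and host names used elsewhere in these threads):

kinit administrator@EXAMPLE.NET
smbclient -L cluster.example.net --use-kerberos=required   # Samba >= 4.15
smbclient -L cluster.example.net -k                        # older releases
klist   # verify a service ticket was obtained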
2023 Jul 05
1
remove_me files building up
Hi Strahil,
This is the output from the commands:
root at uk3-prod-gfs-arb-01:~# du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
2.2G /data/glusterfs/gv1/brick1/brick/.glusterfs
24M /data/glusterfs/gv1/brick1/brick/scalelite-recordings
16K /data/glusterfs/gv1/brick1/brick/mytute
18M /data/glusterfs/gv1/brick1/brick/.shard
0
2024 Dec 20
1
smbclient and Kerberos authentication
On Fri, 20 Dec 2024 20:16:21 +0100
Stefan Kania via samba <samba at lists.samba.org> wrote:
> Hi to all,
>
> I'm just writing the next version of the German Samba book, and I'm
> just testing smbclient. So when I do:
> ---------------------
> root at dc01:~# smbclient -L cluster
> Password for [EXAMPLE\root]:
> Anonymous login successful
>
>
2023 Jun 30
1
remove_me files building up
Hi,
We're running a cluster with two data nodes and one arbiter, and have sharding enabled.
We had an issue a while back where one of the servers crashed; we got the server back up and running and ensured that all healing entries cleared, and also increased the server spec (CPU/Mem), as this seemed to be the potential cause.
Since then, however, we've seen some strange behaviour,
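The heal state mentioned above can be re-checked at any time; a short sketch (the volume name gv1 is taken from the brick paths that appear later in the thread):

# per-brick count of entries still pending heal; should be 0 everywhere
gluster volume heal gv1 info summary
# full list of pending entries, including any split-brain candidates
gluster volume heal gv1 info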
2023 Jul 04
1
remove_me files building up
Hi,
Thanks for your response, please find the xfs_info for each brick on the arbiter below:
root at uk3-prod-gfs-arb-01:~# xfs_info /data/glusterfs/gv1/brick1
meta-data=/dev/sdc1              isize=512    agcount=31, agsize=131007 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =
2023 Jul 04
1
remove_me files building up
Thanks for the clarification.
That behaviour is quite weird, as arbiter bricks should hold only metadata.
What does the following show on host uk3-prod-gfs-arb-01:
du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
du -h -x -d 1 /data/glusterfs/gv1/brick3/brick
du -h -x -d 1 /data/glusterfs/gv1/brick2/brick
If indeed the shards are taking space, that is a really strange situation. From which version
2023 Jul 04
1
remove_me files building up
Hi Strahil,
We're using gluster to act as a share for an application to temporarily process and store files, before they're then archived off overnight.
The issue we're seeing isn't with the inodes running out of space, but the actual disk space on the arb server running low.
This is the df -h output for the bricks on the arb server:
/dev/sdd1 15G 12G 3.3G 79%
2008 Oct 13
0
Re : using predict() or fitted() from a model with offset; unsolved, included reproducible code
Thanks for your reply Mark,
but no, using predict on the new data.frame does not help here.
I had first thought that the problem was due to the explanatory variable (age) and the offset one (date) being very similar (highly correlated; I am trying to tease their effects apart, and hoped offset would help in this since I know the relationship with age already). But this appears not to be the case.
2023 Jul 04
1
remove_me files building up
Hi Liam,
I saw that your XFS uses "imaxpct=25", which for an arbiter brick is a little bit low.
If you have free space on the bricks, increase the maxpct to a bigger value, like: xfs_growfs -m 80 /path/to/brick
That will allow up to 80% of the filesystem to be used for inodes, which you can verify with df -i /brick/path (compare before and after). This way you won't run out of inodes in the future.
Of course, always
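A short sketch of the before/after check described here, using one of the brick mount points named earlier in the thread:

df -i /data/glusterfs/gv1/brick1              # inode usage before
xfs_growfs -m 80 /data/glusterfs/gv1/brick1   # raise imaxpct to 80%
df -i /data/glusterfs/gv1/brick1              # inode capacity should have grown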
2023 Jul 03
1
remove_me files building up
Hi,
you mentioned that the arbiter bricks run out of inodes. Are you using XFS? Can you provide the xfs_info of each brick?
Best Regards, Strahil Nikolov
On Sat, Jul 1, 2023 at 19:41, Liam Smith <liam.smith at ek.co> wrote:
Hi,
We're running a cluster with two data nodes and one arbiter, and have sharding enabled.
We had an issue a while back where one of the servers
2018 Feb 07
0
Fwd: Troubleshooting glusterfs
Hello Nithya! Thank you for your help on figuring this out!
We changed our configuration, and after having a successful test yesterday
we ran into a new issue today.
The test, including moderate read/write (~20-30 MB/s) and scaling the
storage, ran for about 3 hours, and at some point the system got stuck:
At the user level there are errors like this when trying to work with the filesystem:
OSError:
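When the mount starts throwing errors like the truncated OSError above, the client-side FUSE log and the volume status are the usual first places to look; a hedged sketch (the mount point /mnt/storage and volume name storage are assumptions, not taken from the post):

# client log; the file name mirrors the mount point, with / replaced by -
tail -n 100 /var/log/glusterfs/mnt-storage.log

# server side: are all bricks and self-heal daemons online?
gluster volume status storage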