Displaying 20 results from an estimated 3000 matches similar to: "NT4 Terminal Server & samba-3.0.3-5 +"
2017 Jul 11
2
Public file share Samba 4.6.5
I am trying to configure a public file share on \\fs1\vol1
From a Windows 7 command prompt, I enter: dir \\fs1\vol1
Windows says: Logon failure: unknown user name or bad password.
Where am I going wrong?
Error log says: " SPNEGO login failed: NT_STATUS_NO_SUCH_USER" - that
must have something to do with this, but I thought that was the point of
"map to guest = Bad User"
2017 Jul 11
0
Public file share Samba 4.6.5
On Tue, 11 Jul 2017 06:50:42 -0500
John Schmerold via samba <samba at lists.samba.org> wrote:
> I am trying to configure a public file share on \\fs1\vol1
>
> From a Windows 7 command prompt, I enter: dir \\fs1\vol1
> Windows says: Logon failure: unknown user name or bad password.
>
> Where am I going wrong?
>
> Error log says: " SPNEGO login failed:
2007 Jul 13
0
Cross-VPN Browsing
Hey all,
I'm having a bit of a problem with cross-subnet browsing where one of
the subnets is managed by an OpenVPN server.
My network is set up with a central wireless router running OpenWRT.
192.168.10.x is the subnet for wired hosts and 192.168.20.x is the
subnet for wireless hosts. To allow cross-subnet browsing, the OpenWRT
router is running as a WINS server (samba).
Now, I've
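A minimal sketch of the WINS side of such an OpenWRT setup; these are
standard smb.conf parameters, but the values are assumptions rather than the
poster's actual config:

  [global]
      wins support = yes        # this Samba instance acts as the WINS server
      local master = yes
      domain master = yes       # collate browse lists from both subnets
      preferred master = yes

Hosts on the 192.168.10.x and 192.168.20.x subnets would then register with
the router via "wins server = <router address>" in their own configs.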
2013 Jan 03
0
Resolve brick failed in restore
Hi,
I have a lab with 10 machines acting as storage servers for some compute
machines, using glusterfs to distribute the data as two volumes.
Created using:
gluster volume create vol1 192.168.10.{221..230}:/data/vol1
gluster volume create vol2 replica 2 192.168.10.{221..230}:/data/vol2
and mounted on the client and server machines using:
mount -t glusterfs 192.168.10.221:/vol1 /mnt/vol1
mount
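For a failed brick on the replicated volume, one commonly suggested recovery
is replace-brick plus self-heal; a hedged sketch, with the new brick path
being an assumption:

  # swap the dead brick for a fresh one, then heal from the surviving replica
  gluster volume replace-brick vol2 192.168.10.221:/data/vol2 \
      192.168.10.221:/data/vol2_new commit force
  gluster volume heal vol2 full

Note that vol1 was created without "replica", so it is a plain distribute
volume and a lost vol1 brick has no replica to heal from.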
2005 Jan 04
0
read only share access after upgrade to 3.0.10
Hello Samba gurus.
I'm in upgrade hell after upgrading my backup rh9 server and fc2 linux
box to 3.0.10 from 3.0.7. The rh9 rpm package was from the samba site and
the fc2 rpms from Red Hat.
I now have a system where the win xp and win98se machines on the network
can read/write to the backup share but my fc2 box only has read only
access to the share - it could write before the upgrade.
I tried
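One quick check when behaviour changes across a Samba upgrade is to compare
the effective (post-defaults) configuration; a sketch, with the share name
assumed:

  # dump the effective configuration after parsing smb.conf
  testparm -s /etc/samba/smb.conf
  # in the share definition, writability should show up as:
  #   read only = no      (equivalently: writable = yes)

Differing "valid users" or "write list" resolution for the Linux client's
user is another place upgrades tend to bite.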
2003 Jul 17
1
2 GB Limit when writing to smbfs filesystems
I'm running RedHat 8.0 with samba-2.2.7-5.8.0 (installed from RedHat
distribution)
When I use cpio to write a backup (> 2GB) to a smbfs filesystem, I get the
error: File size limit exceeded
I get the same error when I copy (cp) a file (> 2GB) from a Linux ext3
filesystem to the smbfs filesystem.
The smbfs filesystem is mounted from a Windows 2000 Professional
workstation.
After
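The 2GB ceiling is the classic symptom of smbfs without large-file support;
a hedged sketch of the commonly suggested remount (server, share, and
mountpoint are assumptions):

  # 'lfs' turns on large file support (>2GB) for smbfs mounts
  mount -t smbfs -o lfs,username=backup //w2kbox/backup /mnt/backup

The newer cifs filesystem handles large files without a special option,
where available.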
2009 Jan 28
2
ZFS+NFS+refquota: full filesystems still return EDQUOT for unlink()
We have been using ZFS for user home directories for a good while now.
When we discovered the problem with full filesystems not allowing
deletes over NFS, we became very anxious to fix this; our users fill
their quotas on a fairly regular basis, so it's important that they
have a simple recourse to fix this (e.g., rm). I played around with
this on my OpenSolaris box at home, read around
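The workaround generally offered for this is to free the file's blocks
without allocating new ones before unlinking; a minimal sketch (file name
assumed):

  # truncating first releases the data blocks, so the rm then has room
  cat /dev/null > bigfile
  rm bigfile

Because ZFS is copy-on-write, unlink() itself has to write transaction
metadata, which is what fails with EDQUOT on a completely full filesystem.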
2004 Jul 30
1
Problem related to time-stamp
Hi,
I'm facing a problem with rsync related to the
time-stamps of files.
I'm using rsync to transfer files from my machine
(OS: jaluna-linux) to a remote machine (OS: jaluna-linux),
and even when there was no change to the files on my
machine, rsyncing them changes the time-stamp of the
transferred files on the remote machine.
My file name is bigfile and it is
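By default rsync sets a fresh modification time on the destination, so the
next run sees the file as changed; a minimal sketch of the usual fix
(destination path assumed):

  # -a implies -t, which preserves modification times across the transfer
  rsync -av bigfile user@remote:/path/to/dest/

With mtimes preserved, rsync's quick check (size + mtime) skips unchanged
files on later runs.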
2003 Aug 05
1
Zhone Zplex 10 units
Mine has been working well, but the only problem is that it doesn't
support callerid (from the POTS side).
> -----Original Message-----
> From: John Schmerold [mailto:john@katy.com]
> Sent: Tuesday, 5 August 2003 12:37 PM
> To: asterisk-users@lists.digium.com
> Subject: Re: [Asterisk-Users] Zhone Zplex 10 units
>
> Thanks for the Zplex heads up.
>
> Steven
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> I applied the workaround for this bug and now df shows the right size:
>
That is good to hear.
> [root at stor1 ~]# df -h
> Filesystem  Size  Used  Avail  Use%  Mounted on
> /dev/sdb1    26T  1,1T    25T    4%  /mnt/glusterfs/vol0
> /dev/sdc1
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> My initial setup was composed of 2 similar nodes: stor1data and stor2data.
> A month ago I expanded both volumes with a new node: stor3data (2 bricks
> per volume).
> Of course, to add the new peer with its bricks I then ran the 'balance
> force' operation.
2007 Nov 27
1
Syncing to multiple servers
Hello everyone,
Let's say we have 3 servers, 2 of them have the latest (stable) version
of rsyncd running (2.6.9)
<Server1> ==> I N T E R N E T ==> <Server2 (rsyncd running)> ==> LAN
==> <Server3 (rsyncd running)>
Suppose I want to send a big file (bigfile.big) from Server1 to both
Server2 and Server3. It would be a good idea to send first from Server1
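A sketch of the relay approach this points towards, so bigfile.big crosses
the Internet only once; the daemon module name is an assumption:

  # hop 1: Server1 -> Server2 over the Internet (rsyncd module "backup")
  rsync -av bigfile.big server2::backup/
  # hop 2: relay Server2 -> Server3 over the LAN
  ssh server2 rsync -av /srv/backup/bigfile.big server3::backup/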
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message.
Below the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
Node   Rebalanced-files   size   scanned   failures   skipped   status   run time in h:m:s
-----  ----------------   ----   -------   --------   -------   ------   -----------------
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
I applied the workarround for this bug and now df shows the right size:
[root at stor1 ~]# df -h
Filesystem              Size  Used  Avail  Use%  Mounted on
/dev/sdb1                26T  1,1T    25T    4%  /mnt/glusterfs/vol0
/dev/sdc1                50T   16T    34T   33%  /mnt/glusterfs/vol1
stor1data:/volumedisk0  101T  3,3T    97T    4%  /volumedisk0
stor1data:/volumedisk1
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
My initial setup was composed of 2 similar nodes: stor1data and stor2data.
A month ago I expanded both volumes with a new node: stor3data (2 bricks
per volume).
Of course, to add the new peer with its bricks I then ran the 'balance
force' operation. This task finished successfully (you can see the info
below) and the number of files on the 3 nodes was very similar.
For volumedisk1 I
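For reference, the expand-and-rebalance sequence described reads roughly
like this; node and volume names come from the thread, brick paths are
assumptions:

  gluster peer probe stor3data
  gluster volume add-brick volumedisk1 stor3data:/brick1/vol1 stor3data:/brick2/vol1
  gluster volume rebalance volumedisk1 start force
  gluster volume rebalance volumedisk1 status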
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
Below the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
Node   Rebalanced-files   size   scanned   failures   skipped   status   run time in h:m:s
-----  ----------------   ----   -------   --------   -------   ------   -----------------
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
There is a known issue with gluster 3.12.x builds (see [1]) so you may be
running into this.
The "shared-brick-count" values seem fine on stor1. Please send us "grep -n
"share" /var/lib/glusterd/vols/volumedisk1/*" results for the other nodes
so we can check if they are the cause.
Regards,
Nithya
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
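The requested check looks like this on each node; per the linked bug, every
brick that lives on its own filesystem should report a shared-brick-count of
1 (the sample output is illustrative, not from the thread):

  grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
  #   volumedisk1.stor2data.mnt-bricks-vol1.vol:3:    option shared-brick-count 1

A value above 1 for bricks on separate disks is what makes the brick sizes
get divided, so df under-reports the total volume size.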
2009 Feb 19
4
[Bug 1558] New: Sftp client does not correctly process server response messages after write error
https://bugzilla.mindrot.org/show_bug.cgi?id=1558
Summary: Sftp client does not correctly process server response messages after write error
Product: Portable OpenSSH
Version: 4.3p2
Platform: amd64
OS/Version: Linux
Status: NEW
Severity: normal
Priority: P2
Component: sftp
2007 Mar 02
1
--delete --force Won't Remove Directories With Dotnames
rsync 2.6.9
Personally, I reckon this to be an irritant ... but perhaps (and having
thought about it a bit, I decided there's a good chance) this is intentional
and useful behaviour. But it's a nuisance if you call your --partial-dir
.partial, as I happen to do, since now if you remove a directory whose
transfer was aborted in
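For context, a sketch of the sort of invocation that runs into this; the
paths are assumptions:

  # rsync adds an internal protect rule for the --partial-dir, so
  # --delete --force will not remove leftover .partial directories
  rsync -av --delete --force --partial-dir=.partial src/ dest/

The protection exists so resumable partial files survive --delete; the
complaint here is that it also preserves .partial dirs inside directories
removed on the sender.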
2017 Jun 15
1
How to expand Replicated Volume
Hi Nag Pavan Chilakam
Can I use this command "gluster vol add-brick vol1 replica 2
file01g:/brick3/data/vol1 file02g:/brick4/data/vol1" on the existing file
servers 01 and 02, without adding new servers? Is it OK for expanding the
volume?
Thanks for your support
Regards,
Giang
2017-06-14 22:26 GMT+07:00 Nag Pavan Chilakam <nag.chilakam at gmail.com>:
> Hi,
> You can use add-brick
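A sketch of the complete expansion built around the quoted command; the
brick paths come from the question, and the rebalance follow-up is the usual
next step:

  gluster volume add-brick vol1 replica 2 \
      file01g:/brick3/data/vol1 file02g:/brick4/data/vol1
  # redistribute existing data over the new bricks
  gluster volume rebalance vol1 start

This keeps the replica count at 2 while growing capacity; no new servers are
required as long as the new bricks sit on separate disks.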