Displaying 14 results from an estimated 14 matches for "24t".
2013 Jul 02
1
Centos 6.4, bnx2 in promiscuous mode does not see packets
...meone can help me, I cannot seem to get a system's ethernet
interface to correctly work in promiscuous mode...
I have a Centos 6.4 system with 2 bnx2 interfaces on it.
I have set up eth1 in promiscuous mode and am sending traffic to it
using the port mirroring configuration on a Nortel 3510-24T switch.
The switch reports that it is sending a fair amount of traffic to the
mirror port.
However, within Centos 6.4, I only see broadcast traffic from the
switch:
[root at host eth1]# ifconfig eth1
eth1 Link encap:Ethernet HWaddr 00:19:B9:E2:30:AE
UP BROADCAST RUNNING PROMISC...
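A hedged way to narrow this down from the CentOS side (eth1 is the interface named above; the commands themselves are illustrative suggestions, not from the original post):
  # capture with MAC addresses shown; tcpdump turns PROMISC on itself unless -p is given
  tcpdump -nn -e -i eth1 -c 100 not broadcast and not multicast
  # check whether mirrored frames at least reach the NIC, even if they are dropped later
  ethtool -S eth1 | grep -iE "rx|discard"
  # rule out a receive-offload interaction on the bnx2 interface
  ethtool -K eth1 gro off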
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...3% /mnt/glusterfs/vol1
stor2data:/volumedisk0
101T 3,3T 97T 4% /volumedisk0
stor2data:/volumedisk1
197T 61T 136T 31% /volumedisk1
[root at stor3 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 25T 638G 24T 3% /mnt/disk_b1/glusterfs/vol0
/dev/sdb2 25T 654G 24T 3% /mnt/disk_b2/glusterfs/vol0
/dev/sdc1 50T 15T 35T 30% /mnt/disk_c/glusterfs/vol1
/dev/sdd1 50T 15T 35T 30% /mnt/disk_d/glusterfs/vol1
stor3data:/volumedisk0
101T 3...
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...medisk0
> 101T 3,3T 97T 4% /volumedisk0
> stor2data:/volumedisk1
> 197T 61T 136T 31% /volumedisk1
>
>
> [root at stor3 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1 25T 638G 24T 3% /mnt/disk_b1/glusterfs/vol0
> /dev/sdb2 25T 654G 24T 3% /mnt/disk_b2/glusterfs/vol0
> /dev/sdc1 50T 15T 35T 30% /mnt/disk_c/glusterfs/vol1
> /dev/sdd1 50T 15T 35T 30% /mnt/disk_d/glusterfs/vol1
> stor3data:/volumedisk0
>...
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...101T 3,3T 97T 4% /volumedisk0
>> stor2data:/volumedisk1
>> 197T 61T 136T 31% /volumedisk1
>>
>>
>> [root at stor3 ~]# df -h
>> Filesystem Size Used Avail Use% Mounted on
>> /dev/sdb1 25T 638G 24T 3% /mnt/disk_b1/glusterfs/vol0
>> /dev/sdb2 25T 654G 24T 3% /mnt/disk_b2/glusterfs/vol0
>> /dev/sdc1 50T 15T 35T 30% /mnt/disk_c/glusterfs/vol1
>> /dev/sdd1 50T 15T 35T 30% /mnt/disk_d/glusterfs/vol1
>> stor3data:/volum...
2024 Aug 12
0
Creating a large pre-allocated qemu-img raw image takes too long and fails on fuse
Thanks for the work on gluster.
We have a situation where we need a very large virtual machine image. We use a simple raw image but it can be up to 40T in size in some cases. For this experiment we'll call it 24T.
When creating the image on fuse with qemu-img, using falloc preallocation, the qemu-img create fails and a fuse error results. This happens after around 3 hours.
I created a simple C program using gfapi that does the fallocate of 10T, and it took 1.25 hours. I didn't run tests at larger than that a...
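For comparison, a minimal command-line sketch of the same preallocation test (the 24T size is from the post; the /mnt/glusterfs path and file names are assumptions):
  # preallocate the raw image through the FUSE mount with falloc mode
  qemu-img create -f raw -o preallocation=falloc /mnt/glusterfs/vm.img 24T
  # the same request via plain fallocate(1), to separate qemu-img from the FUSE fallocate path
  time fallocate -l 10TiB /mnt/glusterfs/prealloc-test.img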
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
.../volumedisk0
>>> stor2data:/volumedisk1
>>> 197T 61T 136T 31% /volumedisk1
>>>
>>>
>>> [root at stor3 ~]# df -h
>>> Filesystem Size Used Avail Use% Mounted on
>>> /dev/sdb1 25T 638G 24T 3% /mnt/disk_b1/glusterfs/vol0
>>> /dev/sdb2 25T 654G 24T 3% /mnt/disk_b2/glusterfs/vol0
>>> /dev/sdc1 50T 15T 35T 30% /mnt/disk_c/glusterfs/vol1
>>> /dev/sdd1 50T 15T 35T 30% /mnt/disk_d/glusterfs/vol1
>>>...
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...t; stor2data:/volumedisk1
>>>> 197T 61T 136T 31% /volumedisk1
>>>>
>>>>
>>>> [root at stor3 ~]# df -h
>>>> Filesystem Size Used Avail Use% Mounted on
>>>> /dev/sdb1 25T 638G 24T 3% /mnt/disk_b1/glusterfs/vol0
>>>> /dev/sdb2 25T 654G 24T 3% /mnt/disk_b2/glusterfs/vol0
>>>> /dev/sdc1 50T 15T 35T 30% /mnt/disk_c/glusterfs/vol1
>>>> /dev/sdd1 50T 15T 35T 30% /mnt/disk_d/glusterfs/vol1
&...
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...1
>>>>> 197T 61T 136T 31% /volumedisk1
>>>>>
>>>>>
>>>>> [root at stor3 ~]# df -h
>>>>> Filesystem Size Used Avail Use% Mounted on
>>>>> /dev/sdb1 25T 638G 24T 3% /mnt/disk_b1/glusterfs/vol0
>>>>> /dev/sdb2 25T 654G 24T 3% /mnt/disk_b2/glusterfs/vol0
>>>>> /dev/sdc1 50T 15T 35T 30% /mnt/disk_c/glusterfs/vol1
>>>>> /dev/sdd1 50T 15T 35T 30% /mnt/disk_d/glus...
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
There is a known issue with gluster 3.12.x builds (see [1]) so you may be
running into this.
The "shared-brick-count" values seem fine on stor1. Please send us "grep -n
"share" /var/lib/glusterd/vols/volumedisk1/*" results for the other nodes
so we can check if they are the cause.
Regards,
Nithya
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
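As a hedged illustration of the requested check (volumedisk1 is the volume from the thread; the storNdata host names are assumptions based on the df output above):
  # shared-brick-count should normally be 1 when each brick sits on its own filesystem;
  # the 3.12.x issue in [1] can leave larger values, which shrinks the df totals
  for h in stor1data stor2data stor3data; do
      echo "== $h =="
      ssh "$h" 'grep -n "shared-brick-count" /var/lib/glusterd/vols/volumedisk1/*'
  done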
2011 Jun 25
1
Quota (and disk usage) is incorrectly reported on nfs client mounting XFS filesystem
...Tue May 31 13:22:04 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
NFS client:
Linux nx8.priv 2.6.18-238.12.1.el5 #1 SMP Tue May 31 13:22:04 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
The NFS server is exporting a XFS filesystem:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vgXX-lvXX 24T 16T 8.2T 66% /export
User foo (for anonymity) added ~3TB of data in the last 2-3 days.
On the NFS server, her quota is reported as ~5.8 TB:
Disk quotas for user foo (uid 1314):
Filesystem blocks quota limit grace files quota limit grace
/dev/mapper/vgXXX-lvXXX...
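A hedged sketch of how the two views could be compared (/export and user foo are from the post; the home directory path is an assumption):
  # on the NFS server: XFS's own accounting for the user
  xfs_quota -x -c 'quota -u -h foo' /export
  # actual on-disk usage, to compare against the quota numbers
  du -sh /export/foo
  # on the NFS client: the usage reported over rquotad
  quota -s -u foo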
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi,
Some days ago all my glusterfs configuration was working fine. Today I
realized that the total size reported by the df command has changed and is
smaller than the aggregated capacity of all the bricks in the volume.
I checked that the status of all the volumes is fine and all the glusterd daemons
are running, and there is no error in the logs; however, df shows a bad total size.
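A hedged way to compare what the bricks report with what the client mount shows (the volumedisk0 name and mount point are taken from the thread; adjust as needed):
  # per-brick sizes as glusterd sees them
  gluster volume status volumedisk0 detail
  # what the client mount reports, for comparison with the sum of the bricks
  df -h /volumedisk0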
My configuration for one volume:
2010 Jun 18
25
Erratic behavior on 24T zpool
Well, I've searched my brains out and I can't seem to find a reason for this.
I'm getting bad to medium performance with my new test storage device. I've got 24 1.5T disks with 2 SSDs configured as a zil log device. I'm using the Areca raid controller, the driver being arcmsr. Quad core AMD with 16 gig of RAM, OpenSolaris upgraded to snv_134.
The zpool
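A hedged first pass at locating the slowdown (the pool name tank is a placeholder; the layout is as described above):
  # per-vdev bandwidth and IOPS, sampled every 5 seconds while a test load runs
  zpool iostat -v tank 5
  # check for a degraded disk, an ongoing scrub/resilver, or a missing log device
  zpool status -v tank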
2003 Aug 16
1
update: Win2kPro's TCP/IP Stack is crippled!
...nsfers from a Samba
2.2.8a server.
Samba server: P4/2.2GHz, ServerWorks chipset, SCSI UW2 disk subsystem
(Bonnie++ tested to 35MB/sec), 3Com (acenic) gigabit ethernet
Win2kPro: P3/700, 3Com Vortex 100mbit network card
Win2kServer: P3/800, 3Com Vortex 100mbit network card
Switch: Baystack 350-24T with fiber gigabit module
No registry hacking done to either client (and in previous testing, no
amount of TCP/IP hacking on Win2kPro helped)
Samba config file changes:
socket options = TCP_NODELAY SO_SNDBUF=65536 SO_RCVBUF=65536
max xmit = 65536
read size = 65536
getwd cache = Yes...
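A hedged way to confirm the options above took effect and to measure the server without the Win2kPro stack involved (share, user, and file names are placeholders):
  # show the socket options Samba is actually using after the smb.conf edit
  testparm -s 2>/dev/null | grep -i "socket options"
  # server-side throughput from a Linux client, bypassing the Windows TCP/IP stack
  smbclient //sambaserver/share -U testuser -c "put bigfile.bin"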
2003 Oct 14
4
Printing Issues with NT type Clients.
Hi. To begin with, I have a freshly built RedHat Linux 8.0 box running samba 2.2.8a. The kernel version is 2.4.18-14. I downloaded and compiled samba from source. I am using LPRng-3.8.9-6 as my printing system. The attached printer is a Lexmark Z22 printer and it is attached to the parallel port.
Problem:
For the life of me, I can't get NT type clients, NT4, 2K and XP to print to samba.
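A hedged checklist for this kind of setup (the lexmark_z22 queue name is a placeholder, not taken from the poster's printcap):
  # make sure LPRng itself accepts jobs for the queue before involving Samba
  lpr -P lexmark_z22 /etc/hosts
  lpq -P lexmark_z22
  # then confirm Samba 2.2.8a picked up the printcap entry and the printing mode
  testparm -s -v 2>/dev/null | grep -Ei "printing|printcap|load printers"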