
Displaying 20 results from an estimated 100 matches similar to: "Virtual Network Interfaces showing all packets as dropped"

2007 Nov 11
2
Xen 3.1 + CentOS 5 + HP ProLiant G5 = DomU packet drops
Hi all, I've a Xen server (Xen 3.1 compiled from sources [stable downloaded last wed...], CentOS 5 as the host OS, HP ProLiant G5, 6 GB RAM, with Broadcom NetXtreme II) with 4 guests, each running a freshly installed Windows 2003 with nothing else. The network is configured with the default bridging script, no changes. I'm using only 1 network card for the moment. All guests run apparently
2008 Jan 03
0
What is the Net1 item in xm top?
In "xm top" I see a Net1 item for every VM, and there are a lot of drops on those packets. So what is this Net1 item, and do I need to disable something? Do I need this?
vmdebianamd64 --b---       1107    0.1     260576    6.2     262144       6.3     1    2   815618   829664    1       17   288 253   754981    0
Net0 RX: 849569343bytes  4083414pkts        0err       42drop  TX:
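One way to see which virtual interfaces back the Net0/Net1 rows is to list the domain's vifs from dom0 and read their counters there; a minimal sketch, assuming the xm toolstack and the usual vif<domid>.<index> naming (the interface name below is hypothetical):

    # list the virtual interfaces attached to the guest (Net0, Net1, ... correspond to vif index 0, 1, ...)
    xm network-list vmdebianamd64
    # per-interface RX/TX and drop counters as seen from dom0
    ip -s link show vif1.1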
2006 Mar 16
0
Dropped packets on DomU interfaces
When I run "xm top" and look at the network statistics I don''t see any dropped packets on the vif interfaces in dom0, but I do see dropped packets on the RX side of my domU interfaces. Here''s an example: Net0 RX: 391757339bytes 464015pkts 0err 406drop TX: 95774401bytes 352651pkts 0err 0drop I''m running Xen on FC4 from testing
2013 Mar 05
1
memory leak in 3.3.1 rebalance?
I started rebalancing my 25x2 distributed-replicate volume two days ago. Since then, the memory usage of the rebalance processes has been steadily climbing by 1-2 megabytes per minute. Following http://gluster.org/community/documentation/index.php/High_Memory_Usage, I tried "echo 2 > /proc/sys/vm/drop_caches". This had no effect on the processes' memory usage. Some of the
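To put numbers on a suspected leak, the rebalance process RSS can be sampled over time; a minimal sketch, assuming the rebalance daemon appears in pgrep as a glusterfs process with "rebalance" in its command line:

    # sample the resident set size (in KB) of the rebalance process once a minute
    pid=$(pgrep -f 'glusterfs.*rebalance' | head -n1)
    while sleep 60; do date +%T; ps -o rss= -p "$pid"; done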
2017 Jul 07
2
Rebalance task fails
Hello everyone, I have a problem rebalancing a Gluster volume. The Gluster version is 3.7.3. My 1x3 replicated volume became full, so I've added three more bricks to make it 2x3 and wanted to rebalance. But every time I start rebalancing, it fails immediately. Rebooting the Gluster nodes doesn't help. # gluster volume rebalance gsae_artifactory_cluster_storage start volume rebalance:
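When a rebalance dies immediately, the status output and the per-volume rebalance log usually show why; a sketch of where to look, assuming the default log location:

    gluster volume rebalance gsae_artifactory_cluster_storage status
    # rebalance log on the node that issued the command (path is the usual default, adjust if needed)
    less /var/log/glusterfs/gsae_artifactory_cluster_storage-rebalance.log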
2017 Jul 10
2
Rebalance task fails
Hi Nithya, the files were sent off-list to avoid spamming the list with large attachments. Could someone explain what an index is in Gluster? Unfortunately "index" is a popular word, so googling is not very helpful. Best regards, Szymon Miotk On Sun, Jul 9, 2017 at 6:37 PM, Nithya Balachandran <nbalacha at redhat.com> wrote: > > On 7 July 2017 at 15:42, Szymon Miotk <szymon.miotk at
2017 Jul 09
0
Rebalance task fails
On 7 July 2017 at 15:42, Szymon Miotk <szymon.miotk at gmail.com> wrote: > Hello everyone, > > > I have problem rebalancing Gluster volume. > Gluster version is 3.7.3. > My 1x3 replicated volume become full, so I've added three more bricks > to make it 2x3 and wanted to rebalance. > But every time I start rebalancing, it fails immediately. > Rebooting Gluster
2017 Jul 13
2
Rebalance task fails
Hi Nithya, I see index in this context: [2017-07-07 10:07:18.230202] E [MSGID: 106062] [glusterd-utils.c:7997:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index I wonder if there is anything I can do to fix it. I was trying to strace the gluster process but still have no clue what exactly the gluster index is. Best regards, Szymon Miotk On Thu, Jul 13, 2017 at 10:12 AM, Nithya
2017 Jul 13
0
Rebalance task fails
Hi Szymon, I have received the files and will take a look and get back to you. In what context are you seeing index? Thanks, Nithya On 11 July 2017 at 01:15, Szymon Miotk <szymon.miotk at gmail.com> wrote: > Hi Nithya, > > the files were sent to priv to avoid spamming the list with large > attachments. > Could someone explain what is index in Gluster? > Unfortunately
2008 Dec 14
1
Is that iozone result normal?
A 5-node server and a 1-node client are connected by gigabit Ethernet.
#] iozone -r 32k -r 512k -s 8G
      KB  reclen   write  rewrite    read   reread
 8388608      32   10559     9792   62435    62260
 8388608     512   63012    63409   63409    63138
It seems the 32k write/rewrite performance is very
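For a fairer small-record comparison it can help to rerun the same sizes with the write and read tests selected explicitly and client-side caching effects reduced; a hedged example invocation (the target path is a placeholder):

    # -i 0 write/rewrite, -i 1 read/reread; -e/-c include flush and close in the timings; -I uses O_DIRECT
    iozone -e -c -I -i 0 -i 1 -r 32k -r 512k -s 8G -f /mnt/glusterfs/iozone.tmp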
2005 Mar 23
1
slim server for moh
Hello, I have installed SlimServer for Windows on my desktop and Asterisk on a Red Hat Linux machine. I am able to play MP3s for music on hold when the MP3s are on the Linux server, and to play streaming MP3s with Windows Media Player and Winamp on Windows using the SlimServer. I also have mpg123 on my Linux box, apparently installed correctly, since it works for local MOH. I put the
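A first sanity check is whether mpg123 on the Asterisk box can pull the SlimServer stream at all; a minimal sketch, assuming SlimServer's default stream URL on port 9000 and a hypothetical desktop address:

    # play the SlimServer stream directly; if this works, a similar command line can be tried as a custom MOH application
    mpg123 -q --mono -r 8000 http://192.168.1.10:9000/stream.mp3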
2017 Aug 03
0
Hot Tier
Hi, We will look into the "failed to get index" error. It shouldn't affect normal operation. Do let us know if you face any other issues. Regards, Hari. On 02-Aug-2017 11:55 PM, "Dmitri Chebotarov" <4dimach at gmail.com> wrote: Hello I reattached the hot tier to a new empty EC volume and started to copy data to the volume. The good news is I can see files now on SSD
2017 Jul 31
1
Hot Tier
Hi At this point I have already detached the Hot Tier volume to run the rebalance. Many volume settings only take effect for new data (or on rebalance), so I thought maybe this was the case with the Hot Tier as well. Once the rebalance finishes, I'll re-attach the hot tier. cluster.write-freq-threshold and cluster.read-freq-threshold control the number of times data is read/written before it is moved to the hot tier. In my case
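For reference, those thresholds only take effect if access counters are being recorded; a hedged example of the relevant settings (the volume name is a placeholder):

    # record file accesses so the tiering daemon can compare them against the thresholds
    gluster volume set myvol features.record-counters on
    gluster volume set myvol cluster.write-freq-threshold 2
    gluster volume set myvol cluster.read-freq-threshold 2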
2017 Aug 01
0
Hot Tier
Hi, You have missed the log files. Can you attach them? On Mon, Jul 31, 2017 at 7:22 PM, Dmitri Chebotarov <4dimach at gmail.com> wrote: > Hi > > At this point I already detached Hot Tier volume to run rebalance. Many > volume settings only take effect for the new data (or rebalance), so I > thought may this was the case with Hot Tier as well. Once rebalance > finishes,
2013 Nov 06
0
remove-brick very slow for (distributed-)replicated volumes?
We have a gigabit ethernet LAN on which there is no other traffic, and I am getting the following numbers when I do a remove-brick. The sequence of steps is that I create a 2-way replicated volume and populate it with 300 files totalling 100MB. I then add a pair of bricks to the volume and then run a remove-brick on the original two bricks. Is this the expected speed for the operation, or could there be
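For comparison, this is roughly the command sequence being timed; a sketch with hypothetical host and brick names:

    gluster volume remove-brick myvol host1:/bricks/b1 host2:/bricks/b1 start
    # poll until the migration completes, then commit to drop the bricks
    gluster volume remove-brick myvol host1:/bricks/b1 host2:/bricks/b1 status
    gluster volume remove-brick myvol host1:/bricks/b1 host2:/bricks/b1 commit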
2012 Nov 30
2
"layout is NULL", "Failed to get node-uuid for [...] and other errors during rebalancing in 3.3.1
I started rebalancing my volume after updating from 3.2.7 to 3.3.1. After a few hours, I noticed a large number of failures in the rebalance status:
> Node        Rebalanced-files        size     scanned    failures    status
> ---------   ----------------   ---------   ---------   ---------   ------------
> localhost                  0      0Bytes     4288805
2007 May 17
2
IPCLASSIFY - patch based on IPMARK
Hello everybody! Some time ago I've decided that using the MARK property of the Linux IP packet structure for the needs of traffic control is not very useful. So I wrote an iptables patch called IPCLASSIFY. It is fully based on IPMARK but it uses the PRIORITY field instead of MARK. The relation between IPCLASSIFY <-> CLASSIFY is the same as IPMARK <-> MARK. By using
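Since the patch is based on IPMARK, its rule syntax presumably mirrors IPMARK's as well; a hypothetical example (option names are assumed from IPMARK, and the tc class minor id is derived from the destination address):

    # classify egress packets so the tc class minor id follows the low byte of the destination IP
    iptables -t mangle -A POSTROUTING -o eth0 -j IPCLASSIFY --addr dst --and-mask 0xff --or-mask 0x10000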
2012 Apr 19
1
Question about glusterfs quotas on debian wheezy?
Hello list, I'm experimenting with a little GlusterFS cluster on debian wheezy:
=== snip ===
muzzy:~# cat /etc/debian_version
wheezy/sid
muzzy:~# dpkg -l | grep gluster
ii  glusterfs-client  3.2.6-1  clustered file-system (client package)
ii  glusterfs-common  3.2.6-1  GlusterFS common libraries and translator modules
ii  glusterfs-server  3.2.6-1  clustered file-system (server package)
=== snip
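The directory-quota commands in that release follow the usual pattern; a hedged sketch (volume and path names are placeholders):

    gluster volume quota testvol enable
    # cap the /projects directory at 10 GB and list the configured limits
    gluster volume quota testvol limit-usage /projects 10GB
    gluster volume quota testvol list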
2018 Feb 02
1
How to trigger a resync of a newly replaced empty brick in replicate config ?
Hi, I simplified the config in my first email, but I actually have 2x4 servers in replicate-distribute, with 4 bricks each on 6 of them and 2 bricks on the remaining 2. Full healing will just take ages... for just a single brick to resync!
> gluster v status home
volume status home
Status of volume: home
Gluster process                              TCP Port  RDMA Port  Online  Pid
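For a single replaced brick, the standard triggers are the index heal and, as a last resort, the full crawl; a hedged sketch of the relevant commands:

    # heal only the entries recorded as pending, then check what is still outstanding
    gluster volume heal home
    gluster volume heal home info
    # fall back to a full crawl only if the index heal cannot find the missing entries
    gluster volume heal home full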
2013 Nov 09
2
Failed rebalance - lost files, inaccessible files, permission issues
I'm starting a new thread on this, because I have more concrete information than I did the first time around. The full rebalance log from the machine where I started the rebalance can be found at the following link. It is slightly redacted - one search/replace was made to replace an identifying word with REDACTED. https://dl.dropboxusercontent.com/u/97770508/mdfs-rebalance-redacted.zip