similar to: Lost audio on forwarded calls

Displaying 20 results from an estimated 30000 matches similar to: "Lost audio on forwarded calls"

2009 Dec 07
3
ConVirt 2.0 Beta is now available
Hi, We are very pleased to announce the immediate availability of ConVirt 2.0 Beta! Built on a brand-new, 3-tier, repository-based architecture, ConVirt 2.0 incorporates many of our users' most wanted and anticipated features. For more information, please visit us at http://www.convirture.com/blog/2009/announcements/convirt-2-0-beta-now-available/ ConVirt Team.
2012 May 12
2
NUT for Windows + Eaton/PW 5110
NUT for Windows 2.6.3-3 + Eaton/PW 5110 (103004256-5591). I had to manually install the libusb driver and I'm using bcmxcp_usb. upsd.exe reports "Out of memory". It will run, but once anything connects to the daemon, it dies with that message. I don't have much experience with NUT yet, so I'm not sure what the next course of action is. I do have NUT for Windows
2017 Oct 27
0
gluster tiering errors
Herb, I'm trying to weed out issues here. So, I can see quota turned *on* and would like you to check the quota settings and test system behavior *if quota is turned off*. Although the file size that failed migration was 29K, I'm being a bit paranoid while weeding out issues. Are you still facing tiering errors? I can see your response to Alex with the disk space consumption and
2017 Oct 22
1
gluster tiering errors
There are several "no space left on device" messages. I would first check that free disk space is available for the volume. On Oct 22, 2017 18:42, "Milind Changire" <mchangir at redhat.com> wrote: > Herb, > What are the high and low watermarks set for the tier? > > # gluster volume get <vol> cluster.watermark-hi > > # gluster volume get
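A quick way to rule out "no space left on device" errors is to check free space on every brick, hot and cold; a minimal sketch, assuming shell access to the servers (the brick mount point below is a placeholder):

# df -h /gfs/p1-tier/mount                    (repeat for each brick mount)
# gluster volume status <vol> detail          (reports free/total disk space per brick)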
2017 Oct 22
0
gluster tiering errors
Herb, What are the high and low watermarks set for the tier? # gluster volume get <vol> cluster.watermark-hi # gluster volume get <vol> cluster.watermark-low What is the size of the file that failed to migrate, as per the following tierd log: [2017-10-19 17:52:07.519614] I [MSGID: 109038] [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion failed for
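For reference, both watermarks can be read and adjusted as ordinary volume options; a minimal sketch, assuming a volume named <vol> and the illustrative values that come up later in this thread:

# gluster volume get <vol> cluster.watermark-hi
# gluster volume get <vol> cluster.watermark-low
# gluster volume set <vol> cluster.watermark-hi 90    (percent of hot tier capacity)
# gluster volume set <vol> cluster.watermark-low 75

Roughly, promotion slows as the hot tier fills past the low watermark, and above the high watermark demotion becomes aggressive and promotion stops.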
2018 Mar 04
1
tiering
Hi, Have a glusterfs 3.10.10 volume (tried 3.12.6 as well) on Ubuntu 16.04 with a 3-SSD tier where one SSD is bad.
Status of volume: labgreenbin
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick labgfs81:/gfs/p1-tier/mount           49156     0          Y       4217
Brick
2017 Oct 24
2
gluster tiering errors
Milind - Thank you for the response.

>> What are the high and low watermarks set for the tier?

# gluster volume get <vol> cluster.watermark-hi
Option                    Value
------                    -----
cluster.watermark-hi      90

# gluster volume get <vol> cluster.watermark-low
Option
2017 Oct 19
3
gluster tiering errors
All, I am new to gluster and have some questions/concerns about tiering errors that I see in the log files.
OS: CentOS 7.3.1611
Gluster version: 3.10.5
Samba version: 4.6.2
I see the following (scrubbed):
Node 1 /var/log/glusterfs/tier/<vol>/tierd.log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038] [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion failed
2018 Feb 27
1
On sharded tiered volume, only first shard of new file goes on hot tier.
Does anyone have any ideas about how to fix, or work around, the following issue? Thanks! Bug 1549714 - On sharded tiered volume, only first shard of new file goes on hot tier. https://bugzilla.redhat.com/show_bug.cgi?id=1549714 On a sharded tiered volume, only the first shard of a new file goes on the hot tier; the rest
2018 Jan 18
0
Blocking IO when hot tier promotion daemon runs
Thanks for the info, Hari. Sorry about the bad gluster volume info; I grabbed that from a file, not realizing it was out of date. Here's the current configuration showing the active hot tier:
[root at pod-sjc1-gluster1 ~]# gluster volume info
Volume Name: gv0
Type: Tier
Volume ID: d490a9ec-f9c8-4f10-a7f3-e1b6d3ced196
Status: Started
Snapshot Count: 13
Number of Bricks: 8
Transport-type: tcp
Hot
2018 Mar 05
0
Why files goes to hot tier and cold tier at same time
Hi, The actual data stays on the hot tier only until demotion. The file that you see on the cold tier is just a linkto file for the file on the hot tier. These linkto files are necessary for the internal working of the tier. On Mon, Mar 5, 2018 at 1:16 PM, Sherin George <allmyforums at outlook.in> wrote: > Hi Guys > > Got a quick question regarding hot tier and cold tier. > I
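To confirm this on a brick, a linkto file typically shows up as a zero-byte entry with mode ---------T whose xattrs record where the data really lives; a minimal sketch, assuming direct access to a cold-tier brick (the brick path and file name are placeholders, and the exact linkto xattr name can vary):

# ls -l /bricks/cold/brick1/somefile                      (expect size 0 and mode ---------T)
# getfattr -d -m . -e hex /bricks/cold/brick1/somefile    (dump all xattrs; look for a *.linkto entry)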
2018 Jan 09
2
Blocking IO when hot tier promotion daemon runs
I've recently enabled an SSD-backed 2 TB hot tier on my 150 TB, 2-server / 3-bricks-per-server distributed replicated volume. I'm seeing IO get blocked across all client FUSE threads for 10 to 15 seconds while the promotion daemon runs. I see the 'glustertierpro' thread jump to 99% CPU usage on both boxes when these delays occur, and they happen every 25 minutes (my
2018 Mar 05
2
Why files goes to hot tier and cold tier at same time
Hi Guys, Got a quick question regarding the hot tier and cold tier. I have a gluster volume with a 1 x 3 hot tier and a 1 x 3 cold tier. watermark-low is 75 and watermark-hi is 90. Usage of the volume is very low. My files always go to the hot tier and cold tier at the same time. As I understand it, data should go to the hot tier only, until demoted. Could someone please shed some light on this? Thanks in advance. --
2018 Jan 10
0
Blocking IO when hot tier promotion daemon runs
I should add that additional testing has shown that only accessing files is held up; IO is not interrupted for existing transfers. I think this points to the heat metadata in the sqlite DB for the tier. Is it possible that a table is temporarily locked while the promotion daemon runs, so the calls to update the access count on files are blocked? On Wed, Jan 10, 2018 at 10:17 AM, Tom Fite
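One way to test that theory is to look for processes holding the per-brick heat database open while the promotion daemon runs, and to peek at its tables; a minimal sketch, assuming the database sits under the brick's .glusterfs directory (that location is an assumption, not a documented path):

# lsof +D /bricks/hot/brick1/.glusterfs | grep '\.db'     (who has the DB open during the stall)
# sqlite3 /bricks/hot/brick1/.glusterfs/gv0.db '.tables'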
2018 Jan 10
0
Blocking IO when hot tier promotion daemon runs
Hi, Can you send the volume info and volume status output, and the tier logs? I also need to know the size of the files that are being stored. On Tue, Jan 9, 2018 at 9:51 PM, Tom Fite <tomfite at gmail.com> wrote: > I've recently enabled an SSD backed 2 TB hot tier on my 150 TB 2 server / 3 > bricks per server distributed replicated volume. > > I'm seeing IO get blocked
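Gathering that information from any one of the servers might look like this; a minimal sketch, assuming the volume name gv0 used later in the thread and the tierd log path mentioned earlier in these results:

# gluster volume info gv0
# gluster volume status gv0
# tar czf tier-logs.tar.gz /var/log/glusterfs/tier/gv0/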
2018 Jan 18
2
Blocking IO when hot tier promotion daemon runs
Hi Tom, The volume info doesn't show the hot bricks. I think you took the volume info output before attaching the hot tier. Can you send the volume info of the current setup where you see this issue? The logs you sent are from a later point in time; the issue was hit earlier than the period those logs cover. I need the logs from an earlier time, and along with the entire tier
2017 Dec 18
0
Testing sharding on tiered volume
----- Original Message -----
> From: "Viktor Nosov" <vnosov at stonefly.com>
> To: gluster-users at gluster.org
> Cc: vnosov at stonefly.com
> Sent: Friday, December 8, 2017 5:45:25 PM
> Subject: [Gluster-users] Testing sharding on tiered volume
>
> Hi,
>
> I'm looking to use sharding on a tiered volume. This is a very attractive
> feature that could
2018 Jan 10
2
Blocking IO when hot tier promotion daemon runs
The sizes of the files are extremely varied: there are millions of small (<1 MB) files and thousands of files larger than 1 GB. Attached is the tier log for gluster1 and gluster2. These are full of "demotion failed" messages, which is also shown in the status:
[root at pod-sjc1-gluster1 gv0]# gluster volume tier gv0 status
Node                 Promoted files       Demoted files
2017 Jul 31
2
Hot Tier
Hi, If it was just reads, then the tier daemon won't migrate the files to the hot tier. If you create a file or write to a file, that file will be made available on the hot tier. On Mon, Jul 31, 2017 at 11:06 AM, Nithya Balachandran <nbalacha at redhat.com> wrote: > Milind and Hari, > > Can you please take a look at this? > > Thanks, > Nithya > > On 31 July 2017 at
2017 Jul 31
1
Hot Tier
Hi, At this point I have already detached the Hot Tier volume to run a rebalance. Many volume settings only take effect for new data (or after a rebalance), so I thought maybe this was the case with the Hot Tier as well. Once the rebalance finishes, I'll re-attach the hot tier. cluster.write-freq-threshold and cluster.read-freq-threshold control the number of times data is read/written before it is moved to the hot tier. In my case
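Both thresholds are plain volume options; a minimal sketch of inspecting and tuning them, assuming a volume named <vol> and illustrative counts:

# gluster volume get <vol> cluster.read-freq-threshold
# gluster volume get <vol> cluster.write-freq-threshold
# gluster volume set <vol> cluster.read-freq-threshold 2     (promote after 2 reads in the measured window)
# gluster volume set <vol> cluster.write-freq-threshold 2

Note that the frequency thresholds only come into play when cluster.tier-mode is cache (the usual default); in test mode, migration runs on a timer regardless of access counts.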