search for: sr

Displaying 20 results from an estimated 3846 matches for "sr".

2017 Jul 06
2
Very slow performance on Sharded GlusterFS
Hi Krutika, I also did one more test. I re-created another volume (single volume. Old one destroyed-deleted) then do 2 dd tests. One for 1GB other for 2GB. Both are 32MB shard and eager-lock off. Samples: sr:~# gluster volume profile testvol start Starting volume profile on testvol has been successful sr:~# dd if=/dev/zero of=/testvol/dtestfil0xb bs=1G count=1 1+0 records in 1+0 records out 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 12.2708 s, 87.5 MB/s sr:~# gluster volume profile testvol info &...
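A minimal sketch of the profiling sequence quoted in this snippet, assuming the volume is named testvol and is mounted at /testvol as in the thread; the output file path is illustrative:
  # collect per-brick latency/throughput counters for the volume (assumes volume "testvol")
  gluster volume profile testvol start
  # write a 1 GiB test file through the mount point used in the thread
  dd if=/dev/zero of=/testvol/ddtest bs=1G count=1
  # dump the accumulated statistics to a file, then stop profiling
  gluster volume profile testvol info > /tmp/testvol-profile.txt
  gluster volume profile testvol stop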
2017 Jul 10
0
Very slow performance on Sharded GlusterFS
...rg> Subject: Re: [Gluster-users] Very slow performance on Sharded GlusterFS Hi Krutika, I also did one more test. I re-created another volume (single volume. Old one destroyed-deleted) then do 2 dd tests. One for 1GB other for 2GB. Both are 32MB shard and eager-lock off. Samples: sr:~# gluster volume profile testvol start Starting volume profile on testvol has been successful sr:~# dd if=/dev/zero of=/testvol/dtestfil0xb bs=1G count=1 1+0 records in 1+0 records out 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 12.2708 s, 87.5 MB/s sr:~# gluster volume profile testvol info &...
2017 Jul 12
1
Very slow performance on Sharded GlusterFS
...erFS > > > > Hi Krutika, > > > > I also did one more test. I re-created another volume (single volume. Old > one destroyed-deleted) then do 2 dd tests. One for 1GB other for 2GB. Both > are 32MB shard and eager-lock off. > > > > Samples: > > > > sr:~# gluster volume profile testvol start > > Starting volume profile on testvol has been successful > > sr:~# dd if=/dev/zero of=/testvol/dtestfil0xb bs=1G count=1 > > 1+0 records in > > 1+0 records out > > 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 12.2708 s, 87.5 MB/s...
2017 Jul 04
2
Very slow performance on Sharded GlusterFS
...And others; > > > > No. I use only one volume. When I tested sharded and striped volumes, I > manually stopped volume, deleted volume, purged data (data inside of > bricks/disks) and re-create by using this command: > > > > sudo gluster volume create testvol replica 2 sr-09-loc-50-14-18:/bricks/brick1 > sr-10-loc-50-14-18:/bricks/brick1 sr-09-loc-50-14-18:/bricks/brick2 > sr-10-loc-50-14-18:/bricks/brick2 sr-09-loc-50-14-18:/bricks/brick3 > sr-10-loc-50-14-18:/bricks/brick3 sr-09-loc-50-14-18:/bricks/brick4 > sr-10-loc-50-14-18:/bricks/brick4 sr-09-loc-...
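The command above lists bricks so that each consecutive pair (one brick from sr-09, one from sr-10) forms a replica set. A shortened sketch of the same pattern with only two bricks per host, plus the shard and eager-lock settings discussed in this thread (hostnames, paths, and the 32MB value are taken from the quoted posts; the rest is illustrative):
  # replica 2: adjacent bricks in the list are mirrored across the two hosts
  gluster volume create testvol replica 2 \
      sr-09-loc-50-14-18:/bricks/brick1 sr-10-loc-50-14-18:/bricks/brick1 \
      sr-09-loc-50-14-18:/bricks/brick2 sr-10-loc-50-14-18:/bricks/brick2
  # sharding with a 32MB shard size and eager-lock off, as in the tests above
  gluster volume set testvol features.shard on
  gluster volume set testvol features.shard-block-size 32MB
  gluster volume set testvol cluster.eager-lock off
  gluster volume start testvol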
2017 Jun 30
3
Very slow performance on Sharded GlusterFS
Hi Krutika, Sure, here is volume info: root at sr-09-loc-50-14-18:/# gluster volume info testvol Volume Name: testvol Type: Distributed-Replicate Volume ID: 30426017-59d5-4091-b6bc-279a905b704a Status: Started Snapshot Count: 0 Number of Bricks: 10 x 2 = 20 Transport-type: tcp Bricks: Brick1: sr-09-loc-50-14-18:/bricks/brick1 Brick2:...
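A small sketch, assuming the same testvol, of how the layout and shard settings quoted here can be confirmed from the CLI:
  gluster volume info testvol                             # type, brick list, 10 x 2 = 20 layout
  gluster volume get testvol features.shard               # whether sharding is enabled
  gluster volume get testvol features.shard-block-size    # e.g. 32MB in the tests above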
2017 Jul 04
2
Very slow performance on Sharded GlusterFS
...ttach the volume-profile output file that you saved at a temporary > location in step 3. > > -Krutika > > > > On Fri, Jun 30, 2017 at 5:33 PM, <gencer at gencgiyen.com> wrote: > > Hi Krutika, > > > > Sure, here is volume info: > > > > root at sr-09-loc-50-14-18:/# gluster volume info testvol > > > > Volume Name: testvol > > Type: Distributed-Replicate > > Volume ID: 30426017-59d5-4091-b6bc-279a905b704a > > Status: Started > > Snapshot Count: 0 > > Number of Bricks: 10 x 2 = 20 > > Transport-t...
2017 Jul 04
0
Very slow performance on Sharded GlusterFS
...bricks. 2. Hm.. This is really weird. And others; No. I use only one volume. When I tested sharded and striped volumes, I manually stopped volume, deleted volume, purged data (data inside of bricks/disks) and re-create by using this command: sudo gluster volume create testvol replica 2 sr-09-loc-50-14-18:/bricks/brick1 sr-10-loc-50-14-18:/bricks/brick1 sr-09-loc-50-14-18:/bricks/brick2 sr-10-loc-50-14-18:/bricks/brick2 sr-09-loc-50-14-18:/bricks/brick3 sr-10-loc-50-14-18:/bricks/brick3 sr-09-loc-50-14-18:/bricks/brick4 sr-10-loc-50-14-18:/bricks/brick4 sr-09-loc-50-14-18:/bricks/bri...
2017 Jul 06
2
Very slow performance on Sharded GlusterFS
...bricks. 2. Hm.. This is really weird. And others; No. I use only one volume. When I tested sharded and striped volumes, I manually stopped volume, deleted volume, purged data (data inside of bricks/disks) and re-create by using this command: sudo gluster volume create testvol replica 2 sr-09-loc-50-14-18:/bricks/brick1 sr-10-loc-50-14-18:/bricks/brick1 sr-09-loc-50-14-18:/bricks/brick2 sr-10-loc-50-14-18:/bricks/brick2 sr-09-loc-50-14-18:/bricks/brick3 sr-10-loc-50-14-18:/bricks/brick3 sr-09-loc-50-14-18:/bricks/brick4 sr-10-loc-50-14-18:/bricks/brick4 sr-09-loc-50-14-18:/bricks/bri...
2017 Jul 06
0
Very slow performance on Sharded GlusterFS
...bricks. 2. Hm.. This is really weird. And others; No. I use only one volume. When I tested sharded and striped volumes, I manually stopped volume, deleted volume, purged data (data inside of bricks/disks) and re-create by using this command: sudo gluster volume create testvol replica 2 sr-09-loc-50-14-18:/bricks/brick1 sr-10-loc-50-14-18:/bricks/brick1 sr-09-loc-50-14-18:/bricks/brick2 sr-10-loc-50-14-18:/bricks/brick2 sr-09-loc-50-14-18:/bricks/brick3 sr-10-loc-50-14-18:/bricks/brick3 sr-09-loc-50-14-18:/bricks/brick4 sr-10-loc-50-14-18:/bricks/brick4 sr-09-loc-50-14-18:/bricks/bri...
2010 Nov 20
2
Merge two ggplots
Hello everyone. I am using ggplot and I need some help to merge these two plots into one. plot_CR<-function(x,y,agentid,CRagent){   library(ggplot2)     agent<-CRagent[[agentid]] # To make following expression shorter   ggplot((data.frame(x=CRX,y=CRY,sr=agent$sr)))+   geom_point(aes(x,y,colour=cut(sr,c(0,-10,-20,-30,-40,-50,-60,-70,-80))))+   geom_text(aes(x,y,color=cut(sr, c(0,-10,-20,-30,-40,-50,-60,-70,-80)), label=round(sr,3)),vjust=1,legend=FALSE)+labs(colour="CRagents[[i]]$sr") } plot_shad_f<-function(f){   library(ggplot2)  ...
2010 Mar 31
2
Simplifying particular piece of code
Hello, everyone I have a piece of code that looks like this: mrets <- merge(mrets, BMM.SR=apply(mrets, 1, MyFunc, ret="BMM.AV120", stdev="BMM.SD120")) mrets <- merge(mrets, GM1.SR=apply(mrets, 1, MyFunc, ret="GM1.AV120", stdev="GM1.SD120")) mrets <- merge(mrets, IYC.SR=apply(mrets, 1, MyFunc, ret="IYC.AV120", stdev="IYC.SD120&...
2017 Jul 06
0
Very slow performance on Sharded GlusterFS
...No. I use only one volume. When I tested sharded and striped volumes, I >> manually stopped volume, deleted volume, purged data (data inside of >> bricks/disks) and re-create by using this command: >> >> >> >> sudo gluster volume create testvol replica 2 >> sr-09-loc-50-14-18:/bricks/brick1 sr-10-loc-50-14-18:/bricks/brick1 >> sr-09-loc-50-14-18:/bricks/brick2 sr-10-loc-50-14-18:/bricks/brick2 >> sr-09-loc-50-14-18:/bricks/brick3 sr-10-loc-50-14-18:/bricks/brick3 >> sr-09-loc-50-14-18:/bricks/brick4 sr-10-loc-50-14-18:/bricks/brick4 &gt...
2017 Jun 30
1
Very slow performance on Sharded GlusterFS
...lume profile <VOL> stop And attach the volume-profile output file that you saved at a temporary location in step 3. -Krutika On Fri, Jun 30, 2017 at 5:33 PM, <gencer at gencgiyen.com> wrote: > Hi Krutika, > > > > Sure, here is volume info: > > > > root at sr-09-loc-50-14-18:/# gluster volume info testvol > > > > Volume Name: testvol > > Type: Distributed-Replicate > > Volume ID: 30426017-59d5-4091-b6bc-279a905b704a > > Status: Started > > Snapshot Count: 0 > > Number of Bricks: 10 x 2 = 20 > > Transport-t...
2017 Jul 03
0
Very slow performance on Sharded GlusterFS
...profile <VOL> stop And attach the volume-profile output file that you saved at a temporary location in step 3. -Krutika On Fri, Jun 30, 2017 at 5:33 PM, <gencer at gencgiyen.com <mailto:gencer at gencgiyen.com> > wrote: Hi Krutika, Sure, here is volume info: root at sr-09-loc-50-14-18:/# gluster volume info testvol Volume Name: testvol Type: Distributed-Replicate Volume ID: 30426017-59d5-4091-b6bc-279a905b704a Status: Started Snapshot Count: 0 Number of Bricks: 10 x 2 = 20 Transport-type: tcp Bricks: Brick1: sr-09-loc-50-14-18:/bricks/brick1 Brick2:...
2011 Jan 25
0
Map an Area to another
Dear All, I would like to ask you help with the following: Assume that I have an area of 36 cells (or sub-areas) sr<-matrix(data=seq(from=1,to=36),nrow=6,ncol=6,byrow=TRUE) > sr [,1] [,2] [,3] [,4] [,5] [,6] [1,] 1 2 3 4 5 6 [2,] 7 8 9 10 11 12 [3,] 13 14 15 16 17 18 [4,] 19 20 21 22 23 24 [5,] 25 26 27 28 29 30 [6,] 31 32 33...
2023 Jun 20
1
[libnbd PATCH v4 4/4] internal: Refactor layout of replies in sbuf
...[truncated ASCII layout diagram from the patch: union sbuf holds struct simple_reply and struct sr (which wraps struct structured_reply) at the same offset, both starting with a uint32_t magic field at offset 0]...
2017 Nov 28
5
[RFC] virtio-net: help live migrate SR-IOV devices
Hi, I'd like to get some feedback on a proposal to enhance virtio-net to ease configuration of a VM and that would enable live migration of passthrough network SR-IOV devices. Today we have SR-IOV network devices (VFs) that can be passed into a VM in order to enable high performance networking direct within the VM. The problem I am trying to address is that this configuration is generally difficult to live-migrate. There is documentation [1] indicating th...
2017 Nov 28
5
[RFC] virtio-net: help live migrate SR-IOV devices
Hi, I'd like to get some feedback on a proposal to enhance virtio-net to ease configuration of a VM and that would enable live migration of passthrough network SR-IOV devices. Today we have SR-IOV network devices (VFs) that can be passed into a VM in order to enable high performance networking direct within the VM. The problem I am trying to address is that this configuration is generally difficult to live-migrate. There is documentation [1] indicating th...
2011 May 01
2
The SR operation cannot be performed because a device underlying the SR is in use by the host.
My attempt to add local Storage hard drive........ [root@iDEAL0510XEN1 ~]# xe host-list uuid ( RO) : 516c8c44-5f93-4177-9ee0-02f0a6efe976 name-label ( RW): iDEAL0510XEN1 name-description ( RW): Default install of XenServer [root@iDEAL0510XEN1 ~]# xe sr-create host-uuid=516c8c44-5f93-4177-9ee0-02f0a6efe976 content-type=user type=lvm device-config:device=/dev/sdc shared=false name-label="Local Storage 4 2TB" The SR operation cannot be performed because a device underlying the SR is in use by the host. Help! Bobbyd ____________________...
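This error generally means the host still sees something on the target disk (an existing SR, leftover LVM metadata, or a mounted filesystem). A hedged sketch of the usual checks before retrying the sr-create from the post; /dev/sdc is the device named above, the volume-group name is a placeholder, and the vgremove/pvremove step is destructive:
  # is the disk already claimed by an SR through a PBD?
  xe sr-list params=uuid,name-label,type
  xe pbd-list params=uuid,sr-uuid,device-config
  # any leftover LVM metadata on the device?
  pvs | grep /dev/sdc
  vgs
  # only if the data on /dev/sdc is disposable: remove the stale volume group,
  # then retry the xe sr-create command from the original post
  vgremove <old-vg-name>
  pvremove /dev/sdc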
2010 Nov 24
6
about sr-iov
Hi, I got a problem when prepared to pass-thru 82576 ethernet card with SR-IOV support. The 82576 card is build-in on the board. When loading igb driver for the card, dmesg show some errors like: pci 0000:05:00.0: BAR 10: can''t allocate mem resource [0xfbf00000-0xfbefffff] . igb 0000:05:00.0: not enough MMIO resources for SR-IOV igb 0000:05:00.0: Fail...