search for: 5g

Displaying results from an estimated 190 matches for "5g".

2010 Jun 25
1
Supporting APC 5G UPSs
Hi Guys, I'd like to submit a patch to enable NUT to communicate with APC 5G UPSs. The current usbhid-ups driver recognises APC UPSs by the Vendor ID 0x051D and Product ID 0x0002; this needs to be amended with the new Product ID of 0x0003. (See attached file: apc-hid.patch) Secondly, when NUT polls the UPS via the interrupt channel, these polls will time out, resulting...
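
A minimal sketch of the kind of device-table change described, assuming the usb_device_id_t convention NUT's HID subdrivers use (the table and macro names here are illustrative, not copied from apc-hid.patch):

    /* apc-hid.c (sketch): teach the APC subdriver the new product ID */
    #define APC_VENDORID 0x051d

    static usb_device_id_t apc_usb_device_table[] = {
        /* product ID already matched by usbhid-ups */
        { USB_DEVICE(APC_VENDORID, 0x0002), NULL },
        /* new product ID used by the 5G family */
        { USB_DEVICE(APC_VENDORID, 0x0003), NULL },
        /* ... terminating entry per the driver's convention ... */
    };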
2019 May 20
2
Re: [nbdkit PATCH v2] Introduce cacheextents filter
...dd (extents, err); >> + } >> + >> + nbdkit_debug ("cacheextents: cache miss"); >> + int r = next_ops->extents (nxdata, count, offset, flags, extents, err); > >This is a bit pessimistic. Observe: > request A (offset=0, count=1G) populates extents (0-.5G data, .5G-1.5G >hole) > request B (offset=.5G, count=1G) serviced from the cache >compared to: > request A (offset=0, count=1G) populates extents (0-.5G data, .5G-1.0G >hole) > request B (offset=.5G, count=1G) treated as cache miss > >It should be possible to note that reque...
2010 Aug 11
1
[PATCH] udev rules for APC 5G
Hi, I tried the patch recently added to support the 5G UPSs by APC. Connecting to the UPS failed because of wrong permissions on the USB device node. I had to add the new USB ID to the udev rules to make it work. I have attached my patch to this mail. Please consider applying it. Thanks. Kind regards, Gerd -- Address (better: trap) for people I really do...
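
For reference, the rule such a patch adds looks roughly like this (the MODE/GROUP values are assumptions; NUT's packaging substitutes its own run-as group):

    # 52-nut-usbups.rules (sketch): APC 5G family
    ATTR{idVendor}=="051d", ATTR{idProduct}=="0003", MODE="664", GROUP="nut"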
2020 Apr 08
6
Parallel transfers with sftp (call for testing / advice)
...mber of extra channels. There is one main ssh channel and n extra channels. The main ssh channel does everything except the put and get commands, which are parallelized across the n extra channels. Thanks to this, when the customer uses "-n 5", he can transfer his files at up to 5Gb/s. There is no server-side change; everything is done on the client side. 3. Some details Each extra channel has its own ssh channel and its own thread. Orders are sent by the main channel to the threads via a queue. When the user sends a get or put request, the main channel checks what to do....
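
A usage sketch of the option quoted above (the -n flag exists only in this proposed patch, not in stock OpenSSH sftp):

    # open 5 extra channels; get/put are spread across them
    sftp -n 5 user@server
    sftp> get /data/bigfile.bin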
2013 Aug 01
1
trouble with setting individual quota values for multiple namespaces
...ied to get some help several times on the list but did not find/get a solution. I am still struggling to set up different quotas for namespaces. In addition to the default "INBOX" namespace I have created a namespace called "MailArchive" which should have its own quota value of 5G per user. At first I configured quota2 like this: quota2 = maildir:MailArchive quota:ns=MailArchive/ quota2_rule = *:storage=5G and this seemed to work quite well. Users accessing the MailArchive namespace can see the 5G limit in their mail client; unfortunately, in the mail.err log file, err...
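
Assembled from the fragments quoted in this thread, the intended setup looks roughly like this (the 1G default for the primary quota is an assumption, and the MailArchive namespace itself must be declared separately):

    plugin {
      quota       = maildir:User quota
      quota_rule  = *:storage=1G
      quota2      = maildir:MailArchive quota:ns=MailArchive/
      quota2_rule = *:storage=5G
    }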
2019 May 16
0
Re: [nbdkit PATCH v2] Introduce cacheextents filter
...; + return cacheextents_add (extents, err); > + } > + > + nbdkit_debug ("cacheextents: cache miss"); > + int r = next_ops->extents (nxdata, count, offset, flags, extents, err); This is a bit pessimistic. Observe: request A (offset=0, count=1G) populates extents (0-.5G data, .5G-1.5G hole) request B (offset=.5G, count=1G) serviced from the cache compared to: request A (offset=0, count=1G) populates extents (0-.5G data, .5G-1.0G hole) request B (offset=.5G, count=1G) treated as cache miss It should be possible to note that request B overlaps with the cache, an...
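
A standalone sketch of the overlap test the review is asking for (illustrative, not the filter's actual code): a request that starts inside the cached window can be answered from the cache up to its end, leaving only the remainder for a real extents call.

    #include <stdint.h>
    #include <stdbool.h>

    /* Returns true when [offset, offset+count) starts inside the cached
     * window [cache_start, cache_end); *served is how many bytes the
     * cache can answer, *remainder what still needs a new extents call. */
    static bool
    overlaps_cache (uint64_t offset, uint64_t count,
                    uint64_t cache_start, uint64_t cache_end,
                    uint64_t *served, uint64_t *remainder)
    {
      uint64_t end = offset + count;

      if (offset < cache_start || offset >= cache_end)
        return false;                          /* genuine cache miss */
      *served = (end < cache_end ? end : cache_end) - offset;
      *remainder = end > cache_end ? end - cache_end : 0;
      return true;
    }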
2017 Oct 27
5
Poor gluster performance on large files.
...ta from server 1 to server 2 just like I've been doing for the last decade. :( If anyone can please help me understand where I might be going wrong it would be absolutely wonderful! Server 1: Single E5-1620 v2 Ubuntu 14.04 glusterfs 3.10.5 16GB Ram 24 drive array on LSI raid Sustained >1.5GB/s to XFS (77TB) Server 2: Single E5-2620 v3 Ubuntu 16.04 glusterfs 3.10.5 32GB Ram 36 drive array on LSI raid Sustained >2.5GB/s to XFS (164TB) Speed tests are done locally with a single thread (dd) or 4 threads (iozone) using my standard 64k io size to 20G or 5G files (20G for local drive...
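
The single-threaded test described would look roughly like this (the target path is a placeholder; 81920 x 64k = 5G):

    dd if=/dev/zero of=/mnt/gluster/testfile bs=64k count=81920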
2010 Jul 20
2
APC 5G UPS APC-HID
I've grabbed a snapshot of NUT from just a couple of days ago. Glad to see APC's new 5G UPSes added to the list (though you probably want to add them to the udev 52-nut-usbups.rules file as well...). My issues with the new APC Smart-UPS units (SMT1000): Powering down the unit, whether with upsdrvctl or upscmd, will not obey the delay specified in offdelay or ups.delay.shutdown. Instea...
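
For context, the delay being ignored is normally set per UPS in ups.conf; a sketch (the section name is a placeholder):

    [smt1000]
        driver = usbhid-ups
        port = auto
        offdelay = 60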
2020 May 05
7
Parallel transfers with sftp (call for testing / advice)
Peter Stuge wrote: > > Matthieu Hautreux wrote: >> The change proposed by Cyril in sftp is a very pragmatic approach to >> deal with parallelism at the file transfer level. It leverages the >> already existing sftp protocol and its capability to write/read file >> content at specified offsets. This makes it possible to speed up sftp >> transfers significantly by
2019 May 20
0
Re: [nbdkit PATCH v2] Introduce cacheextents filter
...>> + >>> + nbdkit_debug ("cacheextents: cache miss"); >>> + int r = next_ops->extents (nxdata, count, offset, flags, extents, >>> err); >> >> This is a bit pessimistic. Observe: >> request A (offset=0, count=1G) populates extents (0-.5G data, .5G-1.5G >> hole) >> request B (offset=.5G, count=1G) serviced from the cache >> compared to: >> request A (offset=0, count=1G) populates extents (0-.5G data, .5G-1.0G >> hole) >> request B (offset=.5G, count=1G) treated as cache miss >> >> It s...
2014 Aug 22
2
Re: libguest-test-tool error report
Hello Rich, I figured out how to stop the VB services. I then deleted /dev/kvm. Then I ran libguestfs-test-tool and it said 'OK'. Then I ran virt-resize and it was unhappy that /dev/kvm no longer existed. I have attached the debug output for your review. Thanks, Mark Husted 770-236-1242 -----Original Message----- From: Richard W.M. Jones
2017 Oct 30
0
Poor gluster performance on large files.
...n doing for the last decade. :( > > If anyone can please help me understand where I might be going wrong it > would be absolutely wonderful! > > Server 1: > Single E5-1620 v2 > Ubuntu 14.04 > glusterfs 3.10.5 > 16GB Ram > 24 drive array on LSI raid > Sustained >1.5GB/s to XFS (77TB) > > Server 2: > Single E5-2620 v3 > Ubuntu 16.04 > glusterfs 3.10.5 > 32GB Ram > 36 drive array on LSI raid > Sustained >2.5GB/s to XFS (164TB) > > Speed tests are done locally with a single thread (dd) or 4 threads > (iozone) using my standard...
2017 Oct 09
4
[PATCH v16 3/5] virtio-balloon: VIRTIO_BALLOON_F_SG
On Sat, Sep 30, 2017 at 12:05:52PM +0800, Wei Wang wrote: > +static inline void xb_set_page(struct virtio_balloon *vb, > + struct page *page, > + unsigned long *pfn_min, > + unsigned long *pfn_max) > +{ > + unsigned long pfn = page_to_pfn(page); > + > + *pfn_min = min(pfn, *pfn_min); > + *pfn_max = max(pfn, *pfn_max); > +
2017 Oct 27
0
Poor gluster performance on large files.
...en doing for the last decade. :( > > If anyone can please help me understand where I might be going wrong it would be absolutely wonderful! > > Server 1: > Single E5-1620 v2 > Ubuntu 14.04 > glusterfs 3.10.5 > 16GB Ram > 24 drive array on LSI raid > Sustained >1.5GB/s to XFS (77TB) > > Server 2: > Single E5-2620 v3 > Ubuntu 16.04 > glusterfs 3.10.5 > 32GB Ram > 36 drive array on LSI raid > Sustained >2.5GB/s to XFS (164TB) > > Speed tests are done locally with a single thread (dd) or 4 threads (iozone) using my standard 6...
2017 Oct 10
0
[PATCH v16 3/5] virtio-balloon: VIRTIO_BALLOON_F_SG
...ld >> remove the big balloon lock: >> >> It would not be necessary to have the inflating and deflating run at the >> same time. >> For example, 1st request to inflate 7G RAM, when 1GB has been given to >> the host (so 6G left), the >> 2nd request to deflate 5G is received. Instead of waiting for the 1st >> request to inflate 6G and then >> continuing with the 2nd request to deflate 5G, we can do a diff (6G to >> inflate - 5G to deflate) immediately, >> and get 1G to inflate. In this way, all that the driver will do is simply...
2012 Mar 23
0
Dovecot v2.1.3 (f30437ed63dc) Auth/Login Issues
...h: Debug: client in: AUTH#0111#011PLAIN#011service=imap#011secured#011lip=188.138.0.199#011rip=80.187.102.243#011lport=143#011rport=62388#011resp=<hidden> Mar 23 10:25:45 spectre dovecot: auth: Debug: cache(tlx at leuxner.net,80.187.102.243): hit: <hidden>#011userdb_quota_rule=*:storage=5G#011userdb_acl_groups=PublicMailboxAdmins Mar 23 10:25:45 spectre dovecot: auth: Debug: client out: OK#0111#011user=tlx at leuxner.net Mar 23 10:25:45 spectre dovecot: auth: Debug: master in: REQUEST#0113958898689#0117266#0111#011bfc44f32051961b909e2b458440d645f Mar 23 10:25:45 spectre dovecot: auth...
2013 Jul 12
1
getting quota error when accessing private namespace
...r mailbox "archived mails" { auto = subscribe driver = special_use = \Archive } prefix = Archives/ separator = / subscriptions = yes type = private } plugin { quota = maildir:User quota:ns= quota2 = maildir:Archives quota:ns=Archives/ quota2_rule = *:storage=5G quota_rule = *:storage=1G quota_rule2 = Trash:storage=+200M } I can access the new namespace without any problems but every time a folder in this namespace is accessed, I get the following error messages in mail.err log: dovecot: imap(testuser): Error: quota: Unknown namespace: Archives/...
2019 May 15
6
[nbdkit PATCH v2] Introduce cacheextents filter
This filter caches the last result of the extents() call and offers a nice speed-up for clients that only support req_on=1, in combination with plugins like vddk, which have no overhead for returning information for multiple extents in one call, but for which that call is very time-consuming. A quick test showed that on a fast connection, with a sparsely allocated 16G disk with an OS installed, `qemu-img map` runs
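
The test described maps to something like the following (plugin parameters elided; the NBD URI form is illustrative):

    nbdkit --filter=cacheextents vddk file=guest.vmdk ... &
    qemu-img map --output=json nbd://localhost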
2011 Jul 22
4
VM backup problem
Hi, I use the following steps for LV backup. * lvcreate -L 5G -s -n lv_snapshot /dev/VG_XenStorage-7b010600-3920-5526-b3ec-6f7b0f610f3c/VHD-a2db885c-9ad0-46c3-b2c3-a30cb71d83f8 lv_snapshot created* This command worked properly. Then I issue the kpartx command: kpartx -av */dev/VG_XenStorage-7b010600-3920-5526-b3e...
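
The usual continuation of those steps, sketched with placeholders (<VG> stands for the long VG_XenStorage-... name; kpartx prints the real /dev/mapper names it creates):

    mount /dev/mapper/<mapped-partition> /mnt/snap    # mount a mapped partition
    tar -czf vm-backup.tar.gz -C /mnt/snap .          # copy the data out
    umount /mnt/snap
    kpartx -dv /dev/<VG>/lv_snapshot                  # remove the partition mappings
    lvremove -f /dev/<VG>/lv_snapshot                 # drop the snapshot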