search for: apks

Displaying 20 results from an estimated 158 matches for "apks".

2016 Jan 26
1
[PATCH] customize: Add support for the APK (Alpine Linux) package manager.
--- customize/customize_run.ml | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/customize/customize_run.ml b/customize/customize_run.ml index ed3c818..48475af 100644 --- a/customize/customize_run.ml +++ b/customize/customize_run.ml @@ -97,6 +97,11 @@ exec >>%s 2>&1 let guest_install_command packages = let quoted_args = String.concat " " (List.map
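
For context, a minimal sketch of the kind of install command such a patch would have run inside an Alpine guest; the package names below are placeholders and the exact command the patch generates may differ:

    # Refresh the package index, then install the requested packages (sketch).
    apk update && apk add pkg1 pkg2
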
2015 Aug 12
0
[PATCH 2/2] inspect: support the APK package manager and its format
Associate the Alpine Linux distribution with it. --- generator/actions.ml | 4 ++-- src/guestfs-internal.h | 2 ++ src/inspect-apps.c | 1 + src/inspect-fs.c | 10 ++++++++-- src/inspect.c | 2 ++ 5 files changed, 15 insertions(+), 4 deletions(-) diff --git a/generator/actions.ml b/generator/actions.ml index 26cc0da..d0d6a21 100644 --- a/generator/actions.ml +++
2016 Mar 07
1
[PATCH] inspect: list applications with APK
Implement the helper function for guestfs_inspect_list_applications2 to be able to parse the list of installed applications with the APK package manager (used on Alpine Linux). --- src/inspect-apps.c | 121 +++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 121 insertions(+) diff --git a/src/inspect-apps.c b/src/inspect-apps.c index b54cf07..78c32bf 100644 ---
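
For reference, Alpine keeps its installed-package database at /lib/apk/db/installed as blank-line-separated records of single-letter fields. A hedged shell sketch of pulling out name/version pairs, roughly the information such a helper has to parse (assuming P: is the package name and V: the version):

    # Print "name version" for each record in the APK installed database (sketch).
    awk -F: '/^P:/ { name = $2 } /^V:/ { print name, $2 }' /lib/apk/db/installed
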
2006 May 17
1
Response to query re: calculating intraclass correlations
Karl, If you use one of the specialized packages to calculate your ICC, make sure that you know what you're getting. (I haven't checked the packages out myself, so I don't know either.) You might want to read David Futrell's article in the May 1995 issue of Quality Progress where he describes six different ways to calculate ICCs from the same data set, all with different
2018 Apr 18
2
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Thanks for the link. Looking at the status of that doc, it isn't quite ready yet, and there's no mention of the option. Does it mean that whatever is ready now in 4.0.1 is incomplete but can be enabled via granular-entry-heal=on, and when it is complete, it'll become the default and the flag will simply go away? Is there any risk enabling the option now in 4.0.1? Sincerely, Artem
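
For readers wondering about the syntax, a sketch of enabling the option via the gluster CLI; <VOLNAME> is a placeholder, and the exact option and sub-command should be verified against your release before applying:

    # Enable granular entry self-heal on a replicate volume (sketch).
    gluster volume set <VOLNAME> cluster.granular-entry-heal on
    # Newer releases also expose it as a heal sub-command.
    gluster volume heal <VOLNAME> granular-entry-heal enable
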
2018 Apr 18
1
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
On 04/18/2018 11:59 AM, Artem Russakovskii wrote: > Btw, I've now noticed at least 5 variations in toggling binary option > values. Are they all interchangeable, or will using the wrong value > not work in some cases? > > yes/no > true/false > True/False > on/off > enable/disable > > It's quite a confusing/inconsistent practice, especially given that
2018 Apr 06
1
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
I restarted rsync, and this has been sitting there for almost a minute, barely moved several bytes in that time: 2014/11/545b06baa3d98/com.google.android.apps.inputmethod.zhuyin-2.1.0.79226761-armeabi-v7a-175-minAPI14.apk 6,389,760 45% 18.76kB/s 0:06:50 I straced each of the 3 processes rsync created and saw this (note: every time there were several seconds of no output, I
2018 Apr 18
0
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Btw, I've now noticed at least 5 variations in toggling binary option values. Are they all interchangeable, or will using the wrong value not work in some cases? yes/no true/false True/False on/off enable/disable It's quite a confusing/inconsistent practice, especially given that many options will accept any value without erroring out/validation. Sincerely, Artem -- Founder, Android
2018 Apr 18
3
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Following up here on a related issue that is very serious for us. I took down one of the 4 replicate gluster servers for maintenance today. There are 2 gluster volumes totaling about 600GB. Not that much data. After the server comes back online, it starts auto-healing, and pretty much all operations on gluster freeze for many minutes. For example, I was trying to run an ls -alrt in a folder with 7300
2018 Apr 17
5
Getting glusterfs to expand volume size to brick size
pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol 3: option shared-brick-count 3 dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol 3: option shared-brick-count 3 dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol 3: option shared-brick-count 3 Sincerely, Artem --
2018 Apr 06
0
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Hi again, I'd like to expand on the performance issues and plead for help. Here's one case which shows these odd hiccups: https://i.imgur.com/CXBPjTK.gifv. In this GIF where I switch back and forth between copy operations on 2 servers, I'm copying a 10GB dir full of .apk and image files. On server "hive" I'm copying straight from the main disk to an attached volume
2018 Apr 06
3
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Hi, I'm trying to squeeze performance out of gluster on four machines (80GB RAM, 20 CPUs each) where Gluster runs on attached block storage (Linode) as 4 replicate bricks, and so far everything I've tried results in sub-optimal performance. There are many files - mostly images, several million - and many operations take minutes; copying multiple files (even if they're small) suddenly freezes up for
2018 Apr 10
2
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
I wish I knew or was able to get a detailed description of those options myself. Here is direct-io-mode: https://serverfault.com/questions/517775/glusterfs-direct-i-o-mode. Like you, I ran tests on a large volume of files, finding that the main delays are in attribute calls, and ended up with those mount options to add performance. I discovered those options basically by googling this user list with
2018 Apr 18
2
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Hi Ravi, Could you please expand on how these would help? By forcing full here, we move the logic from the CPU to network, thus decreasing CPU utilization, is that right? This is assuming the CPU and disk utilization are caused by the differ and not by lstat and other calls or something. > Option: cluster.data-self-heal-algorithm > Default Value: (null) > Description: Select between
2018 Apr 10
0
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Hi Vlad, I actually saw that post already and even asked a question 4 days ago ( https://serverfault.com/questions/517775/glusterfs-direct-i-o-mode#comment1172497_540917). The accepted answer also seems to go against your suggestion to enable direct-io-mode as it says it should be disabled for better performance when used just for file accesses. It'd be great if someone from the Gluster team
2018 Apr 17
0
Getting glusterfs to expand volume size to brick size
To clarify, I was on 3.13.2 previously, recently updated to 4.0.1, and the bug seems to persist in 4.0.1. Sincerely, Artem -- Founder, Android Police <http://www.androidpolice.com>, APK Mirror <http://www.apkmirror.com/>, Illogical Robot LLC beerpla.net | +ArtemRussakovskii <https://plus.google.com/+ArtemRussakovskii> | @ArtemR <http://twitter.com/ArtemR> On Mon, Apr
2019 Oct 04
2
Sieve redirect is broken in 2.3.7.2 - signal 11
Hi, If we use sieve redirect under dovecot 2.3.7.2 we end up with Oct 04 03:30:31 dockerhost docker[12154]: 2019-10-04T03:30:31 53ac2ae27650 postfix: 0605F207B0F36: to=<xxxx at xxxx.xx>, relay=127.0.0.1[127.0.0.1]:10024, delay=1.5, delays=0.36/0/0/1.1, dsn=2.0.0, status=sent (250 2.0.0 from MTA(smtp:[127.0.0.1]:10025): 250 2.0.0 Ok: queued as 6FC89207B0F38) Oct 04 03:30:31 dockerhost
2018 Apr 18
0
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
On 04/18/2018 10:35 AM, Artem Russakovskii wrote: > Hi Ravi, > > Could you please expand on how these would help? > > By forcing full here, we move the logic from the CPU to network, thus > decreasing CPU utilization, is that right? Yes, 'diff' employs the rchecksum FOP, which does a (sha256?) checksum and can consume CPU. So yes, it is sort of shifting the load from CPU
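
A sketch of switching between the two algorithms, assuming the stock option name; <VOLNAME> is a placeholder, and the current value can be checked with 'gluster volume get' first:

    # Copy whole files instead of checksummed diffs during self-heal (sketch).
    gluster volume set <VOLNAME> cluster.data-self-heal-algorithm full
    # Revert to the default behaviour later.
    gluster volume reset <VOLNAME> cluster.data-self-heal-algorithm
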
2018 Apr 10
2
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
You definitely need mount options in /etc/fstab; use the ones from here: http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html. I went on to use local mounts to achieve performance as well. Also, the 3.12 or 3.10 branches would be preferable for production. On Fri, Apr 6, 2018 at 4:12 AM, Artem Russakovskii <archon810 at gmail.com> wrote: > Hi again, > > I'd like to
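
For illustration only, a generic /etc/fstab line for a GlusterFS client mount; the server and volume names are placeholders, and the specific tuning options recommended at the top of this message are in the linked post, not reproduced here:

    # Mount the volume at boot; fall back to a second server for the volfile (sketch).
    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backup-volfile-servers=server2  0 0
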
2018 Apr 17
1
Getting glusterfs to expand volume size to brick size
That might be the reason. Perhaps the volfiles were not regenerated after upgrading to the version with the fix. There is a workaround detailed in [2] for the time being (you will need to copy the shell script into the correct directory for your Gluster release). [2] https://bugzilla.redhat.com/show_bug.cgi?id=1517260#c19 On 17 April 2018 at 09:58, Artem Russakovskii <archon810 at