search for: ec60

Displaying 7 results from an estimated 7 matches for "ec60".

2012 Feb 13
1
multi-regression with more than 50 independent variables
...<- lm(formula = CaV ~ SHG + TrD + CrH + SPAD + FlN + FrN + YT + LA + LDMP + B + Cu + Zn + Mn + Fe + K + P + N + Clay30 + Silt30 + Sand30 + Clay60 + Silt60 + Sand60 + ESP30 + NaEx30 + CEC30 + Cl30 + SAR30 + KSol30 + NaSol30 + CaMgSol3 + ZnAv30 + FeAv30 + OC30 + PAv30 + KAv30 + TNV30 + pH30 + EC30 + SP30 + ESP60 + NaEx60 + CEC60 + Cl60 + SAR60 + KSol60 + NaSol60 + CaMgSol6 + ZnAv60 + FeAv60 + OC60 + PAv60 + KAv60 + TNV60 + pH60 + EC60 + SP60, data = mlr.data) summary(mlr.output) Regards, Reza -------------- next part -------------- CaV SHG TrD CrH SPAD FlN FrN YT LA LDMP B Cu Zn Mn Fe K P N Clay30 Silt30 Sand30 Clay60 Silt60 Sand6...
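The snippet above fits an ordinary least-squares model of CaV on roughly 58 soil and plant predictors (measurements at 30 cm and 60 cm depth). As a hedged illustration only, with invented toy data rather than the poster's dataset, the same kind of multi-predictor OLS fit can be sketched in Python with numpy (R's lm() adds the intercept automatically; here we prepend a column of ones):

```python
import numpy as np

def fit_ols(X, y):
    """Fit y = intercept + X @ coeffs by least squares.

    A column of ones is prepended so beta[0] is the intercept,
    mirroring R's lm() default behaviour.
    """
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta  # beta[0] is the intercept, beta[1:] the slopes

# Toy example with 5 predictors (the original model has ~58,
# e.g. EC30, EC60, pH30, pH60, ...). Noiseless data, so the
# true coefficients are recovered exactly.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_beta = np.array([2.0, 1.0, -1.0, 0.5, 0.0, 3.0])  # intercept first
y = np.column_stack([np.ones(100), X]) @ true_beta
beta = fit_ols(X, y)
```

Note that with 50+ predictors and few observations the design matrix can be rank-deficient; R reports NA for aliased coefficients, while lstsq silently returns the minimum-norm solution.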
2018 Aug 20
0
[PATCH 1/2] v2v: rhv-upload-plugin: Handle send send failures
...upload-plugin.py: pwrite: error: ('%s: %d %s: %r', 'could not write sector offset 1841154048 size 1024', 403, 'Forbidden', b'{"explanation": "Access was denied to this resource.", "code": 403, "detail": "Ticket u\'6071e16f-ec60-4ff9-a594-10b0faae3617\' expired", "title": "Forbidden"}') --- v2v/rhv-upload-plugin.py | 22 ++++++++++++++++------ 1 file changed, 16 insertions(+), 6 deletions(-) diff --git a/v2v/rhv-upload-plugin.py b/v2v/rhv-upload-plugin.py index 2d686c2da..7327ea4c5 100644...
2018 Aug 20
3
[PATCH 0/2] v2v: rhv-upload-plugin: Improve error handling
These patches improve error handling when a PUT request fails, by including the error response from the oVirt server. This makes it easier to debug issues after the oVirt server logs have been rotated. Nir Soffer (2): v2v: rhv-upload-plugin: Handle send send failures v2v: rhv-upload-plugin: Fix error formatting v2v/rhv-upload-plugin.py | 24 +++++++++++++++++------- 1 file changed, 17 insertions(+), 7
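The cover letter above describes attaching the server's error response to the raised error, so the cause (here, an expired ticket) survives even after server-side logs rotate. A minimal sketch of that pattern follows; the names are hypothetical and this is not the actual rhv-upload-plugin code, just the general technique:

```python
class RequestFailed(RuntimeError):
    """PUT request rejected by the server."""

def check_put_response(resp, offset, size):
    """Raise with status, reason, AND the response body.

    resp is any object with .status, .reason, and .read(), such as
    http.client.HTTPResponse. The body is read unconditionally so a
    keep-alive connection stays usable for the next request.
    """
    body = resp.read()
    if resp.status != 200:
        raise RequestFailed(
            "could not write sector offset %d size %d: %d %s: %r"
            % (offset, size, resp.status, resp.reason, body))

# Usage (with a real connection): conn.request("PUT", url, chunk);
# check_put_response(conn.getresponse(), offset, len(chunk))
```

The design point is simply that a bare "403 Forbidden" is useless once the server logs are gone, while the body ("Ticket ... expired") pinpoints the cause.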
2012 May 23
4
Bug#674161: xcp-xapi: 'the device disappeared from xenstore' message during vbd-plug (vm-start)
Package: xcp-xapi Version: 1.3.2-6 Severity: normal Tags: upstream vbd plug to a PV domain causes the following error: The server failed to handle your request, due to an internal error. The given message may give details useful for debugging the problem. message: the device disappeared from xenstore (frontend (domid=4 | kind=vbd | devid=51760); backend (domid=0 | kind=vbd | devid=51760)) (same error
2017 Oct 26
0
not healing one file
Hey Richard, Could you share the following information, please? 1. gluster volume info <volname> 2. getfattr output of that file from all the bricks: getfattr -d -e hex -m . <brickpath/filepath> 3. glustershd & glfsheal logs Regards, Karthik On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try recently released health
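The reply above lists three diagnostic steps, the second of which must be repeated on every brick. As a small hedged helper, with illustrative volume and brick paths that are not from the original thread, the command lines can be assembled like this:

```python
def heal_diagnostic_commands(volname, brick_paths, filepath):
    """Build the diagnostic command lines from the reply above:
    volume info, a per-brick getfattr, and a reminder to collect
    the glustershd and glfsheal logs. Commands are returned as
    strings; run them on the respective brick hosts.
    """
    cmds = ["gluster volume info %s" % volname]
    for brick in brick_paths:
        cmds.append("getfattr -d -e hex -m . %s/%s"
                    % (brick.rstrip("/"), filepath))
    cmds.append("# also collect glustershd and glfsheal logs")
    return cmds

# Hypothetical 3-brick replica volume:
cmds = heal_diagnostic_commands(
    "home",
    ["/bricks/brick1", "/bricks/brick2", "/bricks/brick3"],
    "path/to/file")
```

The getfattr hex dump exposes the trusted.afr.* extended attributes, which is what the maintainers need to see which brick is blaming which.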
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it diagnoses any issues in the setup. Currently you may have to run it on all three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague, will be checking this and respond next
2017 Oct 26
2
not healing one file
...ECC0004C4F568EDC58F469412 (e5612730-d65c-45d4-ba0d-a17d4752b1a1) on home-client-2 [2017-10-25 10:14:12.270263] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/A266C42BFF5CA09E744EA4417219217C55D7A246 (ca1eec60-0709-4eeb-a611-04637ac6b4f6) on home-client-2 [2017-10-25 10:14:12.290510] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/9571F97CDA584D59D516B269323521CC3936D496 (e02a8b9b-6b93-4f12-abdf-cf716e2bc652) o...