Displaying 7 results from an estimated 7 matches for "problemtyp".
2009 Oct 10
2
[Bug 447586] [NEW] megatec_usb does not work anymore with a Trust PW-4130M UPS
...d from Hardy to Intrepid to Jaunty. After
> upgrading to Intrepid, the megatec_usb driver stopped recognizing my
> Trust PW-4130M UPS; it still does not work on Jaunty. It worked without
> any problems in Hardy.
>
> I tried the megatec_usb driver from Hardy, and it detects the UPS.
>
> ProblemType: Bug
> Architecture: amd64
> DistroRelease: Ubuntu 9.04
> Package: nut 2.4.1-2ubuntu4 [modified: usr/share/nut/driver.list
> lib/nut/megatec_usb]
> SourcePackage: nut
> Uname: Linux 2.6.28-15-server x86_64
>
> ** Affects: nut (Ubuntu)
> Importance: Undecided
>...
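A quick way to reproduce the detection failure outside the init scripts is to run the driver by hand with debugging enabled. A minimal sketch, assuming a ups.conf section named "trust" (the section name is illustrative; the driver path comes from the Package line above):

# Stop any running driver instance, then start megatec_usb in the
# foreground with verbose debug output to see why detection fails.
upsdrvctl stop
/lib/nut/megatec_usb -a trust -DDD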
2010 Oct 18
0
[Bug 662435] [NEW] megatec_usb driver stopped working after upgrade from 8.04 to 10.04
...teractive VI 1400"
> serial = "73712C00008"
>
> I also tested the subdrivers, without success.
>
> I have now downgraded the nut package to nut_2.2.2-6ubuntu1_amd64.deb
> and the driver works again. Please note that the bug is in version
> 2.4.3, not 2.2.2.
>
> ProblemType: Bug
> DistroRelease: Ubuntu 10.04
> Package: nut 2.2.2-6ubuntu1
> ProcVersionSignature: Ubuntu 2.6.32-25.44-server 2.6.32.21+drm33.7
> Uname: Linux 2.6.32-25-server x86_64
> Architecture: amd64
> Date: Mon Oct 18 01:29:57 2010
> ProcEnviron:
> LANG=de_DE.UTF-8
> SHELL...
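When a downgrade is the only workaround, the package can be held so a later upgrade does not pull the broken version back in. A minimal sketch, using the .deb named in the report:

# Install the known-good 2.2.2 package directly...
dpkg -i nut_2.2.2-6ubuntu1_amd64.deb
# ...and put the package on hold so apt leaves it alone.
echo "nut hold" | dpkg --set-selections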
2024 May 23
8
[Bug 1752] New: iptables-save not showing default chains
...:POSTROUTING ACCEPT [253:33027]
COMMIT
# Completed on Sun May 12 05:21:20 2024
# Generated by iptables-save v1.8.4 on Sun May 12 05:21:20 2024
*nat
:PREROUTING ACCEPT [2:552]
:INPUT ACCEPT [1:64]
:POSTROUTING ACCEPT [52:5283]
:OUTPUT ACCEPT [52:5283]
COMMIT
# Completed on Sun May 12 05:21:20 2024
ProblemType: Bug
DistroRelease: Ubuntu 24.04
Package: iptables 1.8.10-3ubuntu2
ProcVersionSignature: Ubuntu 6.8.0-31.31-generic 6.8.1
Uname: Linux 6.8.0-31-generic x86_64
ApportVersion: 2.28.1-0ubuntu2
Architecture: amd64
CasperMD5CheckResult: unknown
CloudArchitecture: x86_64
CloudID: nocloud
CloudName: unkn...
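To tell whether the built-in chains are genuinely absent or merely missing from the dump, the live per-table listing can be compared against the save output. A minimal sketch (the table names are the standard built-ins, not taken from the truncated report):

# List each table's chains and policies directly from the kernel...
iptables -t filter -S
iptables -t nat -S
# ...then dump a single table to check whether its chain headers appear.
iptables-save -t nat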
2017 Jun 29
1
setting gfid on .trashcan/... failed - total outage
...s not a core dump: File
format not recognised
(gdb)
[ 14:48:30 ] - root@gl-master-03 ~ $ file
/var/crash/_usr_sbin_glusterfsd.0.crash
/var/crash/_usr_sbin_glusterfsd.0.crash: ASCII text, with very long lines
[ 14:48:37 ] - root@gl-master-03 ~ $ head
/var/crash/_usr_sbin_glusterfsd.0.crash
ProblemType: Crash
Architecture: amd64
Date: Fri Jun 23 16:35:13 2017
DistroRelease: Ubuntu 16.04
ExecutablePath: /usr/sbin/glusterfsd
ExecutableTimestamp: 1481112595
ProcCmdline: /usr/sbin/glusterfsd -s gl-master-03-int --volfile-id
mvol1.gl-master-03-int.brick1-mvol1 -p
/var/lib/glusterd/vols/mvol1/run/gl...
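As the session shows, the file under /var/crash is an apport report (plain text), not a core dump, which is why gdb rejects it. The embedded core has to be unpacked first; a minimal sketch, with /tmp/glusterfsd-crash as an assumed scratch directory:

# Unpack the apport report; the embedded core is written out as "CoreDump".
apport-unpack /var/crash/_usr_sbin_glusterfsd.0.crash /tmp/glusterfsd-crash
# Load the core against the matching binary.
gdb /usr/sbin/glusterfsd /tmp/glusterfsd-crash/CoreDump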
2017 Jun 29
0
setting gfid on .trashcan/... failed - total outage
On Wed, 2017-06-28 at 14:42 +0200, Dietmar Putz wrote:
> Hello,
>
> recently we twice had a partial gluster outage followed by a total
> outage of all four nodes. Looking into the gluster mailing list I found
> a very similar case in
> http://lists.gluster.org/pipermail/gluster-users/2016-June/027124.html
If you are talking about a crash happening on bricks, were you
2015 Mar 14
3
[Bug 2365] New: openssh client ignores -o Tunnel=ethernet option, creating an IP tunnel device instead of an ethernet tap device
...500
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
root@andersen-Presario-F500-GF606UA-ABA:~#
I'd do a bisect search if I had a way to build each test's result into a
package that I could cleanly remove from my system (so I can get back to
a working version and get things done :)
ProblemType: Bug
DistroRelease: Ubuntu 14.04
Package: openssh-client 1:6.6p1-2ubuntu1
ProcVersionSignature: Ubuntu 3.13.0-24.46-generic 3.13.9
Uname: Linux 3.13.0-24-generic x86_64
NonfreeKernelModules: nvidia zfs zunicode zavl zcommon znvpair
ApportVersion: 2.14.1-0ubuntu3
Architecture: amd64
Date: Sun May 4...
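For reference, the two invocations below should create a layer-2 tap device and a layer-3 tun device respectively; the reported bug is that the first behaves like the second. A minimal sketch (the host name is a placeholder):

# Request an ethernet tunnel; should create tap0 on both ends.
ssh -o Tunnel=ethernet -w 0:0 root@remote-host
# The point-to-point default for comparison; creates tun0 instead.
ssh -o Tunnel=point-to-point -w 0:0 root@remote-host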
2017 Jun 28
2
setting gfid on .trashcan/... failed - total outage
Hello,
recently we twice had a partial gluster outage followed by a total
outage of all four nodes. Looking into the gluster mailing list I found
a very similar case in
http://lists.gluster.org/pipermail/gluster-users/2016-June/027124.html
but I'm not sure whether this issue is fixed...
Even though this outage happened on glusterfs 3.7.18, which has received
no more updates since ~3.7.20, I would kindly ask
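Since the failures center on the .trashcan directory, checking whether the trash translator is enabled on the volume (and disabling it as a stop-gap) may be worth trying. A minimal sketch, assuming the volume name mvol1 from the crash report's ProcCmdline:

# Show the current state of the trash feature on the volume...
gluster volume get mvol1 features.trash
# ...and disable it as a temporary workaround.
gluster volume set mvol1 features.trash off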