Displaying 5 results from an estimated 5 matches for "2b50".
2014 Feb 09
2
GeForce 6100 (NV4E) & nouveau regression in 3.12
...00-2006 Netfilter Core Team
[ 25.634912] nf_conntrack version 0.5.0 (3420 buckets, 13680 max)
[ 25.688528] ip_tables: (C) 2000-2006 Netfilter Core Team
[ 28.201103] NET: Registered protocol family 17
[ 33.914653] SFW2-INext-DROP-DEFLT IN=eth0 OUT= MAC= SRC=fe80:0000:0000:0000:0213:8fff:fe78:2b50 DST=ff02:0000:0000:0000:0000:0000:0000:00fb LEN=456 TC=0 HOPLIMIT=255 FLOWLBL=0 PROTO=UDP SPT=5353 DPT=5353 LEN=416
[ 46.282711] SFW2-INext-DROP-DEFLT IN=eth0 OUT= MAC= SRC=fe80:0000:0000:0000:0213:8fff:fe78:2b50 DST=ff02:0000:0000:0000:0000:0000:0000:00fb LEN=84 TC=0 HOPLIMIT=255 FLOWLBL=0 PROT...
2018 Dec 06
0
// RESEND // 7.6: Software RAID1 fails the only meaningful test
...sosreport.txt
>
> [    0.000000] localhost kernel: Command line:
> BOOT_IMAGE=/boot/vmlinuz-0-rescue-4456807582104f8ab12eb6411a80b31a
> root=UUID=1b0d6168-50f1-4ceb-b6ac-85e55206e2d4 ro crashkernel=auto
> rd.md.uuid=ea5fede7:dc339c3b:81817fc4:aba0bd89
> rd.md.uuid=7a90faed:4e5a2b50:9baa8249:21a6c3da rhgb quiet
> /dev/sda1: UUID="6f900f10-d951-2ae7-712c-a5710d8d7316"
> UUID_SUB="541c8849-58bd-8309-96fd-b45faf0d40bb" LABEL="localhost:home"
> TYPE="linux_raid_member"
> /dev/sda2: UUID="ea5fede7-dc33-9c3b-8181-7fc4aba0bd...
2018 Dec 05
2
// RESEND // 7.6: Software RAID1 fails the only meaningful test
(Resend: my original message didn't show up; was it too big? I posted one of
the output files to a website instead.)
The point of RAID1 is to allow for continued uptime in a failure scenario.
When I assemble servers with RAID1, I set up two HDDs to mirror each other,
and test by booting from each drive individually to verify that it works. For
the OS partitions, I use simple partitions and
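The mirror-then-boot-test procedure described above can be sketched with mdadm; the device names, array number, and partition layout here are illustrative assumptions, not taken from the original post.

```shell
# Hypothetical sketch: mirroring OS partitions with mdadm, assuming
# two disks /dev/sda and /dev/sdb with identical partition layouts.

# Create a RAID1 array from the two OS partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# Confirm both members are active and in sync before testing
mdadm --detail /dev/md0
cat /proc/mdstat

# The meaningful test from the post: power off, detach one drive,
# and verify the system still boots from the survivor; then repeat
# with the drives swapped.
```

The point of booting each drive individually is that a mirror which syncs correctly but cannot boot degraded provides no real uptime guarantee.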
2007 Dec 09
8
zpool kernel panics.
Hi Folks,
I've got a 3.9 TB zpool, and it is causing kernel panics on my Solaris
10 280R (SPARC) server.
The message I get on panic is this:
panic[cpu1]/thread=2a100a95cc0: zfs: freeing free segment
(offset=423713792 size=1024)
This seems to come about when the zpool is being used or being
scrubbed - about twice a day at the moment. After the reboot, the
scrub seems to have
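The scrub workflow the poster refers to can be sketched as follows; the pool name "tank" is an assumption for illustration, not the poster's actual pool.

```shell
# Hypothetical sketch of the scrub cycle mentioned above,
# assuming a pool named "tank".

zpool scrub tank        # start a scrub in the background
zpool status -v tank    # check scrub progress and any reported errors
```

A scrub walks every allocated block and verifies checksums, which is why a pool with corrupted space-map metadata (the "freeing free segment" panic above) can panic precisely when a scrub or heavy use touches the damaged region.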
2012 Jun 24
0
nouveau _BIOS method
...b10: b0 00 01 02 47 01 b4 00 b4 00 01 02 47 01 b8 00 ....G.......G...
2b20: b8 00 01 02 47 01 bc 00 bc 00 01 02 47 01 d0 04 ....G.......G...
2b30: d0 04 01 02 22 04 00 79 00 5b 82 25 4d 41 54 48 ...."..y.[.%MATH
2b40: 08 5f 48 49 44 0c 41 d0 0c 04 08 5f 43 52 53 11 ._HID.A...._CRS.
2b50: 10 0a 0d 47 01 f0 00 f0 00 01 01 22 00 20 79 00 ...G.......". y.
2b60: 5b 82 43 0c 4c 44 52 43 08 5f 48 49 44 0c 41 d0 [.C.LDRC._HID.A.
2b70: 0c 02 08 5f 55 49 44 0a 02 08 5f 43 52 53 11 46 ..._UID..._CRS.F
2b80: 0a 0a a2 47 01 2e 00 2e 00 01 02 47 01 4e 00 4e ...G.......G.N.N
2b9...