similar to: Unable to boot CentOS 6 - Segmentation Error

Displaying 20 results from an estimated 3000 matches similar to: "Unable to boot CentOS 6 - Segmentation Error"

2016 May 29
0
Unable to boot CentOS 6 - Segmentation Error
Also, the last message in /var/log/messages before the crash was a long run of NUL bytes: ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@ ... (and so on for the rest of the line)
2006 Jul 10
3
Kernel-utils stupidities (readahead and cpuspeed)
Hi all, I think I've spotted a few stupidities (bugs) in the current version of kernel-utils (kernel-utils-2.4-13.1.80). I'm sure these are all propagated from upstream, but I hope someone could take a quick look to verify this and see whether we can either push complaints upwards or provide local fixes. The kernel-utils package provides several 'kernel-type' functions -
2009 Jul 07
1
Sysctl on Kernel 2.6.18-128.1.16.el5
Sysctl Values
-------------------------------------------
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_window_scaling = 1
# vm.max-readahead = ?
# vm.min-readahead = ?
# HW Controller Off
# max-readahead = 1024
# min-readahead = 256
# Memory over-commit
# vm.overcommit_memory=2
# Memory to
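Values like these are normally persisted in /etc/sysctl.conf and reloaded with sysctl -p; a minimal sketch, assuming the tuned values quoted above are the ones to keep:

    # Append the TCP buffer tuning from the post above to /etc/sysctl.conf
    cat >> /etc/sysctl.conf <<'EOF'
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    net.ipv4.tcp_window_scaling = 1
    EOF
    # Apply without a reboot
    sysctl -p

Note that vm.max-readahead / vm.min-readahead were 2.4-kernel sysctls; on a 2.6 kernel such as 2.6.18-128.1.16.el5 they no longer exist, and per-device readahead is set with blockdev --setra instead.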
2010 Mar 02
2
crash when using the cp command to copy files off a striped gluster dir but not when using rsync
Hi, I've got this strange problem where a striped endpoint will crash when I try to use cp to copy files off of it, but not when I use rsync:
[user@gluster5 user]$ cp -r Python-2.6.4/ ~/tmp/
cp: reading `Python-2.6.4/Lib/lib2to3/tests/data/fixers/myfixes/__init__.py': Software caused connection abort
cp: closing
2019 Sep 20
4
Re: [PATCH v4 07/12] v2v: nbdkit: Add the readahead filter unconditionally if it is available.
On Fri, Sep 20, 2019 at 10:28:18AM +0100, Richard W.M. Jones wrote:
> The readahead filter is a self-configuring filter that makes
> sequential reads faster when the plugin is slow (and all of the
> plugins we use here are always slow).
>
> I observed the behaviour of the readahead filter with our qcow2
> overlay when converting a guest from a vCenter source. Even when
> doing
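For context, nbdkit filters stack on the command line in front of the plugin; a minimal sketch of what the readahead filter looks like over the curl plugin (the URL is a placeholder):

    # Batch sequential reads from a slow HTTPS source through the
    # readahead filter (assumes an nbdkit build that includes it)
    nbdkit --filter=readahead curl url=https://vcenter.example.com/folder/guest-flat.vmdk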
2010 Jul 28
6
Read ahead / prefetching
Hi, I am trying to educate myself on the prefetching/readahead algorithm for Lustre's reads. For starters I only have two simple questions.
1 - Does Lustre detect linear or random I/O patterns, or does it always trigger readahead?
2 - If readahead is triggered, how many pages are read in addition to what is necessary?
Thanks, Arifa.
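For anyone digging into the same questions, the client-side behaviour is at least observable; a sketch using the llite tunables, assuming a client where lctl is available (parameter names can vary between Lustre versions):

    # Current readahead limits on the client
    lctl get_param llite.*.max_read_ahead_mb
    lctl get_param llite.*.max_read_ahead_per_file_mb
    # Hit/miss counters that show whether sequential detection triggered
    lctl get_param llite.*.read_ahead_stats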
2019 Apr 01
1
Readahead in the nbdkit curl plugin
I'm trying to replicate the features of the qemu curl plugin in nbdkit's curl plugin, in order that we can use nbdkit in virt-v2v to access VMware servers. I've implemented everything else so far [not posted yet] except for readahead. To my surprise actually, qemu's curl driver implements readahead itself. I thought it was a curl feature. I'm not completely clear _how_ it
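For comparison, qemu exposes the curl driver's readahead as a block option; a hedged sketch of setting it from the command line, with the URL and size as placeholders:

    # Probe an HTTPS-backed image with 1 MiB of driver-level readahead
    qemu-img info --image-opts \
        driver=raw,file.driver=curl,file.url=https://host.example.com/disk.img,file.readahead=1048576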
2020 Jun 19
1
Re: [PATCH nbdkit] v2v: Disable readahead for VMware curl sources too (RHBZ#1848862).
On 6/19/20 7:47 AM, Richard W.M. Jones wrote:
> This appears to be the cause of timeouts during the conversion step
> where VMware VCenter server's Tomcat HTTPS server stops responding to
> requests (or rather, responds only with 503 errors). The server later
> recovers and in fact because of the retry filter the conversion
> usually succeeds, but I found that we can avoid the
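The retry filter mentioned above stacks like any other nbdkit filter; a minimal sketch of a curl source wrapped in retry, with readahead deliberately left off (the hostname is a placeholder):

    # Retry transient 503s from the server instead of failing the copy;
    # note there is no --filter=readahead here
    nbdkit --filter=retry curl url=https://vcenter.example.com/folder/guest-flat.vmdk retries=5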
2007 Jun 22
1
Nagging performance issues with Vista
Hi All, I've got some performance issues with Samba and Vista that I just can't seem to figure out. Googling and fiddling have all been in vain up until now, so I'm not sure what I can do other than wait for Samba 4, but maybe someone here can find something I've missed. First off, the server is an Athlon 64 1.8GHz running Gentoo 2006.1, tested with both Samba 3.0.24 and
2016 Oct 30
7
Power Cut
Dear All, I am using a CentOS server for a CDR billing and mediation device on a remote network. I am experiencing a problem that I suspect comes from mains power cuts at the remote site. The power supply to the remote site comes from a battery charger that is automatically switched into circuit during a mains power cut, but it cannot provide adequate power for more than 2 hours. I am
2019 Apr 11
3
nbdkit, VDDK, extents, readahead, etc
As I've spent really too long today investigating this, I want to document it in a public email, even though there's nothing really that interesting here. One thing you find from searching Google for VDDK 6.7 / VixDiskLib_QueryAllocatedBlocks issues is that we must be one of the very few users out there. And the other thing is that it's quite broken. All testing was done using
2007 Oct 18
1
Vista performance (uggh)
Issue: Vista reads slowly from a samba server. This appears to pop up periodically here and elsewhere. My samba.conf file has: [homes] ... vfs objects = readahead As suggested elsewhere. Writes are approximately 17-18MB/s, which is acceptable. Reads are in the 8MB/s range, which is appallingly slow. Using linux smbclient and windows XP clients I can read at 25+MB/s. I've enabled vfs
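For anyone reproducing this, the share stanza plus a parse check; a sketch, assuming the stock vfs_readahead module is installed and [homes] is not already defined in the file:

    # Enable the readahead VFS module with its two tuning knobs
    cat >> /etc/samba/smb.conf <<'EOF'
    [homes]
       vfs objects = readahead
       readahead:length = 0x80000
       readahead:offset = 0x80000
    EOF
    # Confirm Samba still parses the config
    testparm -s >/dev/null && echo "smb.conf OK"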
2011 Jan 04
16
[PATCH v2 0/5] add new ioctls to do metadata readahead in btrfs
Hi, We have file readahead to do async file reads, but no metadata readahead. For a list of files, their metadata is stored in fragmented disk space, and metadata reads are sync operations, which greatly reduces the efficiency of readahead. The patches try to add metadata readahead for btrfs. In btrfs, metadata is stored in btree_inode. Ideally, if we could hook the inode to a fd so we could use
2011 Jun 17
0
[LLVMdev] LLVM-based address sanity checker
On 17 June 2011 09:14, Kostya Serebryany <kcc at google.com> wrote:
> Maybe the fallback code should just use a function call. Much simpler for
> documentation purposes.
Sounds good.
> On 32-bit, the shadow region is:
> [0x28000000, 0x3fffffff] HighShadow
> [0x24000000, 0x27ffffff] ShadowGap
> [0x20000000, 0x23ffffff] LowShadow
>
> This is 0.5G total. So, I mmap all these
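Those endpoints follow from the classic 32-bit ASan mapping Shadow = (Addr >> 3) + 0x20000000; a quick check of the arithmetic in shell:

    # LowShadow is the shadow of application memory [0x00000000, 0x1fffffff]
    printf '0x%08x\n' $(( (0x00000000 >> 3) + 0x20000000 ))   # 0x20000000
    printf '0x%08x\n' $(( (0x1fffffff >> 3) + 0x20000000 ))   # 0x23ffffff
    # HighShadow is the shadow of [0x40000000, 0xffffffff]
    printf '0x%08x\n' $(( (0x40000000 >> 3) + 0x20000000 ))   # 0x28000000
    printf '0x%08x\n' $(( (0xffffffff >> 3) + 0x20000000 ))   # 0x3fffffff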
2016 Sep 06
5
[SOLVED] Re: Feature Request: what about "core stop panic" ?
On Tue, Sep 6, 2016 at 1:55 AM, Olivier <oza.4h07 at gmail.com> wrote:
> Hello,
>
> After testing "pkill -SEGV -f /usr/sbin/asterisk" on a Debian Jessie
> platform, I've got several questions:
>
> 1. When I issue a "cd /tmp; asterisk -cvvvvvvvvvvvg -U asterisk -G
> asterisk" command, and then issue a "pkill -SEGV asterisk" command,
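For reference, getting a usable core out of that sequence requires the core size limit to be raised in the shell that starts the daemon; a minimal sketch:

    # Allow unlimited core files, start asterisk from /tmp so the core
    # lands there (-g asks asterisk to dump core on crash)
    ulimit -c unlimited
    cd /tmp
    asterisk -cvvvvvvvvvvvg -U asterisk -G asterisk
    # ...then, from a second terminal:
    pkill -SEGV -f /usr/sbin/asterisk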
2019 Jan 25
2
C 7 and gssproxy
Ok, folks, I brought this up some time ago, and got no replies. We have a good number of systems - > 100 - and we use sssd. On the C 7 boxen, which is most of them, gssproxy *frequently* (like once a day or so) dies with a SEGV. It restarts fine. Dies again eventually. ARE other people seeing this? If so, I guess we get to file a bug report with upstream. Speaking as an old C
2017 Jun 07
5
C7, systemd, say what?!
I just updated a system - as in minutes ago - and logged back in after it rebooted, and this is in dmesg:
[   88.202272] systemd-readahead[484]: open(/var/tmp/dracut.fP4yj1/initramfs/usr/bin/loginctl) failed: Too many levels of symbolic links
[   88.202515] systemd-readahead[484]: open(/var/tmp/dracut.fP4yj1/initramfs/usr/lib/systemd/system/dracut-emergency.service) failed: Too many levels of symbolic
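If the boot-time readahead collector is not wanted at all, the usual workaround on systemd versions that still ship it is to mask its units; a sketch, assuming the unit names below exist on the installed systemd:

    # Stop the readahead collector/replayer from running on future boots
    systemctl mask systemd-readahead-collect.service systemd-readahead-replay.service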
2008 Dec 12
2
Really slow performance
I am seeing extremely slow performance with glusterfs.
OS: CentOS 5
glusterfs version: glusterfs-1.3.9-1
Server configuration:
##############################################
###  GlusterFS Server Volume Specification  ##
##############################################
#### CONFIG FILE RULES:
### "#" is comment character.
### - Config file is case sensitive
### - Options within a
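On the client side, GlusterFS 1.3 exposed readahead as a performance translator in the volume spec; a hedged sketch of the stanza, where 'client' stands in for whatever the protocol/client volume is actually named:

    cat >> /etc/glusterfs/glusterfs-client.vol <<'EOF'
    volume readahead
      type performance/read-ahead
      option page-size 128kB    # size of each readahead page
      option page-count 4       # pages kept in flight per file
      subvolumes client         # placeholder: your protocol/client volume
    end-volume
    EOF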
2011 Jun 10
6
[PATCH v2 0/6] btrfs: generic readahead interface
This series introduces a generic readahead interface for btrfs trees. The intention is to use it to speed up scrub in a first run, but balance is another hot candidate. In general, every tree walk could be accompanied by a readahead. Deletion of large files comes to mind, where the fetching of the csums takes most of the time. Also the initial build-ups of free-space-caches and
2020 Jun 19
2
[PATCH nbdkit] v2v: Disable readahead for VMware curl sources too (RHBZ#1848862).
I'm still testing this fix, so let's hold off the review for the moment. Also it may be better to specifically identify problematic servers rather than disabling this for every curl source. eg. I suspect that the problem is the Java server used by VCenter, so we might think about only disabling readahead for that single case. Rich.