Displaying 20 results from an estimated 2000 matches similar to: "rsync differences between Fedora/Ubuntu?"
2011 Mar 11
1
run-init in tmpfs
Dear Sirs,
I have a question regarding the run-init utility.
I'm trying to boot a full Linux system from RAM.
To do this I provide a kernel and initrd from a TFTP server.
The full rootfs is provided through an NFS server and is currently a
cpio archive. That archive shall be copied to the local client and
unpacked into a tmpfs partition. After that, I want to replace the old root with
the root
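A minimal sketch of the copy-and-switch step described above, assuming the NFS export is already mounted at /mnt/nfs and the archive is called rootfs.cpio (both names are placeholders):
mount -t tmpfs -o size=2g tmpfs /new-root     # RAM-backed root for the unpacked archive
cd /new-root
cpio -idm < /mnt/nfs/rootfs.cpio              # unpack the full rootfs into the tmpfs
exec run-init /new-root /sbin/init            # discard the old root and start the real init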
2015 Feb 04
2
[LLVMdev] Question on Machine Combiner Pass
Ping
From: Mandeep Singh Grang [mailto:mgrang at codeaurora.org]
Sent: Tuesday, February 03, 2015 4:34 PM
To: 'llvmdev at cs.uiuc.edu'
Cc: 'ghoflehner at apple.com'; 'apazos at codeaurora.org'; mgrang at codeaurora.org
Subject: Question on Machine Combiner Pass
Hi,
In the file lib/CodeGen/MachineCombiner.cpp I see that in the function
2008 Oct 26
9
vncviewer reporting connection refused (146)
I'm running snv93.
I'm trying to install a Windows HVM domU using the virt-install command:
/usr/bin/virt-install -n windowsts -r 2048 -s 20 -f /data/domU/windowsts --hvm --vnc -c /dev/dsk/c0t0d0s2
The vncviewer is unable to open a connection to the xend VNC server and is failing with the Connection refused (146) error message.
xend is set up with the following properties:
2007 May 09
2
how to create label fo swap space
I just added a partition /dev/sdb3 and used "mkswap
/dev/sdb3" to make it swap space. How can I label
/dev/sdb3 as "SWAP-sdb3"?
What I want is to put entries in /etc/fstab like:
LABEL=SWAP-sdb2   swap   swap   defaults   0 0
LABEL=SWAP-sdb3   swap   swap   defaults   0 0
Thanks
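For reference, mkswap can write the label itself via its -L option, and swapon can activate the partition by that label; a minimal sketch using the device from the post:
mkswap -L SWAP-sdb3 /dev/sdb3    # format the partition as swap and give it the label
swapon -L SWAP-sdb3              # activate it by label, matching the fstab entry above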
2012 Jan 13
5
Can't resize second device in RAID1
Hi,
the situation:
Label: 'RootFS' uuid: c87975a0-a575-405e-9890-d3f7f25bbd96
Total devices 2 FS bytes used 284.98GB
devid 2 size 311.82GB used 286.51GB path /dev/sdb3
devid 1 size 897.76GB used 286.51GB path /dev/sda3
RootFS was created when sda3 was 897.76GB and sdb3 was 311.82GB.
I have now freed up more space on sdb, so I deleted sdb3 and recreated
it occupying all
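After recreating sdb3, the filesystem still has to be told to grow on that device; a hedged sketch of the usual approach (the mount point is a placeholder, the devid comes from the listing above):
btrfs filesystem resize 2:max /mnt    # grow devid 2 (/dev/sdb3) to fill the new partition
btrfs filesystem show /mnt            # confirm the reported size changed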
2011 Feb 08
4
mount the wrong device after system recovery
Hi,
I am recovering a CentOS 5.4 system. I've copied all partitions into the
recovery system. I've installed the grub boot loader. However, the original
system is using /dev/sdb1 for root (/), while the recovery system is
using LVM (/dev/vg0/lv1) for root (/). When the recovery system boots, I get
the panic error:
* Mounting /dev/sdb1 on /sysroot
* Mount: mounting
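A hedged sketch of the kind of fix this usually needs on CentOS 5: point the copied configuration at the LVM volume and rebuild the initrd so it can activate the volume group at boot (the kernel version below is a placeholder):
# in grub.conf and /etc/fstab: change root=/dev/sdb1 to /dev/vg0/lv1
mkinitrd -f /boot/initrd-2.6.18-164.el5.img 2.6.18-164.el5   # regenerate the initrd with LVM support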
2002 Oct 06
1
Ext3 fatal errors with Promise RAID
Today our server had serious problems; we had to restart it and run fsck
manually.
Do you think it's an ext3, kernel (vanilla 2.4.19) or hardware bug?
The errors were (from syslog):
--------------------------------
Oct 6 16:10:25 nou kernel: attempt to access beyond end of device
Oct 6 16:10:25 nou kernel: 72:02: rw=0, want=230266240, limit=4882432
Oct 6 16:10:25 nou kernel: attempt
2008 Jan 14
3
Spot the cyclical relationship
I got the following error, but there's no "cycle". I commented out
File["/dev/sdb3"] and it works, but of course it would choke if I ran it
and the requirement were not met:
err: Could not apply complete catalog: Found cycles in the following
relationships: File[/dev/sdb1] => Exec[echo -e "0,290\n,290\n," | sfdisk
/dev/sdb]
Here's the node:
node
2007 Sep 04
4
RAID + LVM Addition to CentOS 5 Install
Hi All,
I have what I believe to be a pretty basic LVM & RAID setup on my
CentOS 5 machine:
Raid Partitions:
/dev/sda1,sdb1
/dev/sda2,sdb2
/dev/sda3,sdb3
During the install I created a RAID 1 volume md0 out of sda1,sdb1 for
the boot partition and then added sda2,sdb2 to a separate RAID 1
volume as well (md1). I then set up md1 as an LVM physical volume for
volume group 'system'. I
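For anyone reproducing this layout by hand after an install, a minimal sketch of the same structure (device names as in the post):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # mirror for /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # mirror used as the LVM PV
pvcreate /dev/md1
vgcreate system /dev/md1          # volume group 'system' on top of md1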
2011 Aug 17
1
RAID5 suddenly broken
Hello,
I have a RAID5 array on my CentOS 5.6 x86_64 workstation which
"suddenly" failed to work (actually after the system could not resume
from a suspend).
I recently had issues after moving the workstation to another office,
where one of the disks got accidentally unplugged. But the RAID was
working and it had reconstructed (as far as I can tell) the data.
After I replugged the disk,
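A hedged sketch of the usual first diagnostics in this situation, before touching the array (the md device and member names are placeholders):
cat /proc/mdstat                     # what the kernel currently thinks of the array
mdadm --detail /dev/md0              # array state, failed or removed members
mdadm --examine /dev/sd[bcd]1        # per-disk superblocks: event counts and roles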
2006 Sep 28
1
ramfs to tmpfs
Hello,
I was using a bunch of cpios in initramfs as a working system, and
wondering why the unused files weren't being paged out to swap.
So I reread ramfs-rootfs-initramfs.txt and now I know.
So I wrote the attached utility. It creates a tmpfs, moves all the files
from the initramfs onto it, moves / and executes the real init.
It works, even with hardlinks, but it isn't the correct approach. Have
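A rough shell equivalent of what such a utility has to do, assuming busybox-style tools are available in the initramfs (paths and the tmpfs size are assumptions):
mkdir /tmproot
mount -t tmpfs -o size=512m tmpfs /tmproot    # tmpfs pages can be swapped out, ramfs pages cannot
find / -xdev | cpio -pdm /tmproot             # copy the initramfs contents across
exec switch_root /tmproot /init               # or run-init, depending on what the initramfs ships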
2005 Feb 14
6
Query regarding initramfs
Hi
I had some doubts regarding what the init application should do:
>> so, what should that application do?
>> - mount /dev/hda1 /new-root
>> - cd /new-root
>> - run-init
1. From what I understand, before exiting, init should mount the realroot
and execute the real init process.
Is realroot the '/' or the empty directory created (in the cpio
archive)?
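To make the quoted steps concrete, a minimal /init along those lines might look like this (the device name and init path are simply the ones from the quote):
#!/bin/sh
mount -t proc proc /proc
mount /dev/hda1 /new-root           # the real root; /new-root is an empty dir in the cpio archive
cd /new-root
exec run-init /new-root /sbin/init  # empties the initramfs and execs the real init as PID 1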
2006 Jul 31
1
Issues with cifs mounts following Samba upgrade to 3.0.23a
My LAN includes a server machine running FC4, with several shares mounted with
Samba. Yesterday, I upgraded the packages on the FC4 machine, and these
included Samba, which is now at 3.0.23a. Unfortunately, this seems to have
broken the mounted shares for my Ubuntu 6.06 installation on my Acer 1682WLMI
laptop. The cifs module on Ubuntu reports as version 1.39.
The symptoms are that I can
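A hedged way to narrow down where it breaks is to mount one share by hand and watch the kernel log; server, share and user names below are placeholders, and the options the old 1.39 cifs module accepts may differ:
mount -t cifs //fc4server/share /mnt/test -o username=me   # mount a single share manually
dmesg | tail                                               # the cifs module logs its error codes here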
2013 Nov 19
6
[PATCH] Btrfs: fix very slow inode eviction and fs unmount
The inode eviction can be very slow, because during eviction we
tell the VFS to truncate all of the inode's pages. This results
in calls to btrfs_invalidatepage() which in turn does calls to
lock_extent_bits() and clear_extent_bit(). These calls result in
too many merges and splits of extent_state structures, which
consume a lot of time and cpu when the inode has many pages. In
some
2013 Sep 22
10
[PATCH] Btrfs: fix sync fs to actually wait for all data to be persisted
Currently the fs sync function (super.c:btrfs_sync_fs()) doesn't
wait for delayed work to finish before returning success to the
caller. This change fixes this, ensuring that there's no data loss
if a power failure happens right after fs sync returns success to
the caller and before the next commit happens.
Steps to reproduce the data loss issue:
$ mkfs.btrfs -f /dev/sdb3
$
2017 Sep 11
2
Cannot chainload a formerly working Linux system
Chainloading a Linux system with syslinux fails unexpectedly with the lines:
WARN: No MBR magic, treating disk as raw.
ERR: Unable to find requested disk / partition combination.
boot:
coming after the usual lines
SyslinuxBOOT .... (in UEFI menu)
linuxBLFS (in syslinux menu)
SUMMARY (additional gory details available upon request)
~~~~~~~~~
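For context, a chainload entry of the kind that produces those chain.c32 messages typically looks something like this in syslinux.cfg (the disk and partition numbers are placeholders, not the poster's actual setup):
LABEL linuxBLFS
  COM32 chain.c32
  APPEND hd0 2          # second partition of the first BIOS disk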
2005 Aug 09
2
Upgrading Drive, Best Practice?
Hi,
This might sound like a n00b question, but I've honestly never done this
with a Linux machine... (it is running CentOS 3)
We have a 1U mail server with two 36GB SCSI drives in a hardware mirror
config. There's no more room for any other drives in the case. It's
filling up, so we now have two 74GB drives ready to take their place.
Possible solutions that I've come up
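One common approach, sketched here with placeholder device and mount names and assuming the new disk can be attached temporarily, is to copy the filesystems and then reinstall the boot loader:
rsync -avxH / /mnt/newroot/        # copy the root filesystem to the new disk, preserving hard links
grub-install /dev/sdb              # make the new disk bootable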
2014 Aug 17
1
/dev/disk/by-uuid missing
hi!
I have a problem with one of my servers where the boot process fails
because it cannot find its root partition.
My /boot/grub/grub.conf looks like this:
---8<---
title CentOS (2.6.32-431.17.1.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-431.17.1.el6.x86_64 ro root=UUID=8ef1f6cb-5dfc-497e-83d0-8d91cbbe4939 rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD quiet
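When root=UUID=... fails like this, the usual first check from a rescue shell is whether that UUID exists on any block device at all; a short sketch (the UUID is the one from the grub.conf above):
blkid | grep 8ef1f6cb              # does any device carry this UUID?
ls -l /dev/disk/by-uuid/           # are the udev by-uuid symlinks being created at all?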
2018 Jan 12
5
[PATCH 1/1] appliance: init: Avoid running degraded md devices
The '--no-degraded' flag in the first mdadm call inhibits the startup of an array unless all expected drives are present.
This prevents starting arrays in a degraded state.
The second mdadm call (after LVM is scanned) will scan the devices not used yet and attempt to run all found arrays even if they are in a degraded state.
Two new tests are added.
This fixes rhbz1527852.
Here is boot-benchmark
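A rough sketch of the two-stage assembly the patch describes, expressed as plain mdadm calls (the real change lives in the appliance init script, so this is only an illustration):
mdadm --assemble --scan --no-degraded   # first pass: only start arrays with all members present
# ... LVM scan happens here ...
mdadm --assemble --scan --run           # second pass: start whatever remains, even degraded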
2011 May 10
3
Drive recovery?
I have a CentOS 5.6 system (recently installed) that, for some
reason, has decided to mangle one of its drives, specifically /dev/hde1
... No errors anywhere, just rebooted the machine over the weekend and
it's gone. Up till the reboot the drive was fine; I was writing to it
without a problem.
fdisk tells me:
----------
# fdisk -l /dev/hde
Disk /dev/hde: 160.0 GB, 160041885696
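If the partition table still lists hde1 (the fdisk output above is cut off), a cautious next step is a read-only filesystem check and a look at the kernel log; this is only a sketch:
dmesg | grep -i hde        # any IDE or I/O errors logged around the reboot?
fsck -n /dev/hde1          # read-only check, makes no changes to the disk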