
Displaying 20 results from an estimated 400 matches similar to: "[PATCH] Fixed unpersisting directories and persisting directories that contain persisted files."

2010 Mar 23
1
[PATCH] Ensures that persist and unpersist work with relative paths.
As they iterate through their argument lists, both ovirt_store_config and remove_config now convert each argument into a fully qualified path before processing it. Related: rhbz#576239 Signed-off-by: Darryl L. Pierce <dpierce at redhat.com> --- scripts/ovirt-functions | 80 +++++++++++++++++++++++------------------------ 1 files changed, 39 insertions(+), 41 deletions(-) diff --git
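The approach described above, resolving every argument to a fully qualified path before persisting or unpersisting it, can be sketched in a few lines of Python (an illustration of the idea only; the actual patch changes the bash ovirt-functions script):

    import os

    def normalize_args(args):
        # Resolve each argument against the current working directory and
        # strip symlinks, so "etc/ssh/sshd_config" and "/etc/ssh/sshd_config"
        # end up referring to the same file.
        return [os.path.realpath(arg) for arg in args]

    # Example: normalize_args(["etc/passwd", "/etc/shadow", "./foo/../bar"])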
2010 Oct 22
0
[PATCH node] First draft of replacing some of the ovirt-config-* scripts with python equivalents.
Putting these out for feedback and comments. These will eventually support the new newt/python-based UI for installation and configuration. The storage.py functions will be moved under a class for better data portability before the final version. --- scripts/ovirtfunctions.py | 672 +++++++++++++++++++++++++++++++++++++++++++++ scripts/storage.py | 451 ++++++++++++++++++++++++++++++ 2 files
2009 Jul 21
1
[PATCH node] updated unpersist prompts bz512539
--- scripts/ovirt-functions | 8 ++++++++ 1 files changed, 8 insertions(+), 0 deletions(-) diff --git a/scripts/ovirt-functions b/scripts/ovirt-functions index 404c366..7657bae 100644 --- a/scripts/ovirt-functions +++ b/scripts/ovirt-functions @@ -508,8 +508,16 @@ remove_config() { if [ -f /config$f ]; then # refresh the file in rootfs if it was
2010 Mar 23
1
Resend of one patch, new to follow on...
The first patch in this set was submitted in January but was never ACK'd. The following three are follow-on patches that fix other issues that have come up.
2011 Jul 20
0
[PATCH] fix ipv4 static/dhcp/disabled networking changes
This fixes networking changes when switching from dhcp/static to disabled. Previously, the ifcfg scripts would contain old values from the prior configuration. Support for disabled devices is now added, and some leftover bash-to-python conversion code has been cleaned up. --- scripts/network.py | 45 +++++++++++++++++++--------------- scripts/ovirt-config-setup.py | 34
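A hypothetical Python sketch of the idea described above (the function name and keys are illustrative, not taken from the patch): write the ifcfg file from scratch on every change, so values from an earlier dhcp/static configuration cannot survive a switch to disabled.

    def write_ifcfg(device, bootproto="none", onboot="no", extra=None):
        # Build the complete set of keys for the new configuration...
        keys = {"DEVICE": device, "BOOTPROTO": bootproto, "ONBOOT": onboot}
        if extra:
            keys.update(extra)  # e.g. IPADDR/NETMASK for a static setup
        path = "/etc/sysconfig/network-scripts/ifcfg-%s" % device
        # ...and replace the file wholesale rather than editing it in place,
        # so no stale values from the previous configuration are left behind.
        with open(path, "w") as f:
            for key, value in sorted(keys.items()):
                f.write("%s=%s\n" % (key, value))

    # A disabled device then gets only the minimal stanza:
    # write_ifcfg("eth0", bootproto="none", onboot="no")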
2010 Oct 25
0
[PATCH node] add network.py script
--- scripts/network.py | 207 ++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 207 insertions(+), 0 deletions(-) create mode 100644 scripts/network.py diff --git a/scripts/network.py b/scripts/network.py new file mode 100644 index 0000000..28e32f2 --- /dev/null +++ b/scripts/network.py @@ -0,0 +1,207 @@ +#!/usr/bin/python + +from ovirtfunctions import * +import tempfile
2004 Oct 08
1
Multiple-pass overwrite of EXT3 file on a journalled fs
Greetings all, I am curious if anyone knows why utilities such as 'GNU shred' (part of coreutils) and 'wipe' say they are not effective on journalled file systems, especially EXT3. Is it because you can't "guarantee" that the journal has been flushed/wiped (i.e. you have the journal 'between' you and the actual data blocks on the physical disk), or because
2017 Nov 09
0
file shred
On 11/08/2017 11:36 PM, Kingsley Tart wrote: > Hi, > > if we were to use shred to delete a file on a gluster volume, will the > correct blocks be overwritten on the bricks? > > (still using Gluster 3.6.3 as have been too cautious to upgrade a > mission critical live system). When I strace `shred filename`, it just seems to write + fsync random values into the file based on
2011 Sep 14
1
Shredding instead of deleting
Hi, I have a wishlist item. Is there an appropriate place for me to post it? Basically, I would like to know that my email isn't recoverable from the local disk on the mail server after I delete it. So instead of just deleting the file from my Maildir, I'd like Dovecot to have an option to shred it, i.e. overwrite the file with random data and/or null bytes before deletion. In the
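What is being asked for amounts to overwrite-then-unlink, roughly as follows (a minimal sketch, not Dovecot code; a real shred also makes multiple passes and handles write errors):

    import os

    def shred_and_delete(path, passes=1, chunk=64 * 1024):
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                remaining = size
                while remaining > 0:
                    n = min(chunk, remaining)
                    f.write(os.urandom(n))   # or b"\0" * n for null bytes
                    remaining -= n
                f.flush()
                os.fsync(f.fileno())         # push each pass out to disk
        os.remove(path)

As the other threads here note, this only helps if the filesystem overwrites blocks in place; on journalled or copy-on-write storage the old data may survive elsewhere.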
2017 May 31
0
CentOS 6.9, shredding a RAID
On Wed, 31 May 2017, m.roth at 5-cent.us wrote: > I've got an old RAID that I attached to a box. LSI card, and the > RAID has 12 drives, for a total RAID size of 9.1TB, I think. I > started shred /dev/sda the Friday before last... and it's still > running. Is this reasonable for it to be taking this long...? Unless you specified non-default options, shred overwrites each
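A rough back-of-the-envelope estimate, assuming shred's default of three overwrite passes and a sustained array write speed of about 100 MB/s (the throughput figure is an assumption, not a measurement from the thread):

    size_tb = 9.1
    passes = 3
    throughput_mb_s = 100.0

    total_mb = size_tb * 1024 * 1024 * passes        # data written across all passes
    hours = total_mb / throughput_mb_s / 3600.0
    print("~%.0f hours (~%.1f days)" % (hours, hours / 24))   # roughly 80 hours, 3.3 days

So even under generous assumptions a full shred of a 9.1 TB array takes several days, and a slower array or extra passes stretches that considerably.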
2000 Apr 01
0
space in user dir?
This is obviously not a long-term acceptable solution. Could someone please enlighten me as to the permanent solution, or at least see why it doesn't work? This is not my problem, so I have not had the opportunity to test it - I don't have a version that old. Let me know if it is solved in a newer version as well. Thanks. It is a workable solution, although the generally accepted
2010 Mar 24
1
[PATCH] Allow persistence of empty config files in ovirt_store_config
This fix enables the persistence of empty configuration files during firstboot, in ovirt_store_config, so that configuration files such as ssh/ssl keys that are dynamically generated at well-known locations can be pre-set for persistence. Signed-off-by: Ricardo Marin Matinata <matinata at br.ibm.com> --- scripts/ovirt-functions | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff
2010 Mar 25
1
[PATCH] Allow persistence of empty config files in ovirt_store_config v2
This fix enables the persistence of empty configuration files during firstboot, in ovirt_store_config, so that configuration files such as ssh/ssl keys that are dynamically generated (i.e. their content is not known until the node has booted at least once) at well-known locations can be pre-set for persistence. Signed-off-by: Ricardo Marin Matinata <matinata at br.ibm.com> ---
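A hypothetical illustration of the kind of check involved (the actual patch is a one-line change to the bash ovirt-functions script, not this code): treat an existing-but-empty file as eligible for persistence instead of skipping it.

    import os

    def should_persist(path):
        # Assumed old behaviour: only non-empty files qualified, e.g.
        # os.path.isfile(path) and os.path.getsize(path) > 0.
        # New behaviour: an empty placeholder (say, an ssh/ssl key file whose
        # content is only generated after first boot) still qualifies.
        return os.path.isfile(path)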
2009 Aug 11
1
[PATCH node] Added support for remote logging with rsyslog-gssapi to node. NOTE: Needs selinux to be set to permissive (setenforce 0) to work.
TODO: Fix selinux :P --- Makefile.am | 1 + ovirt-node.spec.in | 3 ++ scripts/ovirt | 3 ++ scripts/ovirt-managed-rsyslog | 72 +++++++++++++++++++++++++++++++++++++++++ 4 files changed, 79 insertions(+), 0 deletions(-) create mode 100755 scripts/ovirt-managed-rsyslog diff --git a/Makefile.am b/Makefile.am index 0374f07..5201a79 100644
2017 Jun 02
0
CentOS 6.9, shredding a RAID
On 05/31/2017 08:04 AM, m.roth at 5-cent.us wrote: > I've got an old RAID that I attached to a box. LSI card, and the RAID has > 12 drives, for a total RAID size of 9.1TB, I think. I started shred > /dev/sda the Friday before last... and it's still running. Is this > reasonable for it to be taking this long...? Was the system booting from /dev/sda, or were you running any
2010 Dec 26
4
DO NOT REPLY [Bug 7889] New: Add "--backup-deleted"
https://bugzilla.samba.org/show_bug.cgi?id=7889 Summary: Add "--backup-deleted" Product: rsync Version: 3.0.7 Platform: All OS/Version: All Status: NEW Severity: enhancement Priority: P3 Component: core AssignedTo: wayned at samba.org ReportedBy: jik at kamens.brookline.ma.us
2017 Nov 08
2
file shred
Hi, if we were to use shred to delete a file on a gluster volume, will the correct blocks be overwritten on the bricks? (still using Gluster 3.6.3 as have been too cautious to upgrade a mission critical live system). Cheers, Kingsley.
2018 May 09
0
OT: hardware: sanitizing a dead SSD?
On Wed, 9 May 2018, m.roth at 5-cent.us wrote: > Federal contractor here, too. (I'm the OP). For disks that work, shred or > DBAN is what we use. For dead disks, we do the paperwork, and get them > degaussed. SSDs are a brand-new issue. We haven't had to deal with them > yet, but it's surely coming, so we might as well figure it out now. Does anyone use hdparm's
2018 May 09
2
OT: hardware: sanitizing a dead SSD?
James Szinger wrote: > Disclaimer: My $dayjob is with a government contractor, but I am speaking > as a private citizen. > > Talk to your organization's computer security people. They will have a > standard procedure for getting rid of dead disks. We on the internet > can't > know what they are. I'm betting it involves some degree of paperwork. > > Around
2013 Jan 08
4
wiping out data on a disk (no physical access to the machine)
Hi, I need to securely wipe out a disk on a remote machine, but I don't have physical access to that machine. Therefore I cannot use the LiveCD+shred (or dd) combination. Besides manually shredding known data files, I am wondering if there is a (free) tool that can be used in my case. Thanks.