Displaying 20 results from an estimated 90 matches similar to: "Unable to mount legacy pool in to zone"
2006 Mar 21
3
Rsync 4TB datafiles...?
I need to rsync 4 TB of datafiles to a remote server and clone them into a new Oracle
database. I have about 40 drives that contain this 4 TB of data. I would like
to do the rsync at directory level by using the --files-from=FILE option. But
the problem is: what happens if the network connection fails? Will the whole
rsync fail?
rsync -a srchost:/ / --files-from=dbf-list
and dbf-list would contain this:
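A rough sketch of how this can be kept restartable (the paths below are hypothetical, just to show the list format): the list is one path per line, relative to the source root, and because rsync only transfers files that are missing or have changed, simply rerunning the same command after a dropped connection picks up roughly where it left off; --partial additionally keeps partially transferred files instead of deleting them.

# dbf-list (hypothetical contents): one path per line, relative to srchost:/
#   u01/oradata/PROD/system01.dbf
#   u01/oradata/PROD/users01.dbf
# Rerun the same command after a network failure; only missing or changed
# files are transferred again, and --partial keeps partial files around.
rsync -a --partial --files-from=dbf-list srchost:/ /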
2009 May 31
1
ZFS rollback, ORA-00322: log 1 of thread 1 is not current copy (???)
Hi.
Using ZFS-FUSE.
$SUBJECT happened 3 out of 5 times while testing; I just want to know if
someone has seen such a scenario before.
Steps:
------------------------------------------------------------
root@localhost:/# uname -a
Linux localhost 2.6.24-24-generic #1 SMP Wed Apr 15 15:54:25 UTC 2009
i686 GNU/Linux
root@localhost:/# zpool upgrade -v
This system is currently running ZFS pool
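For context, the snapshot/rollback cycle in this kind of test usually boils down to two commands; a minimal sketch with a hypothetical pool/dataset name (if the datafiles and redo logs live on datasets snapshotted at different moments, ending up with a redo log that Oracle no longer considers current is one plausible outcome):

# Hypothetical dataset; zfs snapshot -r would snapshot all child datasets
# atomically, while rollback has to be issued per dataset.
zfs snapshot tank/oradata@before_test
# ... run the test workload against the database ...
zfs rollback tank/oradata@before_test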
2006 Jul 31
2
NT_STATUS_BAD_NETWORK_NAME
Hello,
I installed samba 3.0.23 on Fedora Core 4 and I configured it with this smb.conf file:
[global]
dos charset = UTF-8
workgroup = OAF_ADMIN
server string = OAF Samba PDC Server
passdb backend = tdbsam
username map = /etc/samba/smbusers
password level = 1
log level = 13
log file = /var/log/samba/log.%m
max log size = 50
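NT_STATUS_BAD_NETWORK_NAME usually means the client asked for a share name that the server does not export (or whose path does not exist on disk). A minimal, purely hypothetical share definition to compare against, alongside the [global] section above:

[data]
    # Hypothetical share: the path must exist and be accessible, otherwise
    # smbd typically answers the tree connect with NT_STATUS_BAD_NETWORK_NAME.
    path = /srv/samba/data
    read only = no
    valid users = @users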
2017 Jun 22
0
Issue with dsync server - copy transaction record copying to wrong destination mailbox.
Hello all,
I am encountering the following issue:
I have 2 dovecot servers with 2-way replication. Everything works fine
except for one specific issue:
When my MUA (Thunderbird) filters (junk & manual) refile a mail from my
inbox, I sometimes then find multiple copies of the original message in
the destination folder.
I have seen the following log entries which make me think (but
2006 Apr 10
1
kernel: Page has mapping still set - continues
I've been having this problem and have posted about it before. Thinking it was a
memory issue, I've replaced all of the memory in the server. However, the
problem has continued.
Server is a Proliant DL380 (8GB RAM, 2 Xeon CPU), running CentOS 3.6, all
patches up-to-date. Kernel is 2.4.21-40.ELsmp (problem seems to have
first manifested on kernel 2.4.21-37.0.1.ELsmp). Disk is CCISS hardware
RAID-5,
2017 Aug 24
3
dmarc report failed?
In the same vein,
I am receiving forensic DMARC reports from mx01.nausch.org.
Whenever I send a message to the mailing list or when my server sends a
DMARC report, I'm getting a DMARC Forensic report.
It's odd, because the actual report tells me that both DKIM and SPF (in the
case of a DMARC report) pass...
Here is what I am getting :
This is an authentication failure report for an email
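Forensic (ruf=) reports are generated per message, and with fo=1 a receiver sends one whenever any single mechanism fails alignment, even if the other passes and the overall DMARC evaluation is therefore a pass; that alone can explain failure reports for mail that otherwise authenticates fine. A hypothetical record for illustration:

; Hypothetical DMARC record for the sending domain.
; fo=1 requests a failure report if ANY mechanism (SPF or DKIM) fails
; alignment; the default fo=0 only triggers when both fail.
_dmarc.example.org. IN TXT "v=DMARC1; p=none; rua=mailto:dmarc@example.org; ruf=mailto:dmarc@example.org; fo=1"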
2003 Oct 13
1
OpenSSH_3.7.1p2, Solaris 8: non-interactive authentication method prompts for a password
Hi,
The OpenSSH_3.7.1p2, Solaris 8 case: non-interactive authentication method
(publickey) works for root only
-------------------------------------------------------------------------------------
We installed OpenSSH_3.7.1p2, SSH protocols 1.5/2.0, OpenSSL 0.9.7c
We need to copy a file via SFTP from the App server to a DB server
using a passwordless method.
[cbfe-dev-app01 (client), user cbfesit]
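For reference, the usual public-key setup for a non-root account looks roughly like the sketch below; the client host and the user cbfesit are taken from the post, while "dbserver" and the file name are hypothetical. Strict permissions matter because sshd (with StrictModes) silently ignores keys in group- or world-writable locations, which is a common reason publickey works for root but not for other users.

# On the client (cbfe-dev-app01), as user cbfesit:
ssh-keygen -t rsa                 # empty passphrase for unattended transfers
# Append the public key on the DB server ("dbserver" is hypothetical) and
# tighten permissions so sshd will actually accept it:
cat ~/.ssh/id_rsa.pub | ssh cbfesit@dbserver 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
ssh cbfesit@dbserver 'chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'
# Non-interactive copy; BatchMode makes the run fail instead of falling
# back to a password prompt when the key is not accepted:
echo 'put /tmp/somefile' | sftp -oBatchMode=yes cbfesit@dbserver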
2015 Jun 16
1
Strange problem with rsync and expect
Version: 3.0.6, OS: CentOS 6.6
I ran into a strange problem when using rsync with expect. I wrote a backup script that drives rsync through expect. However, when I run the script twice at the same time for two different files, the two files in the destination path are deleted automatically before the files are closed. The output of inotifywait looked like this:
./ MODIFY .redo02.log.dOlbek
./ DELETE .redo02.log.dOlbek
2010 Jul 23
2
ZFS volume turned into a socket - any way to restore data?
I have recently upgraded from NexentaStor 2 to NexentaStor 3 and somehow one of my volumes got corrupted. It's showing up as a socket. Has anyone seen this before? Is there a way to get my data back? It seems like it's still there, but not recognized as a folder. I ran zpool scrub, but it came back clean.
Attached is the output of #zdb data/rt
2.0K sr-xr-xr-x 17 root root 17 Jul
2010 Sep 21
4
FreeBSD Puppet 2.6.1 odd core-dump
Hi,
I have a couple of FreeBSD servers that I try to manage using puppet.
I'm just trying it out at the moment and have just deployed 5 new
boxes (from PXE and scripted installation, so supposedly they are all
identical except for the names and IP addresses). On two of the servers
I get the error messages at the bottom of the post. The first error
message I get every time I run puppet on
2008 Mar 15
1
current quota in mysql issue
Hi all,
I have a problem with storing the current quota in MySQL. The
configuration of the dictionary quota mostly looks like the example from
the wiki.
The dirsize quota limit is read correctly from the user_query, but nothing
is stored by quotadict in the quota table. I also find it odd that there is
nothing like "dict" in the logfile. Did I configure anything wrong?
Regards,
Stefan
logfile
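As for the configuration itself: the exact syntax depends on the Dovecot version, but as a rough sketch of how the dict quota has to be wired up (2.x-style names, all paths hypothetical), the quota plugin points at a dict, and the dict points at the SQL mapping:

# dovecot.conf (2.x-style layout; 1.x uses slightly different names)
plugin {
  # "proxy::quota" routes the lookups through the dict server process
  quota = dict:User quota::proxy::quota
}
dict {
  quota = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
}
# dovecot-dict-sql.conf.ext then holds the connect string and the two
# "map" blocks that tie the storage/messages keys to the quota table.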
2009 May 19
0
File too big for filehash?
Dear R users,
I am trying to use a very large file (~3 GiB) with the filehash package. The
length of the dataset is around 4,000,000 observations. I get this message from R
while trying to "load" the dataset (named "cc084.csv"):
> dumpDF(read.csv("cc084.csv", header=T), dbName="db01")
Erreur : impossible d'allouer un vecteur de taille 15.6 Mo
(in English: Error: cannot allocate vector of size 15.6 Mb)
2005 Mar 09
10
mysql vs postgres
I've used mysql for quite some time now. Other than crashing when the
partition gets full, I've had no problems with it.
But I've heard great things about postgres and have seen some people
say it's much superior to mysql.
So, with a Rails application, is there any reason why I would want to
learn/use another DB besides mysql? Any pragmatic benefits?
2009 Nov 06
0
iscsi connection drop, comes back in seconds, then deadlock in cluster
Greetings ocfs2 folks,
A client is experiencing some random deadlock issues within a cluster, and
we are wondering if anyone can point us in the right direction. The iSCSI
connection seems to have dropped briefly on one node, ultimately
landing us, several hours later, in a complete deadlock scenario where
multiple nodes (Node 7 and Node 8) had to be panic'd (by hand - they
didn't ever panic on
2012 May 04
2
Can't import this 4GB DATASET
Dear Experienced R Practitioners,
I have a 4 GB .txt file called "dataset.txt" and have attempted to use the ff,
bigmemory, filehash and sqldf packages to import it, but have had no
success. The readLines output of this data is:
readLines("dataset.txt",n=20)
[1] " "
2006 Feb 22
5
Rsync help needed...
Hello,
I was reading your posts about rsync. We have a massive Oracle schema with lots
of datafiles, about 750 GB in size. We rsync the datafiles from the source to the
target server, but every time (every 2 weeks) we first clean up the datafiles on the
target server and then rsync. On the target side the datafiles will mostly be the
same, but on the source we might have added a few datafiles or made some changes
to the data, and as such the
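There is no need to clean the target up first: rsync only transfers what differs, and --delete removes files on the target that no longer exist on the source. A minimal sketch with a hypothetical host and paths:

# Only new or changed datafiles are transferred; files removed on the
# source are removed on the target as well. (Host and paths hypothetical.)
rsync -av --delete srchost:/u01/oradata/ /u01/oradata/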
2006 Jan 15
2
rsync of file list
Hi All,
I would like to rsync data spread over many files from a remote site. Each file
may exist in a totally different location - the path for each file may be
different.
My question is:
Can I do it with one single rsync command, giving a file containing a list of
paths as a parameter, or do I need to run rsync for each file?
I did not find an option for this in the man page. I tried to play with
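rsync's --files-from option does exactly this; a short sketch with hypothetical paths (the option implies --relative, so each listed path is recreated under the destination with its directory structure intact):

# /tmp/paths.txt (hypothetical): one path per line, relative to the source root "/"
#   etc/hosts
#   var/lib/app/data.db
#   home/user/report.txt
# A single rsync run fetches all of them in one connection.
rsync -av --files-from=/tmp/paths.txt remotehost:/ /local/backup/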
2009 Jan 13
12
OpenSolaris better Than Solaris10u6 with requards to ARECA Raid Card
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card,
I got errors on all drives resulting from SCSI timeout errors.
yoda:~ # tail -f /var/adm/messages
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]
Requested Block: 239683776 Error Block: 239683776
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Vendor:
Seagate
2008 Aug 22
2
zpool autoexpand property - HowTo question
I noted this PSARC thread with interest:
Re: zpool autoexpand property [PSARC/2008/353 Self Review]
because it so happens that during a recent disk upgrade
on a laptop, I've migrated a zpool off of one partition
onto a slightly larger one, and I'd like to somehow tell
zfs to grow the zpool to fill the new partition. So,
what's the best way to do this? (and is it
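For what it's worth, on releases that have these features the two relevant knobs are the pool's autoexpand property and zpool online -e, which tells ZFS to use the expanded size of a device; a sketch with hypothetical pool and device names:

# Let the pool grow automatically whenever its devices grow:
zpool set autoexpand=on rpool
# Or expand a single device explicitly after its partition was enlarged:
zpool online -e rpool c0t0d0s7
# Verify the new size:
zpool list rpool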
2005 Feb 03
1
help troubleshooting inconsistencies in back up sizes
Hello list,
I'll first describe my set up:
server1 : live server
server2 : backup
server3 : backup of the backup
so the data set is copied in this order:
server1 -> server2 -> server3
They are not done at the same time, so there should be no collisions.
I use this shell script to back up:
for i in a b c d e f g h i j k l m n o p q r s t u v w x y z `seq 0 9`; do
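For comparison, a per-prefix loop of this shape usually looks something like the sketch below; the paths, host and options are hypothetical, not the poster's actual script. Whether --delete is used at each hop is one common source of size differences between copies, since files deleted on the source otherwise accumulate on the backups.

# Hypothetical reconstruction of a per-prefix backup loop: one rsync run per
# leading character (one sub-directory per character is assumed here).
for i in a b c d e f g h i j k l m n o p q r s t u v w x y z `seq 0 9`; do
    rsync -a --delete "/data/$i/" "backup@server2:/backup/$i/"
done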