Displaying 20 results from an estimated 200 matches similar to: "relation "pg_user" does not exist error when running pg_dump"
2005 Apr 28
1
relation "pg_user" does not exist error when running pg_dump
Hi,
I'm trying to move a postgres database to another server and I ran
into some weird problems. The pg_dump command was not giving any output,
and the logs were filling up with SELinux errors, so I turned off
SELinux completely (and rebooted). Now I'm getting the following error
message:
# pg_dump -U postgres database > database.out
pg_dump: SQL command failed
pg_dump: Error message from
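A first check worth making here, as a sketch (assuming the stock CentOS/RHEL
data directory /var/lib/pgsql): confirm SELinux really is off, and repair any
file contexts that got mangled while it was still enforcing:
getenforce                       # should print Disabled (or Permissive)
restorecon -Rv /var/lib/pgsql    # relabel the postgres files touched during the denials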
2007 May 11
1
postgres errors on RHEL 4
Hi All,
I set up postgresql on RHEL 4. Below are the rpms I installed.
[root@LinuxBox etc]# rpm -qa | grep postgres
postgresql-libs-7.4.16-1.RHEL4.1
postgresql-devel-7.4.16-1.RHEL4.1
postgresql-7.4.16-1.RHEL4.1
postgresql-python-7.4.16-1.RHEL4.1
postgresql-server-7.4.16-1.RHEL4.1
Then I ran the commands below to create a user and a database.
[root@box root]# su postgres
bash-3.1$ createuser
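For reference, the usual non-interactive continuation of that setup on a
7.4-era install looks roughly like this (the role name myuser and database
mydb are placeholders):
su - postgres
createuser -A -D -P myuser   # -A: cannot create users, -D: cannot create databases, -P: prompt for a password
createdb -O myuser mydb      # create a database owned by the new role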
2007 Oct 30
4
Postgresql and shell script
I have a shell script (sh) in which I create a user and import data into a
postgres database
<snip>
su -c "createuser -A -D -P $PG_user" postgres
su -c "psql -d$PG_database -h localhost -U$PG_user -W -f postgresql.sql "
postgres
</snip>
when the script executes those commands, it asks for a password. How could I
do this without having to enter the password? I would like that
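One common answer for the authentication prompts is a ~/.pgpass file in the
home directory of the user actually running psql (a sketch: "secret" stands
in for the real password, and .pgpass needs a 7.3-or-later client). Note that
the prompt from createuser -P is different: it sets the new role's password,
so full automation would have to replace it with e.g. an ALTER USER statement.
# format: hostname:port:database:username:password
echo "localhost:5432:$PG_database:$PG_user:secret" > ~/.pgpass
chmod 600 ~/.pgpass    # the client ignores the file unless permissions are 0600
psql -d $PG_database -h localhost -U $PG_user -f postgresql.sql   # no -W needed now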
2005 May 24
7
PostgreSQL/SELinux Error - relation "pg_catalog.pg_user" does not exist
hello everyone,
I'm trying to run a postgresql service on my newly-installed CentOS 4
box. I have been able to recreate my users, set up the permissions,
and restore the database dump. Also, I can already log in to my
databases.
There is, however, one annoying problem: whenever I type \du (or \d or
\l) at the psql prompt, I get the following error:
ERROR: relation
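When the pg_catalog views are missing like this, a common cause is that initdb
originally ran while SELinux was still denying access. Since this poster
already has a working dump, one hedged way out is to rebuild the cluster
(CentOS 4 default paths assumed; dump.sql is a placeholder):
service postgresql stop
mv /var/lib/pgsql/data /var/lib/pgsql/data.broken   # keep the old cluster, just in case
service postgresql start                            # the init script runs initdb on an empty data dir
su - postgres -c "psql -f dump.sql"                 # reload roles and databases from the dump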
2018 Sep 19
0
[Marketing Mail] Re: LVM and Backups
On Wed, 2018-09-19 at 08:55 +0200, Alessandro Baggi wrote:
> Il 18/09/2018 17:14, Gordon Messmer ha scritto:
> > On 9/17/18 11:38 PM, Alessandro Baggi wrote:
> > > Il 17/09/2018 22:12, Gordon Messmer ha scritto:
> > > > That doesn't look right. It should look more like 1) stop or
> > > > freeze all of the services (httpd and
2018 Sep 19
3
LVM and Backups
Il 18/09/2018 17:14, Gordon Messmer ha scritto:
> On 9/17/18 11:38 PM, Alessandro Baggi wrote:
>> Il 17/09/2018 22:12, Gordon Messmer ha scritto:
>>> That doesn't look right. It should look more like 1) stop or freeze
>>> all of the services (httpd and database), 2) make the snapshot, 3)
>>> start or thaw all of the services, 4) mount the snapshot, 5)
2018 Sep 18
0
LVM and Backups
On 9/17/18 11:38 PM, Alessandro Baggi wrote:
> Il 17/09/2018 22:12, Gordon Messmer ha scritto:
>> That doesn't look right. It should look more like 1) stop or freeze
>> all of the services (httpd and database), 2) make the snapshot, 3)
>> start or thaw all of the services, 4) mount the snapshot, 5) back up
>> the data, 6) remove the snapshot.
>
> About
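That sequence, as a minimal sketch (the volume group vg0, logical volume
data, the 5G snapshot size, and the paths are all assumptions):
service httpd stop                                  # 1) stop or freeze the services
lvcreate -s -L 5G -n datasnap /dev/vg0/data         # 2) make the snapshot
service httpd start                                 # 3) start or thaw the services
mount -o ro /dev/vg0/datasnap /mnt/snap             # 4) mount the snapshot
rsync -a /mnt/snap/ /backup/today/                  # 5) back up the data
umount /mnt/snap && lvremove -f /dev/vg0/datasnap   # 6) remove the snapshot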
2017 Aug 27
2
Connect to postgreSQL
I am using RStudio version 1.0.143 on a Windows 7 machine. R version 3.4.0
I am trying to connect to a PostgreSQL database with the following
command, but I receive an error message saying my password is
incorrect. Since my password is saved in a file, I believe I am supplying the correct one.
I searched for a solution online, but cannot figure out what to do. If I
have to change my password, please provide
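Before touching the R side, it can help to prove the saved credentials
outside R with a bare psql session (host, user, and database names below
are placeholders):
psql -h dbhost -p 5432 -U dbuser -d mydb -c "SELECT 1;"
# a "password authentication failed" here means the stored password itself is stale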
2014 Jul 02
3
block level changes at the file system level?
I'm trying to streamline a backup system using ZFS. In our situation,
we're writing pg_dump files repeatedly, each file being highly similar
to the previous file. Is there a file system (e.g. ext4? xfs?) that, when
re-writing a similar file, will write only the changed blocks and not
rewrite the entire file to a new set of blocks?
Assume that we're writing a 500 MB file with only
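Most general-purpose filesystems will not do this on a plain rewrite, but
rsync's delta algorithm gets the same effect at the file level; combined
with snapshots, unchanged blocks stay shared between versions. A sketch
(the ZFS dataset tank/backups is an assumption):
rsync --inplace --no-whole-file new_dump.sql /tank/backups/dump.sql
# --inplace overwrites changed blocks of the existing file instead of writing a new copy;
# --no-whole-file forces the delta algorithm even for local copies
zfs snapshot tank/backups@$(date +%F)   # snapshot so the shared blocks are preserved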
2018 Sep 18
2
LVM and Backups
Il 17/09/2018 22:12, Gordon Messmer ha scritto:
> On 9/17/18 7:50 AM, Alessandro Baggi wrote:
>> Running a backup I follow these steps:
>>
>> 1) Stop httpd
>> 2) Create lvm snapshot on the dataset
>> 3) Backup database
>> 4) restart httpd (to avoid more downtime)
>> 5) mount the snapshot and execute backup
>> 6) umount and remove the snapshot
2005 Mar 07
1
postgres unit testing in 0.10.1
http://dev.rubyonrails.com/changeset/856
looks to me like it's not going to work, since there's no way of
specifying a password to pg_dump/dropdb/createdb.
If you set PGPASSWORD in your environment it might work, but that's not
documented anywhere and isn't done automatically in the Rakefile.
See
http://thread.gmane.org/gmane.comp.lang.ruby.rails/3693
for
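The PGPASSWORD workaround mentioned above is just an environment variable
read by all the libpq client tools; a sketch with made-up names:
export PGPASSWORD=secret          # read by pg_dump, dropdb, createdb, psql
pg_dump -U myuser myapp_development > development_structure.sql
dropdb -U myuser myapp_test
createdb -U myuser myapp_test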
2007 Feb 06
1
Postgres, testing and maybe spurious database DROPpings?
Folks,
I don't like that I have to grant CREATEDB rights to the test user to
get testing working smoothly with Postgres.
What is the philosophical reason that Rails wants to drop and recreate
databases during the testing? It would seem to me that "pg_dump -c" (the
"clean" dump option, similar to mysqldump''s --add-drop-tables ) would
suffice when
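For comparison, the "clean" dump the poster has in mind looks like this
(database and role names assumed):
pg_dump -c -U test_user myapp_test > clean.sql   # -c prefixes each object with a DROP statement
psql -U test_user -d myapp_test -f clean.sql     # resets the contents without dropping the database itself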
2009 Nov 17
2
High load averages with latest kernel and USB drives?
I'm having a server report a high load average when backing up Postgres
database files to an external USB drive. This is driving my loadbalancers all
out of kilter and causing a large volume of network monitor alerts.
I have a 1TB USB drive plugged into a USB2 port that I use to back up the
production drives (which are SCSI). It's working fine, but while doing backups
(hourly) the
2007 Sep 17
2
File bit synchronization?
Not sure if that is the correct terminology. I have an rsync setup of
files between two Windows servers using cwRsync, which uses the -apv options
along with --progress and --delete. One part of the backup is to
transfer an MSSQL backup file, and I notice that after the initial
transfer taking 20 minutes or more, subsequent daily transfers, after it
changes each night, take only a minute or two at most, and
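That speed-up is rsync's delta-transfer algorithm at work; adding --stats
makes it visible (the path and host below are placeholders):
rsync -apv --progress --delete --stats mssql.bak user@backuphost:/backups/
# "Matched data" is what the algorithm reused; "Literal data" is what actually crossed the wire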
2005 Sep 17
0
Postgresql + rake errors
Hi,
Using rake with Postgresql gives the following error:
psql:db/development_structure.sql:13: ERROR: must be owner of schema public
I'm using the latest version of rails and postgresql 8.0 on ruby 1.8.3
(2005-06-23) [i486-linux].
The above error occurs when rake clones my dev database to test - it
first came to my attention when I ran "rake" for my first unit tests. It
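The usual fix is to give the rails user ownership of the public schema in
the test database, or to load the structure as a superuser; a sketch with
assumed role and database names:
psql -U postgres -d myapp_test -c "ALTER SCHEMA public OWNER TO rails;"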
2006 Nov 24
0
PostgreSQL search_path and db:structure:dump rake task
Hello!
I've submitted a patch to work around a problem when multiple schemas
are included in the search_path in database.yml for PostgreSQL
databases. (Long story short: the pg_dump --schema flag only accepts
a single schema.)
http://dev.rubyonrails.org/ticket/6665
While I've tested the regex pattern used to check for multiple
schemas, I'm unsure of how to
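Worth noting: pg_dump from PostgreSQL 8.2 onward accepts the -n/--schema
switch more than once (and takes patterns), so on newer servers the
multiple-schema case no longer needs the workaround; "audit" here is an
invented second schema:
pg_dump -n public -n audit mydb > both_schemas.sql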
2013 Sep 01
0
Asterisk database weird behaviour
I am using the realtime architecture in my Asterisk 11.3.0 system for cdr,
dialplan, sip accounts, voicemail and queues. My database (on a separate
machine) is postgres 8.4 in a drbd cluster with one active and one passive
node controlled by corosync and pacemaker. So 99% of my sip accounts are
defined in realtime, but I also have some statically in sip.conf.
I have the following problem:
Each time I have
2015 Dec 02
2
CentOS and bacula
Hi Fabio,
thanks for your reply. (I'm new to bacula.)
To make things clearer:
1) Reinstall CentOS and the software;
2) Configure the database and import the catalog;
3) Copy the last valid configuration for dir, sd, fd and console;
4) Re-attach my disk backup device with the last volumes.
That's all?
Another question is about Catalog backup. When I need to restore a
backup, bacula reads catalog and
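If the catalog itself is ever lost, bacula ships bscan to rebuild catalog
records directly from the volumes. A rough sketch only (the volume name,
and the FileStorage device name from bacula-sd.conf, are placeholders):
bscan -v -s -m -c /etc/bacula/bacula-sd.conf -V Volume0001 FileStorage
# -s writes the scanned records into the catalog, -m updates the media records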
2016 Jun 09
0
remote backup
On 6/9/2016 11:43 AM, Valeri Galtsev wrote:
> When databases are concerned, I would never rely on a snapshot of their
> storage files. Either stop the relevant daemon(s), then do the fs snapshot, or
> better yet do a dbdump and restore the databases from the dump when you need to
> restore them. Also: databases usually have a "hold transactions" flag or
> similar; post this flag before
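For postgres, the dump-and-restore route recommended above looks roughly
like this (the database names are placeholders):
pg_dump -Fc mydb > mydb.dump      # compressed custom-format dump
createdb mydb_restored
pg_restore -d mydb_restored mydb.dump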
2006 Mar 29
1
htdig with omega for multiple URLs (websites)
Olly,
many thanks for suggesting htdig; you saved me a lot of time.
Htdig looks better than my original idea (wget) - you were right.
Using htdig, I can crawl and search a single website - but I need to
integrate search across pages spread over 100+ sites. Learning, learning....
Htdig uses a separate document database for every website (one database
per URL to initiate crawling). Htdig also can merge