Due to a huge number of entries, MySQL 5.1 was choking, so I had to migrate to PostgreSQL 8.1.
All the entries migrated but the administrator cannot post to the s9y blog.
What do I need to look for?
Furthermore, the server log shows
ERROR: duplicate key violates unique constraint "serendipity_entries_pkey"
and
ERROR: duplicate key violates unique constraint "serendipity_referrers_pkey"
Found solution for entries at
http://hcmc.uvic.ca/blogs/index.php?blo ... &tb=1&pb=1
How to fix PostgreSQL error "duplicate key violates unique constraint"
(20/04/11, by jamie)
If you get this message when trying to insert data into a PostgreSQL database:
ERROR: duplicate key violates unique constraint
That likely means that the primary key sequence in the table you're working with has somehow become out of sync, likely because of a mass import process (or something along those lines). Call it a "bug by design", but it seems that you have to manually reset the primary key sequence after restoring from a dump file. At any rate, to see if your values are out of sync, run these two commands:
SELECT MAX(the_primary_key) FROM the_table;
SELECT nextval('the_primary_key_sequence');
If the first value is higher than the second value, your sequence is out of sync. Back up your PG database (just in case), then run this:
SELECT setval('the_primary_key_sequence', (SELECT MAX(the_primary_key) FROM the_table)+1);
That will set the sequence to the next available value that's higher than any existing primary key in the sequence.
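Applied to the two tables named in the server log above, the check looks like this. The sequence names here are an assumption based on PostgreSQL's default SERIAL naming (tablename_column_seq) and s9y's id column, so verify them with \d serendipity_entries first:

```shell
# Assumed names: database "yourblogdb", column "id", sequence named per
# the SERIAL default -- confirm with \d serendipity_entries in psql.
psql yourblogdb <<'SQL'
SELECT MAX(id) FROM serendipity_entries;
SELECT nextval('serendipity_entries_id_seq');
-- only needed if MAX(id) is >= the nextval result:
SELECT setval('serendipity_entries_id_seq',
              (SELECT MAX(id) FROM serendipity_entries) + 1);
SQL
```

The same three statements, with the table name swapped in, cover serendipity_referrers.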
Moved from MySQL to PostGres
- Core Developer
- Posts: 30022
- Joined: Tue Sep 16, 2003 9:45 pm
- Location: Cologne, Germany
Re: Moved from MySQL to PostGres
Hi!
Migrating a MySQL dump to PostgreSQL is not so easy: PostgreSQL handles primary key increments completely differently.
You should install s9y fresh on a PostgreSQL setup, and then simply import the pure data (not the structure) via INSERT INTO ... into that database. Then all of PostgreSQL's sequences will already have been created.
(The link you posted after that might also be a fix, if the autoincrements are already there, yes)
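A minimal sketch of that data-only import, assuming MySQL 5.1's mysqldump (which supports --no-create-info to skip the CREATE TABLE statements and --compatible=postgresql for ANSI-style output); the database names are placeholders:

```shell
# dump row data only, in as PostgreSQL-friendly a form as mysqldump allows
mysqldump --no-create-info --compatible=postgresql \
    -u username -p old_mysql_db > data.sql
# strip any leftover MySQL backtick quoting around identifiers
sed -i 's/`//g' data.sql
# load into the freshly installed s9y database
psql new_pgsql_db < data.sql
```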
# Garvin Hicking (s9y Developer)
# Did I help you? Consider making me happy: http://wishes.garv.in/
# or use my PayPal account "paypal {at} supergarv (dot) de"
# My "other" hobby: http://flickr.garv.in/
Re: Moved from MySQL to PostGres
Correct you are, Garvin.
So 4 steps.
1) from http://en.wikibooks.org/wiki/Converting ... _CSV-files
When you have a large SQL dump with binary data inside, it is awkward to modify the data structure by hand, so there is another way to export your data to PostgreSQL. MySQL has an option to export each table in a database as a separate .sql file with the table structure and a .txt file with the table's data in CSV format:
mysqldump -u username -p -T/path/to/export databasename
Note that /path/to/export must be writable by the user who runs mysqld (usually mysql), because the server process, not the client, writes the files.
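For example (the path is the placeholder from above, and the account name mysql is typical on Linux but may differ on your system):

```shell
mkdir -p /path/to/export
# the server process writes the .txt files, so it must own the directory
chown mysql:mysql /path/to/export
mysqldump -u username -p -T/path/to/export databasename
```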
2) Create a new Serendipity instance, specifying PostgreSQL as the database.
3) From http://en.wikibooks.org/wiki/Converting ... _CSV-files
After that you should modify your table structure to match PostgreSQL's format:
- convert data types
- create separate key definitions
- replace escape characters
When the table structure is ready, load it as shown above. You should also prepare the data files: replace carriage return characters with "\r" and remove characters that are invalid for your data encoding. Here is an example bash script showing how to do this and load all the data into your database:
#!/bin/bash
CHARSET="utf-8"            # your current database charset
DATADIR="/path/to/export"
DBNAME="databasename"
for file in "$DATADIR"/*.txt; do
    TMP=${file%.*}
    TABLE=${TMP##*/}
    echo "preparing $TABLE"
    # escape literal carriage returns as \r
    sed 's/\r/\\r/g' "$file" > "/tmp/$TABLE.export.tmp"
    # drop byte sequences that are invalid for the current charset
    iconv -t "$CHARSET" -f "$CHARSET" -c < "/tmp/$TABLE.export.tmp" > "/tmp/$TABLE.export.tmp.out"
    echo "loading $TABLE"
    /usr/bin/psql "$DBNAME" -c "copy $TABLE from '/tmp/$TABLE.export.tmp.out'"
    # clean up
    rm "/tmp/$TABLE.export.tmp" "/tmp/$TABLE.export.tmp.out"
done
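After the loop finishes, a quick row-count comparison is a cheap sanity check; the two table names below are just the ones from the error messages, and databasename matches the script above:

```shell
# compare these counts against SELECT COUNT(*) on the MySQL side
for t in serendipity_entries serendipity_referrers; do
    printf '%s: ' "$t"
    psql databasename -At -c "SELECT COUNT(*) FROM $t"
done
```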
4) If the sequences are still out of sync and you see "duplicate key violates unique constraint", look at http://hcmc.uvic.ca/blogs/index.php?blo ... &tb=1&pb=1
(This is the same sequence-reset fix quoted at the top of the thread: compare SELECT MAX(the_primary_key) against SELECT nextval('the_primary_key_sequence'), and if the first is higher, run SELECT setval('the_primary_key_sequence', (SELECT MAX(the_primary_key) FROM the_table)+1);.)
Note: a \d the_table in psql may be needed to see what the_primary_key_sequence is actually called (or \ds to list all sequences).
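Since every SERIAL column gets its own sequence, the table and column names can usually be recovered from the sequence name itself (PostgreSQL's default tablename_column_seq pattern, which is an assumption here, so check \ds first). A small sketch that builds the matching setval() statement:

```shell
# one sequence name, e.g. taken from psql's \ds listing
seq="serendipity_entries_id_seq"
table=${seq%_*_seq}                   # strip the trailing _<column>_seq
col=${seq#${table}_}; col=${col%_seq} # the remainder is the column name
echo "SELECT setval('$seq', (SELECT MAX($col) FROM $table)+1);"
# pipe the printed statement into: psql databasename
```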