Hi, I have a problem importing Unicode characters (üşçö etc.). I scaffolded a Categories model, and creating a new category through the normal CRUD scenario works fine, but when I import through my csv_import controller the characters don't come out right. How can I solve this problem?
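For context, here is a minimal sketch of what such an import action might look like with the file explicitly read as UTF-8. The controller/model names and the upload parameter are my assumptions, not the poster's actual code, and it presumes Ruby 1.9+, whose CSV library takes an :encoding option:

require "csv"

class CsvImportController < ApplicationController
  # Hypothetical import action: read the uploaded CSV as UTF-8 so that
  # characters like ü, ş, ç, ö reach the database intact.
  def import
    path = params[:file].tempfile.path  # assumed upload parameter
    CSV.foreach(path, headers: true, encoding: "UTF-8") do |row|
      Category.create!(row.to_hash)  # assumes CSV headers match column names
    end
    redirect_to categories_path
  end
end

If the file itself was saved in another encoding, reading it as UTF-8 will raise or produce invalid strings, which is exactly the class of problem discussed below.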
I've discovered the same thing as Peter, although I'm using different tools (Upscene's Database Workbench Pro 2.8.10) to import the data from both MDB and Firebird tables, both of which handle the Unicode (DBCS) character set acceptably well. Having said that, I've also tried command-line importing from native and delimited text versions of my source data, and I get exactly the same errors as I see in DBW, e.g.:

Incorrect string value: 'Agnetha F\xE4ltskog' for column 'artistname' at row 1

I'm using MySQL v5.0.45 x64.

I've tried every combination of character set for the columns in the destination MySQL table, and neither single- nor double-byte characters seem to be imported properly. I've also tried different table types (InnoDB and MyISAM) to see if that makes any difference. It doesn't appear to (i.e. I get the same errors at the same point in the imported data).

I understand that there are some limitations in using Unicode (or Unicode-like) character sets, but I'm banging my brains out trying to avoid hand-editing or on-the-fly mapping of a couple of characters in a couple of columns across ~1200 of the 7,000 records in my source table just to import them into MySQL. It's a bit frustrating that MySQL doesn't seem to handle the same characters that Access and Firebird 2.0.1 deal with without fuss (once the correct character set is configured, of course!).

There remains the other major problem of handling non-Unicode NLS mappings (Greek, Cyrillic, Slavic, etc.), but I'll deal with that on a record-by-record basis at a later date. For now, importing the roughly 1,000 records containing various Unicode characters is my main focus.

Any suggestions? I'm happy to post examples, error messages, character values, whatever it takes! I'd really like to solve this problem, or at least understand why the mappings aren't correct.

with regards,
Brad Wilson
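For what it's worth, \xE4 is the Latin-1 byte for 'ä', so this error usually means the bytes being sent don't match the character set the connection claims to be using. Here is a hedged sketch of checking and aligning the character sets through an ActiveRecord connection; the table name artists is my guess from the artistname column, and the file path is hypothetical:

conn = ActiveRecord::Base.connection

# See what the server believes the client/connection character sets are.
conn.select_all("SHOW VARIABLES LIKE 'character_set%'").each do |row|
  puts row.inspect
end

# Declare the encoding of the data the client sends from here on.
conn.execute("SET NAMES utf8")

# Make sure the destination column can actually store it.
conn.execute("ALTER TABLE artists CONVERT TO CHARACTER SET utf8")

# For file imports, MySQL 5.0.38+ lets you tag the file's own encoding,
# so latin1 bytes get transcoded into a utf8 column instead of rejected:
conn.execute("LOAD DATA INFILE '/tmp/artists.txt' INTO TABLE artists CHARACTER SET latin1")

The key point is that "Incorrect string value" is usually a declaration mismatch rather than a storage limitation: tell MySQL what encoding the incoming bytes really are and it will transcode them itself.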
Thanks for all the help; I've solved the problem. It was only a text-editor setting: I changed the SciTE default (encoding=utf-8) and it worked :)
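Since the root cause was a CSV file saved in the wrong encoding by the editor, a defensive check before importing can catch this class of bug early. A small sketch, assuming Ruby 1.9+ string encodings; the file name and the Windows-1254 (Turkish) fallback are assumptions:

path = "categories.csv"  # hypothetical file name
data = File.read(path, mode: "rb").force_encoding("UTF-8")

unless data.valid_encoding?
  # The bytes are not valid UTF-8; assume the editor saved the file as
  # Windows-1254 (Turkish) and transcode it to UTF-8.
  data = File.read(path, mode: "rb")
             .force_encoding("Windows-1254")
             .encode("UTF-8")
end

Note the difference: force_encoding only relabels the bytes, while encode actually transcodes them, which is why the fallback branch needs both.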