
Wrong codepoints for non-ASCII characters inserted in UTF-8 database using CLP

Technote (troubleshooting)

Problem (Abstract)

During an insert from the CLP there is no codepage conversion when the operating system codepage and the database codepage are both UTF-8. In this case the data to be inserted must also be UTF-8 encoded.
If the data has a different encoding than the database codepage (this can be verified with any hex editor), change the operating system codepage to match the data's encoding so that conversion to the database codepage is enforced.

Symptom

Error executing Select SQL statement. Caught by java.io.CharConversionException. ERRORCODE=-4220

Caused by: java.nio.charset.MalformedInputException: Input length = 4759 at com.ibm.db2.jcc.b.u.a(u.java:19) at com.ibm.db2.jcc.b.bc.a(bc.java:1762)


Cause

When data is inserted using the CLP, characters do not go through codepage conversion. If the operating system codepage and the database codepage are both UTF-8, but the data to be inserted is not Unicode, then the data in the database might have incorrect (non-Unicode) codepoints, and the above error results during data retrieval.

To verify the encoding of the data to be inserted, use any editor that shows the hex representation of characters and check the codepoints of the non-ASCII characters you are trying to insert. If you see only one byte per non-ASCII character, you need to force codepage conversion during the insert from the CLP into the UTF-8 database.
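The one-byte-per-character check can be sketched outside DB2. A minimal Python illustration, using the same characters as the example scenario in this technote (the in-memory string stands in for a data file):

```python
# The characters 'Ã', '³', '©' occupy one byte each in codepage 819
# (ISO-8859-1 / Latin-1) but two bytes each in UTF-8.
data = "Ã³©"

latin1 = data.encode("latin-1")   # how a non-Unicode source encodes them
utf8 = data.encode("utf-8")       # what a UTF-8 database expects

print(latin1.hex())  # c3b3a9       -- one byte per character: not UTF-8
print(utf8.hex())    # c383c2b3c2a9 -- two bytes per character: valid UTF-8
```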

To force codepage conversion when inserting data into a Unicode database from a non-Unicode data source, make sure the operating system codepage is non-Unicode and matches the codepage of the data.
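The conversion being forced here amounts to decoding the bytes with the source codepage and re-encoding them as UTF-8. A Python sketch of that round trip, assuming codepage 819 (Latin-1) as the source encoding:

```python
# Bytes as produced by a codepage 819 (Latin-1) source: 'Ã', '³', '©'.
source_bytes = b"\xc3\xb3\xa9"

# The step the CLP skips when both codepages report UTF-8:
# decode with the *source* codepage, then encode to the database codepage.
text = source_bytes.decode("latin-1")
db_bytes = text.encode("utf-8")

print(db_bytes.hex())  # c383c2b3c2a9 -- the valid UTF-8 codepoints
```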

Problem Details

An example problem scenario is as follows:

  1. Create a database of type UTF-8:
    CREATE DATABASE <db> USING CODESET utf-8 TERRITORY US
  2. Create a table that holds character data:
    CREATE TABLE test (col char(20))
  3. Check the operating system locale:
    locale
    LANG=en_US.UTF-8 LC_CTYPE="en_US.UTF-8"
  4. Insert the non-ASCII characters 'Ã', '³', '©', which have codepoints 0x'C3', 0x'B3', 0x'A9' in codepage 819, into the table:
    INSERT INTO test VALUES ('Ã')
    INSERT INTO test VALUES ('³')
    INSERT INTO test VALUES ('©')
  5. By running the following statement, you can see that all INSERT statements caused only one byte to be inserted into the table:
    SELECT col, HEX(col) FROM test
    Ã  C3
    ³  B3
    ©  A9
    However, the UTF-8 representations of those characters are 0x'C383' for 'Ã', 0x'C2B3' for '³', and 0x'C2A9' for '©'. So these three rows in the table contain invalid characters in UTF-8.
  6. When selecting from the column using a JDBC application, the following error occurs. This is expected because the table contains invalid UTF-8 data:
    Error executing Select SQL statement. Caught by java.io.CharConversionException. ERRORCODE=-4220
    Caused by: java.nio.charset.MalformedInputException: Input length = 4759
    at com.ibm.db2.jcc.b.u.a(u.java:19)
    at com.ibm.db2.jcc.b.bc.a(bc.java:1762)
  7. Delete all rows with incorrect Unicode codepoints from the test table:
    DELETE FROM test
  8. Change the locale to one that matches the codepage of the data to be inserted, for example:
    export LANG=en_US
    One way to determine the codepage of your data is described here: http://www.codeproject.com/Articles/17201/Detect-Encoding-for-In-and-Outgoing-Text. If you prepare the data yourself with an editor, check the editor's documentation to find out how to set the codepage for the data it produces.
  9. Insert the data into the table:
    INSERT INTO test VALUES ('Ã')
    INSERT INTO test VALUES ('³')
    INSERT INTO test VALUES ('©')
  10. Verify that the inserted data was converted to UTF-8 during the insert:
    SELECT col, HEX(col) FROM test
    Ã  C383
    ³  C2B3
    ©  C2A9
  11. Run your Java application selecting the Unicode data. No exception should be reported.
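The retrieval failure in step 6 can be reproduced in miniature: a single Latin-1 byte stored without conversion is not a complete UTF-8 sequence, so decoding it fails. This Python illustration mirrors (but is not) the JCC driver's CharConversionException:

```python
# Step 5 showed the table holding one byte per character, e.g. 0xC3 for 'Ã'.
stored = b"\xc3"
try:
    stored.decode("utf-8")        # retrieval must decode the column as UTF-8
except UnicodeDecodeError as exc:
    print("invalid UTF-8:", exc.reason)

# After the corrected insert (step 10) the column holds valid UTF-8:
fixed = b"\xc3\x83"
print(fixed.decode("utf-8"))      # Ã
```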

 

Environment

UNIX, Linux, Unicode database

 

Diagnosing the problem

Verify that non-ASCII data has proper Unicode codepoints in the Unicode database.

 

Resolving the problem

Reinsert the data with codepage conversion enforced by setting the operating system codepage to match the codepage of the data to be inserted.

Related information

Export data

 
 
