Varchar2 or nvarchar2?

My character set is AL32UTF8. Now I want to limit the length of a column to 5 characters for both English letters and Chinese characters, but it seems I can't make it work for both languages. (In SQL Server, nvarchar can be used for this purpose):
create table test(a varchar2(5 char));
insert into test values('abcdef') --Will prompt value is too large for the column
insert into test values('大大大') --Will prompt value is too large for the column
It looks like a Chinese character always occupies two characters. I tried nvarchar2 too, but the result was the same. I can't simply increase the length of the column, because I also need to limit the number of English letters in that column.
Any ideas about this?

Your test case should work. I'm guessing something in your environment is causing the problem. Your client-side NLS_LANG environment variable is possibly wrong; one way to check this is to repeat the same test case with a VARCHAR2(100) column and then run
SELECT DUMP(a, 1016) FROM test;
and post the output here.
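For reference, here is a minimal sketch of what a correctly configured client against an AL32UTF8 database should show. The table is the one from your test case; the expected DUMP bytes are an assumption based on 大 being U+5927 (e5,a4,a7 in UTF-8):
create table test(a varchar2(5 char));
insert into test values('大大大');  -- should succeed: 3 characters <= 5 CHAR
insert into test values('abcde');   -- should succeed: 5 characters
insert into test values('abcdef');  -- should fail: 6 characters > 5 CHAR
SELECT DUMP(a, 1016) FROM test;
-- expected for the Chinese row: Typ=1 Len=9 CharacterSet=AL32UTF8: e5,a4,a7,e5,a4,a7,e5,a4,a7
-- if each Chinese character shows up as more bytes, or as 3f ('?'), the client NLS_LANG is likely wrong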

Similar Messages

  • Shall I use varchar2 or nvarchar2?

    My character set is AL32UTF8. Now I want to limit the length of a column to 5 characters for both English letters and Chinese characters, but it seems I can't make it work for both languages. (In SQL Server, nvarchar can be used for this purpose):
    create table test(a varchar2(5 char));
    insert into test values('abcdef') --Will prompt value is too large for the column
    insert into test values('大大大') --Will prompt value is too large for the column
    It looks like a Chinese character always occupies two characters. I tried nvarchar2 too, but the result was the same. I can't simply increase the length of the column, because I also need to limit the number of English letters in that column.
    Any ideas about this?

    I have identified this post as one that appears to have been posted between August 3rd and 7th, 2004, but had the wrong date applied, during problems with the forums. I do not have time to answer them all myself, so I am posting this reply just to bump it from the bottom of the list to the top of the list, where others may see it and respond to it.

  • VARCHAR2 to NVARCHAR2

    Hi All,
    I have a script which selects data from a varchar2 column and inserts this into a nvarchar2 column.
    ex: insert into dest_table(dcol1,dcol2....dcolN) select scol1 as "DCOL1", scol2 as "DCOL2"......, scolN as "DCOLN" from src_table
    dcol1,2...N = NVARCHAR2 columns
    scol1,2,...N = VARCHAR2 columns
    However, the data is being inserted as NULL.
    Do I have to use any special conversion function to ensure that data is inserted properly?
    (Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production)
    Please help
    Thanks

    What is your Oracle version?
    What are the database character set and the national character set of your database?
    And what is your NLS_LANG environment variable?
    If you use Oracle9i or Oracle8i, you might need the CONVERT function for the conversion:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/functions22a.htm
    Please also refer to "SQL and PL/SQL Programming in a Global Environment":
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96529/ch7.htm#556
    If you use Oracle10g (10.2) and your settings are correct, there is no need to use any function (the data is converted implicitly).
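    If an explicit conversion is needed, a minimal sketch along those lines, using TO_NCHAR to convert VARCHAR2 data to the national character set (the table and column names are the placeholders from the post):
    insert into dest_table (dcol1, dcol2)
    select to_nchar(scol1), to_nchar(scol2)
    from src_table;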

  • AL32UTF8 - VARCHAR2 ok for Unicode ? No need for NVARCHAR2 ?

    If a 10gR2 database character set is AL32UTF8, then do VARCHAR2 columns suffice for storing unicode data of any language ? I'm hearing conflicting advice - some people are telling me NVARCHAR2 is necessary, others are saying VARCHAR2 is fine with AL32UTF8.
    I'll be using AL32UTF8 anyway due to needs of XML data, but will also have some character data in "normal" SQL columns, so need to choose between VARCHAR2 and NVARCHAR2.
    Thanks,
    Andy Mackie.

    If you refer to a default installation, i.e. AL32UTF8 as the database character set and AL16UTF16 as the national character set, both support Unicode happily. In any case, you should take into account:
    1) Performance
    2) Length semantics
    3) Sizing
    All are well documented in the free online documentation.
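    On the length-semantics and sizing points, a small illustrative sketch (the table name is made up; it assumes AL32UTF8 as the database character set):
    create table sizing_demo (
      a varchar2(5 byte),   -- 5 bytes: at most one 3-byte UTF-8 character fits
      b varchar2(5 char)    -- 5 characters, regardless of how many bytes each one takes
    );
    insert into sizing_demo (b) values ('大大大大大');  -- ok: 5 characters (15 bytes)
    insert into sizing_demo (a) values ('大大');        -- fails: 6 bytes > 5 bytes
    -- Note: even with CHAR semantics a VARCHAR2 column is still capped at 4000 bytes
    -- (the pre-12c standard limit), so VARCHAR2(4000 CHAR) cannot always hold 4000 characters.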

  • Cannot insert Chinese character into nvarchar2 field

    I have tested in two environments:
    1. Database Character Set: ZHS16CGB231280
    National Character Set: AL16UTF8
    If the field type of datatable is varchar2 or nvarchar2, the provider can read and write Chinese correctly.
    2. Database Character Set:WE8MSWIN1252
    National Character Set: AL16UTF8
    The provider cannot read and write Chinese correctly even when the field type of the data table is nvarchar2.
    I find that for the second one, both MS .NET Managed Provider for Oracle and Oracle Managed Data Provider cannot read and write NCHAR or NVARCHAR2 fields. The data inserted into these fields become question marks.
    Even if I changed the NLS_LANG registry to SIMPLIFIED CHINESE_CHINA.ZHS16CGB231280, the result is the same.
    For the second situation, only after I change the Database Character Set to ZHS16CGB231280 with ALTER DATABASE CHARACTER SET statement, can I insert Chinese correctly.
    Does any know why I cannot insert Chinese characters into Unicode fields when the Database Character Set is WE8MSWIN1252? Thanks.
    Regards,
    Jason

    Hi Jason,
    First of all, I am not familiar with MS .NET Managed Provider for Oracle or Oracle Managed Data Provider.
    How did you insert these Simplified Chinese characters into the NVARCHAR2 column ? Are they hardcoded as string literals as part of the SQL INSERT statement ? If so, this could be because, all SQL statements are converted into the database character set before they are parsed by the SQL engine; hence these Chinese characters would be lost if your db character set was WE8MSWIN1252 but not when it was ZHS16CGB231280.
    There are two workarounds, both of which avoid hardcoding Chinese characters in the SQL text (a sketch follows below):
    1. Rewrite your string literal using the SQL function UNISTR().
    2. Use bind variables instead of text literals in the SQL.
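    A minimal sketch of both workarounds (the table is made up; \5927 is the Unicode code point of 大, used here as an example character):
    create table test_nvarchar (name nvarchar2(10));

    -- Workaround 1: UNISTR keeps raw Chinese characters out of the SQL text
    insert into test_nvarchar (name) values (unistr('\5927\5927\5927'));

    -- Workaround 2: bind an NVARCHAR2 value, so the literal never goes through the
    -- database character set conversion of the SQL text (in a real application the
    -- value would come from a client-side bind variable)
    declare
      v_name nvarchar2(10) := unistr('\5927\5927\5927');
    begin
      insert into test_nvarchar (name) values (v_name);
    end;
    /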
    Thanks
    Nat

  • EF6 with ODP 12cR3 : ORA-12704 (VARCHAR2 not supported?)

    Hi!
    We cannot port our working code from EF5 to EF 6.1.2, because the generated SQL contains Unicode (N'...') literals, which are not compatible with our VARCHAR2 columns.
    How can we fix this issue?
    Our setup:
    Oracle 11.2.0.3.0
    NLS_CHARACTERSET               WE8MSWIN1252
    NLS_NCHAR_CHARACTERSET         AL16UTF16
    -- TEST
    CREATE TABLE "TEST"
    ( "ID" NUMBER(*,0) NOT NULL ENABLE,
      "NAME" VARCHAR2(20),
      "VALUE" VARCHAR2(20)
    ) PCTFREE 10 INITRANS 1 NOCOMPRESS LOGGING
      STORAGE( INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    -- ID
    ALTER TABLE "TEST" ADD CONSTRAINT "ID" PRIMARY KEY ("ID") ENABLE;
    Objective: port working code from EF5 to EF 6.1.2.
    Problem: the generated SQL contains Unicode literals when a VARCHAR2 column type is used.
    using (var ctx = new Entities())
    {
        var queryable = ctx.TESTs.Where(x => x.NAME.Contains("test")).Select(
            t => new
            {
                TestVarcharVsUnicode = t.VALUE + "AnotherStringLiteralWillGenerateNvarcharSQL"
            });
        /* Generated SQL:
            SELECT
            1 AS "C1",
            ((CASE WHEN ("Extent1"."VALUE" IS NULL) THEN N'' ELSE "Extent1"."VALUE" END)||('AnotherStringLiteralWillGenerateNvarcharSQL')) AS "C2"
            FROM "TES_RA"."TEST" "Extent1"
            WHERE ("Extent1"."NAME" LIKE '%test%')
        */
        var result = queryable.ToList();   // ORA-12704: character set mismatch here, because the VALUE column type is VARCHAR2, not NVARCHAR2
    }
    UPDATE: By using an IDbCommandInterceptor class we can replace those Unicode strings before execution, but it's only a workaround until we find the correct way to fix it.
    Thank you in advance,
    Jean Francoeur

    Hi Verolamaz,
    No update on our side.
    We've also tried modelBuilder.Properties<string>().Configure(c => c.HasColumnType("varchar2"));
    But the generated SQL stays the same...
    Jean
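    For what it's worth, the ORA-12704 can be reproduced in plain SQL, which suggests the problem is the generated CASE expression mixing an NVARCHAR2 literal (N'') with a VARCHAR2 column rather than EF itself (a sketch against the TEST table above):
    SELECT CASE WHEN "VALUE" IS NULL THEN N'' ELSE "VALUE" END FROM "TEST";
    -- ORA-12704: character set mismatch (NVARCHAR2 literal vs. VARCHAR2 column)
    SELECT CASE WHEN "VALUE" IS NULL THEN '' ELSE "VALUE" END FROM "TEST";
    -- works: both branches are VARCHAR2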

  • Varchar2

    dear all
    what the difference betwwen varchar2 and nvarchar2
    Best Regards

    VARCHAR2 is intended for characters that are typically used in English and other similar languages. NVARCHAR2 is intended to support the national character set, which is used for some Asian and other languages that require many more characters. You can find more detailed information in the online Oracle documentation. In the upper right-hand corner of your screen, you should see an empty box to the right of the word "search". Enter NVARCHAR2 in that box and press your Enter key. Then, when the list of selections appears, scroll down until you see a topic heading about PL/SQL data types and click on it.
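    A small sketch that makes the difference concrete (the table name is made up):
    -- which character sets are in play:
    select parameter, value
    from   nls_database_parameters
    where  parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

    -- a column of each type:
    create table charset_demo (
      v  varchar2(20),     -- stored in the database character set
      nv nvarchar2(20)     -- stored in the national character set (AL16UTF16 or UTF8, both Unicode)
    );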

  • Oracle 10g - Defining the column name in Non English

    Hi Experts,
    I have an exisitng application which is developed on Windows using ASP Technology and uses Oracle 10g 10.1.0.2.0.
    The application is supported by a single database instance within which multiple tablespaces are created for different clients. The application is developed in such a way that some of the tables are created dynamically and the columns are named using the data entered through the UI.
    This application needs to be globalized now. The problem is, the column name entered through the UI can be in any language based on the client's settings and those values in turn will be used for naming the columns in the tables.
    1) Can I have the column names to be named using non english characters in Oracle 10g DB? If so,
    1.1) what should I do to configure the exisiting Oracle instance to support it?
    1.2) To what level is that configuration possible: is it per DB instance level, or can it be done at tablespace level? I would like to configure each tablespace to host tables with columns defined in different languages; say for example, tablespace 1 will have tables with Japanese column names and tablespace 2 will have tables with German column names?
    2) What should I do to make my entire DB to support unicode data i.e., to accept any language strings. Currently all strings are declared as VarChar2, should I change all VarChar2 to NVarChar2 (or) is there a way to retain the VarChar2 as is and make some database wide setting?
    Please note that I do not have an option of retaining the column in English as per the Business Requirement.
    Envionment:
    OS - Windows 2003 32 bit
    Oracle 10g 10.1.0.2.0
    UI forms in ASP
    TIA,
    Prem

    1. Yes, you can.
    SQL> create table ÜÝÞ( ßàá number(10));
    Table created.
    SQL> insert into ÜÝÞ values (10);
    1 row created.
    1.1, 1.2 and 2. You can choose a UTF-8 character set (e.g. AL32UTF8) as your database character set. It allows the use of non-English characters in VARCHAR2 columns in your whole database. It is not configurable per tablespace.
    SQL> create table ÜÝÞ( ßàá varchar2(100));
    Table created.
    SQL> insert into ÜÝÞ values ('âãäçìé');
    1 row created.
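    A quick way to check what the database currently supports before relying on non-English identifiers (the quoted-identifier example is hypothetical):
    -- the database character set governs identifiers and VARCHAR2 data:
    select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';

    -- with a Unicode character set such as AL32UTF8, quoted identifiers can carry
    -- non-English names, e.g. a hypothetical Japanese table and column:
    create table "商品" ("名前" varchar2(100 char));
    insert into "商品" ("名前") values ('テスト');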

  • Top 10 Obstacles to Sql Developer Becoming a World-class Tool

    I've been working with Sql Developer day and night for the last 6 months.
    On a positive note, the SqlDeveloper team has been the most responsive Oracle product team I've worked with in the 19 years I've been working with Oracle tools. They pay attention to their customers. It's noticed and much appreciated!
    I thought I would share the biggest problems that I face with the tool on a daily basis, the kind of problems that make me want to work with a different tool each and every day.
    My intent isn't to gripe, it's to focus attention on the biggest productivity drains I face using the tool. Others may have a different list, based on their needs. Without further ado, here is my top 10 problems list:
    1) Quality Control.
    I cannot count on critical portions of the tool working correctly. This includes an oracle database development tool that is incapable of extracting oracle ddl correctly and which is incapable of correctly displaying information about SQL Server data and database objects. It also includes destroying connection files and losing keyboard settings. When the product was installed, it was incapable of properly displaying code in a worksheet when I scrolled thru the code. The details are listed in other postings of mine.
    2) Quality Control.
    See #1.
    3) Quality Control.
    See #1.
    4) Very badly done threading.
    The tool locks up on a constant basis when it performs many tasks. Rather than letting me work on some other task, I have to wait for it to complete. My current work-around is to have two or three SQL Developer windows open. That sucks life out of my RAM supply, but at least I can get some work done. And, of course, it will often completely lock up and never return, which means I lose all unsaved worksheets. This forum is full of postings about these issues.
    5) Memory Leaks / Internal memory corruption.
    If I've had the tool open for a few days, or really worked it hard for a day, I will get bizarre compilation errors that make no sense. If I exit the tool, re-enter the tool, and compile the exact same code all will be well.
    6) Awkward and slow data entry interface for frequently performed tasks.
    Example: I create a new table and want to start defining columns for it. I remove my hands from the keyboard to press the + button, then I have to set focus on the column name field (it should do that for me!). Now that my hands are back on the keyboard, I have to backspace the dummy "column name" value in the column name field (it should ditch that dummy value for me). Only after all that can I actually enter a column name. When I want to add a new column, it's back to the mouse again, for the same drill. The down arrow key should take me to a new column record, as should pressing return at the end of the last field in the column row.
    7) Destroys code
    Changing a column datatype from varchar2 to nvarchar2 destroys the length of the column. Changing a field on a view destroys the INSTEAD OF triggers. This is bad. There is no warning that this is about to happen, which would at least give us a chance to avoid the problem. Better still, of course, would be not destroying that data.
    8) Inaccurate checking for record locking.
    When I try to edit records in a data grid for a table, I often get an error message telling me the data was modified in another session. It is simply not true. A hand-written update statement in a sql worksheet will work just fine. I've seen posts in the forum discussing this issue. An Oracle database development tool unable to reliably update oracle data tables is embarrassing. See Obstacle #1.
    9) Unicode support
    Sql Developer is heads above all the other tools I've tried out on this topic.
    However, the configuration of the tool to provide unicode support needs to be simplified.
    The tool needs to recognize the encoding of the files that are being opened up. On Windows Vista, Notepad seems to infallibly pick the right encoding for the file when I open it. Sql Developer should do the same. Files have specific encodings, the tool should have a default encoding (that I can override). Right now, the tool has a specific encoding and expects all files to match it. Extracting ddl to a worksheet does not respect the encoding choices of the tool, either. It only works with a limited set of tool encoding choices.
    10) Resources
    Does the sql developer team have the resources they need to compete with vendors like Microsoft? One really big reason for picking sqlserver over oracle is the ease of use of the Microsoft front-end tools. It's not until later that they may realize that Oracle has more capability, but that's still a lost sale.

    We are acutely aware of quality and with each release work at improving this. Providing a polished, professional and ultimately user friendly and useful tool is our constant goal.
    The broader our customer base grows, the more demands we have. This is a good and exciting position to be in, although it might mean that we need to slow down on our release cycles.
    Release 2.0 should address more of the threading and memory leaks displayed, as the team have rewritten some of the sections. As for resources, it's true we're a small team and we get on with the work that we do. It might be a little slower than some would like, but I'm not convinced that having a large team is necessarily always the answer.
    As ever, some of the points mentioned could be added to the Exchange. We'll be reviewing and updating the Exchange again in the New Year.
    I think there is another point to add to your list. A lot of what the tool is and will become is from a positive customer interaction we have had to date. While we continue to grow this, I think the product will grow and improve. The forum and all the positive interactions that happen here are key to taking the product forward.
    Regards
    Sue

  • How to retrieve the data type of a column of a table?

    Hi,
    I want to retrieve the data type of a column of a table. At the moment I am querying "OCI_ATTR_DATA_TYPE" attribute but it is returning SQLT_CHR for both varchar2 and nvarchar2 data type columns. I need to distinguish between these two data types columns separately. Is there any API through which I could get the exact data type of a column i.e. "nvarchar2"?
    Thanks in advance.
    Hashim

    Hi,
    This is the Oracle C++ Call Interface (OCCI) forum - I'm not sure if you are using OCCI or OCI (Oracle Call Interface - the C interface) since you reference "OCI_ATTR_DATA_TYPE" which is more of an OCI focus than OCCI.
    In any case, you might take a look at "OCI_ATTR_CHARSET_FORM" which takes the following values:
    #define SQLCS_IMPLICIT 1     /* for CHAR, VARCHAR2, CLOB w/o a specified set */
    #define SQLCS_NCHAR    2     /* for NCHAR, NCHAR VARYING, NCLOB */
    So, if you have a datatype of SQLT_CHR and OCI_ATTR_CHARSET_FORM is SQLCS_IMPLICIT, then you have a VARCHAR2; if it is SQLCS_NCHAR, then you have an NVARCHAR2.
    If you are using OCCI and not OCI then take a look at MetaData::ATTR_DATA_TYPE and MetaData::ATTR_CHARSET_FORM which expose OCI_ATTR_DATA_TYPE and OCI_ATTR_CHARSET_FORM respectively.
    Perhaps that will get you what you want.
    Regards,
    Mark

  • Unicode characters in file name

    Hi,
    I am trying to open a file (using UTL_FILE) whose name contains Polish characters (e.g. 'test-ś.txt').
    In return, I get error message:
    ORA-29283: invalid file operation
    ORA-06512: at "SYS.UTL_FILE", line 633
    ORA-29283: invalid file operation
    The error is not due to missing rights on the file/directory, because when I replace the Polish character by a Latin one, the file is opened successfully.
    I also tried to rename a file (using UTL_FILE.FRENAME) from Latin to Polish characters (e.g. 'test-s.txt' -> 'test-ś.txt').
    The file is renamed, but the Polish characters are lost (the final result is something like 'test-Å›.txt').
    What's wrong with my environment or code?
    Thanks in advance for your help,
    Arnaud
    Here's my environment description, PL/SQL code and results.
    Environment:
    OS Windows in Polish for client box
    * code page ACP=1250
    * NLS_LANG=POLISH_POLAND.EE8MSWIN1250
    OS Windows in US/English for database server
    * code page ACP=1252
    * NLS_LANG=AMERICAN_AMERICA.AL32UTF8
    Oracle 10.2.0.5
    * NLS_CHARACTERSET=AL32UTF8
    * NLS_NCHAR_CHARACTERSET=AL16UTF16
    Tests are executed from SQL Developer on client box.
    The file I'm trying to open is located on database server.
    So, Oracle Directory path used in FOPEN procedure is something like '\\server\directory'.
    PL/SQL code:
    SET SERVEROUTPUT ON;
    declare
      Message     varchar2(1000);
      Filename    varchar2(1000); -- nvarchar2(1000);
      FileHandler UTL_FILE.FILE_TYPE;
      OraDir      varchar2(30) := 'SGINSURANCE_DIR_SOURCE';
    begin
      dbms_output.enable(10000);
      --Filename := 'test-s.txt';
      Filename := 'test-ś.txt';
      Message := 'Opening file ['||Filename||']';
      dbms_output.put_line(Message);
      --FileHandler := UTL_FILE.FOPEN_NCHAR(OraDir, Filename, 'r');
      FileHandler := UTL_FILE.FOPEN(OraDir, Filename, 'r');
      Message := 'Closing file';
      dbms_output.put_line(Message);
      UTL_FILE.FCLOSE(FileHandler);
    exception
      when others then
        Message := 'Error: '||SQLERRM;
        dbms_output.put_line(Message);
        if UTL_FILE.IS_OPEN(FileHandler) then
          Message := 'Closing file ['||Filename||']';
          dbms_output.put_line(Message);
          UTL_FILE.FCLOSE(FileHandler);
        end if;
    end;
    Results:
    Test with Polish characters -> error ORA-29283: invalid file operation
    anonymous block completed
    Opening file [test-ś.txt]
    Error: ORA-29283: invalid file operation
    ORA-06512: at "SYS.UTL_FILE", line 536
    ORA-29283: invalid file operation
    Test without Polish characters -> no error
    anonymous block completed
    Opening file [test-s.txt]
    Closing file
    -----------------------------------------------------------

    Hello,
    I tested this issue on Oracle-10-XE on Windows-XP with different Language settings.
    It seems to me that UTL_FILE doesn't use wide-character Windows API functions like _wfopen,
    but simply the old fopen based on 8-bit character strings.
    It looks like UTL_FILE.FOPEN does not do any character set conversion
    on the filename, but passes it "as is" directly to the operating system.
    For example, for the string "teść" with Polish characters, the following character codes are passed:
    SELECT dump( 'teść', 16 ) from dual;
    DUMP('TEŚĆ',16)
    Typ=96 Len=6: 74,65,c5,9b,c4,87
    ś is: c5, 9b
    ć is: c4, 87
    In Windows, API functions based on 8-bit char* strings interpret them as being in the system code page
    - look at this thread -> [http://stackoverflow.com/questions/480849/windows-codepage-interactions-with-standard-c-c-filenames]
    So if your code page is Windows ANSI 1252, these characters are treated as:
    ś -> c5 is "Å", 9b is "›" --> Å›
    ć -> c4 is "Ä", 87 is "‡" --> Ä‡
    so instead of 'teść', Windows sees 'teÅ›Ä‡' ;)
    Here is a table of the codes of CP-1252 -> [http://en.wikipedia.org/wiki/Windows-1252]
    CP 1252 doesn't support Polish characters; the only Windows ANSI code page that supports them is CP 1250.
    I've changed the system code page to 1250 on the server side, and this worked fine:
    declare
      fh UTL_FILE.FILE_TYPE;  
      strbuffer NVARCHAR2(1000);
    begin
      fh := UTL_FILE.FOPEN_NCHAR( 'DIR_USER_FILES', CONVERT('teść.txt', 'EE8MSWIN1250' ), 'w' );
      utl_file.put_line_nchar( fh, 'chrząszcz brzmi w trzcinie');
      utl_file.put_line_nchar( fh, 'teść żócał mięśńęm');
      utl_file.fclose( fh );
      fh := UTL_FILE.FOPEN_NCHAR( 'DIR_USER_FILES', CONVERT('teść.txt', 'EE8MSWIN1250' ), 'r' );
      LOOP
        BEGIN
          utl_file.get_line_nchar( fh, strbuffer );
          dbms_output.put_line( strbuffer );
        EXCEPTION
          WHEN OTHERS THEN
            EXIT;
        END;
      END LOOP;
      utl_file.fclose( fh );
    END;
    /
    I've left the user's locale untouched as "English (United States)"; only *the system locale* has been changed to "Polish"
    - there are two different locales, look at this thread for details [http://mihai-nita.net/2005/06/11/setting-the-user-and-system-locales/]
    If you change the server's system locale, this will affect all other non-Unicode programs running on that server,
    so something else may stop working properly.

  • Error when using DBMS_SQL.parse

    Has anyone ever run into this error "ORA-00932: inconsistent datatypes: expected NUMBER got DATE" when using DBMS_SQL.parse? I'm trying to pass in a SQL statement that includes date columns, but it keeps failing during the parse step. If I put a "to_char" around the dates it works fine.
    Any ideas?
    declare
        l_cursor   PLS_INTEGER;
        l_rows     PLS_INTEGER;
        l_col_cnt  PLS_INTEGER;
        l_desc_tab DBMS_SQL.desc_tab;
        l_buffer   CLOB;
        v_query    clob;
        l_file UTL_FILE.file_type;
        g_sep  VARCHAR2(5) := ',';
      BEGIN
        l_cursor := DBMS_SQL.open_cursor;
        v_query := 'SELECT CREATED FROM DBA_USERS';
        DBMS_SQL.parse(l_cursor, v_query, DBMS_SQL.native);
        DBMS_SQL.describe_columns(l_cursor, l_col_cnt, l_desc_tab);
        FOR i IN 1 .. l_col_cnt
        LOOP
          DBMS_SQL.define_column(l_cursor, i, l_buffer);
        END LOOP;
        l_rows := DBMS_SQL.execute(l_cursor);   
        -- Output the column names.
        FOR i IN 1 .. l_col_cnt
        LOOP
          IF i > 1 THEN
            UTL_FILE.put(l_file, g_sep);
          END IF;
          UTL_FILE.put(l_file, l_desc_tab(i).col_name);
        END LOOP;
        UTL_FILE.new_line(l_file);
        -- Output the data.
        LOOP
          EXIT WHEN DBMS_SQL.fetch_rows(l_cursor) = 0;
          FOR i IN 1 .. l_col_cnt
          LOOP
            IF i > 1 THEN
              UTL_FILE.put(l_file, g_sep);
            END IF;
            DBMS_SQL.COLUMN_VALUE(l_cursor, i, l_buffer);
            -- Check for column data type. If "character" data type enclose in quotes
            -- 1 = VARCHAR2 and NVARCHAR2, 96 = CHAR and NCHAR, 112 = CLOB
            IF l_desc_tab(i).col_type IN (1, 96, 112) THEN
              l_buffer := '"' || l_buffer || '"';
            END IF;
            UTL_FILE.put(l_file, l_buffer);
          END LOOP;
          UTL_FILE.new_line(l_file);
        END LOOP;
        UTL_FILE.fclose(l_file);
      EXCEPTION
        WHEN OTHERS THEN
          IF UTL_FILE.is_open(l_file) THEN
            UTL_FILE.fclose(l_file);
          END IF;
          IF DBMS_SQL.is_open(l_cursor) THEN
            DBMS_SQL.close_cursor(l_cursor);
          END IF;
          RAISE;
      END;
    Edited by: jpvybes on Jun 6, 2013 3:47 PM

    >
    Has anyone ever run into this error "ORA-00932: inconsistent datatypes: expected NUMBER got DATE" when using DBMS_SQL.parse? I'm trying to pass in a SQL statement that includes date columns, but it keeps failing during the parse step. If I put a "to_char" around the dates it works fine.
    >
    No - it is NOT failing on the parse step. If you comment out various sections of the code you will find that your loop is causing the problem.
    Comment out this loop and there is NO exception.
        l_buffer   CLOB;
        FOR i IN 1 .. l_col_cnt
        LOOP
          DBMS_SQL.define_column(l_cursor, i, l_buffer);
        END LOOP;
    Do you now see the problem?
    You are using 'define_column' and passing 'l_buffer', which is a CLOB. But this is your query:
        v_query := 'SELECT CREATED FROM DBA_USERS';
    And that 'CREATED' column in the query is a DATE.
    Why are you defining a column as a CLOB when the cursor column is a DATE?
    See Example 3 in the DBMS_SQL chapter (Chapter 100) of the Packages and Types doc. It shows an example that includes a DATE column.
    http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_sql.htm#i996963
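    A minimal sketch of that approach: define each column according to its described type instead of defining everything as a CLOB (the column list, buffer size, and the DATE type code 12 below are illustrative):
    declare
      l_cursor   pls_integer := dbms_sql.open_cursor;
      l_col_cnt  pls_integer;
      l_desc_tab dbms_sql.desc_tab;
      l_varchar  varchar2(4000);
      l_date     date;
      l_rows     pls_integer;
    begin
      dbms_sql.parse(l_cursor, 'SELECT username, created FROM dba_users', dbms_sql.native);
      dbms_sql.describe_columns(l_cursor, l_col_cnt, l_desc_tab);
      for i in 1 .. l_col_cnt loop
        if l_desc_tab(i).col_type = 12 then              -- 12 = DATE
          dbms_sql.define_column(l_cursor, i, l_date);
        else                                             -- treat everything else as text here
          dbms_sql.define_column(l_cursor, i, l_varchar, 4000);
        end if;
      end loop;
      l_rows := dbms_sql.execute(l_cursor);
      while dbms_sql.fetch_rows(l_cursor) > 0 loop
        for i in 1 .. l_col_cnt loop
          if l_desc_tab(i).col_type = 12 then
            dbms_sql.column_value(l_cursor, i, l_date);
            dbms_output.put_line(to_char(l_date, 'YYYY-MM-DD'));
          else
            dbms_sql.column_value(l_cursor, i, l_varchar);
            dbms_output.put_line(l_varchar);
          end if;
        end loop;
      end loop;
      dbms_sql.close_cursor(l_cursor);
    end;
    /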

  • Like operator functionality

    We have a simple select query which is using the 'Like' operator on a char(4) column.
    In an Oracle Windows environment, when we have a query such as:
    select col1, col2, col3
    from table1
    where col1 like 'AB'
    it returns every row that is equal to 'AB', and doesn't seem to include the trailing 2 spaces that would be stored because the column is char(4).
    In an Oracle Unix environment, when we run the same query:
    select col1, col2, col3
    from table1
    where col1 like 'AB'
    it returns nothing...
    It appears as if the version running in a Windows environment is truncating the trailing 2 spaces when using the LIKE expression, but in a Unix environment it is not. Does anybody have any idea or clue what could be occurring, or whether there is some database setting that could cause this?

    Quote from Oracle Doc:
    Character Values
    Character values are compared using one of these comparison rules:
    Blank-padded comparison semantics
    Nonpadded comparison semantics
    The following sections explain these comparison semantics.
    Blank-Padded Comparison Semantics If the two values have different lengths, then Oracle first adds blanks to the end of the shorter one so their lengths are equal. Oracle then compares the values character by character up to the first character that differs. The value with the greater character in the first differing position is considered greater. If two values have no differing characters, then they are considered equal. This rule means that two values are equal if they differ only in the number of trailing blanks. Oracle uses blank-padded comparison semantics only when both values in the comparison are either expressions of datatype CHAR, NCHAR, text literals, or values returned by the USER function.
    Nonpadded Comparison Semantics Oracle compares two values character by character up to the first character that differs. The value with the greater character in that position is considered greater. If two values of different length are identical up to the end of the shorter one, then the longer value is considered greater. If two values of equal length have no differing characters, then the values are considered equal. Oracle uses nonpadded comparison semantics whenever one or both values in the comparison have the datatype VARCHAR2 or NVARCHAR2.
    "

  • Issue with using N'...' values in a where clause against a function index

    We have a table that is defined with non-Unicode columns, with an UPPER(...) function-based index on the index column (to allow searching in any alphabetic case)
    e.g.
    create table my_table (
      index_column varchar2(20),
      desc_column varchar2(40)
    );
    create index my_table_idx on my_table(UPPER(index_column));
    There are approx. 10 million rows in this table.
    The issue we have is that when we do the following select
    select index_column, desc_column from my_table
    where upper(index_column) = 'SOME VALUE';
    This statement runs in approx. 0.03 seconds, which is great.
    But we also have some statements that run as
    select index_column, desc_column from my_table
    where upper(index_column) = N'SOME VALUE';
    Notice the N'...' (Unicode) string value used. This ends up doing a full table scan (> 5 seconds).
    So... the question is how can i make this select statement passing in a Unicode string value hit this function based UPPER index? Is there anyway?
    I have tried these extra indexes - to no avail
    create index my_table_idx on my_table(UPPER(CAST(index_column as nvarchar2(20))));
    and
    create index my_table_idx on my_table(UPPER(COMPOSE(index_column)));
    I assumed Oracle would have done an implicit conversion back to a non-Unicode string value and then passed that value into the UPPER(...) function-based index, but it appears as though Oracle isn't recognizing that the column is a different type (varchar2 vs nvarchar2).
    Any help greatly appreciated.

    Horrible amount of irrelevant tags, and the only relevant thing, the four-digit version, is of course not mentioned.
    Also the characterset of the database is relevant.
    As far as I know, in 11g and higher one no longer needs the N'<string>' construct.
    As to implicit conversion:
    assume <number_column> = '9'
    Oracle always converts this into
    to_char(<number_column>) = '9'
    The same applies to your case.
    Try leaving out the N, and check whether it works and whether your function-based index is used.
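    One way to test that suggestion is to compare the execution plans with and without the N'' literal (table, column and index names are the ones from the post):
    explain plan for
      select index_column, desc_column
      from   my_table
      where  upper(index_column) = 'SOME VALUE';
    select * from table(dbms_xplan.display);     -- expect an index range scan on MY_TABLE_IDX

    explain plan for
      select index_column, desc_column
      from   my_table
      where  upper(index_column) = N'SOME VALUE';
    select * from table(dbms_xplan.display);     -- a full table scan indicates the N'' literal defeats the index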
    Sybrand Bakker
    Senior Oracle DBA

  • A record selection problem with a string field when UNICODE database

    We used report files made by Crystal Reports 9 which access string fields
    (char / varchar2 type) of NON-UNICODE database tables.
    Now, our new product needs to deal with UNICODE database, therefore,
    we created another database schema, changing the table definitions as below.
    (The table name and column name are not changed.)
        char type -> nchar type
        varchar2 type -> nvarchar2 type
    When we tried to access the above table, and output a report,
    the SQL statement created from the report seemed to be wrong.
    We confirmed the SQL statement using Oracle trace function.
        SELECT (abbr.) WHERE "XXXVIEW"."YYY"='123'.
    We think the above '123' should be N'123', because Unicode strings
    are stored in nchar / nvarchar2 type fields.
    Question:
    How can we obtain the correct SQL statement in this case?
    Is there any option setting?
    FYI:
    The environment are as follows.
        Oracle version: 11.2.0
        ODBC version: 11.2.0.1
        National character set: AL16UTF16

    With further investigation, we found patterns that worked well.
    Patterns that worked:
        Oracle version: 11.2.0
        ODBC version: 11.2.0.1
        National character set: AL16UTF16
        Report file made by Crystal Reports 2011
        Crystal Reports XI
    Patterns that did not work:
        Oracle version: 11.2.0 (same above)
        ODBC version: 11.2.0.1 (same above)
        National character set: AL16UTF16 (same above)
        Report file made by Crystal Reports 2011 (same above)
        Crystal Reports 2008 / 2011
    We think this phenomenon is degraded behavior of Crystal Reports 2008 / 2011.
    But we have to use the patterns that did not work.
    Is anything wrong on our side? Please help.
    -Nobuhiko

Maybe you are looking for

  • Smtp mail for Grid connection timed out

    Moving from simple dbconsole to full EM Grid Control. Mail was configured and working previously for dbconsole (and continues to). However, when in EM Grid Control I am getting the error: Could not connect to SMTP host: smtp.xxxxxxx.com, port: 25; ne

  • Best Way to Embed Flash in DW??

    What is the best way to embed a flash movie in Dreamweaver w/o messing up cells/rows of a previous creation? I have a site together & want to insert a movie on a page & keep the nav bar the same from all the other pages , when I insert the Flash movi

  • EDI - Porcess Code for Po create

    Hi I have the need to create PO (stock transport orders) in a client from a idoc that pas been posted. Whats the process code i set in we20 inbound to create a purchase order. Thanks Barry

  • XI Adapter parameters

    in XI Adapter paramters, i am using File to proxy, where in the XI Adapter parameters, i am giving the login parametrs of my SAP R/3 user nam and password should i give PI username and password please help in this thanking y ou ' Sridhar

  • Scanner not working right

    I have had my printer since 2009 and it has worke great until today.  I have used the scanner many, many times and it has scanned my photos perfectly.  Today however, when I try to scan the photos are way too bright.  They are so bright you cannot se