Changing NLS_LENGTH_SEMANTICS

hi guys,
i have a Unicode database (10g) with
nls_length_semantics = 'BYTE'
I tried: alter system set nls_length_semantics = 'CHAR';
Then a select * from nls_instance_parameters shows that the parameter value is now CHAR.
However, when I create a new table, I can see that the column sizes are still in bytes.
q1) how do I change the whole database from BYTE to CHAR?
q2) will the change affect existing table columns defined in bytes, and their data?
q3) if I export a schema from a database with tables defined in BYTE, and then import it into a database with nls_length_semantics = CHAR, will the imported tables be redefined as CHAR, or will they still be in bytes?
q4) is there any SQL statement with which I can see whether a table column is in BYTE or CHAR? doing a DESCRIBE doesn't show it.
q5) I created a new session and ran
select * from nls_session_parameters
nls_length_semantics is still BYTE there,
but in nls_instance_parameters it is CHAR.
why is that? do I need to bounce the db?
thanks guys!

However when i create a new table, i can see that the column size are still in bytes.
How did you determine this?
A few points: if NLS_LENGTH_SEMANTICS=CHAR, then DESCRIBE will not flag CHAR/VARCHAR/VARCHAR2 columns that use character semantics with CHAR; change the session to BYTE and those columns will then be displayed with CHAR. The same applies in reverse when NLS_LENGTH_SEMANTICS=BYTE: only columns whose semantics differ from the session default are shown with an explicit qualifier. This behaviour also changes from version to version.
q1) how to change the whole database from byte to char
A1) Oracle does not support CHAR semantics for its own components, so the SYS schema will always remain BYTE; there is no option to change this at the whole-database level. However, you can set NLS_LENGTH_SEMANTICS to CHAR with the following command; the SYS schema and the data dictionary itself will continue to operate under BYTE semantics.
SQL> alter system set NLS_LENGTH_SEMANTICS=CHAR scope=both;
q2) will the change affect existing table columns in bytes ? and its data
A2) No. You may choose to convert them manually using ALTER TABLE statements, e.g.
SQL> alter table emp modify ename varchar2(10 CHAR);
*q3) if i export a schema from a database with tables define in BYTE, then i do a import into a database with nls_length_semantics = CHAR
will the tables imported be redefined as CHAR
or will it still be in bytes ?
A3) No; by default they will stay in BYTE unless you choose to modify them manually. Export/import preserves the length semantics recorded for each column.
q4) is there any sql statement whereby i can see if a table column is in byte or char.
doing a describe doesnt show
A4) Change NLS_LENGTH_SEMANTICS to BYTE at the session level and then describe the table; the columns that display CHAR use character semantics:
SQL> alter session set nls_length_semantics=byte;
Session altered.
SQL> desc emp
 Name           Null?      Type
 EMPNO          NOT NULL   NUMBER(4)
 ENAME                     VARCHAR2(10 CHAR)
 JOB                       VARCHAR2(9)
 MGR                       NUMBER(4)
 HIREDATE                  DATE
 SAL                       NUMBER(7,2)
 COMM                      NUMBER(7,2)
 DEPTNO                    NUMBER(2)
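Alternatively, a small sketch that avoids changing session settings: the CHAR_USED column of USER_TAB_COLUMNS reports the semantics of each column directly.

```sql
-- CHAR_USED is 'B' for byte semantics, 'C' for character semantics,
-- and NULL for non-character columns.
SELECT column_name, data_type, data_length, char_length, char_used
FROM   user_tab_columns
WHERE  table_name = 'EMP';
```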
q5) i try to recreate a new session
and do a
select * from nls_session_parameters
the nls_length_semantics are still in BYTES
but from nls_instance_parameters, it is in CHAR
A5) Are you connected as SYSDBA? The database's own objects always remain BYTE, so for the SYS schema NLS_SESSION_PARAMETERS will always return BYTE.
To verify the value of the parameter, just type the following command:
show parameter nls_length_semantics
Cheers,
Manoj

Similar Messages

  • Issues caused by changing length semantics

    Hi All,
Our database formerly used BYTE semantics for table columns and stored procedures. We recently changed the length semantics to CHAR, but this caused some issues: an "ORA-06502: PL/SQL: numeric or value error: character string buffer too small" appears when we access the database's stored procedures via Java. What could be the possible cause of this? Could you give me some paths to take in troubleshooting this issue?
    Thanks to all!

1002671 wrote:
Thanks for answering Sir!
Are you kidding!!! No 'Sir' please... Come on, I don't know anything yet.
Correct me if I'm wrong, but doesn't CHAR already handle multi-byte characters passed to or used in stored procedures? I'm really not that knowledgeable when it comes to the effects of changing the length semantics. We already changed the columns from VARCHAR2(BYTE) to VARCHAR2(CHAR). The problem lies within the stored procedures.
I'm not clear on your doubt, but please check this -
Link: http://docs.oracle.com/cd/E11882_01/appdev.112/e10472/datatypes.htm (Section 'Declaring Variables for Multibyte Characters')
    >
    When declaring a CHAR or VARCHAR2 variable, to ensure that it can always hold n characters in any multibyte character set, declare its length in characters—that is, CHAR(n CHAR) or VARCHAR2(n CHAR), where n does not exceed FLOOR(32767/4) = 8191.
    >
What I feel is you are getting confused between the SQL data type 'VARCHAR2' (used when specifying a column type) and the PL/SQL data type 'VARCHAR2' (used when declaring variables).
Then check this thread: difference between BYTE & CHAR
Read and thoroughly research each comment given by the Experts there.
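As a minimal sketch of that failure mode (assuming an AL32UTF8 database; the variable names are made up for illustration): a PL/SQL buffer declared without explicit CHAR semantics defaults to a byte length and can overflow when a multibyte string of the same character length is assigned.

```sql
DECLARE
  v_byte VARCHAR2(10);       -- under BYTE default semantics: 10 bytes
  v_char VARCHAR2(10 CHAR);  -- always holds 10 characters, regardless of byte size
BEGIN
  v_char := 'äöüäöüäöüä';    -- 10 characters = 20 bytes in AL32UTF8: fits
  v_byte := 'äöüäöüäöüä';    -- raises ORA-06502: buffer too small
END;
/
```

This is why altering only the table columns is not enough: variables and parameters declared inside stored procedures keep their own declared semantics.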

  • Change NLS_LENGTH_SEMANTICS to CHAR

    I need to change the length semantics of all the tables in an existing application schema from BYTE to CHAR.
    I have explored two methods
    1- Datapump export/import
    Due to large tables with numerous CLOB columns, the performance of the export/import is hardly acceptable for our production downtime window.
    2- ALTER TABLE OWNER.TABLE modify (C160 VARCHAR2(255 CHAR))
    This solution is to alter all table for all varchar2 or char columns.
    Questions :
a) Does the ALTER TABLE solution modify only the data dictionary, or does it also modify/rearrange the blocks of existing rows?
a.1) What happens to a row with a varchar(3) string previously stored in three bytes when you change the semantics to CHAR and then update that string with two-byte characters? Where does the "enlarged" field go in the row piece/block? Does it go at the end of the row piece?
b) Are there performance or management benefits to using one method over the other?
    Thanks for any information you can provide.
    Serge Vedrine

    ## I have explored two methods
    ## 1- Datapump export/import
    This will not work. Export/import preserves the length semantics
    ## 2- ALTER TABLE OWNER.TABLE modify (C160 VARCHAR2(255 CHAR))
    This is the simplest approach. You can write a simple select on the view ALL_TAB_COLUMNS to generate the necessary ALTER TABLE statements.
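A sketch of such a generator (the owner name APPOWNER is a placeholder; review the generated statements before running them):

```sql
-- CHAR_USED = 'B' selects only the columns still using byte semantics.
SELECT 'ALTER TABLE ' || owner || '.' || table_name
       || ' MODIFY (' || column_name || ' '
       || data_type || '(' || char_length || ' CHAR))' AS ddl
FROM   all_tab_columns
WHERE  owner = 'APPOWNER'
  AND  data_type IN ('VARCHAR2', 'CHAR')
  AND  char_used = 'B'
ORDER  BY table_name, column_name;
```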
    ## Questions :
    ## a) Does the alter table solution modify only the data dictionary or does it also modify/rearrange the blocks for existing rows.
    Only data dictionary.
    ## a.1) What happens to a row with a varchar(3) string previously stored in three bytes when you change the semantics to CHAR
    ## and you update that string with two-bytes charactyers ? Where does the "enlarged" field in the row piece/block ? Does it go at the end of the row piece ?
This is the standard behavior of UPDATE when the new value is longer than the old one. If there is room in the block, the new, longer value is written in place, moving the rest of the block data to higher offsets. If there is no room, an extra block is grabbed and inserted into the chain of blocks holding the row. This is called "row chaining".
    ## b) Are there performance or management benefits to use one methods over the other one ?
    As I said, export/import will not work anyway.
    -- Sergiusz

  • Character Semantics multilingual

    Thanks in advance
Now we are using byte semantics to support multilingual data. If I change byte semantics to character semantics, will it then support multilingual data? I want to store Japanese, Thai, English, German and Chinese characters. Please reply ASAP.
    With Best Regards,
    Prabakaran K

Character vs. byte length semantics has nothing to do with which characters you can store.
The characters you can store depend on the database character set (for CHAR and VARCHAR2 columns) and the national character set (for NCHAR and NVARCHAR2 columns). Assuming you intend to store multilingual characters in CHAR and VARCHAR2 columns, your database character set would need to be AL32UTF8 (or UTF8 in older versions).
Once your database character set supports multilingual data, the choice of character or byte length semantics is a question of programmer convenience. Character length semantics tends to be easier to deal with for PL/SQL programs.
    Justin
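To make the distinction concrete, a small sketch (assumes a database character set of AL32UTF8; the table name is made up):

```sql
CREATE TABLE semantics_demo (
  b VARCHAR2(5 BYTE),  -- at most 5 bytes
  c VARCHAR2(5 CHAR)   -- at most 5 characters (up to 20 bytes in AL32UTF8)
);

-- 5 Japanese characters occupy 15 bytes in AL32UTF8:
INSERT INTO semantics_demo (c) VALUES ('日本語です');  -- succeeds
INSERT INTO semantics_demo (b) VALUES ('日本語です');  -- ORA-12899: value too large
```

Either way, the characters themselves are storable; only the declared capacity of the column changes meaning.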

  • Unicode Migration using National Characterset data types - Best Practice ?

I know that Oracle discourages the use of the national character set and the national character set data types (NCHAR, NVARCHAR), but that is the route my company has decided to take, and I would like to know the best practice regarding this, specifically in relation to stored procedures.
    The database schema is being converted by changing all CHAR, VARCHAR and CLOB data types to NCHAR, NVARCHAR and NCLOB data types respectively and I would appreciate any suggestions regarding the changes that need to be made to stored procedures and if there are any hard and fast rules that need to be followed.
    Specific questions that I have are :
    1. Do CHAR and VARCHAR parameters need to be changed to NCHAR and NVARCHAR types ?
    2. Do CHAR and VARCHAR variables need to be changed to NCHAR and NVARCHAR types ?
    3. Do string literals need to be prefixed with 'N' in all cases ? e.g.
    in variable assignments - v_module_name := N'ABCD'
    in variable comparisons - IF v_sp_access_mode = N'DL'
    in calls to other procedures passing string parameters - proc_xyz(v_module_name, N'String Parameter')
    in database column comparisons - WHERE COLUMN_XYZ = N'ABCD'
    If anybody has been through a similar exercise, please share your experience and point out any additional changes that may be required in other areas.
    Database details are as follows and the application is written in COBOL and this is also being changed to be Unicode compliant:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    NLS_CHARACTERSET = WE8MSWIN1252
    NLS_NCHAR_CHARACTERSET = AL16UTF16

##1. While doing a test conversion I discovered that VARCHAR parameters need to be changed to NVARCHAR2 and not VARCHAR2, same for VARCHAR variables.
VARCHAR columns/parameters/variables should not be used, as Oracle reserves the right to change their semantics in the future. You should use VARCHAR2/NVARCHAR2.
    ##3. Not sure I understand, are you saying that unicode columns(NVARCHAR2, NCHAR) in the database will only be able to store character strings made up from WE8MSWIN1252 characters ?
    No, I meant literals. You cannot include non-WE8MSWIN1252 characters into a literal. Actually, you can include them under certain conditions but they will be transformed to an escaped form. See also the UNISTR function.
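A sketch of the escaped-literal approach (the table and column names below are made up for illustration):

```sql
-- UNISTR builds a national-character-set string from \xxxx Unicode
-- escapes, so characters outside WE8MSWIN1252 survive compilation:
SELECT UNISTR('\4E2D\6587') FROM dual;   -- U+4E2D U+6587, i.e. "中文"

-- N'...' marks a literal as national character set:
-- UPDATE some_table SET some_col = N'ABCD' WHERE id = 1;
```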
    ## Reason given for going down this route is that our application works with SQL Server and Oracle and this was the best option
    ## to keep the code/schemas consistent between the two databases
    First, you have to keep two sets of scripts anyway because syntax of DDL is different between SQL Server and Oracle. There is therefore little benefit of just keeping the data type names the same while so many things need to be different. If I designed your system, I would use a DB-agnostic object repository and a script generator to produce either SQL Server or Oracle scripts with the appropriate data types or at least I would use some placeholder syntax to replace placeholders with appropriate data types per target system in the application installer.
    ## I don't know if it is possible to create a database in SQL Server with a Unicode characterset/collation like you can in Oracle, that would have been the better option.
    I am not an SQL Server expert but I think VARCHAR data types are restricted to Windows ANSI code pages and those do not include Unicode.
    -- Sergiusz

  • Bug in WITH clause (subquery factoring clause) in Oracle 11?

I'm using WITH to perform a set comparison in order to qualify a given query as correct or incorrect with regard to an existing solution. However, the query does not give the expected result (an empty set) when comparing the solution to itself in Oracle 11, whereas it does in Oracle 10. A minimal example is posted below as a script. There are also some observations about changes to the tables or the query that make Oracle 11 return correct results, but in my opinion these changes must not change the semantics of the queries.
Is this a bug, or am I getting something wrong? The Oracle versions are mentioned in the script.
    -- Bug in WITH clause (subquery factoring clause)
    -- in Oracle Database 11g Enterprise Edition 11.2.0.1.0?
    DROP TABLE B PURGE;
    DROP TABLE K PURGE;
    DROP TABLE S PURGE;
CREATE TABLE S (
     m     number NOT NULL,
     x     varchar2(30) NOT NULL
);
CREATE TABLE K (
     k char(2) NOT NULL,
     x varchar2(50) NOT NULL
);
CREATE TABLE B (
     m     number NOT NULL,
     k char(2) NOT NULL,
     n     number
);
    INSERT INTO S VALUES(1, 'h');
    INSERT INTO S VALUES(2, 'l');
    INSERT INTO S VALUES(3, 'm');
    INSERT INTO K VALUES('k1', 'd');
    INSERT INTO K VALUES('k2', 'i');
    INSERT INTO K VALUES('k3', 'm');
    INSERT INTO K VALUES('k4', 't');
    INSERT INTO K VALUES('k5', 't');
    INSERT INTO K VALUES('k6', 's');
    INSERT INTO B VALUES(1, 'k1', 40);
    INSERT INTO B VALUES(1, 'k2', 30);
    INSERT INTO B VALUES(1, 'k4', 50);
    INSERT INTO B VALUES(3, 'k1', 10);
    INSERT INTO B VALUES(3, 'k2', 20);
    INSERT INTO B VALUES(3, 'k1', 30);
    INSERT INTO B VALUES(3, 'k6', 90);
    COMMIT;
    ALTER TABLE S ADD CONSTRAINT S_pk PRIMARY KEY (m);
    ALTER TABLE K ADD CONSTRAINT K_pk PRIMARY KEY (k);
    ALTER TABLE B ADD CONSTRAINT B_S_fk
    FOREIGN KEY (m) REFERENCES S(m) ON DELETE CASCADE;
    CREATE OR REPLACE VIEW v AS
    SELECT S.m, B.n
    FROM S JOIN B ON S.m=B.m JOIN K ON B.k=K.k
    WHERE K.x='d'
    ORDER BY B.n DESC;
    -- Query 1: Result should be 0
WITH q AS (
SELECT S.m, B.n
FROM S JOIN B ON S.m=B.m JOIN K ON B.k=K.k
WHERE K.x='d'
ORDER BY B.n DESC
)
SELECT COUNT(*)
FROM (
(SELECT * FROM q
MINUS
SELECT * FROM v)
UNION ALL
(SELECT * FROM v
MINUS
SELECT * FROM q)
);
    -- COUNT(*)
    -- 6
    -- 1 rows selected
    -- Query 2: Result set should be empty (Query 1 without counting)
WITH q AS (
SELECT S.m, B.n
FROM S JOIN B ON S.m=B.m JOIN K ON B.k=K.k
WHERE K.x='d'
ORDER BY B.n DESC
)
SELECT *
FROM (
(SELECT * FROM q
MINUS
SELECT * FROM v)
UNION ALL
(SELECT * FROM v
MINUS
SELECT * FROM q)
);
    -- M N
    -- null 10
    -- null 30
    -- null 40
    -- 1 40
    -- 3 10
    -- 3 30
    -- 6 rows selected
    -- Observations:
    -- Incorrect results in Oracle Database 11g Enterprise Edition 11.2.0.1.0:
    -- Query 1 returns 6, Query 2 returns six rows.
    -- Correct in Oracle Database 10g Enterprise Edition 10.2.0.1.0.
    -- Correct without the foreign key.
    -- Correct if attribute x is renamed in S or K.
    -- Correct if attribute x is left out in S.
    -- Correct without the ORDER BY clause in the definition of q.
    -- Only two results if the primary key on K is left out.
    -- Correct without any change if not using WITH but subqueries (see below).
    -- Fixed queries
    -- Query 1b: Result should be 0
SELECT COUNT(*)
FROM (
(SELECT * FROM (
SELECT S.m, B.n
FROM S JOIN B ON S.m=B.m JOIN K ON B.k=K.k
WHERE K.x='d'
ORDER BY B.n DESC
)
MINUS
SELECT * FROM v)
UNION ALL
(SELECT * FROM v
MINUS
SELECT * FROM (
SELECT S.m, B.n
FROM S JOIN B ON S.m=B.m JOIN K ON B.k=K.k
WHERE K.x='d'
ORDER BY B.n DESC
))
);
    -- COUNT(*)
    -- 0
    -- 1 rows selected
-- Query 2b: Result set should be empty (Query 1b without counting)
SELECT *
FROM (
(SELECT * FROM (
SELECT S.m, B.n
FROM S JOIN B ON S.m=B.m JOIN K ON B.k=K.k
WHERE K.x='d'
ORDER BY B.n DESC
)
MINUS
SELECT * FROM v)
UNION ALL
(SELECT * FROM v
MINUS
SELECT * FROM (
SELECT S.m, B.n
FROM S JOIN B ON S.m=B.m JOIN K ON B.k=K.k
WHERE K.x='d'
ORDER BY B.n DESC
))
);
    -- M N
    -- 0 rows selected

    You're all gonna love this one.....
    The WITH clause works. But not easily.
    Go ahead, build the query, (as noted in a recent thread, I, too, always use views), set the grants and make sure DISCOVERER and EULOWNER have SELECT privs.
    1. Log into Disco Admin as EULOWNER. Trust me.
    2. Add the view as a folder to the business area.
    3. Log into Disco Desktop as EULOWNER. Don't laugh. It gets better.
    4. Build the workbook and the worksheet (or just the worksheet if apropos)
    5. Set the appropriate "sharing" roles and such
    6. Save the workbook to the database.
    7. Save the workbook to your computer.
    8. Log out of Desktop.
    9. Log back into Desktop as whatever, whoever you usually are to work.
10. Select "open existing workbook"
11. Select the icon for "open from my computer". See? I told you it would get better!
12. Open the saved .dis file from your computer.
    13. Save it to the database.
    14. Open a web browser and from there, you're on your own.
    Fortran in VMS. Much easier and faster. I'm convinced the proliferation of the web is a detriment to the world at large...On the other hand, I'm also waiting for the Dodgers to return to Brooklyn.

  • SunOneAppServ with Oracle 10g vers. 10.1.0.3.0

    Hello,
    I'm using two different versions of SunOneAppServer,
    Sun Java System Application Server Enterprise Edition 8.1_02(build b19-p08) and
    Sun Java System Application Server 7 2004Q2UR2.
    These two instances of Application Servers work over two different databases, Oracle 8 and Oracle 10g vers. 10.1.0.3.0.
When working with Oracle 10 I get locks on some records during transactions; those transactions don't complete correctly, so the records remain locked and all subsequent transactions wait indefinitely, or until I kill the session using Toad.
    Looking at the query locking the record it seems to me it's a query done by the application server.
    Using the same software and making the same operation over the database with Oracle 8 this doesn't happen.
Does anyone have the same problem, or know how to solve it?
A colleague of mine told me that these versions of the application servers don't support Oracle 10g. Is that true?

Both versions of the appserver will work with Oracle 10g.
There are not enough details to assist with your problem, but it could well be a change in locking semantics in 10g that is causing your issue.

  • Query with order by & View/procedure

1) I have a query that joins a few tables, and the base table contains 12 billion rows. When I execute the query with the necessary parameters I get the result in a few seconds, but when I add an ORDER BY on any column I get no result even after 15 minutes.
The sort column is indexed, and I have even tried the primary key column of the base table, but no change. Is there any way to make it faster with ORDER BY?
2) I have a view that also joins a few high-volume tables. When I call the view with the required parameter I get no result even after 15 minutes, but when I take the query out of the view and hard-code the value I get the result in 3 seconds. So I turned it into a procedure that returns a cursor, and now it works fine. Could you please explain the reason for this?
Please help.

select * from
(select Rownum RowNO, Qr.* from
  (select T1.c1, T2.c2, T3.c3 from T1, T2, T3
   where <all required joins>
   order by T1.c) Qr
 where RowNum < 20)
where RowNO > 10;

As said before:
Your view very likely prevented predicate pushing, and by manually adding the predicate inside the query you changed the semantics of the query.
Your view contains a ROWNUM column. This prevents predicate pushing because the semantics of the query change. An example to clarify:
    SQL> explain plan
      2  for
      3  select *
      4    from ( select empno
      5                , ename
      6                , sal
      7             from emp
      8         )
      9   where empno = 7839
    10  /
Explained.
    SQL> select * from table(dbms_xplan.display)
      2  /
    PLAN_TABLE_OUTPUT
    Plan hash value: 4024650034
    | Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |        |     1 |    14 |     1   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| EMP    |     1 |    14 |     1   (0)| 00:00:01 |
    |*  2 |   INDEX UNIQUE SCAN         | EMP_PK |     1 |       |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("EMPNO"=7839)
14 rows selected.
    SQL> exec dbms_lock.sleep(1)
PL/SQL procedure successfully completed.
    SQL> explain plan
      2  for
      3  select *
      4    from ( select empno
      5                , ename
      6                , sal
      7             from emp
      8            where empno = 7839
      9         )
    10  /
Explained.
    SQL> select * from table(dbms_xplan.display)
      2  /
    PLAN_TABLE_OUTPUT
    Plan hash value: 4024650034
    | Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |        |     1 |    14 |     1   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| EMP    |     1 |    14 |     1   (0)| 00:00:01 |
    |*  2 |   INDEX UNIQUE SCAN         | EMP_PK |     1 |       |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("EMPNO"=7839)
14 rows selected.

The previous two queries show that in this case the predicate "empno = 7839" can be pushed inside the view. Both queries are semantically the same.
    However, when you add a rownum to your view definition, like you did, the predicates cannot be pushed inside the view:
    SQL> exec dbms_lock.sleep(1)
PL/SQL procedure successfully completed.
    SQL> explain plan
      2  for
      3  select *
      4    from ( select empno
      5                , ename
      6                , sal
      7                , rownum rowno
      8             from emp
      9         )
    10   where empno = 7839
    11  /
Explained.
    SQL> select * from table(dbms_xplan.display)
      2  /
    PLAN_TABLE_OUTPUT
    Plan hash value: 2077119879
    | Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT    |      |    14 |   644 |     3   (0)| 00:00:01 |
    |*  1 |  VIEW               |      |    14 |   644 |     3   (0)| 00:00:01 |
    |   2 |   COUNT             |      |       |       |            |          |
    |   3 |    TABLE ACCESS FULL| EMP  |    14 |   196 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("EMPNO"=7839)
15 rows selected.
    SQL> exec dbms_lock.sleep(1)
PL/SQL procedure successfully completed.
    SQL> explain plan
      2  for
      3  select *
      4    from ( select empno
      5                , ename
      6                , sal
      7                , rownum rowno
      8             from emp
      9            where empno = 7839
    10         )
    11  /
Explained.
    SQL> select * from table(dbms_xplan.display)
      2  /
    PLAN_TABLE_OUTPUT
    Plan hash value: 1054641936
    | Id  | Operation                     | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |        |     1 |    46 |     1   (0)| 00:00:01 |
    |   1 |  VIEW                         |        |     1 |    46 |     1   (0)| 00:00:01 |
    |   2 |   COUNT                       |        |       |       |            |          |
    |   3 |    TABLE ACCESS BY INDEX ROWID| EMP    |     1 |    14 |     1   (0)| 00:00:01 |
    |*  4 |     INDEX UNIQUE SCAN         | EMP_PK |     1 |       |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - access("EMPNO"=7839)
16 rows selected.

Now the two queries are not the same anymore. If you wonder why, please look carefully at the query results of both:
    SQL> select *
      2    from ( select empno
      3                , ename
      4                , sal
      5                , rownum rowno
      6             from emp
      7         )
      8   where empno = 7839
      9  /
         EMPNO ENAME             SAL      ROWNO
          7839 KING             5000          9
1 row selected.
    SQL> select *
      2    from ( select empno
      3                , ename
      4                , sal
      5                , rownum rowno
      6             from emp
      7            where empno = 7839
      8         )
      9  /
         EMPNO ENAME             SAL      ROWNO
          7839 KING             5000          1
1 row selected.

Regards,
    Rob.

  • Formatting XML Output

    How can I format the XML output I get from extract()?
    - get rid of new lines
    - get rid of indents
    - get rid of namespaces
    Thanks.

    You write some PL/SQL or Java code yourself to do so. We'll provide the capability to turn off the indent & newline stuff in a future release. I'm at a loss to understand why you would want to turn off namespace declarations though--they change the semantics of the data.

  • Strange expdp errors

I've exported a schema from a database with CL8MSWIN1251 encoding and now I'm trying to import it into a database with AL32UTF8 encoding. The DBMS version is 10.2.0.4.0.
First, I set nls_length_semantics to CHAR in the database.
After that I restarted the instance and imported the metadata for the schema.
But on importing the data I'm getting a lot of the following errors:
    ORA-02374: conversion error loading table "SFEDU_TEST"."ELEMS"
    ORA-12899: value too large for column ES_COMMENT (actual: 2456, maximum: 4000)
    ORA-02372: data for row: ES_COMMENT : 0X'200D0A09CAE0F4E5E4F0E020F1EEE7E4E0EDE020E220313939'
It's interesting that Oracle says "value too large for column ES_COMMENT (actual: 2456, maximum: 4000)", so the reported maximum is larger than the actual size...
How can I deal with this issue?

    There may be two errors here:
1. The actual data expansion issue. Setting NLS_LENGTH_SEMANTICS to CHAR does not change the semantics of objects recreated by import. Import uses explicit semantics specifications per column and therefore does not use the default value set by NLS_LENGTH_SEMANTICS.
What does DESCRIBE "SFEDU_TEST"."ELEMS" show in the target database?
To deal with this error I've run the following code:
begin
  for cur_col in (
    select table_name, column_name, data_type, char_col_decl_length, data_default
    from   user_tab_cols t, user_objects o
    where  data_type = 'VARCHAR2'
      and  o.object_name = t.table_name
      and  o.object_type = 'TABLE'
      and  t.column_name not like 'SYS_%'
      and  column_name not in ('EDP_KURS', 'EDP_SNSP_SHIFR', 'EDP_SNP_SHIFR')
  )
  loop
    if cur_col.data_default is null then
      dbms_output.put_line('alter table ' || cur_col.table_name || ' modify '
        || cur_col.column_name || ' VARCHAR2(' || cur_col.char_col_decl_length || ' CHAR)');
      execute immediate 'alter table ' || cur_col.table_name || ' modify '
        || cur_col.column_name || ' VARCHAR2(' || cur_col.char_col_decl_length || ' CHAR)';
    else
      dbms_output.put_line('alter table ' || cur_col.table_name || ' modify '
        || cur_col.column_name || ' VARCHAR2(' || cur_col.char_col_decl_length || ' CHAR) default ' || cur_col.data_default);
      execute immediate 'alter table ' || cur_col.table_name || ' modify '
        || cur_col.column_name || ' VARCHAR2(' || cur_col.char_col_decl_length || ' CHAR) default ' || cur_col.data_default;
    end if;
  end loop;
end;
/
Now DESCRIBE says the following:
SQL> desc sfedu_test.elems
 Name            Null?    Type
 ES_ID                    NUMBER(16)
 ES_PHONE                 VARCHAR2(4000 CHAR)
 ES_URL                   VARCHAR2(4000 CHAR)
 ES_EMAIL                 VARCHAR2(4000 CHAR)
 ES_COMMENT               VARCHAR2(4000 CHAR)
 EST_EST_ID               NUMBER(10)
 ES_ES_ID                 NUMBER(16)
 ES_EDU                   NUMBER(1)
 ES_ADR                   VARCHAR2(4000 CHAR)
 ES_PHOTO                 VARCHAR2(4000 CHAR)
but I still get:
    KUP-11007: conversion error loading table "SFEDU_TEST"."ELEMS"
    ORA-12899: value too large for column ES_COMMENT (actual: 2446, maximum: 4000)
    KUP-11009: data for row: ES_COMMENT : 0X'CAE0F4E5E4F0E020FDEAEEEBEEE3E8E820E820EFF0E8F0EEE4'
    >
2. There is a bug in the specific place where the error message ORA-12899 is generated in your case (this message comes from many places in the code).
As the message is most probably generated in the server code, you could try to issue:
ALTER SYSTEM SET EVENTS '12899 trace name errorstack level 1, forever';
before starting the import. This should generate a trace (*.trc) file in the udump or bdump directory with ORA-12899 and the associated call stack. If you can get the stack, paste it here.
There are no new trace files in udump/bdump after enabling tracing...

  • Preserve "exactly" line spacing in epub output.

    In my Pages document, there are many inline equations (from Mathtype). I like having the line spacing set to "exactly" so the line spacing remains constant, even between lines with equations that 'overlap' the lines around it.
    Is there a way to preserve this option when I export my Pages document to Epub? Right now, the line spacing is single-spaced in the Epub output.

    No, if you use the ePub document model. Yes, if you mix document models.
ePub is a document markup model where you are expected to encode character information in ISO 10646/Unicode and to encode the logical organisation (structure) that defines the parts of your document. However, ePub has no support for font embedding, and without font embedding it is meaningless to talk of fixed line length, fixed line spacing, and fixed glyph spacing and sizing.
If you set up your equations in whatever originating application you prefer and save into a fixed-geometry format such as Adobe PDF, then the fixed-geometry file can be handled like a graphic in ePub, HTML and other document markup models. Be aware that Adobe PDF encodes the glyphs but not the characters, whereas ePub and HTML encode the characters but not the glyphs, which may impact search support, depending on what you are trying to do.
Also, check your mathematical fonts in the Apple Character Palette to be certain that the glyphs map to meaningful characters in ISO 10646/Unicode, so that the glyph identifiers can be used to synthesise character semantics. A whole lot of mathematical fonts are constructed to draw mathematical glyphs from, say, English characters, so that there is no relationship at all between information processing and image presentation. In this respect, OpenType is simply a rebranding of TrueType, whereby system software displays an OpenType icon if a TrueType font file has the DSIG table; there is no guarantee whatsoever that the relationship of characters to glyphs is of this world (lots and lots of laughs to Heidelberg and its management of the Linotype Library here). See below from the ePub specification.
    /hh
    http://www.idpf.org/doclibrary/epub/OPS_2.0.1draft.htm
    3.4: Embedded Fonts
    To provide authors with control over the appearance of the text, OPS supports the CSS2 font-face at-rule (@font-face). See section 15.3.1 of the CSS2 Recommendation. The following font descriptors must be supported:
    font-family
    font-style
    font-variant
    font-weight
    font-size
    src
    For portability, authors must not use any other descriptors. Font files must carry all information needed for rendering Unicode characters. Fonts must not provide mappings for Unicode characters that would change the semantics of the text (e.g. mapping the letter "A" to a biohazard symbol). Content creators must not assume that any particular font format is supported. Fonts could be included in multiple formats by using a list of files for the src descriptor; the first supported format should be used. At least one file in OpenType format should always be included in the list. It is advisable for a Reading System to support the OpenType font format, but this is not a conformance requirement; a reading system may support no embedded font formats at all. Content creators should use comma-separated lists for font-family properties to specify fallback font choices.
    Content creators must always honor usage restrictions that are encoded in OpenType fonts (and many other font formats). Fonts that are marked "no embedding" must not be included in OPS Publications.
    Any font files included in an OPS Publication must be included in the OPF manifest with appropriate media type (application/vnd.ms-opentype for OpenType fonts).
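    To make the quoted spec section concrete, here is a minimal sketch of an OPS/ePub @font-face rule using only the six descriptors the spec requires. The font name "STIXGeneral" and the file path are hypothetical placeholders, not names from the thread:

    ```css
    /* Declare an embedded OpenType font; only the six descriptors the
       OPS spec mandates are used. The font name and path are made up. */
    @font-face {
        font-family: "STIXGeneral";
        font-style: normal;
        font-weight: normal;
        src: url("fonts/STIXGeneral.otf");
    }

    body {
        /* Comma-separated fallback list, as the spec advises, since a
           Reading System may support no embedded font formats at all. */
        font-family: "STIXGeneral", serif;
    }
    ```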

  • How TOP query will work in SQL Server?

    Hi Experts,
      While running a TOP query, what happens internally in the SQL Server engine? How does it fetch the TOP 10 rows from a table?
    Thanks
    Selva

    That is the *logical* query processing order, which isn't the same as the *physical* processing order.
    As for how SQL Server performs TOP, then as suggested it very much depends on whether there is an ORDER BY or not. Since TOP is logically processed after ORDER BY, you change the semantics of the TOP operation a lot when you add ORDER BY. But as usual,
    we can't say anything about the physical query order without an example to talk about. Too many factors are involved, like what the rest of the query looks like, the schema, what indexes we have, data distribution, etc.
    Tibor Karaszi, SQL Server MVP |
    web | blog

  • European Charset

    Hi,
    I have a problem creating a database when I choose a European
    charset, like WE8ISO9..P1 or P15.
    The problem seems to be in the currency or decimal format.
    Anyway, after I installed it, I have a charset conversion problem
    between Apache+PHP and Oracle.
    They always convert my characters above ASCII code 128.
    Can someone help me?
    I have a RedHat 7.2 box + Oracle9i + apache 1.3.20 + php 4.0.6 +
    tomcat, but I got the same problem with RedHat 6.2 and oracle8i.
    Thanks, Morgan

    1. You have to modify the character semantics in the source database before the export,
    or
    you have to first import table metadata (i.e. create empty tables), then change the semantics in the target database, and then import the rows.
    2. CSSCAN shows if you have issues, assuming that the current semantics do not change. But character length semantics columns (containing valid character codes) can cause truncation issues only if the post-conversion length exceeds the datatype limit (4000 bytes for VARCHAR2, 2000 bytes for CHAR). You can look for such values in the error report or in the CSMIG.CSMV$COLUMNS view.
    -- Sergiusz

  • Off Topic: Hidden Features of Java

    Happy Friday everyone,
    I stumbled upon this conversation on Stack Overflow last night about "hidden" features of Java. Although I knew about most of them (mostly learned from reading posts here in the past few months), thinking about less-often used features of the language is a fun refresher (at least for me).
    Can anybody think of any more "hidden" features of Java? Or just something interesting that not many people are familiar with?
    Here's the link: [http://stackoverflow.com/questions/15496/hidden-features-of-java|http://stackoverflow.com/questions/15496/hidden-features-of-java]
    Edit- To make things a bit more interesting, I'll throw some Dukes at people who show me interesting things.
    Edited by: kevinaworkman on Nov 13, 2009 7:56 AM

    dcminter wrote:
    Here's one I banged my shins on the other day. Comparator may change Set semantics: it's always bothered me that TreeSet relies on Comparator to define equality.
    Note that the ordering maintained by a set (whether or not an explicit comparator is provided) must be consistent with equals if it is to correctly implement the Set interface. It bugs me because it's not an explicit requirement of Comparable or Comparator. Therefore, if I did:
    set.add("Hello");
    System.out.println(set.contains("World"));
    I would expect false to be printed (in other words, like HashSet, I would expect it to do a final equals() to make sure). But it's not.
    However, in your example, it's easy to see why TreeSet could not possibly allow that. If you added two objects that were comparatively equal, it would be impossible to traverse the tree to find anything.
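    A minimal sketch of the surprise being discussed (the length-based comparator here is an illustrative assumption, not the original poster's code):

    ```java
    import java.util.Comparator;
    import java.util.Set;
    import java.util.TreeSet;

    public class TreeSetSemantics {
        public static void main(String[] args) {
            // A comparator that only looks at string length: "Hello" and
            // "World" compare as equal even though equals() says otherwise.
            Set<String> set = new TreeSet<>(Comparator.comparingInt(String::length));
            set.add("Hello");

            // TreeSet navigates the tree using only the comparator and never
            // falls back to equals(), so the five-letter "World" is "found".
            System.out.println(set.contains("World")); // prints true

            // A HashSet built from the same element uses hashCode()/equals()
            // and would report false for contains("World").
        }
    }
    ```

    This is exactly the "consistent with equals" caveat from the Set contract: a TreeSet whose comparator disagrees with equals() still works mechanically, but it no longer obeys the general Set semantics.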

  • New language feature: lazy local pattern matching

    In the upcoming release of the Open Quark Framework, CAL gets a new language feature: lazy local pattern matching.
    The new local pattern match syntax allows one to bind one or more variables to the fields of a data constructor or a record in a single declaration inside a let block. For example:

        // data constructor patterns:
        public foo1 = let Prelude.Cons a b = ["foo"]; in a;
        public foo2 = let Prelude.Cons {head=a, tail=b} = ["foo"]; in a;

        // list cons patterns:
        public foo3 = let a:b = [3]; in a;

        // tuple patterns:
        public foo4 = let (a, b, c) = (b, c, 1 :: Prelude.Int); in abc;

        // record patterns:
        public foo5 = let {a} = {a = "foo"}; in a; // non-polymorphic record pattern
        public foo6 = let {_ | a} = {a = "foo", b = "bar"}; in a; // polymorphic record pattern

    Whereas a case expression such as (case expr of a:b -> ...) forces the evaluation of expr to weak-head normal form (WHNF), a similar pattern match declaration (let a:b = expr; in ...) does not force the evaluation of expr until one of a or b is evaluated. In this sense, we can regard this as a form of lazy pattern matching.
    Thus,

        let a:b = []; in 3.0;

    is okay and would not cause a pattern match failure, but the case expression

        case [] of a:b -> 3.0;

    would cause a pattern match failure.
    This laziness is useful in situations where unpacking via a case expression may result in an infinite loop. For example, the original definition of List.unzip3 looks like this:

        // Original implementation of List.unzip3
        unzip3 :: [(a, b, c)] -> ([a], [b], [c]);
        public unzip3 !list =
            case list of
            [] -> ([], [], []);
            x : xs ->
                let
                    ys = unzip3 xs;
                in
                    case x of
                    (x1, x2, x3) ->
                        // do not do a "case" on the ys, since this makes unzip3 strictly evaluate the list!
                        (x1 : field1 ys, x2 : field2 ys, x3 : field3 ys);
                ;
            ;

    The use of the accessor functions field1, field2 and field3 here is necessary, as the alternate implementation shown below would result in "unzip3 xs" being evaluated to WHNF during the evaluation of "unzip3 (x:xs)". Thus if the input list is infinite, the function would never terminate.

        // Alternate (defective) implementation of List.unzip3
        unzip3 :: [(a, b, c)] -> ([a], [b], [c]);
        public unzip3 !list =
            case list of
            [] -> ([], [], []);
            x : xs ->
                let
                    ys = unzip3 xs;
                in
                    case x of
                    (x1, x2, x3) ->
                        case ys of // the use of "case" here is inappropriate, as it causes "unzip3 xs" to be evaluated to WHNF
                        (y1, y2, y3) -> (x1 : y1, x2 : y2, x3 : y3);
                    ;
                ;
            ;

    With the new syntax, the original implementation can be expressed more nicely without changing its semantics:

        // New implementation of List.unzip3, revised to use the local pattern match syntax
        unzip3 :: [(a, b, c)] -> ([a], [b], [c]);
        public unzip3 !list =
            case list of
            [] -> ([], [], []);
            x : xs ->
                let
                    (y1, y2, y3) = unzip3 xs; // using a tuple pattern to perform a lazy local pattern match
                in
                    case x of
                    (x1, x2, x3) ->
                        (x1 : y1, x2 : y2, x3 : y3);
                ;
            ;

    It is important to note that in places where a case expression can be used (without having an unwanted change in the laziness of the expression being unpacked), it should be used instead of this local pattern match syntax.
    Things to note about the new syntax:
    - local type declarations on the pattern-bound variables are allowed, and these type declarations can have associated CALDoc comments. On the other hand, the actual local pattern match declaration itself cannot have a type declaration nor a CALDoc comment.
    - this syntax cannot be used for top-level definitions, only local definitions in let blocks
    - one cannot use patterns with multiple data constructors, e.g.

        let (Left|Right) = ...;

    is not allowed
    - one cannot specify a variable for the base record pattern, e.g.

        let {r | a, b} = ...;

    is not allowed, but this is okay:

        let {_ | a, b} = ...;

    - patterns with no variables are disallowed, e.g.

        let _ = ...;
        let [] = ...;
        let () = ...;
        let : = ...;
        let {_|#1} = ...;

    If you use just / it misinterprets it and it ruins
    your " " tags for a string.
    I don't think so. '/' is not a special character for Java regex, nor for Java String.
    The reason I used
    literal is to try to force it to directly match;
    originally I thought that was the reason it wasn't
    working.
    That will be no problem, because it enforces '.' to be treated as a dot, not as a regex 'any character'.
    Message was edited by:
    hiwa
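    A minimal sketch of the two points above (the sample strings and patterns are made up, not the poster's originals): '/' needs no escaping in a Java regex, while '.' does unless you quote it.

    ```java
    import java.util.regex.Pattern;

    public class SlashRegex {
        public static void main(String[] args) {
            // '/' is an ordinary character in Java regex: no escaping needed.
            System.out.println("a/b".matches("a/b"));   // true

            // '.' is a metacharacter: unescaped it matches any character...
            System.out.println("axb".matches("a.b"));   // true

            // ...so escape it to match only a literal dot.
            System.out.println("axb".matches("a\\.b")); // false

            // Pattern.quote() escapes every metacharacter for you,
            // forcing the whole pattern to match literally.
            System.out.println("a.b".matches(Pattern.quote("a.b"))); // true
        }
    }
    ```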
