Too long row

I display text in a SQL report. The text is 1000 characters long.
I would like to display this text across multiple rows (not records). I tried setting report / edit TEXT / Tabular Form Element / Element Width to 20 (for example), but it does not work.
The row (column TEXT) is still too long.

Hi,
Useful tips for customizing reports:
1. When a column is huge:
The contents of some columns are such a huge piece of information that they can destroy the look of the report. A possible solution is to place this column at the end (last in the query) and extend it ("colspan") across the full report width. For instance, suppose there are six columns and "DESCRIPTION" has several rows of content.
SELECT ADDRESS, UNIT_ID, UNIT_PRICE, PRICE, OFFER_NUMBER,
('</tr><tr><td colspan="5">' || description || '<hr></td>') "DESCRIPTION"
FROM OFFERS;
</tr><tr> starts a new table row and <hr> draws a line. colspan="5" should be replaced with (number of displayed columns - 1).
2. Report is too long (too many columns):
The solution is similar to the previous one, but here the row is wrapped onto two or more report lines. In the report query, for instance, set up the following:
SELECT ID, (SELECT property_subtype FROM property_types WHERE ID = PROPERTY_TYPE_ID) "PROPERTY",
(SELECT name FROM cities WHERE ID = city) "CITY",
STREET_ADDRESS, OFF_SIZE, UNIT_PRICE, PRICE,
('</td></tr><tr><td>' || DESCRIPTION) "DESCRIPTION",
('</td></tr><tr><td>' || GSM || ' ' || FAX || ' ' || PHONE) "GSM", CLIENT_REPRESENTATIVE
FROM OFFERS;
The difference here is that the column "DESCRIPTION" is not the last one, so you have to set it up in Report Attributes / Custom / Heading with a string similar to the one below.
Id<br>Description<br>GSM
Property<br>Client Representative
You can get rid of the horizontal rule by using the Borderless report template.
I would like to ask everyone on the "Oracle Database 10g Express Edition" forum:
What is your primary goal in using Oracle XE?
1. Training (Training)
2. Production single desktop database (Single)
3. Production LAN database (LAN)
4. Web services database (WEB)
Please send me back a word.
Konstantin

Similar Messages

  • Deleting 1 row from a table takes too long...why?

    We are running the following query...
    delete gemdev.lu_messagecode where mess_code ='SSY'
    and it takes way too long as there is only 1 record in this table with SSY as the mess_code.
    SQL> set timing on;
    SQL> delete gemdev.lu_messagecode where mess_code ='SSY';
    1 row deleted
    Executed in 293.469 seconds
    The table structure is very simple as you can see below.
    CREATE TABLE GEMDEV.LU_MESSAGECODE (
      MESS_CODE VARCHAR2(3) NOT NULL,
      ROUTE_CODE VARCHAR2(4) NULL,
      REPORT_CES_MNEMONIC VARCHAR2(3) NULL,
      CONSTRAINT SYS_IOT_TOP_52662 PRIMARY KEY (MESS_CODE) VALIDATE
    )
    ORGANIZATION INDEX
    NOCOMPRESS
    TABLESPACE IWORKS_IOT
    LOGGING
    PCTFREE 10
    INITRANS 2
    MAXTRANS 255
    STORAGE (BUFFER_POOL DEFAULT)
    PCTTHRESHOLD 50
    NOPARALLEL;
    ALTER TABLE GEMDEV.LU_MESSAGECODE
      ADD CONSTRAINT LU_ROUTECODE_FK3
      FOREIGN KEY (ROUTE_CODE)
      REFERENCES GEMDEV.LU_ROUTECODE (ROUTE_CODE)
      ENABLE;
    ALTER TABLE GEMDEV.LU_MESSAGECODE
      ADD CONSTRAINT MSGCODE_FK_CESMNEMONIC
      FOREIGN KEY (REPORT_CES_MNEMONIC)
      REFERENCES GEMDEV.SYS_CESMNEMONIC (CES_MNEMONIC)
      ENABLE;
    My explain plan reads as follows.
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | DELETE STATEMENT | | | | 1 (100)| |
    | 1 | DELETE | LU_MESSAGECODE | | | | |
    | 2 | INDEX UNIQUE SCAN| SYS_IOT_TOP_52662 | 1 | 133 | 1 (0)| 00:00:01 |
    Also, in my AWR SQL report I see this:
    Plan Statistics DB/Inst: IWORKSDB/iworksdb Snaps: 778-780
    -> % Total DB Time is the Elapsed Time of the SQL statement divided
    into the Total Database Time multiplied by 100
    Stat Name Statement Per Execution % Snap
    Elapsed Time (ms) 521,102 N/A 12.0
    CPU Time (ms) 73,922 N/A 5.1
    Executions 0 N/A N/A
    Buffer Gets 2,892,144 N/A 3.4
    Disk Reads 2,847,609 N/A 8.6
    Parse Calls 1 N/A 0.0
    Rows 0 N/A N/A
    User I/O Wait Time (ms) 475,882 N/A N/A
    Cluster Wait Time (ms) 0 N/A N/A
    Application Wait Time (ms) 0 N/A N/A
    Concurrency Wait Time (ms) 2 N/A N/A
    Invalidations 1 N/A N/A
    Version Count 1 N/A N/A
    Sharable Mem(KB) 45 N/A N/A
    Now, since the table only has 150 rows and I am only trying to delete 1 row, why is there so much disk read, and why does it take 5 minutes to delete? This is just weird. Does this have something to do with the child tables?

    Any triggers on the table? If you trace the session, what statement(s) seem to be taking all that time?
    Justin

    Well, I traced my session and I noticed that my query does take a while, but I also noticed several other queries that I was not running. I'm not too sure where they came from. Have a look below; it is a sample from my TKPROF report.
    delete gemdev.lu_messagecode
    where
    mess_code ='SSY'
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.01 0.04 0 2 23 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.01 0.04 0 2 23 1
    Misses in library cache during parse: 1
    Optimizer mode: FIRST_ROWS
    Parsing user id: 57
    Rows Row Source Operation
    1 DELETE LU_MESSAGECODE (cr=3446672 pr=3442028 pw=0 time=309363335 us)
    1 INDEX UNIQUE SCAN SYS_IOT_TOP_52662 (cr=2 pr=0 pw=0 time=35 us)(object id 52663)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 1 0.00 0.00
    SQL*Net message from client 1 35.87 35.87
    select /*+ all_rows */ count(1)
    from
    "GEMDEV"."TBLCLAIMCHARGE" where "CONTRACT_FEE_MESS_CODE" = :1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 10.53 44.95 381779 382893 0 1
    total 3 10.53 44.95 381779 382893 0 1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS (recursive depth: 1)
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=382893 pr=381779 pw=0 time=44953436 us)
    0 TABLE ACCESS FULL TBLCLAIMCHARGE (cr=382893 pr=381779 pw=0 time=44953403 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file scattered read 47795 0.03 37.87
    db file sequential read 101 0.00 0.02
    select /*+ all_rows */ count(1)
    from
    "GEMDEV"."TBLCLAIMCHARGE" where "FEE_INEL_MESS_CODE" = :1

  • Performance issues; waited too long for a row cache enqueue lock!

    hi Experts,
    OS: Oracle Solaris on SPARC (64-bit)
    DB version:
    SQL> select * from V$VERSION;
    BANNER
    Oracle Database 11g Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE    11.2.0.1.0      Production
    TNS for Solaris: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    SQL>
    We have seen 100% CPU usage and high database load, so I checked the instance and saw that there were many blocking sessions and more than 71 sessions running the same select:
    select tablespace_name as tbsname
    from (select tablespace_name, sum(bytes)/1024/1024 free_mb, 0 total_mb, 0 max_mb
          from dba_free_space
          group by tablespace_name
          union
          select tablespace_name, 0 current_mb, sum(bytes)/1024/1024 total_mb,
                 sum(decode(maxbytes, 0, bytes, maxbytes))/1024/1024 max_mb
          from dba_data_files
          group by tablespace_name)
    group by tablespace_name
    having round((sum(total_mb)-sum(free_mb))/sum(max_mb)*100) > 95
    The blocking sessions are running queries like this:
    SELECT * from MYTABLE WHERE MYCOL=:1 FOR UPDATE;
    These select statements come from a cron job that runs every 10 minutes to check the tablespaces, so I first killed (kill -9 pid) those select statements, and the load decreased to 13% CPU usage. The blocking sessions were still there, and I didn't kill them while waiting for confirmation from the app guys... after a few hours the CPU usage never went below 13%, and I saw many errors:
    WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=...
    System State dumped to trace file .....trc
    After that, we decided to restart the DB to release the locks!
    I would like to understand why during the load we were not able to run those select statements, why the scheduled statspack snapshot reports were not able to finish, and likewise the automatic database statistics... why did 5 FOR UPDATE statements lock the whole DB?
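    For diagnosing a pile-up like this before killing anything, one possible starting point (a sketch; V$SESSION has exposed BLOCKING_SESSION since 10g) is to map out the wait chain:
    SELECT sid, serial#, username, event, blocking_session, sql_id
    FROM   v$session
    WHERE  blocking_session IS NOT NULL
    ORDER  BY blocking_session;
    Each row shows a waiter and the SID blocking it, which usually points straight at the session holding the lock everyone else is queued behind.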

    user12035575 wrote:
    SELECT FOR UPDATE will only lock the table row until the transaction is completed.
    "WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK" happens when it needs to acquire a lock on the data dictionary. Did you check the trace file associated with the statement?

    The trace file is very long; which information should I focus on?

  • Error: WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=26

    Hi every one,
    Today I met a problem: the application cannot connect to the database because the database hangs (I also cannot connect to the database with sqlplus). Checking the alert log, there is only one error:
    WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=26
    This error has not appeared only this once; it happens every month. I have to restart the server so the database releases all memory, but I don't think that is a true solution!
    Could you give a recommendation for this?
    Regards.

    The Row Cache is actually the Data Dictionary Cache. It is where definitions from the data dictionary (tablespaces, objects, users, etc.) are loaded into memory.
    There would be an associated trace file written with the occurrence of this warning.
    See Oracle Support Note 278316.1 for more information
    Hemant K Chitale
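    To see which part of the dictionary cache is under pressure before digging into the trace file, one possible sketch (V$ROWCACHE is a standard dynamic performance view) is:
    SELECT parameter, gets, getmisses, modifications
    FROM   v$rowcache
    ORDER  BY gets DESC;
    The PARAMETER column names the dictionary area (dc_objects, dc_users, and so on); a heavily hit area narrows down what the contended enqueue was protecting.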

  • SAPINST System Copy MaxDB J2EE 7.0 "Row too long" at CREATE TABLE

    Hello Guru's,
    We are making a system copy of an EP J2EE Java standalone system with SAPinst and Jload.
    In the phase "Import Java Dump", the import halted with the following error...
    {Apr 1, 2009 3:10:35 PM com.sap.inst.jload.Jload dbImport
    INFO: trying to drop table KMC_UWL_ITEMS
    Apr 1, 2009 3:10:35 PM com.sap.inst.jload.Jload dbImport
    INFO: table dropped
    Apr 1, 2009 3:10:35 PM com.sap.inst.jload.Jload dbImport
    INFO: trying to create table KMC_UWL_ITEMS
    Apr 1, 2009 3:10:36 PM com.sap.inst.jload.Jload logStackTrace
    SEVERE: com.sap.dictionary.database.dbs.JddException: CREATE TABLE KMC_UWL_ITEMS failed
    15:10:35 2009-04-01 dbs-Info:  <<< Analyze table KMC_UWL_ITEMS >>>
    15:10:36 2009-04-01 dbs-Info:  predefined action is: >>>null<<<
    15:10:36 2009-04-01 sap-Info:  Table KMC_UWL_ITEMS not found on DB.
    15:10:36 2009-04-01 dbs-Info:  Action: CREATE
    15:10:36 2009-04-01 ope-Info:  Create table in database SAPDB
    E R R O R ******* (DbObjectSqlStatements)
    15:10:36 2009-04-01 dbs-Error:  Exception caught during SQL execution [-2000] (at 831): Row too long CREATE TABLE "KMC_UWL_ITEMS"("ITEM_ID" FIXED(19) DEFAULT 0 NOT NULL, "I_CONNECTOR_ID" INTEGER DEFAULT 0 NOT NULL, "SYSTEM_ID" VARCHAR(54) UNICODE  NOT NULL, "EXTERNAL_ID" VARCHAR(300) UNICODE  NOT NULL, "USER_ID" FIXED(19) DEFAULT 0 NOT NULL, "APP_CONTEXT" VARCHAR(255) UNICODE  , "ATTACHMENT_COUNT" INTEGER  , "CREATED_DATE" TIMESTAMP  , "CREATOR_ID" VARCHAR(255) UNICODE  , "DESCRIPTION" LONG UNICODE  , "DELETED_FLAG" VARCHAR(1) UNICODE  NOT NULL, "DUE_DATE" TIMESTAMP  , "EXTERNAL_OBJECT_ID" VARCHAR(255) UNICODE  , "EXECUTION_URL" VARCHAR(750) UNICODE  , "EXPIRY_DATE" TIMESTAMP  , "EXTERNAL_TYPE" VARCHAR(255) UNICODE  , "FLAGS" INTEGER  , "ITEM_TYPE" VARCHAR(255) UNICODE  NOT NULL, "PRIORITY" INTEGER  , "STATUS" VARCHAR(50) UNICODE  , "SUBJECT" VARCHAR(255) UNICODE  , "GROUP_ACTION" VARCHAR(1024) UNICODE DEFAULT ' ' , "PROCESSOR" VARCHAR(255) UNICODE DEFAULT ' ' )
    15:10:36 2009-04-01 ope-Info:  Table KMC_UWL_ITEMS could not be created in database
            at com.sap.inst.jload.db.DBTable.create(DBTable.java:109)
            at com.sap.inst.jload.Jload.dbImport(Jload.java:278)
            at com.sap.inst.jload.Jload.executeJob(Jload.java:397)
            at com.sap.inst.jload.Jload.main(Jload.java:621)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:85)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:58)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:60)
            at java.lang.reflect.Method.invoke(Method.java:391)
            at com.sap.engine.offline.OfflineToolStart.main(OfflineToolStart.java:81)
    Apr 1, 2009 3:10:36 PM com.sap.inst.jload.db.DBConnection disconnect
    INFO: disconnected
    We have MaxDB 7.6.00.09.
    SAP Note 852597 is only for MaxDB databases at Build 10; the problem is fixed at Build 15. SAP Help is not helpful.
    How can we solve it?
    regards
    chris

    We solved the issue with a newer MaxDB version. Not the best solution, but it works.

  • "1) it takes TOOOOOO LONG for multi-row tabs to close when i click "close all/other tabs" and 2) it takes TOO long for Firefox to appear after i click the icon on the desktop"

    1) When many tabs are open in multiple rows (say 50 or 100), it may take an eternity for them to close after I click the appropriate command.
    2) More often than not, it takes the browser too long to appear (open) after I initiate it by clicking the desktop or Start menu icon.
    NB: My first question is PRIMARY (critically important); the second is additional (I would very much like to receive an answer, but it's not critical).
    Thanks a lot in advance!
    Best regards, Dmitry.

    Sorry about the bookmarks misread. I installed a multi-row tab style; it's not for me, but in use it deleted 126 of 130 tabs in about 3 seconds and then the last 4. No idea why that happens with the multi-row style.
    Did those 100 pages have web forms in them that you filled in? Firefox may be saving that data along with the session.
    Here is a test page that you can use to quickly load up to 120 tabs at a time with an extension such as "Linky". See if it takes an eternity to close them with the multi-row tabs extension you are using, then try the same with the extension disabled. It works fast for me with all tabs on one row.
    * 001 '''Tab Capacity Test'''<br>http://dmcritchie.mvps.org/firefox/tab_capacity/001_with_underscore.htm
    Two Extensions to Help -- you may already have one or both
    * '''Stylish-Custom''' :: Add-ons for Firefox<br>https://addons.mozilla.org/en-US/firefox/addon/stylish-custom/
    * '''Linky''' :: Add-ons for Firefox<br>https://addons.mozilla.org/en-US/firefox/addon/linky/
    Style that can be installed after installing the "Stylish" extension; the style will show two rows of tabs that you have to scroll up/down. '''This style does not recognize app-tab formatting, new in Firefox 4.'''
    * '''App: Multi-Row Tab Bar''' - Themes and Skins for Browser <br>http://userstyles.org/styles/10930
    I put all my tabs on one row, which you probably would not like, but you might like the one with the tab borders, which almost works again using multi-row tabs:
    * Tabs Bar Minimal Size - Themes and Skins for Browser <br>http://userstyles.org/styles/9043
    * '''Tab Color Underscoring active/read/unread''' - Themes and Skins for Browser<br>http://userstyles.org/styles/9023

  • PL-SQL-ORA-01704 - String literal too long

    Hello guyz;
    I am trying to store a value over 4000 characters long in a CLOB column, and I get the error message "PL-SQL-ORA-01704 - String literal too long".
    What can I do to overcome this challenge?
    Thanking you for your usual support.

    sb92075 wrote:
    Problem Exists Between Keyboard And Chair
    We can't say what you are doing wrong since we don't know specifically what you actually do.

    Okay, let me put it down this way.
    I have an application using SQL Server as the backend engine, and now the user wants to migrate to Oracle. I wrote a mini-program to create a schema/user in Oracle matching the schema/database (being used by the app) from SQL Server. I verified the structure very well, and everything is just fine. Now, data migration (from SQL Server to Oracle).
    I was able to move most tables' data successfully without issue until I attempted to load a table which has a column defined in SQL Server as text with over 4000 (var)chars, i.e. a CLOB in Oracle. On moving a particular row to the Oracle db (after a few rows had already been INSERTed into this particular table x), I got that error message.
    After battling with that for a while, I decided to make the (DataMigrator) app take just the first 4000 characters, but only if the value in that field is longer than 4000 characters. This worked perfectly without issue, but you know the implication: data loss.
    Do I need to switch something on/off in Oracle that expands the CLOB default maximum field size? I foresee this happening again as soon as the application (that would now sit on Oracle) is in use.
    If you still don't understand this, I don't know how better to explain it!
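    For what it's worth, ORA-01704 is not about the CLOB column's capacity; it is the 4000-byte cap on string literals inside a SQL statement. The usual fix is to bind the value instead of inlining it, which from C# means passing the text as a bound parameter of a LOB type rather than concatenating it into the SQL text. A minimal server-side sketch, assuming a hypothetical table my_table(id NUMBER, body CLOB):
    DECLARE
      v_body CLOB;
    BEGIN
      -- the long value arrives in a variable; no single SQL literal is used
      v_body := RPAD('x', 9000, 'x');  -- stand-in for the real 9000-char value
      INSERT INTO my_table (id, body) VALUES (1, v_body);
      COMMIT;
    END;
    /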

  • ORA-01704: string literal too long (URGENT)

    Folks,
    I get the error ORA-01704: string literal too long
    when trying to insert a value larger than 4K into the SDIDOC column of the table below:
    CREATE TABLE "B1"."SDI_XML_TAB"
    ( "SDIID" VARCHAR2(60 BYTE) NOT NULL ENABLE,
    "SDIDOC" "SYS"."XMLTYPE" ,
    CONSTRAINT "SDI_XML_TAB_PK" PRIMARY KEY ("SDIID")
    USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "B1SYSTEM" ENABLE
    ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "B1SYSTEM"
    XMLTYPE COLUMN "SDIDOC" STORE AS CLOB (
    TABLESPACE "B1SYSTEM" ENABLE STORAGE IN ROW CHUNK 16384 PCTVERSION 10
    NOCACHE LOGGING
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)) ;
    The SQL I used via OCI was:
    "INSERT INTO SDI_XML_TAB(SDIID,SDIDOC) VALUES('ABC','clobVal')"
    What am I doing wrong?
    P.S. I also used the following and it gives the same error:
    "INSERT INTO SDI_XML_TAB(SDIID,SDIDOC) VALUES('ABC',XMLType('clobVal'))"
    Thanks,
    Arthur
    Message was edited by:
    ArthurJohnson

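    The same 4000-byte literal cap applies to the argument of the XMLType constructor; the document has to arrive as a bind variable (in OCI, bind :1/:2 instead of splicing the text into the statement) or be assembled in PL/SQL. A hedged sketch against the table above:
    DECLARE
      v_doc CLOB;
    BEGIN
      v_doc := '<root>';
      FOR i IN 1 .. 1000 LOOP  -- grows well past 4000 characters, safely
        v_doc := v_doc || '<item>' || i || '</item>';
      END LOOP;
      v_doc := v_doc || '</root>';
      INSERT INTO sdi_xml_tab (sdiid, sdidoc) VALUES ('ABC', XMLType(v_doc));
      COMMIT;
    END;
    /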

  • "PLS-00172: string literal too long" When Writing XML file into a Table

    Hi.
    I'm using DBMS_XMLStore to load an XML file into a db table. See the example below, which I'm using as my PL/SQL template. The problem is that because there are so many XML elements in "xmldoc CLOB := ...", I get the "PLS-00172: string literal too long" error.
    Can someone suggest a workaround?
    THANKS!!!
    DECLARE
    insCtx DBMS_XMLStore.ctxType;
    rows NUMBER;
    xmldoc CLOB :=
    '<ROWSET>
    <ROW num="1">
    <EMPNO>7369</EMPNO>
    <SAL>1800</SAL>
    <HIREDATE>27-AUG-1996</HIREDATE>
    </ROW>
    <ROW>
    <EMPNO>2290</EMPNO>
    <SAL>2000</SAL>
    <HIREDATE>31-DEC-1992</HIREDATE>
    </ROW>
    </ROWSET>';
    BEGIN
    insCtx := DBMS_XMLStore.newContext('scott.emp'); -- get saved context
    DBMS_XMLStore.clearUpdateColumnList(insCtx); -- clear the update settings
    -- set the columns to be updated as a list of values
    DBMS_XMLStore.setUpdateColumn(insCtx,'EMPNO');
    DBMS_XMLStore.setUpdateColumn(insCtx,'SAL');
    DBMS_XMLStore.setUpdatecolumn(insCtx,'HIREDATE');
    -- Now insert the doc.
    -- This will only insert into EMPNO, SAL and HIREDATE columns
    rows := DBMS_XMLStore.insertXML(insCtx, xmlDoc);
    -- Close the context
    DBMS_XMLStore.closeContext(insCtx);
    END;
    /

    You ask where am getting the XML doc. Well, am not getting the doc itself.
    I either don't understand or I disagree. In your sample code, you're certainly creating an XML document; your local variable "xmldoc" is an XML document.
    DBMS_XMLSTORE package needs to know the canonical format and that's what I hardcoded.
    Again, I either don't understand or I disagree... DBMS_XMLStore more or less assumes the format of the XML document itself: there's a ROWSET tag, a ROW tag, and then whatever column tags you'd like. You can override what tag identifies a row, but the rest is pretty much assumed. Your calls to setUpdateColumn identify what subset of column tags in the XML document you're interested in.
    Later in code I use DBMS_XMLStore.setUpdateColumn to specify which columns are to be inserted.
    Agreed.
    xmldoc CLOB :=
    '<ROWSET>
    <ROW num="1">
    <KEY_OLD> Smoker </KEY_OLD>
    <KEY_NEW> 3 </KEY_NEW>
    <TRANSFORM> Specified </TRANSFORM>
    <KEY_OLD> Smoker </KEY_OLD>
    <VALUEOLD> -1 </VALUEOLD>
    <VALUENEW> -1 </VALUENEW>
    <DESCRIPTION> NA </DESCRIPTION>
    </ROW>
    </ROWSET>';
    This is your XML document. You almost certainly want to be reading this from the file system and/or have it passed in from another procedure. If you hard-code the XML document, you're limited to a 32k string literal, which is almost certainly causing the error you were reporting initially.
    As am writing this I'm realizing that I'm doing this wrong, because I do need to read the XML file from the filesystem (but insert the columns selectively)... What I need to come up with is a proc that would grab the XML file and do inserts into a relational table. The XML file will change in the future, and that means that all my 'canonical format' code will be broken. How do I deal with anticipated change? Do I need to define/create an XML schema in 10g if am just inserting into one relational table from one XML file?
    What does "The XML file will change in the future" mean? Are you saying that the structure of the XML document will change? Or that the data in the XML document will change? Your code should only need to change if the structure of the document changes, which should be exceptionally uncommon, and would only be an issue if you're adding another column that you want to work with, which would necessitate code changes anyway.
    I found an article where the issue of a changing XML file is dealt with by using an XSL file (that's where I'd define the 'canonical format'), but am having a problem with creating one, because the source XML is screwed up in terms of the format:
    it's not <x> blah </x>
    <x2> blah </x2>
    but rather x2="blah" x3="blah" ...etc
    Can you point me in the right direction, please?
    You can certainly use something like the DBMS_XSLProcessor package to transform whatever XML document you have into an XML document in an appropriate format for the XMLStore package, and pass that transformed XML document into something like your sample procedure rather than defining the xmldoc local variable with a hard-coded document. Of course, you'd need to write appropriate XSL code to do the actual transform.
    Justin
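    A sketch of the file-reading approach Justin describes, assuming a directory object XML_DIR pointing at the folder that holds a file rows.xml (both names hypothetical):
    DECLARE
      v_xml   CLOB;
      v_bfile BFILE := BFILENAME('XML_DIR', 'rows.xml');
      v_dest  INTEGER := 1;
      v_src   INTEGER := 1;
      v_lang  INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
      v_warn  INTEGER;
      v_ctx   DBMS_XMLStore.ctxType;
      v_rows  NUMBER;
    BEGIN
      -- load the file into a temporary CLOB; no string literal involved
      DBMS_LOB.createtemporary(v_xml, TRUE);
      DBMS_LOB.fileopen(v_bfile, DBMS_LOB.lob_readonly);
      DBMS_LOB.loadclobfromfile(v_xml, v_bfile, DBMS_LOB.lobmaxsize,
                                v_dest, v_src, DBMS_LOB.DEFAULT_CSID,
                                v_lang, v_warn);
      DBMS_LOB.fileclose(v_bfile);
      -- hand the CLOB to DBMS_XMLStore exactly as in the original example
      v_ctx := DBMS_XMLStore.newContext('scott.emp');
      v_rows := DBMS_XMLStore.insertXML(v_ctx, v_xml);
      DBMS_XMLStore.closeContext(v_ctx);
      DBMS_LOB.freetemporary(v_xml);
    END;
    /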

  • [ORA-01704: string literal too long] in a long xquery

    I get an error when using queries with xmlquery.
    If the quoted string part is longer than 4000 characters, then I get
    ORA-01704: string literal too long.
    Example...
    select xmlquery(
    ' <Root >
         {for $xxx in ora:view("ttttttt")
            let $nnn := $xxx/ROW/nnn/number()
    [total chr>4000] '
    RETURNING CONTENT)
    from dual
    Is it really not possible to write Oracle XQuery expressions longer than 4000 characters, or is there a workaround for it?

    I got a similar problem using OCCI to insert records into my table.
    I have a BLOB field in my table, and whenever I quote a HEX string bigger than 4000 chars I get the error ORA-01704: string literal too long.
    To get around that problem I switched to a prepared SQL statement, but now I have a new problem for which I do not have a solution.
    I write here the piece of C++ code for the insert and the error I get, hoping somebody can help me.
    I have a problem writing into a Blob object.
    I get the error: ORA-22275: Invalid LOB Locator Specified.
    The error occurs when I try to write into the blob variable:
    blobField.writeChunk(....
    Routine:
    Routine:
    void insert2 (void) throw (SQLException)
    {
      cout << "=== Prepare Statement ===" << endl;
      Statement* stmtIns = occiConn->createStatement("insert into test_tab values (:1,:2,:3,:4,:5)");
      cout << "=== Prepare the Blob data ===" << endl;
      // Prepare all Blobs in an array of char.
      char* data[5];
      ub2 dataLen[5];
      int i;
      for (i = 0; i < 5; i++)
      {
        data[i] = new char[16364];
        memset(data[i], 65 + i, 16364);
        dataLen[i] = 16364;
      }
      cout << "=== Assign the other fields ===" << endl;
      stmtIns->setInt(1, i + 10);
      stmtIns->setInt(2, i + 1000);
      stmtIns->setInt(3, i + 10000);
      stmtIns->setInt(4, i + 20);
      // Assign the blob field
      Blob blobField(occiConn);
      blobField.setEmpty();
      cout << "=== Opening the blob field in read/write mode ===" << endl;
      blobField.open(OCCI_LOB_READWRITE);
      cout << "=== Writing data into the blob ===" << endl;
      blobField.writeChunk(dataLen[0], reinterpret_cast<unsigned char*>(data[0]), dataLen[0], 1);
      blobField.close();
      cout << "=== Done ===" << endl;
      stmtIns->setBlob(5, blobField);
      cout << "=== Execute the iteration ===" << endl;
      stmtIns->executeUpdate();
      for (i = 0; i < 5; i++)
        delete [] data[i];
      stmtIns->executeUpdate("COMMIT");
      occiConn->terminateStatement(stmtIns);
    }
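    ORA-22275 here most likely comes from writing into a locator that does not yet point at LOB storage in the database: setEmpty() yields an empty locator, while writeChunk needs a locator obtained from (or returned by) an actual row. The server-side shape of the usual fix, sketched in PL/SQL against the same test_tab (its column names are assumptions here):
    DECLARE
      v_blob BLOB;
    BEGIN
      -- insert an empty LOB first, get a valid locator back, then write into it
      INSERT INTO test_tab (col1, col2, col3, col4, col5)
      VALUES (10, 1000, 10000, 20, EMPTY_BLOB())
      RETURNING col5 INTO v_blob;
      DBMS_LOB.writeappend(v_blob, 4, HEXTORAW('41414141'));
      COMMIT;
    END;
    /
    In OCCI the same ordering applies: INSERT the row with an empty Blob, SELECT the locator back (FOR UPDATE), and only then write into it.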

  • String Literal too long / CLOB Issue

    I have a table with a CLOB column called MESSAGE. This is where I store the message of an email; we have an internal email app. What is happening is that I can't insert anything bigger than around 3990 characters or so. In my procedure the parameter comes in as a CLOB, and when I use TO_CHAR() around the table column to compare it with my parameter, my procedure gives me this error:
    PL/SQL: ORA-00932: inconsistent datatypes: expected - got CLOB
    But when it runs and I put in a 9000-character message, it says:
    Oracle.DataAccess.Client.OracleException was unhandled by user code
    Message="ORA-01704: string literal too long
    I'm using C# with Oracle.DataAccess.Client.OracleConnection to create my Oracle connection, and Oracle 10g.
    I'm calling this procedure from my app. By doing this, am I causing it to only hold 4000 characters?
    Here is a scaled-down version of my code for just that column:
    CREATE OR REPLACE PROCEDURE EMAILINS (
      P_MSG IN CLOB
    ) AS
      varT VARCHAR2(10000);
      varSQL VARCHAR2(20000);
      varTemp NUMBER;
    BEGIN
      -- SEE IF STRING EXISTS
      SELECT 1 INTO varTemp
      FROM tblEmail
      WHERE TO_CHAR(MESSAGE) = P_MSG;
    EXCEPTION
      WHEN TOO_MANY_ROWS THEN
        varSQL := varT||CHR(10)||'***Multiple Rows Exist in Table tblEmail***';
        DBMS_OUTPUT.PUT_LINE(varSQL);
      WHEN NO_DATA_FOUND THEN
        varT := P_MSG;
        varSQL := 'INSERT INTO TBL_EMAIL( MESSAGE)'||CHR(10);
        varSQL := varSQL || 'VALUES (tblEmail_SEQ.NEXTVAL,'||varT||')';
        EXECUTE IMMEDIATE varSQL;
    END EMAILINS;

    In the first place, you don't need (and surely don't want) dynamic SQL to do the insert. Replace
    varT := P_MSG;
    varSQL := 'INSERT INTO TBL_EMAIL( MESSAGE)'||CHR(10);
    varSQL := varSQL || 'VALUES (tblEmail_SEQ.NEXTVAL,'||varT||')';
    EXECUTE IMMEDIATE varSQL;
    with the simpler
    INSERT INTO tbl_email( <<primary key column>>, message )
      VALUES( tblEmail_Seq.nextval, p_msg );
    Secondly, you want to use the DBMS_LOB.COMPARE function to determine whether the contents of the LOBs match. So replace
    SELECT 1
      INTO varTemp
      FROM tblEmail
     WHERE TO_CHAR(MESSAGE) = P_MSG
    with
    SELECT 1
      INTO varTemp
      FROM tblEmail
     WHERE dbms_lob.compare( message, p_msg ) = 0
    Of course, it is going to be relatively expensive to run this query every time you insert a new message unless the table is always going to be very small, which seems unlikely. It also doesn't prevent duplicate entries if there are multiple threads executing at the same time.
    Justin
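    Putting both of those suggestions together, a sketch of the revised procedure might look like this (the primary key column name EMAIL_ID is an assumption):
    CREATE OR REPLACE PROCEDURE emailins (
      p_msg IN CLOB
    ) AS
      v_temp NUMBER;
    BEGIN
      -- see if an identical message already exists
      SELECT 1 INTO v_temp
      FROM tblEmail
      WHERE dbms_lob.compare(message, p_msg) = 0;
    EXCEPTION
      WHEN TOO_MANY_ROWS THEN
        DBMS_OUTPUT.PUT_LINE('***Multiple Rows Exist in Table tblEmail***');
      WHEN NO_DATA_FOUND THEN
        -- static SQL: the CLOB is bound, so no literal-length limit applies
        INSERT INTO tblEmail (email_id, message)
        VALUES (tblEmail_seq.NEXTVAL, p_msg);
    END emailins;
    /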

  • When trying to 'Consolidate Files' via iTunes I get the message "Copying files failed. The name was invalid or too long."

    I am trying to copy my entire iTunes library and everything in it from my PC desktop to my PC laptop using the "External Drive" method as shown on the Apple website. In Part 1, step 5, I am told to consolidate files. When attempting to do this I get the message "Copying files failed. The file name was invalid or too long."
    How do I resolve this?

    I have just been having this problem and it has been driving me mad. The error message doesn't tell you which file is causing the problem, so you can't fix it, and it leaves your library in a state of limbo with some files copied into the new location and some in the old location. After many hours I found a surprisingly quick and simple solution.
    1) In iTunes, create a smart playlist that includes everything that was added before tomorrow's date. This will include everything.
    2) Right-click this playlist and export it as a text file to produce a tab-delimited file.
    3) Import this into a spreadsheet, or just view it in a text editor with line wrap turned off. What you are interested in is the last column (or the end of the line in a text editor). This gives you the path of the file associated with each entry in the library.
    The paths of the files that have already been copied successfully will start with the new location you specified for your media files. Scroll through the rows until you get a blank path. If you start seeing paths starting with the old media file location, then you went too far; start scrolling back.
    The row with the blank path is the file that failed. Move back to the start of the row to find out more about it. In my case it was a podcast with a very long name. I just deleted that podcast from iTunes.
    4) Restart the consolidation by choosing File, Library as you did before, and the process starts again where it left off.
    5) If it fails again on another file, just repeat the process.

  • Too long time for editing BEx

    Dear all,
    I have a problem with BEx.
    Whenever I edit a query in BEx Query Designer, creating or modifying it takes too long.
    For instance, when I restrict a value in Rows or Columns, it needs too much time.
    Is there a Tcode for checking this problem?
    Or is there some way to solve it?

    Hi,
    Please restart the Query Designer and check whether you can edit the query faster.
    Also please check whether these notes relate to your performance issue:
    1387593 - Performance optimization for query change/generation
    1416737 - Performance optimization for query change/generation (2)
    1106067 - Low performance when opening BEx Analyzer on Windows Server
    Performance issues can in general also be mitigated by faster hardware, so please check the hardware as well.
    Thanks,
    Venkat

  • SQL Update statement taking too long..

    Hi All,
    I have a simple update statement that goes through a table of 95000 rows and is taking too long to run; here are the details:
    Oracle Version: 11.2.0.1 64bit
    OS: Windows 2008 64bit
    desc temp_person;
    Name                                                                                Null?    Type
    PERSON_ID                                                                           NOT NULL NUMBER(10)
    DISTRICT_ID                                                                     NOT NULL NUMBER(10)
    FIRST_NAME                                                                                   VARCHAR2(60)
    MIDDLE_NAME                                                                                  VARCHAR2(60)
    LAST_NAME                                                                                    VARCHAR2(60)
    BIRTH_DATE                                                                                   DATE
    SIN                                                                                          VARCHAR2(11)
    PARTY_ID                                                                                     NUMBER(10)
    ACTIVE_STATUS                                                                       NOT NULL VARCHAR2(1)
    TAXABLE_FLAG                                                                                 VARCHAR2(1)
    CPP_EXEMPT                                                                                   VARCHAR2(1)
    EVENT_ID                                                                            NOT NULL NUMBER(10)
    USER_INFO_ID                                                                                 NUMBER(10)
    TIMESTAMP                                                                           NOT NULL DATE
    CREATE INDEX tmp_rs_PERSON_ED ON temp_person (PERSON_ID,DISTRICT_ID) TABLESPACE D_INDEX;
    Index created.
    ANALYZE INDEX tmp_PERSON_ED COMPUTE STATISTICS;
    Index analyzed.
    explain plan for update temp_person
      2  set first_name = (select trim(f_name)
      3                    from ext_names_csv
      4                               where temp_person.PERSON_ID=ext_names_csv.p_id
      5                               and   temp_person.DISTRICT_ID=ext_names_csv.ed_id);
    Explained.
    @?/rdbms/admin/utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 3786226716
    | Id  | Operation                   | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | UPDATE STATEMENT            |                | 82095 |  4649K|  2052K  (4)| 06:50:31 |
    |   1 |  UPDATE                     | TEMP_PERSON    |       |       |            |          |
    |   2 |   TABLE ACCESS FULL         | TEMP_PERSON    | 82095 |  4649K|   191   (1)| 00:00:03 |
    |*  3 |   EXTERNAL TABLE ACCESS FULL| EXT_NAMES_CSV  |     1 |   178 |    24   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - filter("EXT_NAMES_CSV"."P_ID"=:B1 AND "EXT_NAMES_CSV"."ED_ID"=:B2)
    Note
       - dynamic sampling used for this statement (level=2)
    19 rows selected.
    By the looks of it, the update is going to take 6 hrs!!!
    ext_names_csv is an external table that has the same number of rows as the PERSON table.
    ROHO@rohof> desc ext_names_csv
    Name                                                                                Null?    Type
    P_ID                                                                                         NUMBER
    ED_ID                                                                                        NUMBER
    F_NAME                                                                                       VARCHAR2(300)
    L_NAME                                                                                       VARCHAR2(300)
    Can anyone help diagnose this, please?
    Thanks

    Thank you all for the great ideas; you have been extremely helpful. Here is what we did to resolve the query.
    We started with Etbin's idea of creating a regular table from the external table, so that we could index and reference it more easily. We did the following:
    SQL> create table ext_person as select P_ID,ED_ID,trim(F_NAME) fst_name,trim(L_NAME) lst_name from EXT_NAMES_CSV;
    Table created.
    SQL> desc ext_person
    Name                                                                                Null?    Type
    P_ID                                                                                         NUMBER
    ED_ID                                                                                        NUMBER
    FST_NAME                                                                                     VARCHAR2(300)
    LST_NAME                                                                                     VARCHAR2(300)
    SQL> select count(*) from ext_person;
      COUNT(*)
         93383
    SQL> CREATE INDEX EXT_PERSON_ED ON ext_person (P_ID,ED_ID) TABLESPACE D_INDEX;
    Index created.
    SQL> exec dbms_stats.gather_index_stats(ownname=>'APPD', indname=>'EXT_PERSON_ED',partname=> NULL , estimate_percent=> 30 );
    PL/SQL procedure successfully completed.
    We had a look at the plan with the original SQL query that we had:
    SQL> explain plan for update temp_person
      2  set first_name = (select fst_name
      3                    from ext_person
      4                               where temp_person.PERSON_ID=ext_person.p_id
      5                               and   temp_person.DISTRICT_ID=ext_person.ed_id);
    Explained.
    SQL> @?/rdbms/admin/utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 1236196514
    | Id  | Operation                    | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | UPDATE STATEMENT             |                | 93383 |  1550K|   186K (50)| 00:37:24 |
    |   1 |  UPDATE                      | TEMP_PERSON    |       |       |            |          |
    |   2 |   TABLE ACCESS FULL          | TEMP_PERSON    | 93383 |  1550K|   191   (1)| 00:00:03 |
    |   3 |   TABLE ACCESS BY INDEX ROWID| EXT_PERSON     |     9 |  1602 |     1   (0)| 00:00:01 |
    |*  4 |    INDEX RANGE SCAN          | EXT_PERSON_ED  |     1 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - access("EXT_PERSON"."P_ID"=:B1 AND "EXT_PERSON"."ED_ID"=:B2)
    Note
       - dynamic sampling used for this statement (level=2)
    20 rows selected.
    As you can see, the time has dropped to 37 min (from 6 hrs). Then we decided to change the SQL query and use donisback's suggestion (using MERGE); we explained the plan for the new query, and here are the results:
    SQL> explain plan for MERGE INTO temp_person t
      2  USING (SELECT fst_name ,p_id,ed_id
      3  FROM  ext_person) ext
      4  ON (ext.p_id=t.person_id AND ext.ed_id=t.district_id)
      5  WHEN MATCHED THEN
      6  UPDATE set t.first_name=ext.fst_name;
    Explained.
    SQL> @?/rdbms/admin/utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 2192307910
    | Id  | Operation            | Name         | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | MERGE STATEMENT      |              | 92307 |    14M|       |  1417   (1)| 00:00:17 |
    |   1 |  MERGE               | TEMP_PERSON  |       |       |       |            |          |
    |   2 |   VIEW               |              |       |       |       |            |          |
    |*  3 |    HASH JOIN         |              | 92307 |    20M|  6384K|  1417   (1)| 00:00:17 |
    |   4 |     TABLE ACCESS FULL| TEMP_PERSON  | 93383 |  5289K|       |   192   (2)| 00:00:03 |
    |   5 |     TABLE ACCESS FULL| EXT_PERSON   | 92307 |    15M|       |    85   (2)| 00:00:02 |
    Predicate Information (identified by operation id):
       3 - access("P_ID"="T"."PERSON_ID" AND "ED_ID"="T"."DISTRICT_ID")
    Note
       - dynamic sampling used for this statement (level=2)
    21 rows selected.
    As you can see, the update now takes 00:00:17 to run (need I say more?) :)
    Thank you all for your ideas that helped us get to the solution.
    Much appreciated.
    Thanks

  • Streamline query ? Taking too long

    First I wanted to say thanks to all in this forum; it has been a huge help in learning SQL.
    I'm hoping someone can take a look at this query. It works, but it takes a very long time to run.
    Maybe there is a way to streamline it. Right now it's using one project number, but typically I would put in 60 or 70 project numbers here:
    select proj_id from project_master where status='A' and project_number IN(
    '502998'
    )))c,
    If someone knows a better way to run this, please let me know; I will try anything. Currently it takes about an hour to run for 1 project number.
    select * from(
    select
    b.doc_folder_id,c.project_number,b.name,a.doc_file_name,a.rec_update_date,a.rec_create_date,d.rnk
    from
    (select doc_id,proj_id, doc_file_name,rec_create_date,rec_update_date from document_master where doc_status ='A'
    and doc_file_extension like 'pdf' or doc_file_extension like 'jpg'
    or doc_file_extension like 'xls' or doc_file_extension like 'doc'
    or doc_file_extension like 'txt' or doc_file_extension like 'png'
    or doc_file_extension like 'tif' or doc_file_extension like 'ppt'
    or doc_file_extension like 'pps' or doc_file_extension like 'msg'
    ) a,
    (select * from doc_folder_master
    where upper(name) LIKE '%DAILY REPORTS%'
    OR upper(name) LIKE '%MANPOWER REPORTS%'
    OR upper(name) LIKE '%SITE PURCHASES%'
    OR upper(name) LIKE '%SETE EHS ISSUES%'
    OR upper(name) LIKE '%TURNOVER PACKAGES%'
    OR upper(name) LIKE '11.04.01 PAD%'
    OR upper(name) LIKE '%EDSR%'
    OR upper(name) LIKE '%COQ WORKFLOW%'
    and status='A'
    ) b,
    (select proj_id,project_number from project_master where proj_id IN (
    select proj_id from project_master where status='A' and project_number IN(
    '502998'
    )))c,
    (select child_doc_type,
    parent_doc_id,child_doc_id,to_char(rec_create_date,'mm/dd/yyyy hh24:mi:ss'),
    row_number() over (partition by parent_doc_id,child_doc_type order by rec_create_date desc) rnk
    from document_relations)d
    where a.proj_id=b.proj_id
    and c.proj_id=a.proj_id
    and c.proj_id=b.proj_id
    and d.parent_doc_id=b.doc_folder_id
    and a.doc_id=d.child_doc_id)
    where rnk <3
    thanks for any assistance.

    Hi,
    Please, you might want to read this post:
    When your query takes too long ...
    Providing further information is key to obtaining quality answers.
    Now on to the actual subject:
    It seems you want some sort of top-n query. Depending on the cardinality and the data that you have, you probably would want to prune the rows from the DOCUMENT_RELATIONS table early, before joining to the other tables. This way you can avoid the database wasting effort of looking up matches on the other tables to only then discard those joined rows. You can do that by pushing the WHERE rnk < 3 predicate into the inline view.
    SELECT *
      FROM (SELECT b.doc_folder_id, c.project_number, b.name, a.doc_file_name, a.rec_update_date, a.rec_create_date, d.rnk
              FROM (SELECT doc_id, proj_id, doc_file_name, rec_create_date, rec_update_date
                      FROM document_master
                     WHERE doc_status = 'A'
                           AND doc_file_extension IN ('pdf', 'jpg', 'xls', 'doc', 'txt', 'png', 'tif', 'ppt', 'pps', 'msg')) a,
                   (SELECT *
                      FROM doc_folder_master
                     WHERE upper(NAME) LIKE '%DAILY REPORTS%'
                           OR upper(NAME) LIKE '%MANPOWER REPORTS%'
                           OR upper(NAME) LIKE '%SITE PURCHASES%'
                           OR upper(NAME) LIKE '%SETE EHS ISSUES%'
                           OR upper(NAME) LIKE '%TURNOVER PACKAGES%'
                           OR upper(NAME) LIKE '11.04.01 PAD%'
                           OR upper(NAME) LIKE '%EDSR%'
                           OR upper(NAME) LIKE '%COQ WORKFLOW%'
                           AND status = 'A') b,
                   (SELECT proj_id, project_number
                      FROM project_master
                     WHERE proj_id IN (SELECT proj_id
                                         FROM project_master
                                        WHERE status = 'A'
                                              AND project_number IN ('502998'))) c,
                   (SELECT *
                      FROM (SELECT child_doc_type,
                                   parent_doc_id,
                                   child_doc_id,
                                   to_char(rec_create_date, 'mm/dd/yyyy hh24:mi:ss'),
                                   row_number() over(PARTITION BY parent_doc_id, child_doc_type ORDER BY rec_create_date DESC) rnk
                              FROM document_relations)
                     WHERE rnk < 3) d
             WHERE a.proj_id = b.proj_id
                   AND c.proj_id = a.proj_id
                   AND c.proj_id = b.proj_id
                   AND d.parent_doc_id = b.doc_folder_id
                   AND a.doc_id = d.child_doc_id)
    Like Toon said, if not needed, you should avoid the second scan of the PROJECT_MASTER table.
    Perhaps you should also check the possibility of creating a function-based index on the doc_folder_master table, on the upper(NAME) expression, to improve those LIKE conditions.
    Docs on Function-based Indexes:
    http://download.oracle.com/docs/cd/E11882_01/server.112/e16638/data_acc.htm#PFGRF94785
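    For reference, the suggested function-based index would look something like this (the index name is made up; note that predicates with a leading '%' wildcard generally cannot use it for a range scan, so it mostly helps patterns anchored at the start, such as '11.04.01 PAD%'):
    CREATE INDEX doc_folder_master_uname_ix
      ON doc_folder_master (UPPER(name));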
