Performance issues - which Oracle XML technology to use?

Our company has spent some time researching different Oracle XML technologies to get the fastest performance while keeping the flexibility of changing XML schemas across revs.
Not registering schemas gives the flexibility of quickly changing schemas between revs and a simpler table structure, but hurts performance quite a bit compared to registering schemas.
Flat non-XML tables seem the fastest, but seeing that everything is going XML, this doesn't seem like a real option.
Anyhow, let me know any input/experience anyone can offer.
Here's what we have tested, all with simple 10,000-record tests, each insert of the form:
insert into po_tab values (1,
xmltype('<PurchaseOrder xmlns="http://www.oracle.com/PO.xsd"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.oracle.com/PO.xsd
http://www.oracle.com/PO.xsd">
<PONum>akkk</PONum>
<Company>Oracle Corp</Company>
<Item>
<Part>9i Doc Set</Part>
<Price>2550</Price>
</Item>
<Item>
<Part>8i Doc Set</Part>
<Price>350</Price>
</Item>
</PurchaseOrder>'));
We have tried three scenarios:
- flat tables, non-XML DB
- XML DB with registered schemas
- XML DB without registering schemas, just using XMLType and adding Oracle Text XML indexes with paths to speed up queries
Now for the results:
- flat tables, non-XML DB (we were thinking of using it like this to ease fetching of data for UI views in ad hoc situations, then have code to export the data into XML; is there any Oracle tool that will export the data into XML automatically via a schema?)
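For reference, a rough, untested sketch of the kind of export we have in mind, using the SQL/XML functions (XMLELEMENT/XMLAGG, available from 9iR2) against the po_tabSimple table defined just below; note this shapes the XML by hand rather than being driven by the registered schema:
select xmlelement("PurchaseOrder",
         xmlelement("PONum", p.id),
         xmlagg(xmlelement("Item",
                  xmlelement("Part",  p.part),
                  xmlelement("Price", p.price)))).getStringVal()
from po_tabSimple p
group by p.id;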
create table po_tabSimple(
id int constraint id_pk PRIMARY KEY,
part varchar2(100),
price number);
insert into po_tabSimple values (i, 'test', 1.000);
select part from po_tabSimple;
2 seconds (Quickest)
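For reference, a minimal sketch of the 10,000-row loop behind these timings (assumed; i in the insert above is the loop counter):
set timing on
begin
  for i in 1 .. 10000 loop
    insert into po_tabSimple values (i, 'test', 1.000);
  end loop;
  commit;
end;
/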
- XML DB with registered schemas
declare
doc varchar2(1000) := '<schema
targetNamespace="http://www.oracle.com/PO.xsd"
xmlns:po="http://www.oracle.com/PO.xsd"
xmlns="http://www.w3.org/2001/XMLSchema">
<complexType name="PurchaseOrderType">
<sequence>
<element name="PONum" type="decimal"/>
<element name="Company">
<simpleType>
<restriction base="string">
<maxLength value="100"/>
</restriction>
</simpleType>
</element>
<element name="Item" maxOccurs="1000">
<complexType>
<sequence>
<element name="Part">
<simpleType>
<restriction base="string">
<maxLength value="1000"/>
</restriction>
</simpleType>
</element>
<element name="Price" type="float"/>
</sequence>
</complexType>
</element>
</sequence>
</complexType>
<element name="PurchaseOrder" type="po:PurchaseOrderType"/>
</schema>';
begin
dbms_xmlschema.registerSchema('http://www.oracle.com/PO.xsd', doc);
end;
create table po_tab(
id number,
po sys.XMLType)
xmltype column po
XMLSCHEMA "http://www.oracle.com/PO.xsd"
element "PurchaseOrder";
select EXTRACT(po_tab.po, '/PurchaseOrder/Item/Part/text()','xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"').getStringVal() from po_tab;
4 sec
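One caveat we are not sure about: since PurchaseOrder and its children live in the http://www.oracle.com/PO.xsd namespace, the XPath may need that namespace mapped as the default; a variant of the query (sketch only, not part of the timed run):
select extract(p.po, '/PurchaseOrder/Item/Part/text()',
       'xmlns="http://www.oracle.com/PO.xsd"').getStringVal()
from po_tab p;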
- XML DB without registering schemas, just using XMLType and adding Oracle Text XML indexes with paths to speed up queries
create table po_tabOld(
id number,
po sys.XMLType);
select EXTRACT(po_tabOld.po, '/PurchaseOrder/Item/Part/text()','xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"').getStringVal() from po_tabOld;
41 seconds without indexes
41 seconds with indexes
Here are the indexes used:
CREATE INDEX po_tabColOld_idx ON po_tabOld(po) indextype is ctxsys.ctxxpath;
CREATE INDEX po_tabOld_idx ON po_tabOld X (X.po.extract('/PurchaseOrder/Item/Part/text()','xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"').getStringVal());
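A note on the identical timings: as far as we understand (not verified), the CTXXPATH index only helps existsNode() predicates in the WHERE clause, not extract() in the SELECT list, so a query shaped like the following is what it should actually be able to speed up (sketch only):
select id
from po_tabOld p
where existsNode(p.po, '/PurchaseOrder/Item[Part="9i Doc Set"]',
      'xmlns="http://www.oracle.com/PO.xsd"') = 1;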

Similar Messages

  • Performance issues with Oracle EE 9.2.0.4 and RedHat 2.1

    Hello,
    I am having some serious performance issues with Oracle Enterprise Edition 9.2.0.4 and RedHat Linux 2.1. The processor goes berserk at 100% for long periods of time (some 5 minutes), and all the RAM gets used.
    Some environment characteristics:
    Machine: Intel Pentium IV 2.0GHz with 1GB of RAM.
    OS: RedHat Linux 2.1 Enterprise.
    Oracle: Oracle Enterprise Edition 9.2.0.4
    Application: We have a small web-application with 10 users (for now) and very basic queries (all in stored procedures). Also we use the latest version of ODP.NET with default connection settings (some low pooling, etc).
    Does anyone know what could be going on?
    Is anybody else having this similar behavior?
    We changed from SQL Server, so we are not the world's experts on the matter. But we want a reliable system nonetheless.
    Please help us out; give us some tips, tricks, or guides…
    Thanks to all,
    Frank

    Thank you very much and sorry I couldn’t write sooner. It seems that the administrator doesn’t see the kswap going on so much, so I don’t really know what is going on.
    We are looking at some queries and some indexing, but this is nuts; if we had some poor queries, which we don't really, the server would show peaks, right?
    But it goes crazy, with two Oracle processes taking all the resources. There seems to be little swapping going on.
    So now what? They are already talking about MS SQL Server; please help me out here, this is crazy!
    We have maybe the most powerful combination here. What is Oracle doing?
    We even killed the IIS worker process and had no one doing anything with the database, and still those two processes keep going.
    Can someone help me?
    Thanks,
    Frank
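    A generic way to see what a runaway Oracle server process is executing is to map its OS PID back to a session; a sketch (the PID comes from top/ps, the views are the standard v$ views):
    select s.sid, s.serial#, s.username, s.status, q.sql_text
    from   v$process p
           join v$session s on s.paddr = p.addr
           left join v$sqltext q on q.address    = s.sql_address
                                and q.hash_value = s.sql_hash_value
    where  p.spid = '&os_pid'
    order  by q.piece;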

  • View objects performance issue with oracle seeded tables

    While I am writing a view object on Oracle seeded tables like MTL_PARAMETERS, it takes a long time to show in the OAF page. I am trying to display all of these view object columns in the detail disclosure of an advanced table. My application takes more than two minutes to display the view columns of a query which returns just 200 rows. Please help me improve performance when my query uses seeded tables.
    This issue is happening only in R12 view object and advanced tables.
    Edited by: vlsn on Jun 24, 2012 11:36 PM


  • Performance Issues with large XML (1-1.5MB) files

    Hi,
    I'm using XML Schema-based object-relational storage for my XML documents, which are typically 1-1.5 MB in size, and I'm having serious performance issues with XPath queries.
    When I run an XPath query against an element of SQL type VARCHAR2, I get good performance. But when I run a similar XPath query against an element of SQL type collection (VARRAY of VARCHAR2), performance is very poor.
    I have also created indexes on extract() and analyzed my XMLType table and indexes, but I get no performance gain. I have also tried all of the storage options available for collections, i.e. VARRAYs, nested tables, IOTs, LOBs, inline, etc., and all of these gave me the same bad performance.
    I even tried creating XMLType views based on XPath queries but the performance didn't improve much.
    I guess I'm running out of options and patience as well.;)
    I would appreciate any ideas/suggestions, please help.....
    Thanks;
    Ramakrishna Chinta
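    One avenue still untested here: unnest the repeating element with XMLSEQUENCE instead of XPathing into the collection; a rough sketch with made-up table/element names:
    select extractValue(value(i), '/Item/Part') as part
    from   xml_docs d,
           table(xmlsequence(extract(d.doc, '/PurchaseOrder/Item'))) i;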

    Are you having similar symptoms as I am? http://discussions.apple.com/thread.jspa?threadID=2234792&tstart=0

  • Tool for diagnosing performance issues in oracle database

    Is there any tool to diagnose performance issues in queries and stored procedures in Oracle, similar to SQL Profiler for SQL Server?
    Thanks

    You can use OEM (Oracle Enterprise Manager) to diagnose and monitor the database.
    Chapter 10: Monitoring and Tuning the Database (refer to the link below: Oracle OBE series, step-by-step procedures with screenshots).
    This chapter introduces you to some of the monitoring and tuning operations as performed through Enterprise Manager.
    http://www.oracle.com/technology/obe/2day_dba/monitoring/monitoring.htm
    refer: Monitoring and Tuning the Database
    http://download.oracle.com/docs/cd/B14117_01/server.101/b10742/montune.htm
    Hope this helps you.
    Edited by: rajeysh on Jul 14, 2010 9:28 PM

  • Performance issue with Oracle Global Temporary table

    Hi
    Oracle version : 10.2.0.3.0 - Production
    We have an application in Java / Oracle. User requests come in as XML; an Oracle parser parses each request and inserts it into global temporary tables, and then a business stored procedure picks the data from these GTTs and does the required processing.
    In the end, the required response data is inserted into response GTTs, from which the response XML is generated.
    Question: Does the use of global temporary tables in Oracle degrade performance, given that we have a large number of GTTs in our application, approximately 500-600 such tables?
    Regards,
    Vikas Kumar

    Hi All,
    Here is the architecture of my application:
    The Java application creates XML from the screen values and then inserts that XML into a framework (separate DB schema) table. Then Java calls a stored procedure in the same framework DB, and in the SP we have the following steps:
    1. It fetches the XML from the XMLType table and inserts it into a screen-specific XMLType table in the framework DB schema. This table has a trigger which parses the XML and then inserts the values into GTTs created in separate product schemas.
    2. It calls the product SP, which contains the business logic; the product SP does the execution and then inserts the response into a response GTT.
    3. The response XML is created using an XML generation function and the response GTT.
    I hope you understand my architecture this time; please let me know whether GTTs are good in this scenario or not. Also note that I need the data in the GTTs only during execution and not after that; I don't want to do explicit deletes, which I would have to do if I were using normal tables.
    Regards,
    Vikas Kumar
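    On the cleanup point: whether GTT rows vanish automatically is decided when the table is created; a minimal sketch (hypothetical table name):
    -- rows disappear at COMMIT, no explicit delete needed
    create global temporary table resp_gtt (
      id   number,
      resp xmltype
    ) on commit delete rows;
    -- alternative: keep rows until the session ends
    -- ... on commit preserve rows;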

  • Low performance issue with EPM Add-in SP16 using Office Excel 2010

    We are seeing poor performance using the EPM Excel Add-in SP16 with Excel 2010 and Windows 7.
    The same reports run fine using Excel 2007 (on Windows 7 or XP).
    Anyone with the same issue?
    Thanks

    Solved. It's a known issue. The solution, as provided by SAP Support is:
    Dear Customer,
    This problem may relate to a known issue, which happens on EPM Add-in
    with Excel 2010 when zoom is NOT 100%.
    To verify this, could you please try setting the zoom to 100% for the EPM report in question and observe whether refresh performance recovers?
    Quite annoying, isn't it?

  • Performance issue with Oracle Text index

    Hi Experts,
    We are on Oracle 11.2.0.3 on Solaris 10. I have implemented Oracle Text in our environment and I am facing a strange performance issue.
    One SQL statement with a CONTAINS clause is taking forever: more than 20 minutes and it still does not complete. This SQL has a CONTAINS clause, an EXISTS clause and a NOT EXISTS clause.
    If I remove the EXISTS and NOT EXISTS clauses, it completes fast, but with those two clauses it just takes forever. It is late at night so I am not able to post the table and SQL query details (I will do so tomorrow), but based on this general description, are there any pointers for me to review?
    sql query doing fine:
    SELECT
        U.CLNT_OID, U.USR_OID, S.MAILADDR
    FROM
        access_usr U
        INNER JOIN access_sia S
            ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
        WHERE U.CLNT_OID = 'ABCX32S'
        AND CONTAINS(LAST_NAME , 'TO%' ) >0
    --sql query that hangs forever:
    SELECT
        U.CLNT_OID, U.USR_OID, S.MAILADDR
    FROM
        access_usr U
        INNER JOIN access_sia S
            ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
        WHERE U.CLNT_OID = 'ABCX32S'
        AND CONTAINS(LAST_NAME , 'TO%' ) >0
    and exists (-- one clause here with a few table joins)
    and not exists (-- one clause here with a few table joins);
    -- Another strange thing I found: if instead of 'TO%' in this SQL I use 'ZZ%' or 'L1%', it works fast, but for 'TO%' it goes slow with those two EXISTS/NOT EXISTS clauses!
    I will be most thankful for the inputs.
    OrauserN

    Hi Barbara,
    First of all, thanks a lot for reviewing the issue.
    Unfortunately, changing to an empty stoplist did not work out. Today I am copying here the entire SQL that has this issue, and will be most thankful for more insights/pointers on what can be done.
    Here is the entire sql:
    SELECT U.CLNT_OID,
           U.USR_OID,
           S.EMAILADDRESS,
           U.FIRST_NAME,
           U.LAST_NAME,
           S.JOBCODE,
           S.LOCATION,
           S.DEPARTMENT,
           S.ASSOCIATEID,
           S.ENTERPRISECOMPANYCODE,
           S.EMPLOYEEID,
           S.PAYGROUP,
           S.PRODUCTLOCALE
      FROM    ACCESS_USR U
           INNER JOIN
              ACCESS_SIA S
           ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
    WHERE     U.CLNT_OID = 'G39NY3D25942TXDA'
           AND EXISTS
                  (SELECT 1
                     FROM ACCESS_USR_GROUP_XREF UGX
                          INNER JOIN ACCESS_GROUP RELG
                             ON     RELG.CLNT_OID = UGX.CLNT_OID
                                AND RELG.GROUP_OID = UGX.GROUP_OID
                          INNER JOIN ACCESS_GROUP G
                             ON     G.CLNT_OID = RELG.CLNT_OID
                                AND G.GROUP_TYPE_OID = RELG.GROUP_TYPE_OID
                    WHERE     UGX.CLNT_OID = U.CLNT_OID
                          AND UGX.USR_OID = U.USR_OID
                          AND G.GROUP_OID = 920512943
                          AND UGX.INCLUDED = 1)
           AND NOT EXISTS
                      (SELECT 1
                         FROM    ACCESS_USR_GROUP_XREF UGX
                              INNER JOIN
                                 ACCESS_GROUP G
                              ON     G.CLNT_OID = UGX.CLNT_OID
                                 AND G.GROUP_OID = UGX.GROUP_OID
                        WHERE     UGX.CLNT_OID = U.CLNT_OID
                              AND UGX.USR_OID = U.USR_OID
                              AND G.GROUP_OID = 920512943
                              AND UGX.INCLUDED = 1)
           AND CONTAINS (U.LAST_NAME, 'Bon%') > 0;
    Like I said before, if the EXISTS and NOT EXISTS clauses are removed it runs in under a second, but with those EXISTS and NOT EXISTS clauses it takes anywhere from 25 minutes to more than one hour.
    Note also that it was not 'TO%' but 'Bon%' in the CONTAINS clause that is giving the issue; sorry, that was wrong on my part.
    Also please see below the Oracle Text indexes defined on the table ACCESS_USR:
    --definition of preferences used in the index:
    SET SERVEROUTPUT ON size unlimited
    WHENEVER SQLERROR EXIT SQL.SQLCODE
    DECLARE
       v_err       VARCHAR2 (1000);
       v_sqlcode   NUMBER;
       v_count     NUMBER;
    BEGIN
       ctxsys.ctx_ddl.create_preference ('cust_lexer', 'BASIC_LEXER');
       ctxsys.ctx_ddl.set_attribute ('cust_lexer', 'base_letter', 'YES'); -- removes diacritics
    EXCEPTION
       WHEN OTHERS
       THEN
          v_err := SQLERRM;
          v_sqlcode := SQLCODE;
          v_count := INSTR (v_err, 'DRG-10701');
          IF v_count > 0
          THEN
             DBMS_OUTPUT.put_line (
                'The required preference named CUST_LEXER with BASIC LEXER is already set up');
          ELSE
             RAISE;
          END IF;
    END;
    DECLARE
       v_err       VARCHAR2 (1000);
       v_sqlcode   NUMBER;
       v_count     NUMBER;
    BEGIN
       ctxsys.ctx_ddl.create_preference ('cust_wl', 'BASIC_WORDLIST');
       ctxsys.ctx_ddl.set_attribute ('cust_wl', 'SUBSTRING_INDEX', 'true'); -- to improve performance
    EXCEPTION
       WHEN OTHERS
       THEN
          v_err := SQLERRM;
          v_sqlcode := SQLCODE;
          v_count := INSTR (v_err, 'DRG-10701');
          IF v_count > 0
          THEN
             DBMS_OUTPUT.put_line (
                'The required preference named CUST_WL with BASIC WORDLIST is already set up');
          ELSE
             RAISE;
          END IF;
    END;
    --now below is the code of the index:
    CREATE INDEX ACCESS_USR_IDX3 ON ACCESS_USR
    (FIRST_NAME)
    INDEXTYPE IS CTXSYS.CONTEXT
    PARAMETERS('LEXER cust_lexer WORDLIST cust_wl SYNC (ON COMMIT)');
    CREATE INDEX ACCESS_USR_IDX4 ON ACCESS_USR
    (LAST_NAME)
    INDEXTYPE IS CTXSYS.CONTEXT
    PARAMETERS('LEXER cust_lexer WORDLIST cust_wl SYNC (ON COMMIT)');
    The strange thing is that, like I said, if I remove the EXISTS clause the query returns very fast. If I modify the query to use only one NOT EXISTS clause and remove the EXISTS clause, it returns in less than one second. If I remove the EXISTS clause and use only the NOT EXISTS clause, it returns in less than 4 seconds. But with both clauses it runs forever!
    When I tried to use dbms_xplan.display_cursor to get the query plan (for the case with both the EXISTS and NOT EXISTS clauses in the query), it said that the previous statement's SQL ID was 0 or something like that, so I was not able to see the plan. I will keep trying to get this plan (it takes 25 minutes to an hour each time, but I will get this information soon). Again, any pointers are most helpful.
    Regards
    OrauserN
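    A plan can also be captured without waiting for the slow statement to finish, since EXPLAIN PLAN only parses the query; a minimal sketch against the query above (add back the EXISTS / NOT EXISTS branches as needed):
    EXPLAIN PLAN FOR
    SELECT U.CLNT_OID, U.USR_OID, S.EMAILADDRESS
      FROM ACCESS_USR U
           INNER JOIN ACCESS_SIA S
              ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
     WHERE U.CLNT_OID = 'G39NY3D25942TXDA'
       AND CONTAINS (U.LAST_NAME, 'Bon%') > 0;
    SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY);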

  • Which Oracle products should we use?

    We are looking to build, from the ground up, a site that is roughly a combination of www.kudzu.com and www.accessatlanta.com. Which Oracle products would help us best develop, test, and implement a similar solution.
    The primary confusion for me is whether the Portal product would be a good fit, or whether some other product(s) would be better.
    Many thanks in advance for any and all input.

    First, thank you VERY much for your reply. This post has received lots of views but no comments until yours. I appreciate your taking the time to offer a bit of advice and assistance.
    If I understand correctly, you're suggesting developing the whole thing using J2EE rather than a product like Oracle Portal. I assume we would still be able to deploy using components of Oracle Application Server. Is this correct?
    Ultimately, this portal website will be very database centric. All of the ads, listings, content, etc., would have to be stored and managed using a database. Would using J2EE be the most effective way to accomplish that?
    I'm sorry if I seem a bit of a newbie. I know enough that I want to use Oracle technology, but not enough to know which products. Essentially, I'm the guy with the idea and I need some assistance figuring out the products so I know which product expertise to look for when hiring the right technical staff.
    If you have any additional thoughts, please share. In the meantime, thanks again for your response.

  • Performance issue in oracle 11.1.0.7 version

    Hi ,
    In our production environment we have some cron jobs scheduled to run every Saturday. One of the cron jobs is taking more and more time to finish.
    On the previous Oracle version, 10.2.0.4, it took 36 hours to complete. After upgrading to 11gR1, it now takes 47 hours, sometimes 50, to finish.
    I have asked my production DBA to take an AWR report after the cron job finishes.
    Now he has sent the AWR report, but I don't know how to read it. Can you please help me read AWR reports? I need to give some recommendations to reduce the overall running time.
    I don't know how to attach the AWR report here.
    Please help me on this.
    Thanks
    Shravan Kumar

    Hi,
    Now he sent the AWR report, but I don't know how to read it. Can you please help me read the AWR reports? I need to give some recommendations to reduce the overall running time.
    Aren't you a DBA? You should probably seek the help of your DBA to read the AWR, and meanwhile you should also get an AWR from 10g, where this job was running previously, so that you can compare the two.
    Did you do any testing before the upgrade? You SHOULD have done thorough testing of your applications/reports before the upgrade and resolved the performance issues before the production upgrade.
    While you investigate, you can set optimizer_features_enable to 10.2.0.4 for the cron job session only, to check whether the job returns to its earlier run time:
    alter session set optimizer_features_enable='10.2.0.4';
    Salman

  • Performance Issue in Oracle EBS

    Hi Group,
    I am working on a performance issue at a customer site; let me explain the behaviour.
    There is one node for the database and another for the application.
    The application server is running all the services.
    The EBS version is 12.1.3 and the database version is 11.1.0.7, with AIX on both servers.
    The customer has added memory to both servers (database and application); initially they had 32 GB, now they have 128 GB.
    Today I increased the memory parameters for the database, and I also increased the JVM processes from 1 to 2 for Forms and OACore; both JVMs are 1024M.
    The behaviour is that when users are navigating inside a form and press the down key quickly, the form gets stuck thinking (reloading and waiting 1 or 2 minutes to respond). It is not particular to one specific form; it happens in several forms.
    The gather-statistics job is scheduled every weekend. I am not sure what the problem can be; I have collected a trace of the form and uploaded it to Oracle Support with no success or advice so far.
    I have just run a ping and the response time between the servers is below 5 ms.
    I have several activities in mind like:
    - OATM conversion.
    - ASM implementation.
    - Upgrade to 11.2.0.4.
    Has anybody seen this behaviour? Any advice about this problem will be really appreciated.
    Thanks in advance.
    Kind regards,
    Francisco Mtz.

    Hi Bashar, thank you very much for your quick response.
    If both servers are on the same network then the ping should not exceed 2 ms.
    If I remember, I did a ping last Wednesday, and there were some peaks over 5 ms.
    Have you checked the network performance between the clients and the application server?
    Also, I did a ping from the PC to the application and database, and it was responding in less than 1 ms.
    What is the status of the CPU usage on both servers?
    There is no CPU overhead; I tested it (scrolling getting frozen) with no users in the application.
    Did this happen after you performed the hardware upgrade?
    Yes, it happened after changing some memory parameters in the JVM and the database.
    Oracle has suggested to apply the latest Forms patches according to this Note: Doc ID 437878.1
    Thanks in advance.
    Kind regards,
    Francisco Mtz.

  • XML performance issue in Oracle 8i

    I have implemented the code as per the link http://www.oracle-base.com/articles/8i/parse-xml-documents-8i.php to convert and load an XML document into multiple tables in an Oracle database. As the number of nodes in the XML file increases, the processing time increases exponentially (for example, with more than 500 Employee-Dept entries in an XML file, parsing takes about 5 hours). Could anyone let me know what changes can be made in order to reduce the parsing time?

    Tim (Hall) starts with "First, download and install the latest copy of the XDK for PL/SQL.", so yes, you could try again on the XDK forum. As long as you don't upgrade, we can't really help.
    Thinking out of the box, a lot has improved since the "birth" of 8i, so alternatively you could load faster Java solutions (parsers etc. of a non-Oracle flavour, rather than the very old XDK) into Oracle's JVM kernel and access those via your own PL/SQL wrappers/packages.
    Edited by: Marco Gralike on Aug 22, 2012 7:33 PM
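    As an aside (not what Tim's article uses): the 8i-era XML SQL Utility PL/SQL API, DBMS_XMLSAVE, can insert canonical ROWSET/ROW documents straight into a table; a minimal sketch with hypothetical data, assuming the XSU packages are installed:
    declare
      ctx    dbms_xmlsave.ctxType;
      v_rows number;
      v_doc  varchar2(200) :=
        '<ROWSET><ROW><EMPNO>7369</EMPNO><ENAME>SMITH</ENAME></ROW></ROWSET>';
    begin
      ctx    := dbms_xmlsave.newContext('EMP');      -- target table
      v_rows := dbms_xmlsave.insertXML(ctx, v_doc);  -- one INSERT per ROW element
      dbms_xmlsave.closeContext(ctx);
      dbms_output.put_line(v_rows || ' rows inserted');
    end;
    /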

  • Performance issue with Oracle data source

    Hi all,
    I've a rather strange problem that I'm stuck on need some assistance on.
    I have a rules file which drags data in via a SQL data source that is an Oracle server. If I cut/paste the three sections of "select", "from" and "where" into SQL Developer and run the query, it takes less than 1 second to complete. When I run the "load data" with this rules file, or even use "Retrieve" while editing the rules file, it takes up to an hour to complete/retrieve the data.
    The table in question has millions of rows and I'm using one of the indexed fields to retrieve the data. It's as if Essbase/the rules file is ignoring the index, or I have a config issue with the ODBC settings on the server that is causing the problem.
    ODBC.INI file entry for the Oracle server as follows (changed any sensitive info to xxx or 999).
    [XXX]
    Driver=/opt/data01/hyperion/common/ODBC-64/Merant/5.2/lib/ARora22.so
    Description=DataDirect 5.2 Oracle Wire Protocol
    AlternateServers=
    ApplicationUsingThreads=1
    ArraySize=60000
    CachedCursorLimit=32
    CachedDescLimit=0
    CatalogIncludesSynonyms=1
    CatalogOptions=0
    ConnectionRetryCount=0
    ConnectionRetryDelay=3
    DefaultLongDataBuffLen=1024
    DescribeAtPrepare=0
    EnableDescribeParam=0
    EnableNcharSupport=0
    EnableScrollableCursors=1
    EnableStaticCursorsForLongData=0
    EnableTimestampWithTimeZone=0
    HostName=999.999.999.999
    LoadBalancing=0
    LocalTimeZoneOffset=
    LockTimeOut=-1
    LogonID=xxx
    Password=xxx
    PortNumber=1521
    ProcedureRetResults=0
    ReportCodePageConversionErrors=0
    ServiceType=0
    ServiceName=xxx
    SID=
    TimeEscapeMapping=0
    UseCurrentSchema=1
    Can anyone please advise on this lack of performance.
    Thanks in advance
    Bagpuss

    One other thing that I've seen is that if your Oracle data source and Essbase server are in different geographic locations, you can get some delay when it retrieves data over the WAN. I guess there is some handshaking going on when passing the data from Oracle to Essbase (either by record or groups of records) that is slowed WAY down over the WAN.
    Our solution to this was to take the query out of the load rule, run it via SQL*Plus on a command line at the geographic location where the Oracle database is, then FTP the resulting file to where the Essbase server is.
    With upwards of 6 million records being retrieved, it took around 4 hours in the load rule, but running the query via command line took 10 minutes, then the ftp took less than 5.

  • Performance issue in select when subselect is used

    We have a select statement that is using a where clause that has dates in. I've simplified the SQL below to demonstrate roughly what it does:
    select * from user_activity
    where starttime >= trunc(sysdate - 1) + 18/24
    and starttime < trunc(sysdate) + 18/24
    (this selects records from the table where the starttime field has values between 6 pm yesterday and 6 pm today).
    We are using this statement to create a materialized view which we refresh daily. Occasionally we have the need to refresh the data for a historic day, which means that we need to go in and change the where lines, e.g. to get data from 3 days ago instead of yesterday, the where clause becomes:
    select * from user_activity
    where starttime >= trunc(sysdate - 3) + 18/24
    and starttime < trunc(sysdate - 2) + 18/24
    Having to recreate the views like this is a nuisance, so we decided to be 'smart' and create a table with a setting in it that we could use to control the number of days the view looks back. So if our table is called days_ago with a field called days (a single record, set to 1), we now have:
    select * from user_activity
    where starttime >= trunc(sysdate - (select days from days_ago)) + 18/24
    and starttime < trunc(sysdate - ((select days from days_ago) + 1)) + 18/24
    The original SQL takes a few seconds to run. However the 'improved' version takes 25 minutes.
    The days_ago table only has one record in it, and selecting directly from it on its own is instantaneous.
    Does anything jump out as being daft in this approach? We cannot explain why the performance has suddenly dropped off for such a simple change.

    Hi,
    Do you really need a view to do this?
    Can't you define a bind varibale, and use it in your query:
    VARIABLE  days_ago   NUMBER
    EXEC     :days_ago := 3;
    SELECT      ...
    WHERE     starttime >= TRUNC (SYSDATE - :days_ago) + (18 / 24)
    AND       starttime <  TRUNC (SYSDATE - :days_ago) + (42 / 24)     -- 42 = 24 + 18
    If you really must have a view, then it might be faster if you got the parameter from a SYS_CONTEXT variable, rather than from a table.
    Unfortunately, SYS_CONTEXT always returns a string, so you have to be carefule encoding the number as a string when you set the variable, and decoding it when you use the variable:
    WHERE     starttime >= TRUNC (SYSDATE - TO_NUMBER ( SYS_CONTEXT ('MY_VIEW_NAMESPACE', 'DAYS_AGO'))) + (18 / 24)
    AND       starttime <  TRUNC (SYSDATE - TO_NUMBER ( SYS_CONTEXT ('MY_VIEW_NAMESPACE', 'DAYS_AGO'))) + (42 / 24)     -- 42 = 24 + 18
    For more about SYS_CONTEXT, look it up in the SQL language manual, and follow the links there:
    http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions172.htm#sthref2268
    If performance is important enough, consider storing the "fiscal day" as a separate (indexed) column. Starting in Oracle 11.1, you can use a virtual column for this. In earlier versions, you'd have to use a trigger. By "fiscal day", I mean:
    TRUNC (starttime + (6/24))
    If starttime is between 18:00:00 on December 28 and 17:59:59 on December 29, this will return 00:00:00 on December 29. You could use it in a query (or view) like this:
    WHERE     fiscal_day   = TRUNC (SYSDATE) - 2
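    For completeness, setting that SYS_CONTEXT value requires an application context bound to a package that calls DBMS_SESSION.SET_CONTEXT; a minimal sketch (context and package names are only examples):
    CREATE OR REPLACE CONTEXT my_view_namespace USING days_ago_ctx_pkg;
    CREATE OR REPLACE PACKAGE days_ago_ctx_pkg AS
      PROCEDURE set_days_ago (p_days IN NUMBER);
    END days_ago_ctx_pkg;
    /
    CREATE OR REPLACE PACKAGE BODY days_ago_ctx_pkg AS
      PROCEDURE set_days_ago (p_days IN NUMBER) IS
      BEGIN
        -- SYS_CONTEXT values are strings, so encode the number
        DBMS_SESSION.SET_CONTEXT ('MY_VIEW_NAMESPACE', 'DAYS_AGO', TO_CHAR (p_days));
      END set_days_ago;
    END days_ago_ctx_pkg;
    /
    -- before querying the view:
    EXEC days_ago_ctx_pkg.set_days_ago (3);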

  • Query Performance issue in Oracle Forms

    Hi All,
    I am using an Oracle 9i DB and Forms 6i.
    In the query form, the query takes a long time to load the data into the form.
    There are two tables used here.
    One table (A) contains 5 crore records; another table (B) has 2 crore records.
    The records fetched range from 1 to 500.
    Table A had no index on the main columns; after creating an index on the main columns of table A, the query fetches the data quickly.
    But the DBA team doesn't want to create an index on table A because of a tablespace problem.
    If we create the index on the main table (A), there is a performance overhead in production.
    Concurrent user capacity is 1,500.
    Are there any alternative methods to handle this problem?
    Regards,
    RS

    1) What is a crore? Wikipedia seems to indicate that it's either 10,000,000 or 500,000
    http://en.wikipedia.org/wiki/Crore
    I'll assume that we're talking about tables with 50 million and 20 million rows, respectively.
    2) Large tables with no indexes are definitely going to be slow. If you don't have the disk space to create an appropriate index, surely the right answer is to throw a bit of disk into the system.
    3) I don't understand the comment "If we create the index on the main table (A), there is a performance overhead in production." That seems to contradict the comment you made earlier that the query performs well when you add the index. Are you talking about some other performance overhead?
    Justin
