Performance problems with Oracle 9.2.0.2 Database with optimizer_mode=RULE

If I make an explain plan for a query of the following form:
SELECT a.*
FROM a, b
WHERE a.pk = b.fk
AND b.pk IN (1, 2);
the explain plan shows that a full table scan is performed on table b and a hash join is used to execute the query.
On an 8.0.6 database, the indexes on the tables are used and a nested-loop join is performed.
Why does the 9.2.0.2 database use hash joins when it is explicitly set to optimizer_mode=RULE? I thought hash joins were only possible with the CBO.
How can I force the 9.2.0.2 database to behave exactly like the 8.0.6 database with the RBO?

Some options:
Set the hash_join_enabled parameter to false in the init.ora file.
Set the compatible parameter to 8.0 in the init.ora file.
Documentation:
HASH_JOIN_ENABLED specifies whether the optimizer should consider using a
hash join as a join method. When set to FALSE, hash join is turned off; that is, it is
not available as a join method that the optimizer can consider choosing. When set to
TRUE, the optimizer will compare the cost of a hash join to other types of joins, and
choose it if it gives the best cost.
COMPATIBLE allows you to use a new release, while at the same time guaranteeing
backward compatibility with an earlier release. This is in case it becomes necessary
to revert to the earlier release. This parameter specifies the release with which the
Oracle server must maintain compatibility. Some features of the current release may
be restricted.
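
For illustration, a minimal sketch of how the two options above might be applied (whether a given compatible value is accepted depends on the release, and changing compatible requires an instance restart):
-- init.ora entries:
--   hash_join_enabled = false
--   compatible = 8.0
-- Hash joins can also be disabled dynamically for the current session:
ALTER SESSION SET hash_join_enabled = FALSE;
-- Or the rule-based plan can be requested for a single statement:
SELECT /*+ RULE */ a.* FROM a, b WHERE a.pk = b.fk AND b.pk IN (1, 2);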

Similar Messages

  • Is there any known problem using Oracle SQL Developer 3.0.04 with Java 1.7?

    I'm new to Oracle. I have installed Oracle SQL Developer 3.0.04 and Java 1.7. When I run Oracle SQL Developer, I get a window saying "Running this product is supported with minimum Java version of 1.6.0_04 and a maximum version less than 1.7. This product will not be supported..."
    Is there any known problem using Oracle SQL Developer 3.0.04 with Java 1.7?
    I have already downloaded Java 1.6 but don't know whether I need to uninstall Java 1.7 first. If I don't need to uninstall Java 1.7, how can I set Oracle SQL Developer to run with Java 1.6?
    Thanks for any help.

    Hi,
    One prior post discussing the use of Java 7 is:
    SQL Developer 3.0  and Java SE 7?
    There is no need to uninstall any Java version (except if you have disk space constraints) and no problem switching between Java versions. This may be controlled in the sqldeveloper.conf file in your ...\sqldeveloper\sqldeveloper\bin directory via the SetJavaHome line. For example:
    #SetJavaHome ../../jdk
    SetJavaHome C:/Program Files/Java/jdk1.6.0_26
    #SetJavaHome C:/Program Files/Java/jdk1.7.0
    Regards,
    Gary Graham
    SQL Developer Team

  • Link a CSV or TXT with Oracle Without loading it to Database

    Hi,
    Can we link a CSV or TXT file with Oracle without loading it into the database?
    Thanks & Regards,
    Rahul

    "Link" means the data in the CSV or TXT file should be visible in Oracle and I can use SQL queries over it, but the data should not be physically loaded into Oracle.
    Regards,
    Rahul
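
    A minimal sketch of how an external table can expose a CSV file to SQL without loading it into a regular table (the directory path, table, and column names below are hypothetical):
    CREATE DIRECTORY ext_data_dir AS '/data/ext';
    CREATE TABLE customers_ext (
      cust_id   NUMBER,
      cust_name VARCHAR2(100)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY ext_data_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('customers.csv')
    )
    REJECT LIMIT UNLIMITED;
    -- The file can now be queried in place:
    SELECT * FROM customers_ext WHERE cust_id = 42;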

  • Performance problems post Oracle 10.2.0.5 upgrade

    Hi All,
    We have patched our SAP ECC6 system's Oracle database from 10.2.0.2 to 10.2.0.5. (Operating system Solaris). This was done using the SAP Bundle Patch released in February 2011. (patched DEV, QA and then Production).
    Post patching production, we are now experiencing slower performance of our long-running background jobs; e.g. our billing runs have increased from 2 hours to 4 hours. The slowdown is constant and has not increased or decreased over a period of two weeks.
    We have so far implemented the following in production without any effect:
    We have double-checked that the database parameters are set correctly as per Note 830576 - Parameter recommendations for Oracle 10g.
    We have executed the ABAP<->DB crosscheck with db02old to check for missing indexes.
    Note 1020260 - Delivery of Oracle statistics (Oracle 10g, 11g).
    It was suggested to look at adding specific indexes on tables and changing ABAP code identified by looking at the most "expensive" SQL statements being executed, but these statements were all there pre-patching and are not within the critical long-running processes. Although a good idea for optimisation, this will not resolve the root cause of the problem introduced by the upgrade to 10.2.0.5. It was therefore not implemented in production, although the suggested new indexes were tested in QA without effect, then backed out.
    It was also suggested to implement SAP Note 1525673 - Optimizer merge fix for Oracle 10.2.0.5, which was not part of the February 2011 SAP Bundle Patch that we implemented. To do this we were required to implement the SAP Bundle Patch released in May 2011. As this also contains other Oracle fixes, we did not want to implement it directly in production. We thus ran baseline tests to measure performance in our QA environment, implemented the SAP Bundle Patch, and ran the same tests again (a simplified version of the implementation route). Result: no improvement in performance; in fact, in some cases we had a degradation of performance (double the time). As this had the potential to negatively affect production, we have not yet implemented it in production.
    Any suggestions would be greatly appreciated !

    Hello Johan,
    well, the first goal should be to get back to the original performance so that you have time to do a deeper analysis in your QA system (if the data set is the same).
    If the problem is caused by some new optimizer features or bugs you can try to "force" the optimizer to use the "old" 10.2.0.2 behaviour. Just set the parameter OPTIMIZER_FEATURES_ENABLE to 10.2.0.2 and check your performance.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams142.htm#CHDFABEF
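    For example (a sketch; SCOPE = BOTH assumes an spfile is in use, and the setting can also be tried per session first):
    ALTER SYSTEM SET optimizer_features_enable = '10.2.0.2' SCOPE = BOTH;
    -- or, to test the effect in a single session only:
    ALTER SESSION SET optimizer_features_enable = '10.2.0.2';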
    To get more information we would need an AWR report (for an overview) and the problematic SQL statements (with all the information like execution plans, statistics, etc.). This kind of analysis is very hard to do through a forum. I would suggest opening an SAP SR for this issue.
    Regards
    Stefan

  • Performance problem between Oracle.DataAccess v1 and v2

    Hi, I have a serious performance problem with the OracleDataReader when I use the GetValues method.
    My server is Oracle 9.2.0.7, and I use ODAC v10.2.0.221.
    I created a dummy table for the benchmark:
    create table test (a varchar2(50), b number);
    begin
      for i in 1..62359 loop
        insert into test values ('Values ' || i, i);
      end loop;
      commit;
    end;
    /
    I use the same code to benchmark Framework v1 and Framework v2.
    Code:
    try {
        OracleConnection c = new OracleConnection("user id=saturne_dbo;password=***;data source=satedfx;");
        c.Open();
        go(c);
        c.Close();
    }
    catch (Exception ex) {
        MessageBox.Show(ex.Message);
    }
    private void go(IDbConnection c) {
        IDbCommand cmd = c.CreateCommand();
        cmd.CommandText = "select * from test";
        cmd.CommandType = CommandType.Text;
        DateTime dt = DateTime.Now;
        IDataReader reader = cmd.ExecuteReader();
        int count = 0;
        while (reader.Read()) {
            object[] fields = new object[reader.FieldCount];
            reader.GetValues(fields);
            count++;
        }
        reader.Close();
        TimeSpan eps = DateTime.Now - dt;
        MessageBox.Show("Time " + count + " : " + eps.TotalSeconds);
    }
    Results are:
    Framework v1 with OracleDataAccess 1.10.2.2.20 "Time 62359 : 0.5"
    Framework v2 with OracleDataAccess 2.10.2.2.20 "Time 62359 : 3.57" FACTOR 6 !!!!!
    I noticed the same problem with the OleDb provider and the Microsoft Oracle Client provider.
    It's a serious problem for my production server; the calculation time explodes...
    What is the explanation?
    Do you know a solution?

    Can you please try out the following:
    1. Create a .NET 1.x DLL with your benchmark code. This will obviously use ODP.NET for .NET 1.x.
    2. Call this assembly routine from a .NET 1.x executable and note the results.
    3. Now call this assembly routine from a .NET 2.0 executable and note the results.
    The idea is to always use "ODP.NET for .NET 1.x" even in .NET 2.0 runtime. This will tell us whether the performance degradation is a runtime issue.

  • Performance Problem between Oracle 9i to Oracle 10g using Crystal XI

    We have a Crystal XI Report using ODBC Drivers, 14 tables, and one sub report. If we execute the report on an Oracle 9i database the report will complete in about 12 seconds. If we execute the report on an Oracle 10g database the report will complete in about 35 seconds.
    Our technical Setup:
    Application server: Windows Server 2003, Running Crystal XI SP2 Runtime dlls with Oracle Client 10.01.00.02, .Net Framework 1.1, C# for Crystal Integration, Unmanaged C++ for app server environment calling into C# through a dynamically loaded mixed-mode C++ DLL.
    Database server is Oracle 10g
    What we have concluded:
    Reducing the number of tables to 1 will reduce the execution time of the report from 180s to 13s. With 1 table and the sub report we would get 30 seconds.
    We have done some database tracing and see that Crystal Reports issues the following query when verifying the database, and it takes longer in 10g than in 9i.
    We have done some profiling in the application code. When we retarget the first table to the target database, it takes 20-30 times longer in 10g than in 9i. Retargeting the other tables takes about twice as long. The export to a PDF file takes about 4-5 times as long in 10g as in 9i.
    Oracle 10g no longer supports the /*+ RULE */ hint.
    Verify DB Query:
    select /*+ RULE */ *
    from (select /*+ RULE */ null table_qualifier, o1.owner table_owner,
                 o1.object_name table_name,
                 decode(o1.owner,
                        'SYS', decode(o1.object_type, 'TABLE', 'SYSTEM TABLE',
                                      'VIEW', 'SYSTEM VIEW', o1.object_type),
                        'SYSTEM', decode(o1.object_type, 'TABLE', 'SYSTEM TABLE',
                                         'VIEW', 'SYSTEM VIEW', o1.object_type),
                        o1.object_type) table_type,
                 null remarks
          from all_objects o1
          where o1.object_type in ('TABLE', 'VIEW')
          union
          select /*+ RULE */ null table_qualifier, s.owner table_owner,
                 s.synonym_name table_name, 'SYNONYM' table_type, null remarks
          from all_objects o3, all_synonyms s
          where o3.object_type in ('TABLE', 'VIEW')
            and s.table_owner = o3.owner
            and s.table_name = o3.object_name
          union
          select /*+ RULE */ null table_qualifier, s1.owner table_owner,
                 s1.synonym_name table_name, 'SYNONYM' table_type, null remarks
          from all_synonyms s1
          where s1.db_link is not null) tables
    where 1=1 and TABLE_NAME = 'QCTRL_VESSEL' and table_owner = 'QLM'
    order by 4, 2, 3
    SQL From Main Report:
    SELECT "QCODE_PRODUCT"."PROD_DESCR", "QCTRL_CONTACT"."CONTACT_FIRST_NM", "QCTRL_CONTACT"."CONTACT_LAST_NM", "QCTRL_MEAS_PT"."MP_NM", "QCTRL_ORG"."ORG_NM", "QCTRL_TKT"."SYS_TKT_NO", "QCTRL_TRK_BOL"."START_DT", "QCTRL_TRK_BOL"."END_DT", "QCTRL_TRK_BOL"."DESTINATION", "QCTRL_TRK_BOL"."LOAD_TEMP", "QCTRL_TRK_BOL"."LOAD_PCT", "QCTRL_TRK_BOL"."WEIGHT_OUT", "QCTRL_TRK_BOL"."WEIGHT_IN", "QCTRL_TRK_BOL"."WEIGHT_OUT_UOM_CD", "QCTRL_TRK_BOL"."WEIGHT_IN_UOM_CD", "QCTRL_TRK_BOL"."VAPOR_PRES", "QCTRL_TRK_BOL"."SPECIFIC_GRAV", "QCTRL_TRK_BOL"."PMO_NO", "QCTRL_TRK_BOL"."ODORIZED_VOL", "QARCH_SEC_USER"."SEC_USER_NM", "QCTRL_TKT"."DEM_CTR_NO", "QCTRL_BA_ENTITY"."BA_NM1", "QCTRL_BA_ENTITY_VW"."BA_NM1", "QCTRL_BA_ENTITY"."BA_ID", "QCTRL_TRK_BOL"."VOLUME", "QCTRL_TRK_BOL"."UOM_CD", "QXREF_BOL_PROD"."MOVEMENT_TYPE_CD", "QXREF_BOL_PROD"."BOL_DESCR", "QCTRL_TKT"."VOL", "QCTRL_TKT"."UOM_CD", "QCTRL_PMO"."LINE_UP_BEFORE", "QCTRL_PMO"."LINE_UP_AFTER", "QCODE_UOM"."UOM_DESCR", "QCTRL_ORG_VW"."ORG_NM"
    FROM (((((((((((("QLM"."QCTRL_TRK_BOL" "QCTRL_TRK_BOL" INNER JOIN "QLM"."QCTRL_PMO" "QCTRL_PMO" ON "QCTRL_TRK_BOL"."PMO_NO"="QCTRL_PMO"."PMO_NO") INNER JOIN "QLM"."QCTRL_MEAS_PT" "QCTRL_MEAS_PT" ON "QCTRL_TRK_BOL"."SUP_MP_ID"="QCTRL_MEAS_PT"."MP_ID") INNER JOIN "QLM"."QCTRL_TKT" "QCTRL_TKT" ON "QCTRL_TRK_BOL"."PMO_NO"="QCTRL_TKT"."PMO_NO") INNER JOIN "QLM"."QCTRL_CONTACT" "QCTRL_CONTACT" ON "QCTRL_TRK_BOL"."DRIVER_CONTACT_ID"="QCTRL_CONTACT"."CONTACT_ID") INNER JOIN "QFC_QLM"."QARCH_SEC_USER" "QARCH_SEC_USER" ON "QCTRL_TRK_BOL"."USER_ID"="QARCH_SEC_USER"."SEC_USER_ID") LEFT OUTER JOIN "QLM"."QCODE_UOM" "QCODE_UOM" ON "QCTRL_TRK_BOL"."ODORIZED_VOL_UOM_CD"="QCODE_UOM"."UOM_CD") INNER JOIN "QLM"."QCTRL_ORG_VW" "QCTRL_ORG_VW" ON "QCTRL_MEAS_PT"."ORG_ID"="QCTRL_ORG_VW"."ORG_ID") INNER JOIN "QLM"."QCTRL_BA_ENTITY" "QCTRL_BA_ENTITY" ON "QCTRL_TKT"."DEM_BA_ID"="QCTRL_BA_ENTITY"."BA_ID") INNER JOIN "QLM"."QCTRL_CTR_HDR" "QCTRL_CTR_HDR" ON "QCTRL_PMO"."DEM_CTR_NO"="QCTRL_CTR_HDR"."CTR_NO") INNER JOIN "QLM"."QCODE_PRODUCT" "QCODE_PRODUCT" ON "QCTRL_PMO"."PROD_CD"="QCODE_PRODUCT"."PROD_CD") INNER JOIN "QLM"."QCTRL_BA_ENTITY_VW" "QCTRL_BA_ENTITY_VW" ON "QCTRL_PMO"."VESSEL_BA_ID"="QCTRL_BA_ENTITY_VW"."BA_ID") LEFT OUTER JOIN "QLM"."QXREF_BOL_PROD" "QXREF_BOL_PROD" ON "QCTRL_PMO"."PROD_CD"="QXREF_BOL_PROD"."PURITY_PROD_CD") INNER JOIN "QLM"."QCTRL_ORG" "QCTRL_ORG" ON "QCTRL_CTR_HDR"."BUSINESS_UNIT_ORG_ID"="QCTRL_ORG"."ORG_ID"
    WHERE "QCTRL_TRK_BOL"."PMO_NO"=12345 AND "QXREF_BOL_PROD"."MOVEMENT_TYPE_CD"='TRK'
    SQL From Sub Report:
    SELECT "QXREF_BOL_VESSEL"."PMO_NO", "QXREF_BOL_VESSEL"."VESSEL_NO"
    FROM "QLM"."QXREF_BOL_VESSEL" "QXREF_BOL_VESSEL"
    WHERE "QXREF_BOL_VESSEL"."PMO_NO"=12345
    Does anyone have any suggestions on how we can improve the report performance with 10g?

    Hi Eric,
    Thanks for your response. The optimizer mode in our 9i database is CHOOSE. We changed the optimizer mode from ALL_ROWS to CHOOSE in 10g but it didn't make a difference.
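    (For reference, a sketch of how such a change is typically applied; SCOPE = BOTH assumes an spfile is in use:)
    ALTER SYSTEM SET optimizer_mode = CHOOSE SCOPE = BOTH;
    -- or, to test the effect in a single session only:
    ALTER SESSION SET optimizer_mode = CHOOSE;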
    While researching Metalink I came across a couple of documents that indicate performance problems and issues with using certain data-dictionary views in 10g. Apparently, the definitions of ALL_OBJECTS, ALL_ARGUMENTS and ALL_SYNONYMS have changed in 10g, resulting in a degradation in performance when these views are queried. These are the same views that Crystal Reports is querying. We'll try the workaround suggested in these documents and see if it resolves the issue.
    Here are the Doc Ids, if you are interested:
    Note 377037.1
    Note:364822.1
    Thanks again for your response.
    Venu Boddu.

  • Erratic performance problems in Oracle 8.0.x

    Hi all,
    We are having a performance problem that appeared somewhere between 8.0.6 and 8.1.5 when using embedded SQL and the ProC compiler under Linux and Solaris.
    The moment we use client libraries > 8.0.x, things seem to grind to a halt. We are currently using 8.0.6 client against 8, 9 and 10g databases. Using 8.1.5 or 10g clients against Oracle 9 or 10 databases triggers the problem.
    The problem also isn't tied to any specific query. On the latest run we tried, the timings for a specific problem query are as follows: Oracle 10 server, Oracle 8.0.6 client - 40 seconds; Oracle 10 server (same database), Oracle 10 client - 14 hours
    Explain plan doesn't show anything funny with the query. On occasion, the query does get through quickly. Subsequent runs are then also quick.
    The query also runs fine in SQLPlus.
    What we have noticed is that the server process flatlines at 100% CPU usage for the entire duration. The client, on the other hand, is just sleeping, waiting for data from the server. Stopping the client in the debugger shows that the client is waiting in the sqlcxt() call when opening the cursor, not actually fetching data.
    We are at our wits' end as to where to look next, and we can't stay on the 8.0.6 client libraries for ever as this is starting to cause us other hassles now.
    Did something significant perhaps change between 8.0.x and 8.1.x that we need to cater for in our apps?
    Any help/ideas would be greatly appreciated.
    Regards,
    Gerhard

    Check Metalink
    Client / Server / Interoperability Support Between Different Oracle Versions
    Doc ID: Note:207303.1
    Looks like client version 8.1.5 has some problems; it was never designed to support Oracle versions higher than 8.1.7.
    On the other hand, 8.0.6 was supported up to 9.2.
    I would stay with 8.0.6 if I had to use an Oracle 8 client. Client version 8.1.7 seems much better.

  • Installing oracle application express from the database with oracle 11g

    Hi,
    I installed Oracle 11g Release 1 and am trying to install Oracle APEX from the database with the Embedded PL/SQL Gateway. The installation guide requires granting the connect privilege to any host for the APEX_030200 database user, but this schema does not exist in the database...
    How to continue the installation of APEX ?

    Thanks Hari,
    But I need to enable network services in my database (Oracle 11g).
    Do I have to replace APEX_030200 with FLOWS_030000 in the PL/SQL script below?
    DECLARE
      ACL_PATH  VARCHAR2(4000);
      ACL_ID    RAW(16);
    BEGIN
      -- Look for the ACL currently assigned to '*' and give APEX_030200
      -- the "connect" privilege if APEX_030200 does not have the privilege yet.
      SELECT ACL INTO ACL_PATH
      FROM DBA_NETWORK_ACLS
      WHERE HOST = '*' AND LOWER_PORT IS NULL AND UPPER_PORT IS NULL;
      -- Before checking the privilege, ensure that the ACL is valid
      -- (for example, does not contain stale references to dropped users).
      -- If it does, the following exception will be raised:
      --   ORA-44416: Invalid ACL: Unresolved principal 'APEX_030200'
      --   ORA-06512: at "XDB.DBMS_XDBZ", line ...
      SELECT SYS_OP_R2O(extractValue(P.RES, '/Resource/XMLRef')) INTO ACL_ID
      FROM XDB.XDB$ACL A, PATH_VIEW P
      WHERE extractValue(P.RES, '/Resource/XMLRef') = REF(A)
        AND EQUALS_PATH(P.RES, ACL_PATH) = 1;
      DBMS_XDBZ.ValidateACL(ACL_ID);
      IF DBMS_NETWORK_ACL_ADMIN.CHECK_PRIVILEGE(ACL_PATH, 'APEX_030200', 'connect') IS NULL THEN
        DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE(ACL_PATH, 'APEX_030200', TRUE, 'connect');
      END IF;
    EXCEPTION
      -- When no ACL has been assigned to '*'.
      WHEN NO_DATA_FOUND THEN
        DBMS_NETWORK_ACL_ADMIN.CREATE_ACL('power_users.xml',
          'ACL that lets power users to connect to everywhere',
          'APEX_030200', TRUE, 'connect');
        DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL('power_users.xml', '*');
    END;
    /
    COMMIT;
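    (Afterwards, one way to check which principals hold the connect privilege - a sketch, assuming access to the DBA views:)
    SELECT acl, principal, privilege, is_grant
    FROM dba_network_acl_privileges
    WHERE principal IN ('APEX_030200', 'FLOWS_030000');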
    Regards

  • Problems Installing Oracle Metadata Repository into Existing Database

    Folks,
    I'm slowly going through the process of installing the Metadata Repository into a 10g database; using RepCA I've managed to determine all the DB parameters that required setting, etc. The only thing RepCA now reports as missing is the Ultrasearch schema & packages.
    I've copied the ultrasearch directories from the RepCA distribution and placed them in my $ORACLE_HOME directory. I've also attempted to run the admin install script 'wk0setup.sql', which throws out several errors listed below. If anyone has successfully gone through this process on Solaris I would love to hear about your experience & advice.
    Error 1. =================================================
    User is WKSYS
    ... loading ultrasearch_db.jar
    call sys.dbms_java.loadjava('-v -f -r -s -g PUBLIC $ORACLE_HOME/ultrasearch/lib/ultrasearch_db.jar', '((* wksys) (* public))')
    ERROR at line 1:
    ORA-29532: Java call terminated by uncaught Java exception:
    oracle.aurora.server.tools.loadjava.ToolsError: Error during loadjava: Failures
    occurred during processing. Check trace file for details
    Error 2 (File Doesn't exist at all) ==============================
    SP2-0310: unable to open file "/oracle/1020a/ultrasearch/admin/wk0launchq.pkh"
    Error 3 (Again the file doesn't exist at all) ========================
    SP2-0310: unable to open file "/oracle/1020a/ultrasearch/admin/wk0launchq.plb"
    ===========================================================
    Regards
    Austin

    Hi guys,
    I used a copy of that package from an Oracle Identity Management installation, version 10.1.4.0.1, and was able to complete a manual Ultra Search installation. It's valid in dba_registry. After that I ran RepCA and the metadata schemas were created successfully in the existing database, version 10.2.0.2. Be careful about the database version: only 10.2.0.2 is certified with Oracle Identity Management, so you need to apply the patch to upgrade to 10.2.0.2.
    Hope this helps.
    Dali

  • Problem installing oracle in windows on a system with a share drive

    I am attempting to install Oracle on Windows at work. I have access to a share drive that has an Oracle inventory. I do not have permission to write there.
    When I try to install Oracle locally on my C: drive, by default the installation looks at the Oracle inventory on the N: drive, and since I do not have privileges to write there, it errors out.
    How do I set it so that the installer creates an Oracle inventory locally on my C: drive?

    On Windows, the registry key 'inst_loc' defines the Oracle inventory location; obviously a previous installation set this location to the shared drive N:. Unfortunately it's NOT supported to change this. Statement from Oracle:
    Development have stated that manually changing the "inst_loc" string in the Windows registry (under HKEY_LOCAL_MACHINE\Software\Oracle or HKLM\Software\Oracle) is officially unsupported.
    You could create a 'clean' machine (completely remove all Oracle components, including registry keys, and start again with a fresh installation):
    http://download.oracle.com/docs/cd/B19306_01/install.102/b14316/deinstall.htm#i1008427
    OR - maybe easier - ask your system administrator to grant the necessary privilege.
    Werner

  • XSLT performance problem in Oracle

    Hi,
    I'm doing an XSLT transformation in Oracle. For this I use the XMLPARSER and XSLPROCESSOR packages provided by Oracle.
    The function which uses these packages has the following code:
    function get_XML
    (p_product_clob IN clob, p_stylesheet_id IN varchar2) return clob is
    -- Local variables
    l_xslprocessor xslprocessor.PROCESSOR;
    l_stylesheet_DOMDoc xmldom.DOMDocument;
    l_xml_DOMDoc xmldom.DOMDocument;
    l_stylesheet_xmlparser xmlparser.PARSER;
    l_xml_xmlparser xmlparser.PARSER;
    l_stylesheet xslprocessor.STYLESHEET;
    -- to store the xml structure from the int_product_hist table.
    l_stylesheet_clob clob; -- to store the xsl to applied to the xml.
    l_result_xml clob; --output xml structure.
    begin
    --create a temporary clob
    dbms_lob.createtemporary(l_result_xml,true,dbms_lob.session);
    -- Check if the input xml clob variable is empty
    if dbms_lob.GETLENGTH(p_product_clob)=0 then
    return null;
    end if;
    -- Check for 205 according to which stylesheet is to be applied.
    if rtrim(ltrim(p_stylesheet_id)) is not null then
    select STYLESHEET_DESC into l_stylesheet_clob
    from SM_XML_STYLESHEETS
    where STYLESHEET_ID=p_stylesheet_id;
         if sql%notfound then
         return null;
         end if;
    else
    return null;
    end if;
    -- make objects
    l_xml_xmlparser := xmlparser.NEWPARSER;
    l_stylesheet_xmlparser := xmlparser.NEWPARSER;
    l_xslprocessor := xslprocessor.NEWPROCESSOR;
    -- Assign a parser to xml and xsl
    xmlparser.parseClob(l_xml_xmlparser, p_product_clob);
    xmlparser.parseClob(l_stylesheet_xmlparser, l_stylesheet_clob);
    l_xml_DOMDoc := xmlparser.GETDOCUMENT(l_xml_xmlparser);
    l_stylesheet_DOMDoc := xmlparser.GETDOCUMENT(l_stylesheet_xmlparser);
    l_stylesheet:=xslprocessor.NEWSTYLESHEET(l_stylesheet_DOMDoc,null);
    -- applying the xsl on xml.
    xslprocessor.processXSL(l_xslprocessor, l_stylesheet, l_xml_DOMDoc, l_result_xml);
    -- free all objects.
    xmlparser.freeParser(l_xml_xmlparser);
    xmlparser.freeParser(l_stylesheet_xmlparser);
    xslprocessor.freeStylesheet(l_stylesheet);
    xslprocessor.freeProcessor(l_xslprocessor);
    -- return the xml after xslt.
    return l_result_xml;
    end;
    The XML is sent as a parameter to this function (p_product_clob), and the stylesheet id is also sent as input. The stylesheet is picked from the table based on the id passed as a parameter.
    The function returns correct output, but it takes a lot of time to do this operation (about 2 minutes). My project requirement is 5 seconds.
    Is there any way we can reduce the time for this XSLT?
    Regards,
    Milton.

    Would you send a sample XML and XSL file? Or at least the XSL file.
    Thanks.

  • Oracle 10g - Cross-Platform Transportable Database (with or without RMAN?)

    Hello guys,
    I am currently on a project to migrate Oracle databases from one platform to another (both platforms have the same endianness):
    -> From Linux 64-Bit to Windows 64-Bit
    -> From HP-UX 64-Bit to AIX 64-Bit
    All databases are on Oracle 10g (10.2.0.4) and I have tested two different approaches:
    1) The "official" way with RMAN CONVERT DATABASE
    http://youngcow.net/doc/oracle10g/backup.102/b14191/dbxptrn002.htm#CHDEEEAG
    2) "Copy & Paste"
    - Copy & Paste of the database files
    - Recreate controlfile manually on target platform (with help of "backup controlfile to trace" on source platform)
    Both ways are working well and the migrated test databases are running fine.
    Now I am wondering why there is an "RMAN CONVERT DATABASE" command if it is possible to copy and paste the data files and perform the two manual steps afterwards.
    The "RMAN CONVERT DATABASE" approach is time-consuming and you still need to copy the RMAN output files to the target platform.
    Is there any special reason why you need "RMAN CONVERT DATABASE"? Does "RMAN CONVERT DATABASE" change some things in the database files internally?
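    (For reference, a minimal sketch of the RMAN CONVERT DATABASE step being discussed; the database name, paths, and platform string are placeholders, and the exact platform name should be taken from V$DB_TRANSPORTABLE_PLATFORM:)
    RMAN> CONVERT DATABASE NEW DATABASE 'orclwin'
            TRANSPORT SCRIPT '/stage/transport_orclwin.sql'
            TO PLATFORM 'Microsoft Windows IA (64-bit)'
            DB_FILE_NAME_CONVERT '/u01/oradata/orcl/','/stage/orclwin/';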
    Thanks and Regards

    Hello,
    Anurag Tibrewal wrote:
    If the source platform and the target platform are of different endianness, then an additional step must be done on either the source or target platform to convert the tablespace being transported to the target format.
    Of course, in this case you cannot use the "Cross-Platform Transportable Database" - you have to use "Cross-Platform Transportable Tablespaces" (with RMAN CONVERT DATAFILE or TABLESPACE).
    Anurag Tibrewal wrote:
    If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.
    Yes, but Oracle provides this with RMAN and the command "RMAN CONVERT DATABASE". As I already wrote, I have already performed both ways, "RMAN CONVERT DATABASE" and "just copy & paste", and both are working fine.
    So my question is whether there is any special case in which you should use the RMAN way with "RMAN CONVERT DATABASE" instead of just "copy & paste" when you stay on the same endianness.
    Thanks and Regards

  • Working with Oracle Objects in JAVA - How to with views

    Hello,
    I am trying to access Oracle object types from Java.
    My problem is, I have relational tables, object types and object views.
    All the examples I found in the Oracle manuals work with object tables (the table column is created as an object type).
    You run a query (SELECT * FROM Object_Table), get the first column, which is an object type (oraconn.GetObject(1)), and cast it to a Java class (with SQLDATA or ORADATA).
    The problem is that my data is in relational tables and I don't want to export it to object tables.
    I want to get it from an object view.
    Is this possible? Or better, does anybody have an example?
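    A minimal sketch of defining an object view over an existing relational table (the type, view, table, and column names here are hypothetical):
    CREATE TYPE emp_t AS OBJECT (
      empno NUMBER,
      ename VARCHAR2(30)
    );
    /
    CREATE VIEW emp_ov OF emp_t
      WITH OBJECT IDENTIFIER (empno) AS
      SELECT e.empno, e.ename
      FROM emp e;
    Rows of such a view come back as object instances (for example via SELECT VALUE(v) FROM emp_ov v), so the same SQLData/ORAData mapping shown for object tables in the manuals applies.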
    Thx in advance

    Hi,
    I think you will be able to help me; your question made me think so. Here is my question.
    I have a stored procedure which returns a user-defined ROWTYPE (it's an IN OUT parameter of the procedure). I need to call this procedure from my Java program, so I would like to know how to do it. Do we need to use any packages provided by Oracle or something like that?
    thanks & regards,
    Anil.
    [email protected]

  • Using stored procedure with Oracle user-defined types in database control

    Hi,
    I have a requirement where I need to call an Oracle stored procedure which has IN and OUT parameters. The OUT parameters are of user-defined types in Oracle packages.
    I am getting an error while calling the stored procedure, as shown below:
    Procedure:
    ==========
    create or replace
    PROCEDURE AA_SAM_TEST (
      col1 out types_aa.aa_tex_type,
      col2 out types_aa.aa_tex_type,
      col3 out types_aa.aa_num_type) AS
    Types:
    ======
    create or replace
    package types_aa as
      type aa_tex_type is table of varchar2(255) index by binary_integer;
      type aa_num_type is table of char index by binary_integer;
    end;
    Wli Code:
    =========
    DB Control:
    ===========
    * @jc:sql statement="call AA_SAM_TEST(?,?,?)"
    void Call_AA_SAM_TEST(SQLParameter[] param) throws SQLException;
    JPD Code:
    =========
    param = new SQLParameter[3];
    Object param1 = new String();
    Object param2 = new String();
    Object param3 = new String();
    this.param[0] = new SQLParameter(param1 , Types.STRUCT, SQLParameter.OUT);
    this.param[1] = new SQLParameter(param2, Types.STRUCT, SQLParameter.OUT);
    this.param[2] = new SQLParameter(param3, Types.STRUCT, SQLParameter.OUT);
    database_Update.Call_AA_SAM_TEST(this.param);
    I am getting the following error...
    <Jul 24, 2007 6:47:42 PM IST> <Warning> <WLW> <000000> <Id=database_Update; Method=Ctrl_files.Database_Update.Call_AA_SAM_TEST(); Failure=java.sql.SQLException: ORA-06553: PLS-:
    ORA-06553: PLS-:
    ORA-06553: PLS-:
    ORA-06553: PLS-:
    ORA-06553: PLS-:
    ORA-06553: PLS-306: wrong number or typ
    ORA-06553: PLS-306: wrong number or types of arguments in call to 'AA_SAM_TEST'
    ORA-06553: PLS-306: wrong number or types of arguments in call to 'AA_SAM_TEST'
    Does anyone know how to specify OUT parameters of user-defined types while using a DB control to access a stored procedure in Oracle?
    Any help is highly appreciated.
    Thanks

    Hi,
    I have a similar problem. Have you solved the issue yet?
    Thanks

  • Jdeveloper 10.1.2.1.0 not working with oracle 8.1.7.4 database

    I migrated my Java application using BC4J from JDeveloper 9.0.3.4 to JDeveloper 10.1.2.1.0. When I run my BC4J model it runs fine, but when I execute the view object using BC4JContext in my action code I get the following error:
    Error Message: JBO-27122: SQL error during statement preparation. Statement: SELECT OpenOrdersEO.ID, OpenOrdersEO.AGENCY_CLINIC_NAME, OpenOrdersEO.C_O_NAME, OpenOrdersEO.ADDRESS_1, OpenOrdersEO.ADDRESS_2, OpenOrdersEO.CITY, OpenOrdersEO.STATE, OpenOrdersEO.ZIP, OpenOrdersEO.ZIP_4, OpenOrdersEO.ORDER_DATE, OpenOrdersEO.SHIPPED_DATE, OpenOrdersEO.ORDER_STATUS, OpenOrdersEO.B_O_SHIPPED_DATE, OpenOrdersEO.AGENCY_AGENCY_CD, OpenOrdersEO.CREATED_BY, OpenOrdersEO.CREATED_ON, OpenOrdersEO.MODIFIED_BY, OpenOrdersEO.MODIFIED_ON, OpenOrdersEO.ADDRESS_ID, OpenOrdersEO.REJECT_COMMENTS FROM WICFOS.ORDERS OpenOrdersEO WHERE (ID=137)
    Error Message: Invalid column index
    The database I am trying to connect to is Oracle 8.1.7.4, which was working fine with JDeveloper 9. Is it because JDeveloper 10 doesn't support Oracle 8 databases? I would appreciate help on this.
    Thanks.
