Disappointing query performance with object-relational storage

Hello,
after some frustrating days trying to improve query performance on an XMLType table I'm at my wits' end. I have tried all possible combinations of indexes, added scopes, tried out-of-line and inline storage, removed the recursive type definition from the schema, tried the examples from the forum thread Setting Attribute SQLInline to false for Out-of-Line Storage (and hit the same problems there) and still have no clue. I have prepared a stripped-down example of my schema which shows the same problems as the real one. I'm using 10.2.0.4.0:
SQL> select * from v$version;
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
You can find the script at http://www.grmblfrz.de/xmldbproblem.sql (I tried including it here but got an internal server error). The results are at http://www.grmblfrz.de/xmldbtest.lst . I have no idea how to improve the performance (and if query rewrite does not work even with this simple schema, how can Oracle XML DB be feasible for more complex structures?). I must have made a mistake somewhere; hopefully someone can spot it.
Thanks in advance.
--Swen

Marc,
thanks - I did not know it was possible to use "varray store as table" for the reference tables. I have tried your example. I can create the nested table, the scope and the indexes, but I get a different result: a full table scan on t_element, whereas with the original table I get an index scan. On the original table there is a trigger (t_element$xd) which is missing on the new table. I have tried the same with an xmltype table (drop table t_element; create table t_element of xmltype ...) with the same result. My script ... is on [google groups|http://groups.google.com/group/oracle-xmldb-temporary-group/browse_thread/thread/f30c3cf0f3dbcafc] (internal server error while trying to include it here). Here is the plan of the query:
select rt.object_value
from t_element rt
where existsnode(rt.object_value,'/mddbelement/group[attribute[@name="an27"]="99"]') = 1;
Execution Plan
Plan hash value: 4104484998
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 40 | 2505 (1)| 00:00:38 |
| 1 | TABLE ACCESS BY INDEX ROWID | NT_GROUP | 1 | 20 | 3 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | SYS_C0082879 | 1 | | 2 (0)| 00:00:01 |
|* 3 | FILTER | | | | | |
| 4 | TABLE ACCESS FULL | T_ELEMENT | 1000 | 40000 | 4 (0)| 00:00:01 |
| 5 | NESTED LOOPS SEMI | | 1 | 88 | 5 (0)| 00:00:01 |
| 6 | NESTED LOOPS | | 1 | 59 | 4 (0)| 00:00:01 |
| 7 | TABLE ACCESS BY INDEX ROWID| NT_GROUP | 1 | 20 | 3 (0)| 00:00:01 |
|* 8 | INDEX RANGE SCAN | SYS_C0082879 | 1 | | 2 (0)| 00:00:01 |
|* 9 | TABLE ACCESS BY INDEX ROWID| T_GROUP | 1 | 39 | 1 (0)| 00:00:01 |
|* 10 | INDEX UNIQUE SCAN | SYS_C0082878 | 1 | | 0 (0)| 00:00:01 |
|* 11 | INDEX RANGE SCAN | SYS_IOT_TOP_184789 | 1 | 29 | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("NESTED_TABLE_ID"=:B1)
3 - filter( EXISTS (SELECT /*+ ???)
8 - access("NESTED_TABLE_ID"=:B1)
9 - filter("T_GROUP"."SYS_NC0001300014$" IS NOT NULL AND
    SYS_CHECKACL("ACLOID","OWNERID",xmltype('<privilege xmlns="http://xmlns.oracle.com/xdb/acl.xsd"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd http://xmlns.oracle.com/xdb/acl.xsd DAV:http://xmlns.oracle.com/xdb/dav.xsd"><read-properties/><read-contents/></privilege>'))=1)
10 - access("SYS_ALIAS_3"."COLUMN_VALUE"="T_GROUP"."SYS_NC_OID$")
11 - access("NESTED_TABLE_ID"="T_GROUP"."SYS_NC0001300014$")
filter("SYS_XDBBODY$"='99' AND "NAME"='an27')

Similar Messages

  • Query Performance with Exception aggregation

    Hello,
My query's key figures have exception aggregation at order line level, as per the requirement.
Currently the cube holds 5M records; when we run the query it runs for more than 30 minutes.
We cannot remove the exception aggregation.
The cube is already modeled correctly and we don't want to use the cache.
Can anybody please advise whether there is a better approach to improve query performance with exception aggregation?
    Thanks

    Hi,
We have the same problem and raised an OSS ticket. They replied with note 1257455, which lists the ways of improving performance in such cases. I guess there is nothing else to do but to precalculate this exception-aggregated formula in the data model via transformations or ABAP.
By the way, the cache cannot help you in this case, since exception aggregation is calculated after cache retrieval.
    Hope this helps,
    Sunil

  • XPath query works with CLOB but not with object relational

    hi all
I have the following queries. The XQuery versions work with all storage models, but the XPath queries only work with XMLType CLOB and binary XML; they do not work with XMLType stored as object-relational:
    select extract (object_value,'movies/directorfilms/films/film [studios/studio = "Gaumont"]')
    from xorm;
    select extract (object_value,'movies/directorfilms[director/dirname = "L.Cohen"]/films/film[position()=2]/t')
    from xorm;
They show this message:
ORA-00932: inconsistent datatypes: expected SYSTEM.name683_COLL got CHAR
    thanks

    Hi Marco
first, here is my object-relational setup:
    BEGIN
    DBMS_XMLSCHEMA.registerSchema(
    SCHEMAURL=>'http://......../ORMovies.xsd',
    SCHEMADOC=>bfilename('DB','Movies.xsd'),
    LOCAL =>false,
    GENTYPES=>true,
    GENTABLES=>FALSE,
    CSID=>nls_charset_id('AL32UTF8'));
    END;
    create table XORM of xmltype
    xmltype store as object relational
    XMLSCHEMA "http://......../ORMovies.xsd"
    ELEMENT "movies";
    INSERT INTO XORM
    VALUES(XMLType(BFILENAME('DB','ORMovies.xml'),nls_charset_id('AL32UTF8')));
Here is the XQuery format that works fine with the OR storage:
    A/D
    select XMLQuery ('for $a in movies/directorfilms/films/film
              where $a/studios/studio = "Gaumont"
              return $a'
         passing object_value
         returning CONTENT)"TitleX"
    from xorm;
    child element query
    select XMLQuery ('for $a in movies/directorfilms/director[dirname = "Feyder"]
              let $b:=$a/../films/film[position()=2]
              return $b/t'
         passing object_value
         returning CONTENT)"TitleX"
    from xorm;
    here is the XPath format which doesn't work
    select extract (object_value,'movies/directorfilms/films/film [studios/studio = "Gaumont"]')
    from xorm;
    select extract (object_value,'movies/directorfilms[director/dirname = "Feyder"]/films/film[position()=2]/t')
    from xorm;
By the way, all queries work fine with CLOB or binary XML storage.
Many thanks, Marco
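As a side note on the ORA-00932: with object-relational storage, a fragment-returning extract() over a collection frequently fails this way, and rewriting the same path with XMLTable is a common workaround. A sketch against the xorm table from this post (not tested against this exact schema):

-- Sketch: each film matching the predicate comes back as its own XMLType row.
SELECT f.film
FROM   xorm x,
       XMLTable('movies/directorfilms/films/film[studios/studio = "Gaumont"]'
                PASSING x.object_value
                COLUMNS film XMLTYPE PATH '.') f;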

  • Working with Object Relational model and Eclipse

    Hello,
I have made some types and typed tables in a schema on Oracle 10g.
My database follows the object-relational model: I work with VARRAYs, types, nested tables, PL/SQL, ...
I have already installed the Enterprise Pack for Eclipse and also have the WebLogic server, and I can generate the tables. The problem is that the generated tables come out with wrong types, since they actually contain other tables as object columns, and those types are not supported.
I need some help, please!
Thanks in advance :)

    nhauge wrote:
    Hi,
I want to make sure I understand the problem, but will try to give you some information as well. So you have an Oracle 10g database in which you are using the object-relational database model with VARRAYs, etc. You say you are also trying to generate tables, but the generation is not working correctly. Are you using the "Generate tables from entities..." functionality? What is the source of the generated tables? I assume you are generating from some type of JPA entity? Could you give a small example of the source you are generating from?
    In order to generate proper tables of this type, you would need to be using special TopLink/EclipseLink annotations such as @Array so the persistence provider would know that they are "special" and not just regular entities that would create standard relational tables.
    Neil
Thank you for your reply,
well, I'm generating it with "Generate tables from entities" in EclipseLink.
    this is an example of what I have in the database:
    [DB-SCRIPT|http://www.mediafire.com/view/?n3knwz1o4ggk50n]
    and this is what is generated(eclipselink-orm.xml):
    <?xml version="1.0" encoding="UTF-8"?>
    <entity-mappings version="1.2" xmlns="http://www.eclipse.org/eclipselink/xsds/persistence/orm" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.eclipse.org/eclipselink/xsds/persistence/orm http://www.eclipse.org/eclipselink/xsds/eclipselink_orm_1_2.xsd">
         <entity class="model.Chefterritoire" access="VIRTUAL">
              <attributes>
                   <id name="idChef" attribute-type="long">
                        <column name="ID_CHEF"/>
                   </id>
                   <basic name="lesterritoires" attribute-type="Object">
                   </basic>
                   <basic name="nomChef" attribute-type="String">
                        <column name="NOM_CHEF"/>
                   </basic>
                   <basic name="tel" attribute-type="java.math.BigDecimal">
                   </basic>
              </attributes>
         </entity>
         <entity class="model.Client" access="VIRTUAL">
              <attributes>
                   <id name="idClient" attribute-type="long">
                        <column name="ID_CLIENT"/>
                   </id>
                   <basic name="adresseClient" attribute-type="String">
                        <column name="ADRESSE_CLIENT"/>
                   </basic>
                   <basic name="dateP" attribute-type="java.util.Date">
                        <column name="DATE_P"/>
                        <temporal>DATE</temporal>
                   </basic>
                   <basic name="lescommandes" attribute-type="Object">
                   </basic>
                   <basic name="nomClient" attribute-type="String">
                        <column name="NOM_CLIENT"/>
                   </basic>
                   <basic name="profession" attribute-type="String">
                   </basic>
                   <basic name="refrepresentant" attribute-type="Object">
                   </basic>
                   <basic name="telClient" attribute-type="java.math.BigDecimal">
                        <column name="TEL_CLIENT"/>
                   </basic>
              </attributes>
         </entity>
         <entity class="model.Commande" access="VIRTUAL">
              <attributes>
                   <id name="idCommande" attribute-type="long">
                        <column name="ID_COMMANDE"/>
                   </id>
                   <basic name="dateCommande" attribute-type="java.util.Date">
                        <column name="DATE_COMMANDE"/>
                        <temporal>DATE</temporal>
                   </basic>
                   <basic name="dateLivraison" attribute-type="java.util.Date">
                        <column name="DATE_LIVRAISON"/>
                        <temporal>DATE</temporal>
                   </basic>
                   <basic name="refreleve" attribute-type="Object">
                   </basic>
                   <basic name="refrepresentant" attribute-type="Object">
                   </basic>
                   <basic name="refvehicule" attribute-type="Object">
                   </basic>
                   <basic name="reprise" attribute-type="String">
                   </basic>
              </attributes>
         </entity>
         <entity class="model.Constructeur" access="VIRTUAL">
              <attributes>
                   <id name="idConstructeur" attribute-type="long">
                        <column name="ID_CONSTRUCTEUR"/>
                   </id>
                   <basic name="adresseConstructeur" attribute-type="String">
                        <column name="ADRESSE_CONSTRUCTEUR"/>
                   </basic>
                   <basic name="lesreleves" attribute-type="Object">
                   </basic>
                   <basic name="nomConstructeur" attribute-type="String">
                        <column name="NOM_CONSTRUCTEUR"/>
                   </basic>
                   <basic name="refvehucule" attribute-type="Object">
                   </basic>
              </attributes>
         </entity>
         <entity class="model.LesccommandesN" access="VIRTUAL">
              <table name="LESCCOMMANDES_N"/>
              <attributes>
              </attributes>
         </entity>
         <entity class="model.LescommandessN" access="VIRTUAL">
              <table name="LESCOMMANDESS_N"/>
              <attributes>
              </attributes>
         </entity>
         <entity class="model.LescommandesN" access="VIRTUAL">
              <table name="LESCOMMANDES_N"/>
              <attributes>
              </attributes>
         </entity>
         <entity class="model.LesoptionsN" access="VIRTUAL">
              <table name="LESOPTIONS_N"/>
              <attributes>
              </attributes>
         </entity>
         <entity class="model.LesrepresentantsN" access="VIRTUAL">
              <table name="LESREPRESENTANTS_N"/>
              <attributes>
              </attributes>
         </entity>
         <entity class="model.LessclientsN" access="VIRTUAL">
              <table name="LESSCLIENTS_N"/>
              <attributes>
              </attributes>
         </entity>
         <entity class="model.LesscommandesN" access="VIRTUAL">
              <table name="LESSCOMMANDES_N"/>
              <attributes>
              </attributes>
         </entity>
         <entity class="model.LesterritoiresN" access="VIRTUAL">
              <table name="LESTERRITOIRES_N"/>
              <attributes>
              </attributes>
         </entity>
         <entity class="model.LesvehiculesN" access="VIRTUAL">
              <table name="LESVEHICULES_N"/>
              <attributes>
              </attributes>
         </entity>
         <entity class="model.Modele" access="VIRTUAL">
              <attributes>
                   <id name="idModele" attribute-type="long">
                        <column name="ID_MODELE"/>
                   </id>
                   <basic name="couleurExterieure" attribute-type="String">
                        <column name="COULEUR_EXTERIEURE"/>
                   </basic>
                   <basic name="couleurInterieure" attribute-type="String">
                        <column name="COULEUR_INTERIEURE"/>
                   </basic>
                   <basic name="lesoptions" attribute-type="Object">
                   </basic>
                   <basic name="lesvehicules" attribute-type="Object">
                   </basic>
                   <basic name="prix" attribute-type="double">
                   </basic>
              </attributes>
         </entity>
         <entity class="model.Option" access="VIRTUAL">
              <table name="OPTIONS"/>
              <attributes>
                   <id name="idOption" attribute-type="long">
                        <column name="ID_OPTION"/>
                   </id>
                   <basic name="nomOption" attribute-type="String">
                        <column name="NOM_OPTION"/>
                   </basic>
                   <basic name="refmodele" attribute-type="Object">
                   </basic>
              </attributes>
         </entity>
         <entity class="model.Rapportvisite" access="VIRTUAL">
              <attributes>
                   <id name="idRapport" attribute-type="long">
                        <column name="ID_RAPPORT"/>
                   </id>
                   <basic name="dateVisite" attribute-type="java.util.Date">
                        <column name="DATE_VISITE"/>
                        <temporal>DATE</temporal>
                   </basic>
                   <basic name="frais" attribute-type="double">
                   </basic>
                   <basic name="refclient" attribute-type="Object">
                   </basic>
                   <basic name="refrepresentant" attribute-type="Object">
                   </basic>
                   <basic name="resultat" attribute-type="String">
                   </basic>
              </attributes>
         </entity>
         <entity class="model.Releve" access="VIRTUAL">
              <attributes>
                   <id name="idReleve" attribute-type="long">
                        <column name="ID_RELEVE"/>
                   </id>
                   <basic name="dateDebut" attribute-type="java.util.Date">
                        <column name="DATE_DEBUT"/>
                        <temporal>DATE</temporal>
                   </basic>
                   <basic name="dateFin" attribute-type="java.util.Date">
                        <column name="DATE_FIN"/>
                        <temporal>DATE</temporal>
                   </basic>
                   <basic name="lescommandes" attribute-type="Object">
                   </basic>
                   <basic name="refconst" attribute-type="Object">
                   </basic>
              </attributes>
         </entity>
         <entity class="model.RelevesN" access="VIRTUAL">
              <table name="RELEVES_N"/>
              <attributes>
              </attributes>
         </entity>
         <entity class="model.Remise" access="VIRTUAL">
              <attributes>
                   <id name="idRemise" attribute-type="long">
                        <column name="ID_REMISE"/>
                   </id>
                   <basic name="remise" attribute-type="double">
                   </basic>
              </attributes>
         </entity>
         <entity class="model.Representant" access="VIRTUAL">
              <attributes>
                   <id name="idRepresentant" attribute-type="long">
                        <column name="ID_REPRESENTANT"/>
                   </id>
                   <basic name="lesclients" attribute-type="Object">
                   </basic>
                   <basic name="lescommandes" attribute-type="Object">
                   </basic>
                   <basic name="nomRepresentant" attribute-type="String">
                        <column name="NOM_REPRESENTANT"/>
                   </basic>
                   <basic name="refterritoire" attribute-type="Object">
                   </basic>
                   <basic name="typeRep" attribute-type="String">
                        <column name="TYPE_REP"/>
                   </basic>
              </attributes>
         </entity>
         <entity class="model.Territoire" access="VIRTUAL">
              <attributes>
                   <id name="idTerritoire" attribute-type="long">
                        <column name="ID_TERRITOIRE"/>
                   </id>
                   <basic name="lesrepresentants" attribute-type="Object">
                   </basic>
                   <basic name="nomTerritoire" attribute-type="String">
                        <column name="NOM_TERRITOIRE"/>
                   </basic>
                   <basic name="refchef" attribute-type="Object">
                   </basic>
              </attributes>
         </entity>
         <entity class="model.Vehicule" access="VIRTUAL">
              <attributes>
                   <id name="idVehicule" attribute-type="long">
                        <column name="ID_VEHICULE"/>
                   </id>
                   <basic name="lescommandes" attribute-type="Object">
                   </basic>
                   <basic name="nomVehicule" attribute-type="String">
                        <column name="NOM_VEHICULE"/>
                   </basic>
                   <basic name="refavoir" attribute-type="Object">
                   </basic>
                   <basic name="refmodele" attribute-type="Object">
                   </basic>
              </attributes>
         </entity>
         <entity class="model.Visiteav" access="VIRTUAL">
              <attributes>
                   <id name="idVisiteav" attribute-type="long">
                        <column name="ID_VISITEAV"/>
                   </id>
                   <basic name="dateDebut" attribute-type="java.util.Date">
                        <column name="DATE_DEBUT"/>
                        <temporal>DATE</temporal>
                   </basic>
                   <basic name="dateFin" attribute-type="java.util.Date">
                        <column name="DATE_FIN"/>
                        <temporal>DATE</temporal>
                   </basic>
                   <basic name="refclient" attribute-type="Object">
                   </basic>
                   <basic name="refrepresentant" attribute-type="Object">
                   </basic>
              </attributes>
         </entity>
    </entity-mappings>

  • Query Performance with Unit Conversion

    Hi Experts,
Right now my customer has asked me to improve the runtime of some queries.
I detected a problem in one query related to unit conversion. I ran the workbook statistics and found that the time is concentrated in the data conversion step.
I'm querying the whole year and it takes around 20 minutes to return a result, which is too much time. The only special thing in the query is the unit conversion.
How can I improve the performance? What is the checklist in this case?
Thanks for your help.
    Jose

    Hi Jose,
You might not be able to reduce the unit conversion time itself, so try to apply the general query performance improvement techniques, e.g. caching the query results.
But there is one thing which can help: if the end user only ever uses one target unit (for example, the report is always executed in USD while the source currency differs from USD), you can create another data source and do the conversion at data load time. Then the new data source holds all data in the required currency, no conversion happens at runtime, and query performance improves drastically.
The above solution is not feasible, however, if there are many currencies and the report frequently needs to be run in multiple currencies.
    Regards,
    Durgesh.

  • T520 - 42435gg / Sound stutter and slow Graphic performance with Intel Rapid Storage AHCI Driver

    Hi everybody,
I have serious problems with my 42435gg.
Any time I install the Intel Rapid Storage AHCI driver (I've tried plenty of different versions), which is suggested by System Update, I experience horrible sound stutter and slow graphics performance in Windows 7 64-bit.
The funny thing in this case: if the external e-SATA port is connected, the problems do not occur. As soon as the port is unused again, the stutter begins immediately.
The only thing I can do is use the Windows built-in storage driver, with which I am not able to use my DVD recorder, for example.
The device was sent to Lenovo for hardware testing with no result; it was sent back without any repair.
Has anybody else experienced this?
    Kind regards,
    Daniel

    Did you try the 11.5 RST beta? Load up DPClat and see if DPC conditions are favorable.
    What are you using to check graphics performance?
    W520: i7-2720QM, Q2000M at 1080/688/1376, 21GB RAM, 500GB + 750GB HDD, FHD screen
    X61T: L7500, 3GB RAM, 500GB HDD, XGA screen, Ultrabase
    Y3P: 5Y70, 8GB RAM, 256GB SSD, QHD+ screen

  • How can we improve query performance with out indexes?

    Hello Experts,
I have a problem with a table (calc) which contains 3 crore records and has no index; on that table a view (View_A) is created.
The problem shows up when I use the view in the query below:
SELECT count(*)
FROM Table_A A
     INNER JOIN Table_B B ON (A.a = B.b)
     LEFT OUTER JOIN View_A ON (A.a = View_A.a)
     LEFT OUTER JOIN View_B ON (A.a = View_B.a)
In the above query View_A is causing the problem; View_A is created on the calc table. One more thing: when I execute a select statement on the view by itself, it runs fine.
Without View_A the query fetches data fine, and the table statistics are up to date. When I look at the cost plan, the scan alone accounts for about 90% of the cost.
Can anyone help me, please?
    Thank you all.
    Regards,
    Jason.

    Jason,
Not sure what you are trying to do, but outer joins are bad for performance, so try to avoid them.
You also say that you have a view on a calc table. What are you calculating? Are you using user-defined functions, maybe?
    Regards,
    Nico

  • Query performance with filters

    Hi there,
    I've noticed that when I run a query in Answers, if the query has a filter which is not in the displayed columns, the query runs very slowly. However, if I run the same query WITH the filtering column in the displayed columns, the query will return the results almost immediately.
    Take the example of a sales report. If I run a query of [Store Number] vs. [Sales Amount] and ctrl-click filter it with the [Region] dimension column equal to 'North America'. The query will take about 5 to 10 minutes to run. However, if I include the [Region] column in the display columns (i.e. [Region], [Store Number] vs. [Sales Amount]) or the "Excluded" columns in Answers, then the query will take less than a minute to run.
    I am using Oracle BI to connect to a MS Analysis Services cube by the way.
    Any ideas or suggestions on how to improve the performance? I don't want to include the filtering columns in the select query because when users use the dashboard filters, they just want to filter the results by different dimension values instead of seeing them in the report.
    Thanks.

    Thanks.
    However, when I run a similar query in the backend (MS Analysis Services), the performance is very good. Only when I try to run the query through Oracle BI, the performance suffers. I know that it has something to do with the way Oracle constructs its query to send back to the Analysis Services.
    The main thing about my issue is that in Answers, queries with the filtering columns in both the select and where clauses run much faster than queries with the filtering columns ONLY in the where clause. Why is that, and how to speed it up?

  • Poor query performance with BETWEEN

    I'm using Oracle Reports 6i.
    I needed to add Date range parameters (Starting and Ending dates) to a report. I used lexicals in the Where condition to handle the logic.
    If no dates given,
    Start_LEX := '/**/' and
    End_LEX := '/**/'
    If Start_date given,
    Start_LEX := 'AND t1.date >= :Start_date'
    If End_date given,
    End_LEX := 'AND t1.date <= :End_date'
    When I run the report with no dates or only one of the dates, it finishes in 3 to 8 seconds.
    But when I supply both dates, it takes > 5 minutes.
    So I did the following
    If both dates are given and Start_date = End date,
    Start_LEX := 'AND t1.date = :Start_date'
    End_LEX := '/**/'
    This got the response back to the 3 - 8 second range.
    I then tried this
    if both dates are given and Start_date != End date,
    Start_LEX := 'AND t1.date BETWEEN :Start_date AND :End_date'
    End_LEX := '/**/'
    This didn't help. The response was still in the 5+ minutes range.
    If I run the query outside of Oracle Reports, in PL/SQL Developer or SQLplus, it returns the same data in 3 - 8 seconds in all cases.
    Does anyone know what is going on in Oracle Reports when a date is compared with two values either separately or with a BETWEEN? Why does the query take approx. 60 times as long to execute?

    Hi,
First compare the access plans when using BETWEEN and when using <= and >= separately.
Then try building the lexical parameters with NVL logic, as sketched below.
    Adinath Kamode
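A sketch of the NVL idea, in the same lexical-parameter style as the report above (the bind names and t1.date are taken from the post; this is an illustration, not tested in Reports 6i):

-- Sketch: one lexical with a stable shape, whether or not the user supplies both dates.
Date_LEX := 'AND t1.date >= NVL(:Start_date, t1.date)
             AND t1.date <= NVL(:End_date,   t1.date)';

With this form the generated query has the same predicate shape in every case, which makes it easier to compare the access plans for BETWEEN versus the two separate comparisons.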

  • Query Performance with and without cache

    Hi Experts
    I have a query that takes 50 seconds to execute without any caching or precalculation.
    Once I have run the query in the Portal, any subsequent execution takes about 8 seconds.
    I assumed that this was to do with the cache, so I went into RSRT and deleted the Main Memory Cache and the Blob cache, where my queries seemed to be.
    I ran the query again and it took 8 seconds.
Does the query cache somewhere else? Maybe on the portal, or in the user's local cache? Does anyone have any idea why the reports are still fast, even though the cache is deleted?
    Forum points always awarded for helpful answers!!
    Many thanks!
    Dave

    Hi,
Cached data automatically becomes invalid whenever data in the InfoCube is loaded or purged and when a query is changed or regenerated. Once cached data becomes invalid, the system reverts to the fact table or associated aggregate to pull data for the query. You can see the cache settings for all queries in your system by using transaction SE16 to view table RSRREPDIR; the CACHEMODE field shows the setting of each individual query, and the numbers in this field correspond to the standard cache mode settings.
To set the cache mode on the InfoCube, follow the path Business Information Warehouse Implementation Guide (IMG) > Reporting-Relevant Settings > General Reporting Settings > Global Cache Settings, or use transaction SPRO. Setting the cache mode at the InfoCube level establishes a default for each query created from that specific InfoCube.
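For reference, the RSRREPDIR check mentioned above can also be done with a plain select instead of SE16. This is only a sketch: CACHEMODE is the field named above, while using COMPID as the query's technical name is an assumption to verify in your system:

-- Sketch: show the cache mode recorded for one query (the COMPID value is a placeholder).
SELECT compid, cachemode
FROM   rsrrepdir
WHERE  compid = 'MY_QUERY_TECH_NAME';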

  • PDF Preview: Accelerate Virtual Server Performance with All-Flash Storage

    Maximum virtual server performance requires storage that accelerates virtual machine operation. Storage must also address the unique needs of applications running in the virtual environment. Learn how all-flash storage solves the I/O blender effect, find out what storage features are most important, and see how All-Flash FAS can deliver a rapid return on investment.

Saluting Mike. Would you please also advise how many enterprise users are running Epic on AFF8K? Thanks, Henry Pan

  • Query Performance with/without PK Index

    Hi!
Please take a look at these queries and tell me why their performance is so extremely different!
1.
SELECT DISTINCT column_name   #(many NULL values in this column)
FROM table_name
WHERE primary_key_index_name IN (...long list...)
AND column_name IS NOT NULL;
--> 1 row, 120 msec
2.
#(only the order altered:)
SELECT DISTINCT column_name
FROM table_name
WHERE column_name IS NOT NULL
AND primary_key_index_name IN (...long list...);
--> 1 row, 2 sec (nearly 20 times slower!)
3.
#(no NOT NULL)
SELECT DISTINCT column_name
FROM table_name
WHERE primary_key_index_name IN (...long list...);
--> 1 row, 2 sec, just like No. 2!
Can anyone explain?
TIA! Dominic

As mentioned, you really should create explain plans for all 3 queries. It could be that the first query loaded all the blocks into the buffer cache, so when you ran the 2nd query the data it needed was already in memory.
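A minimal sketch of how to do that, using the placeholder names from the question (the long IN-list is shortened to a hypothetical literal list):

-- Sketch: generate and display the plan for variant 1; repeat for variants 2 and 3.
EXPLAIN PLAN FOR
SELECT DISTINCT column_name
FROM table_name
WHERE primary_key_index_name IN (1, 2, 3)
AND column_name IS NOT NULL;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

Comparing the three plans should show whether the predicate order changes the chosen access path, or whether the timing difference really comes from the buffer cache.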

  • Query performance with %

    Hi,
I have a system running 11gR2 on 64-bit Windows with Oracle Text.
90% of the data is in Hebrew and the rest is in English.
We get great performance when running a query with % at the end of the word, like word%.
When we issue a search with % at the beginning of the word, like %word, the performance is extremely bad.
I have created a test case:
    -- CREATE TABLE
    CREATE TABLE news (pkey NUMBER,lang VARCHAR2 (2), short_content CLOB);
    -- INSERT DATA
    insert into news values (myseq.nextval,'iw','&1');
    -- The next step is to configure the MULTI_LEXER
    BEGIN
    -- hebrew
    ctx_ddl.create_preference ('hebrew_lexer', 'basic_lexer');
    --english
    ctx_ddl.create_preference('english_lexer','basic_lexer');
    ctx_ddl.set_attribute('english_lexer','index_themes','yes');
    ctx_ddl.set_attribute('english_lexer','theme_language','english');
    END;
    -- CREATE THE MULTI_LEXER
    --Create the multi-lexer preference:
    BEGIN
    ctx_ddl.create_preference('global_lexer', 'multi_lexer');
    END;
    -- make the hebrew lexer the default using CTX_DDL.ADD_SUB_LEXER:
    BEGIN
    ctx_ddl.add_sub_lexer('global_lexer','default','hebrew_lexer');
    END;
    --add the English  language with CTX_DDL.ADD_SUB_LEXER procedure.
    BEGIN
    ctx_ddl.add_sub_lexer('global_lexer','english','english_lexer','eng');
    END;
    -- create the wordlist
    begin
    -- ADD WORDLIST
    -- exec ctx_ddl.drop_preference ('my_wordlist');
    ctx_ddl.create_preference ('my_wordlist','basic_wordlist');
    ctx_ddl.set_attribute     ('my_wordlist', 'stemmer','auto');
    ctx_ddl.set_attribute ('my_wordlist','SUBSTRING_INDEX', 'YES');
    end;
    --CREATE THE INDEX 
    --drop index search_news
    create index search_news
    on news (short_content)
    indextype is ctxsys.context
    parameters
    ('lexer          global_lexer
         language column lang
     wordlist     my_wordlist');
Still the performance is bad.
I know I am missing something here.
I appreciate any help.

    That's expected. Internally Oracle Text has a list of words (the $I table) on which there is an index (the $X index).
    If you use a leading wildcard, then the $X index cannot be used and Oracle Text has to do a full-table scan of the $I table.
    If you MUST have leading wildcards, you should use the SUBSTRING_INDEX wordlist preference when creating the index. That creates an extra ($P) table which allows Oracle Text to resolve leading wildcards without resorting to a full table scan.
    Be warned that your index creation will take considerably longer and use a lot more space with this option in place. Many customers prefer to disallow leading wildcards from their search interface.
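To make that concrete, a left-truncated search against the test case above looks like the sketch below; with the SUBSTRING_INDEX wordlist preference in place it can be resolved through the extra $P table (DR$SEARCH_NEWS$P by the usual naming convention) instead of a full scan of the $I table (the search term is arbitrary):

-- Sketch: left-truncated wildcard query on the test-case table.
SELECT pkey
FROM   news
WHERE  CONTAINS(short_content, '%word') > 0;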

  • Can you use Object Relational SQL to access XML content ?

    Is there a possibility to use the generated object types and collection types to conveniently access the xml data?

Technically there is nothing to prevent you from using the object types generated when an XML Schema is registered to access and manipulate the contents of the instance documents that conform to the schema. This would be done using the notation x.xmldata.attribute, where x is the object table and xmldata is the instance of the SQL object type associated with the table in question.
However, we do not encourage or recommend this approach. Currently XML DB provides the application developer with DML/DDL independence. This holds true as long as you use XPath expressions to express your DML operations. Using XPath expressions to access the content of an XML document maintains an abstraction between the DML (XPath) and the underlying object-relational storage structure derived from the information in the corresponding XML Schema. Wherever possible, when you express a query using an XPath expression, under the covers we attempt to rewrite the query into object-relational SQL based on the metadata contained in the schema, since access via the underlying object-relational structures is much more efficient.
    If you use the object notation to access the content of the document you break this abstraction. Your application now needs to be aware of the physical (object relational) storage model being used to manage the instance documents.
One of the key features of XML DB is that it allows a developer to use schema annotations to alter the storage model used to manage the instance documents. The most common example of this is using annotations to control the way in which collections are managed. Depending on the annotation in the schema you can store collections as a VARRAY, as a nested table or as a separate XMLType table. Depending on the technique chosen, the objects that are generated during XML Schema registration will change.
If you use XPath expressions to access the content of your documents, and you decide to change the annotations in your schema so as to revise the way your documents are stored, XML DB will ensure your code continues to work unchanged, regardless of which objects are generated during schema registration. On the other hand, if you use the object notation to access the content of the documents and then change the annotations, you will have to change your code.
I hope this clarifies the situation.
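As a rough illustration of the two styles being contrasted here (a sketch only: the table po_of_xml and the PurchaseOrder/Reference names are hypothetical, and the SQL attribute name exposed through the XMLDATA pseudocolumn depends on the registered schema):

-- Storage-independent, XPath-based access (the recommended style):
SELECT extractValue(p.object_value, '/PurchaseOrder/Reference')
FROM   po_of_xml p;

-- Object notation, tied to the current object-relational storage model:
SELECT p.xmldata."Reference"
FROM   po_of_xml p;

If the schema annotations are later changed, the first query keeps working, while the second one has to follow the new storage layout.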

  • UpdateXML query rewrite with unstructured data?

    Hi,
    I'm currently loading unstructured, non-schema based XML and am trying to address some performance issues when I merge changes to a document.
    I currently read the document from the table, merge the changes using XQuery and save the document back - however, I'm aware that this can be excessive when a minor number of changes are involved (I'm also seeing some contention on the path table when a number of parallel threads are involved).
    I know that using "updateXML" and "insertChildXML" can be more efficient due to the XPath rewrite - however, I'm not sure if this is applicable when the XML is unstructured and non-schema based.
    Any advice on this would be greatly appreciated.
    Regards
    Larry

    Hi Larry,
I know that using "updateXML" and "insertChildXML" can be more efficient due to the XPath rewrite - however, I'm not sure if this is applicable when the XML is unstructured and non-schema based.
XPath rewrite applies not only to structured (object-relational) storage, but to Binary XML and/or XMLIndex as well.
    Besides, with Binary XML and a SECUREFILE LOB storage, a "sliding update" is used when the data is written back to disk (only the modified portion of the LOB is affected) thus reducing the overhead of writing whole documents over and over again even for small changes.
    For example (OOX_SML_WORKBOOK is a binary XMLType table here) :
    -- unstructured XMLIndex with path-subsetting :
    CREATE INDEX oox_sml_workbook_uxi1 ON oox_sml_workbook (object_value) INDEXTYPE IS XDB.XMLIndex
      PARAMETERS ('PATH TABLE oox_sml_workbook_ptb
      PATHS (INCLUDE (/workbook/sheets/sheet)
                         NAMESPACE MAPPING (xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main"))');
    -- update of an attribute using updateXML function :
    update oox_sml_workbook
    set object_value = updateXML( object_value
                                , '/workbook/sheets/sheet[@sheetId="3"]/@name'
                                , 'NewName'
                                , 'xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main"' )
    where xmlexists('declare default element namespace "http://schemas.openxmlformats.org/spreadsheetml/2006/main"; (::)
                     /workbook/sheets/sheet[@sheetId="3"]'
                    passing object_value);
If you're lucky and currently working with the latest patchset (11.2.0.3), you can also use XQuery Update Facility, a small extension to the XQuery standard that allows various kinds of operations on nodes : inserting (after/before), updating, renaming and deleting.
    This extension supersedes the old proprietary functions updateXML, insertXML etc.
    And it can be optimized via XPath/XQuery rewrite too :
    update oox_sml_workbook
    set object_value =
        xmlquery('declare default element namespace "http://schemas.openxmlformats.org/spreadsheetml/2006/main"; (::)
                  copy $d := .
                  modify (
                    for $i in $d/workbook/sheets/sheet
                    return replace value of node $i/@name with concat("MySheet",$i/@sheetId)
                  )
                  return $d'
                  passing object_value
                  returning content)
    ;
