Performance degradation with COGNOS and BW

Hello,
Do you know how to improve performance when using Cognos to query BW? Cognos seems to need a lot of RAM.
Thanks for your help
Catherine Bellec

In your original compile you don't use any optimisation flags, which tells the compiler to do minimal optimisation - you're basically telling the compiler that you are not interested in performance. Adding -g on top of that requests maximal debug information, so the compiler does even less optimisation, in order that the generated code more closely resembles the original source.
If you are interested in debug, then -g with no optimisation flags gives you the most debuggable code.
If you are interested in optimised code with debug, then try -O -g (or some other level of optimisation). The code will still be debuggable - you'll be able to map disassembly to lines of source, but some things may not be accessible.
If you are using C++, then in SS12 -g switches off front-end inlining, so again you'll get some performance hit. Use -g0 to get both inlining and debug.
HTH,
Darryl.

Similar Messages

  • [URGENT] Performance problem with BC4J and partitioned data

    Hi all,
    I have a big performance problem with BC4J and partitioned data. As a partitioned table shouldn't have a primary key like a sequence (or something else), my partitioned table doesn't have any primary key.
    When I debug my BC4J application I can see the message "ignoring row with no primary key" from EntityCache. It takes a long time to retrieve my data even if I use the partition keys. A quick & dirty Forms application was several times faster!
    Is this a bug in BC4J, or is BC4J not suitable for partitioned data? Can anyone give me a hint on what to do to make the BC4J application fast even with partitioned data? In a non-partitioned environment the application works quite well, so it seems the "error" must be somewhere in this part.
    Thanks,
    Axel

    Here's a SQL statement that creates the table.
    CREATE TABLE SEARCH
    (SEAR_PARTKEY_DAY              NUMBER(4)        NOT NULL
    ,SEAR_PARTKEY_EMP            VARCHAR2(2)      NOT NULL
    ,SEAR_ID                     NUMBER(20)       NOT NULL
    ,SEAR_ENTRY_DATE             TIMESTAMP        NOT NULL
    ,SEAR_LAST_MODIFIED            TIMESTAMP             NOT NULL
    ,SEAR_STATUS                 VARCHAR2(100)    DEFAULT '0'
    ,SEAR_ITC_DATE               TIMESTAMP        NOT NULL
    ,SEAR_MESSAGE_CLASS          VARCHAR2(15)     NOT NULL
    ,SEAR_CHIPHERING_TYPE        VARCHAR2(256)   
    ,SEAR_GMAT                   VARCHAR2(1)      DEFAULT 'U'
    ,SEAR_NATIONALITY            VARCHAR2(3)      DEFAULT 'XXX'
    ,SEAR_MESSAGE_ID             VARCHAR2(32)     NOT NULL
    ,SEAR_COMMENT                VARCHAR2(256)    NOT NULL
    ,SEAR_NUMBER_OF              NUMBER(3)        NOT NULL
    ,SEAR_INTERCEPTION_SYSTEM    VARCHAR2(40)    
    ,SEAR_COMM_PRIOD_H           NUMBER(5)        DEFAULT -1
    ,SEAR_PRIOD_R                  NUMBER(5)        DEFAULT -1
    ,SEAR_INMARSAT_CES           VARCHAR2(40)    
    ,SEAR_BEAM                   VARCHAR2(10)    
    ,SEAR_DIALED_NUMBER          VARCHAR2(70)    
    ,SEAR_TRANSMIT_NUMBER        VARCHAR2(70)    
    ,SEAR_CALLED_NUMBER          VARCHAR2(40)    
    ,SEAR_CALLER_NUMBER          VARCHAR2(40)    
    ,SEAR_MATERIAL_TYPE          VARCHAR2(3)      NOT NULL
    ,SEAR_SOURCE                 VARCHAR2(10)    
    ,SEAR_MAPPING                VARCHAR2(100)    DEFAULT '__REST'
    ,SEAR_DETAIL_MAPPING         VARCHAR2(100)
    ,SEAR_PRIORITY               NUMBER(3)        DEFAULT 255
    ,SEAR_LANGUAGE               VARCHAR2(5)      DEFAULT 'XXX'
    ,SEAR_TRANSMISSION_TYPE      VARCHAR2(40)    
    ,SEAR_INMARSAT_STD           VARCHAR2(1)     
    ,SEAR_FILE_NAME              VARCHAR2(100)    NOT NULL
    )
    PARTITION BY RANGE (SEAR_PARTKEY_DAY, SEAR_PARTKEY_EMP)
      (PARTITION SEARCH_MAX VALUES LESS THAN (MAXVALUE, MAXVALUE) TABLESPACE MIRA4_SEARCH_EVEN
    );
    Of course SEAR_ID is filled by a sequence, but the field is not the primary key, as that would decrease the performance of partitioned data.
    We moved our application to native JDBC and the performance is better than we ever expected!

  • Performance problem with Integration with COGNOS and Bex

    Hi Gems
    I have a performance problem with some of my queries when integrating with COGNOS.
    My query is a simple one that gets the data for the date interval:
    From Date: 20070101
    To Date: 20070829
    When executing the query in BEx it takes 2 minutes, but when it is executed in COGNOS it takes almost 10 minutes or more.
    Is there anywhere we can debug how the report data is being sent to Cognos, for example by debugging the OLE DB layer?
    And how can we increase the performance of the query in Cognos?
    Thanks in Advance
    Regards
    AK

    Hi,
    Please check the following CA Unicenter config files on the SunMC server:
    - Is the Event Adapter (ea-start) running? Without this daemon no event forwarding to CA Unicenter is done, nor does discovery from CA Unicenter work.
    How to debug:
    - run ea-start in debug mode:
    # /opt/SUNWsymon/SunMC-TNG/sbin/ea-start -d9
    - check if the Event Adapter has been set up:
    # /var/opt/SUNWsymon/SunMC-TNG/cfg_sunmctotng
    - check the CA log file
    # /var/opt/SUNWsymon/SunMC-TNG/SunMCToTngAdaptorMain.log
    If all of that is fine, check this page; it explains how to discover a SunMC agent from CA Unicenter.
    http://docs.sun.com/app/docs/doc/817-1101/6mgrtmkao?a=view#tngtrouble-6
    Kind Regards

  • Performance degradation with addition of unicasting option

    We have been using the multicast protocol for setting up the data grid between the application nodes, with the VM arguments
    *-Dtangosol.coherence.clusteraddress=${Broadcast Address} -Dtangosol.coherence.clusterport=${Broadcast port}*
    As a certain node in the application was expected to be in a different subnet and multicast was not feasible, we opted for well-known addressing with the following additional VM arguments set up on the server nodes (all in the same subnet):
    *-Dtangosol.coherence.machine=${server_name} -Dtangosol.coherence.wka=${server_ip} -Dtangosol.coherence.localport=${server_port}*
    and the following on the remote client node, pointing to one of the server nodes:
    *-Dtangosol.coherence.wka=${server_ip} -Dtangosol.coherence.wka.port=${server_port}*
    But this deteriorated the performance drastically, both in pushing data into the cache and in getting events via the map listener.
    From the Coherence logging statements it doesn't seem that multicast is being used, at least within the server nodes (which are in the same subnet).
    Is it feasible to have unicast and multicast coexist? How can we verify whether it is already set up?
    Is performance degradation with well-known addressing a known and expected limitation?

    Hi Mahesh,
    From your description it sounds as if you've configured each node with a WKA list including just itself. This would result in N clusters rather than 1. Your client would then be serviced by the resources of just a single cache server rather than an entire cluster. If this is the case you will see that all nodes are identified as member 1. To set up WKA I would suggest using the override file rather than system properties, and place perhaps 10% of your nodes on that list. Then use this exact same file for all nodes. If I've misinterpreted your configuration please provide additional details.
    Thanks,
    Mark
    Oracle Coherence
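    Not from the thread, but as a quick way to check the question above ("How can we verify whether it is already set up?"), here is a minimal sketch using the standard Coherence com.tangosol.net API (the class name and the idea of running it on every node are assumptions of mine). If each JVM reports itself as member 1 of a one-member set, the WKA lists have split the grid into separate one-node clusters, exactly as described above.
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.Cluster;
    import com.tangosol.net.Member;
    public class ClusterCheck {
        public static void main(String[] args) {
            // Join the cluster using whatever WKA/multicast settings this JVM was started with
            Cluster cluster = CacheFactory.ensureCluster();
            System.out.println("Local member id: " + cluster.getLocalMember().getId());
            // List every member this node can see; a healthy grid shows all server nodes here
            for (Object o : cluster.getMemberSet()) {
                Member m = (Member) o;
                System.out.println("Member " + m.getId() + " at " + m.getAddress() + ":" + m.getPort());
            }
            CacheFactory.shutdown();
        }
    }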

  • Performance degradation with -g compiler option

    Hello
    Our measurement of a simple program compiled with and without the -g option shows a big performance difference.
    Machine:
    SunOS xxxxx 5.10 Generic_137137-09 sun4u sparc SUNW,Sun-Fire-V250
    Compiler:
    CC: Sun C++ 5.9 SunOS_sparc Patch 124863-08 2008/10/16
    #include "time.h"
    #include <iostream>
    int main(int argc, char **argv)
    {
        for (int i = 0; i < 60000; i++) {
            int *mass = new int[60000];
            for (int j = 0; j < 10000; j++) {
                mass[j] = j;
            }
            delete [] mass;
        }
        return 0;
    }
    Compilation and execution with -g:
    CC -g -o test_malloc_deb.x test_malloc.c
    ptime test_malloc_deb.x
    real 10.682
    user 10.388
    sys 0.023
    Without -g:
    CC -o test_malloc.x test_malloc.c
    ptime test_malloc.x
    real 2.446
    user 2.378
    sys 0.018
    As you can see, the performance degradation with "-g" is about a factor of 4.
    Our product is compiled with the -g option, and before shipment it is stripped using the 'strip' utility.
    This gives us the possibility of opening customer core files with the non-stripped executable.
    But our tests show that stripping does not give the performance of an executable compiled without '-g'.
    So we are losing performance by using this compilation method.
    Is this expected compiler behavior?
    Is there any way to keep the -g option on and not lose performance?

    In your original compile you don't use any optimisation flags, which tells the compiler to do minimal optimisation - you're basically telling the compiler that you are not interested in performance. Adding -g on top of that requests maximal debug information, so the compiler does even less optimisation, in order that the generated code more closely resembles the original source.
    If you are interested in debug, then -g with no optimisation flags gives you the most debuggable code.
    If you are interested in optimised code with debug, then try -O -g (or some other level of optimisation). The code will still be debuggable - you'll be able to map disassembly to lines of source, but some things may not be accessible.
    If you are using C++, then in SS12 -g switches off front-end inlining, so again you'll get some performance hit. Use -g0 to get both inlining and debug.
    HTH,
    Darryl.

  • Performance Degradation with EJBs

    I have a small J2EE application that consists of a Session EJB calling 3 Entity EJBs that access the database. It is a simple Order capture application. The 3 Entity beans are called Orders, OrderItems and Inventory.
    A transaction consists of inserting a record into the order table, inserting 5 records into the orderitems table and updating the quantity field in the inventory table for each order item in an order. With this transaction I observe performance degradation as the transactions per second decreases dramatically within 5 minutes of running.
    When I modify the transaction to insert a single record into the orderitems table I do not observe performance degradation. The only difference in this transaction is we go through the for loop 1 time as opposed to 5 times. The code is exactly the same as in the previous case with 5 items per order.
    Therefore I believe the problem is a performance degradation on Entity EJBs that get invoked in a loop.
    I am using OC4J 10.1.3.3.
    I am using CMP (Container Managed Persistence) and CMT (Container Managed Transactions). The Entity EJBs were all generated by Oracle JDeveloper.
    EJB version being used is 2.1.

    One thing to consider is downloading and using the Oracle AD4J utility to see if it can help you identify any possible bottlenecks, on the application server or the database.
    AD4J can be used to monitor/profile/trace applications in real time with no instrumentation required on the application. Just install it into the container and go. It can even trace a request from the app server down into the database and show you the situation down there (it needs a DB agent installed to do that).
    Overview:
    http://www.oracle.com/technology/products/oem/pdf/wp_productionappdiagnostics.pdf
    Download:
    http://www.oracle.com/technology/software/products/oem/htdocs/jade.html
    Install/Config Guide:
    http://download.oracle.com/docs/cd/B16240_01/doc/install.102/e11085/toc.htm
    Usage Scenarios:
    http://www.oracle.com/technology/products/oem/pdf/oraclead4j_usagescenarios.pdf

  • Performance Degradation - High fetches and Parses

    Hello,
    My analysis on a particular job trace file drew my attention towards:
    1) High rate of Parses instead of Bind variables usage.
    2) High fetches and poor number/ low number of rows being processed
    Please let me know how the performance degradation can be minimised. Perhaps the high number of SQL*Net client wait events is due to multiple fetches and transactions with the client.
    EXPLAIN PLAN FOR SELECT /*+ FIRST_ROWS (1)  */ * FROM  SAPNXP.INOB
    WHERE MANDT = :A0
    AND KLART = :A1
    AND OBTAB = :A2
    AND OBJEK LIKE :A3 AND ROWNUM <= :A4;
    call     count       cpu    elapsed       disk      query    current        rows
    Parse      119      0.00       0.00          0          0          0           0
    Execute    239      0.16       0.13          0          0          0           0
    Fetch      239   2069.31    2127.88          0   13738804          0           0
    total      597   2069.47    2128.01          0   13738804          0           0
    PLAN_TABLE_OUTPUT
    Plan hash value: 1235313998
    | Id  | Operation                    | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |        |     2 |   268 |     1   (0)| 00:00:01 |
    |*  1 |  COUNT STOPKEY               |        |       |       |            |          |
    |*  2 |   TABLE ACCESS BY INDEX ROWID| INOB   |     2 |   268 |     1   (0)| 00:00:01 |
    |*  3 |    INDEX SKIP SCAN           | INOB~2 |  7514 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(ROWNUM<=TO_NUMBER(:A4))
       2 - filter("OBJEK" LIKE :A3 AND "KLART"=:A1)
       3 - access("MANDT"=:A0 AND "OBTAB"=:A2)
           filter("OBTAB"=:A2)
    18 rows selected.
    SQL> SELECT INDEX_NAME,TABLE_NAME,COLUMN_NAME FROM DBA_IND_COLUMNS WHERE INDEX_OWNER='SAPNXP' AND INDEX_NAME='INOB~2';
    INDEX_NAME      TABLE_NAME                     COLUMN_NAME
    INOB~2          INOB                           MANDT
    INOB~2          INOB                           CLINT
    INOB~2          INOB                           OBTAB
    Is it possible to maximise the rows per fetch?
    call     count       cpu    elapsed       disk      query    current        rows
    Parse      163      0.03       0.00          0          0          0           0
    Execute    163      0.01       0.03          0          0          0           0
    Fetch   174899     55.26      59.14          0    1387649          0     4718932
    total   175225     55.30      59.19          0    1387649          0     4718932
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 27
    Rows     Row Source Operation
      28952  TABLE ACCESS BY INDEX ROWID EDIDC (cr=8505 pr=0 pw=0 time=202797 us)
      28952   INDEX RANGE SCAN EDIDC~1 (cr=1457 pr=0 pw=0 time=29112 us)(object id 202995)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                  174899        0.00          0.16
      SQL*Net more data to client                155767        0.01          5.69
      SQL*Net message from client                174899        0.11        208.21
      latch: cache buffers chains                     2        0.00          0.00
      latch free                                      4        0.00          0.00
    ********************************************************************************

    user4566776 wrote:
    My analysis on a particular job trace file drew my attention towards:
    1) High rate of Parses instead of Bind variables usage.
    But if you look at the text you are using bind variables.
    The first query is executed 239 times - which matches the 239 fetches. You cut off some of the useful information from the tkprof output, but the figures show that you're executing more than once per parse call. The time is CPU time spent using a bad execution plan to find no data -- this looks like a bad choice of index, possibly a side effect of the first_rows(1) hint.
    2) High fetches and poor number/ low number of rows being processed
    The second query is doing a lot of fetches because in 163 executions it is fetching 4.7 million rows at roughly 25 rows per fetch. You might improve performance a little by increasing the array fetch size - but probably not by more than a factor of 2 (see the JDBC sketch after this reply).
    You'll notice that even though you record 163 parse calls for the second statement the number of " Misses in library cache during parse" is zero - so the parse calls are pretty irrelevant, the cursor is being re-used.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan
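    To make the array fetch size suggestion above concrete, here is a minimal JDBC sketch (connection URL, credentials and query are placeholders, not taken from the trace). The Oracle thin driver fetches 10 rows per round trip by default; raising the fetch size cuts down the number of SQL*Net round trips for a query returning millions of rows, although, as noted above, the overall gain is limited.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    public class FetchSizeDemo {
        public static void main(String[] args) throws Exception {
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
            PreparedStatement ps = con.prepareStatement(
                    "SELECT * FROM edidc WHERE mandt = ?");
            ps.setFetchSize(500);               // 500 rows per round trip instead of the default 10
            ps.setString(1, "100");
            ResultSet rs = ps.executeQuery();
            int rows = 0;
            while (rs.next()) {
                rows++;                         // process each row here
            }
            System.out.println("Fetched " + rows + " rows");
            rs.close();
            ps.close();
            con.close();
        }
    }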

  • Performance degradation with Oracle EJB

    Wonder if someone has done any benchmarking of the performance degradation as the number of connections into an EJB-based application increases. We are experiencing rather severe degradation in one such implementation. We would appreciate it if you could share your experience with regard to this.

    Check whether there is any contention in the MTS configuration. Try increasing the number of MTS servers if the number of users is very high.

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not used XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
    SSN varchar2(20),
    XMLREC sys.xmltype
    ) xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
    CODE varchar2(4),
    DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
    <Root>
    <Id>123456789</Id>
    {for $e in $r/Element
        return
        <Element>
          <Subelement1>
            {$e/Subelement1/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement1>
    <Subelement2>
    {$e/Subelement2/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
    </Description>
    </Subelement2>
    <Subelement3>
    {$e/Subelement3/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement3>
    </Element>
    </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.

  • Performance degraded with VirtualListView control

    Hi,
    We are using the VirtualListView control for retrieving LDAP entries from a SunOne directory server. We observed that with the VirtualListView control, search performance degraded considerably (almost down by 95%) compared to retrieving the same result without using the paging mechanism.
    We have configured the directory server for better performance and added indexes on the attributes we are retrieving in the search operation, but performance is still very bad. Has anyone faced this issue before? Are there any settings we can use to improve the performance?
    We do not want to retrieve all records without paging, in order to avoid memory issues.
    Thanks,
    Kiran

    "Do I need to make some setting adjustments?"
    Probably not.
    "The performance degraded drastically."
    Could you elaborate a bit more, please? Could you give an example?
    /r

  • Performance issue with snapmirror and snapshots on target aggregate

    Hi all, does anyone of you have experience with SnapMirror and larger amounts of data? At the moment we do a SnapMirror of about 100 TB of data, distributed over about 10 volumes, to a SATA aggregate on a second filer with 85 4 TB SATA disks (5 x 17-disk RAID groups). Source is a FAS 8040, target is a FAS 8020, both with cDOT 8.3P1. We already moved all workload from the target aggregate, so it hosts only SnapMirror targets.
    On the source side we do 1 snapshot per day and keep 14 snapshots. SnapMirror runs once per day. From counting snapshots I would say the daily change rate is 2-2.5 TB across all volumes. SnapMirror is working fine and finishes in less than 2-3 hours, but container block reclamation and deswizzling are totally killing the aggregate on the target side. We see a continuous load of 30 MB reads, and disk utilisation for all disks except the parity disks is 90-100%. At first we planned 4-hour snapshots, but that is just not possible. At the moment we have disabled deswizzling and get to a point where, if we are lucky, the target aggregate load drops during the night just before the next SnapMirror kicks in.
    We are quite new to NetApp, but it sounds ridiculous that you need so much I/O for just a plain replication and some snapshots. Do you have any experience with snapshots and SnapMirror using SATA disks? I think snapshots and SnapMirror on NetApp are very resource demanding. It is true that the creation of snapshots on NetApp is super efficient and instant, but as soon as a snapshot has to be deleted, container block reclamation kicks in and takes a large amount of disk resources. Same for SnapMirror: it is really cool and stable, but deswizzling for logical-to-physical block mapping with large data affects SnapMirror target performance heavily.
    Best wishes, Stefan

    Hi RPHELANIN, the schedule is 24h. Yes, from time to time deswizzling does not finish, but 24h is our maximum; we had planned for 4h. But I think this is just impossible with NL-SAS disks unless you do not change data on the source :-). We cross-checked deswizzling and container block reclamation by disabling each of them; most of the load is produced by container block reclamation.
    The positive impact of Flash Cache is lower on deswizzling than on container block reclamation. I think most of NetApp's internal workload is sized for 10k and 15k drives. If we compare I/O per GB of a 10k 900 GB drive with a 7k 4 TB drive, we have a ratio of nearly 10:1. Mechanics like reclaiming blocks or mapping virtual to physical blocks seem to produce too much load for NL-SAS drives. On the other hand, deduplication and compression work fine and produce acceptable disk load. Nevertheless we disabled them because they produce too many changed blocks for SnapMirror.
    Best wishes, Stefan

  • Performance issue with calendar and applescript

    Hi Community,
    I have a performance issue using applescript and calendar with this script:
    tell application "Calendar"
              tell calendar "Cal"
                                            set theList to (get {summary, start date, end date, uid} of events)
    end tell
    end tell
    There are approx. 700 events in the calendar "Cal", so the get command takes about 15 seconds. The problem is that iCal is completely blocked for this time; it is not even possible to scroll through the calendar. This problem occurs only under OS X 10.9. With OS X 10.8.x it is still possible to use Calendar even while a time-consuming get command is being processed.
    Any ideas? Maybe there is a way to reduce the task priority of an AppleScript?

    I have to step in here...
    1) Must I set "None" or "On Time"
    - In order for the Calendar to fire an Alarm, it must know what time to fire the alarm. In the event of an All Day Event, it will go off at 12am. The option for "Repeat", below the "Alarm", states the frequency of the event (Daily, Weekly, Monthly, Yearly, etc). So to set an alarm that fires once a month, set the TIME you want the alarm to go off (Make sure "All Day" is unchecked if you want a specific time), then choose "On Time" for the "Alarm", and one of the several "Monthly" options for "Repeat". If I missed something in what you were asking, please let me know and I will do my best to more directly answer your question.
    2) Calendar cannot sync with the Mac.
    - Not directly. However, your phone automatically syncs with your Google Calendar, set up if you create your account. If you so choose, you may export your iCal calendar, import it into your Google calendar, and then use your Google calendar (http://calendar.google.com) to manage your agenda. The changes sync automatically with your device.
    Once again, I hope this shed some light on things. To the Verizon rep who originally answered this question: I have no intention to bash you, however please bear in mind that your opinions and comments will always be held in higher regard than mine, so if you choose to answer a question, please try to solve the problem as opposed to just answer the question. I have experience with all manner of devices and operating systems, from WebOS to BB to iOS to Android, and I believe this phone has the best hardware coupled with a solid operating system in TouchWiz, and I don't want to see people frustrated with these devices by questions that get nothing more than, "You can't do that" answers from the people that are expected to support them.

  • Performance management with ESS and no MSS??

    Hello Gurus,
    I have a very peculiar scenario regarding ESS/MSS.
    Let me first post our requirements:
    We have implemented all modules of HR along with OM, and we are now in the process of implementing Performance Management along with ESS only. I am stressing ESS only here, with NO MSS functionality.
    Now my question is: is it really possible in the first place to have performance management functionality without MSS and just with standalone ESS? If yes, what options do I have to configure the performance management system and the ESS system so that the workflows for planning, rating an employee, and approvals regarding appraisals can be achieved?
    Is it possible to assign an ESS role to the manager so that when the manager logs in to ESS, he/she can go ahead with the performance management process and check employees just like in MSS?
    Thanks a lot for your time.
    Best Regards.
    Karan.

    Hi,
    In your case, check the HR Administrator role for final ratings and other things; also check the R/3 desktop services.
    But why are they not going for MSS? OM is in place, right?
    regards
    rafi

  • Performance degradation with 11.0.2 CS5 update

    Hi!
    Has anyone else run into a problem with performance reduction/degradation in GPU mode after updating to 11.0.2 of Flash Pro CS5? I've been working on a breakout-style game (unBrix), which I carefully built up to run at very close to 60 fps, and it has been fine - until I updated Flash to 11.0.2 (to get Android publishing to work, and to fix a few other issues).
    I have confirmed that downgrading back to the release version of Flash CS5 fixes the performance stuttering that I have noticed since updating to 11.0.2.
    I'd love to know if anyone else has noticed a similar problem - or even better isolated the cause - or if anyone has tried downgrading back to see if there is an improvement (I test this by installing the CS5 trial on a VMWare image if that helps).
    Thanks,
    Kevin N.

    I found the same problem with the new update and have already written about it in this forum.
    You can find my post here: http://forums.adobe.com/message/3214594#3214594
    But like you, I don't have any answers.
    I also tried some benchmark tests and found that the FPS result is the same for the previous and the updated packager.
    So I think the problem is only visual. The new packager drops a lot of frames and looks very slow (((

  • LDAP/SSL performance degradation with 1.6.29/1.6.30

    Hi,
    we are running an application within a Tomcat 6.0.35 server on RHEL 5.7/i386 that queries our company's Active Directory using LDAP over SSL. One of the queries involves expanding a large distribution list. Since the upgrade from JDK 1.6.27 to 1.6.29 (or 1.6.30) the performance of this LDAP query has degraded dramatically, from about 8 seconds to more than 300 seconds. This only happens when encrypting the LDAP connection.
    We are not sure how to debug this further. Which information would we need to provide to get to the root of this? I was thinking that perhaps the Tomcat output with the javax.net.debug=ssl,handshake property set for 1.6.27 and 1.6.29/30 would be sufficient?
    With Java 1.6.29/30, the basic request/reply exchange between Tomcat and the AD server looks like:
    TP-Processor11, WRITE: TLSv1 Application Data, length = 32
    TP-Processor11, WRITE: TLSv1 Application Data, length = 160
    Thread-270, READ: TLSv1 Application Data, length = 16368
    Thread-270, READ: TLSv1 Application Data, length = 16368
    Thread-270, READ: TLSv1 Application Data, length = 11920
    TP-Processor11, WRITE: TLSv1 Application Data, length = 32
    TP-Processor11, WRITE: TLSv1 Application Data, length = 160
    Thread-270, READ: TLSv1 Application Data, length = 16368
    Thread-270, READ: TLSv1 Application Data, length = 16368
    Thread-270, READ: TLSv1 Application Data, length = 11920
    When using Java 1.6.27, we see:
    TP-Processor12, WRITE: TLSv1 Application Data, length = 208
    Thread-42, READ: TLSv1 Application Data, length = 16368
    Thread-42, READ: TLSv1 Application Data, length = 16368
    Thread-42, READ: TLSv1 Application Data, length = 5696
    TP-Processor12, WRITE: TLSv1 Application Data, length = 208
    Thread-42, READ: TLSv1 Application Data, length = 16368
    Thread-42, READ: TLSv1 Application Data, length = 16368
    Thread-42, READ: TLSv1 Application Data, length = 5696
    Looking at the 32 bytes long requests (with javax.net.debug=all set), we see:
    Padded plaintext before ENCRYPTION: len = 32
    0000: 30 0C C2 32 83 6E 9F D8 8F 5E E8 47 7A 0B 9A F1 0..2.n...^.Gz...
    0010: 7D 44 78 0B 9E 0A 0A 0A 0A 0A 0A 0A 0A 0A 0A 0A .Dx.............
    TP-Processor1, WRITE: TLSv1 Application Data, length = 32
    Which doesn't make a whole lot of sense to us...
    Any help debugging this further would be most welcome.
    Cheers
    Stefan

    Since you've determined that your problem is related to the use of TLS, your posting is likely to get a quicker response on the Java Secure Socket Extension (JSSE) forum. When you do get a resolution, please post a link to it on this thread to close the loop. Thanks.
    Arshad Noor
    StrongAuth, Inc.
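    For what it is worth, here is a minimal standalone sketch (host, bind DN, search base and filter are placeholders, not from the thread) that runs the same kind of group-expansion search over ldaps:// with the javax.net.debug tracing mentioned above switched on. Running it once under 1.6.0_27 and once under 1.6.0_29/30 gives two handshake/record traces that can be compared outside Tomcat.
    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.directory.InitialDirContext;
    import javax.naming.directory.SearchControls;
    import javax.naming.directory.SearchResult;
    public class LdapsTrace {
        public static void main(String[] args) throws Exception {
            // The same debug switch discussed in the post, set programmatically
            System.setProperty("javax.net.debug", "ssl,handshake");
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldaps://ad.example.com:636");
            env.put(Context.SECURITY_AUTHENTICATION, "simple");
            env.put(Context.SECURITY_PRINCIPAL, "CN=svc-ldap,OU=Service,DC=example,DC=com");
            env.put(Context.SECURITY_CREDENTIALS, "secret");
            InitialDirContext ctx = new InitialDirContext(env);
            SearchControls sc = new SearchControls();
            sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
            sc.setReturningAttributes(new String[] { "member" });
            long start = System.currentTimeMillis();
            NamingEnumeration<SearchResult> results =
                    ctx.search("DC=example,DC=com", "(cn=Big-Distribution-List)", sc);
            while (results.hasMore()) {
                results.next();                 // drain the result set
            }
            System.out.println("Search took " + (System.currentTimeMillis() - start) + " ms");
            ctx.close();
        }
    }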
