How do you do query caching, aggregates, and ETL optimization?

Hello
How do you do the following? A document or step-wise approach would be really handy.
1. How do you do query caching? What are the pros and cons? How do you optimize it?
2. How do you create aggregates? Is there a step-by-step method?
3. How do you optimize ETL? What are its benefits? Again, a document would be handy.
Thanks

Search SDN and ASUG for many good presentations.
Here's a couple to get you started:
http://www.asug.com/client_files/Calendar/Upload/ACF3DBF.ppt
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/biw/p-r/performance in sap bw.pdf
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/events/asug-biti-03/sap bw query performance tuning with aggregates

Similar Messages

  • Continuous Query Caching - Expensive?

    Hello,
    I have had a look at the documentation but I still cannot find a reasonable answer to the following question: how expensive are continuous query caches?
    Is it appropriate to have many of them?
    Is the following example an acceptable usage of continuous query caching (i.e., does it scale)?
    In the context of a web application:
    User logs onto a website
    User performs a "Search" for financial instruments
    A continuous query cache is created with a filter for those instruments returned (say, 50) to listen to price updates.
    If the user pages, or does another search, the query cache is released and a new one, with an updated filter, is created.
    Does it make a difference if we are using the extend client?

    Hi,
    So 100 CQCs is probably not too excessive depending on the configuration of the process instantiating the CQCs and the cluster size etc.
    Each CQC will hold its own set of deserialized keys and values, so yes they are distinct objects, although a CQC of 50 entries would not be very big.
    One query I have - you mention that this is a web application, but you also mention an Extend client. Is your web app an Extend client of the main cluster? Is there a reason why you did this? Most people would make a web app a storage-disabled cluster member so it would perform a bit better. Provided the web app sits on a server that is very close in network terms to the cluster (i.e., on the same switch), I would make it part of the cluster - or is the web app the thing that is in the "regional environment"?
    If you are running CQCs over Extend, then there used to be some issues with this if the Extend connection was lost. AFAIK this is supposed to be fixed in later patches, so I would get 3.7.1.8 and make sure you test that the web app continues to work and properly fails over if you kill its Extend connection. When the CQC fails over, it will reinitialize all its data, so you will need to cope with that if you are pushing changes based on the CQC.
    JK
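
    For what it's worth, here is a minimal sketch of the search-then-listen pattern described above. The cache name "prices", the getInstrumentId() accessor, and the wrapper class are assumptions for illustration, not taken from the thread:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.net.cache.ContinuousQueryCache;
        import com.tangosol.util.Filter;
        import com.tangosol.util.extractor.ReflectionExtractor;
        import com.tangosol.util.filter.InFilter;
        import java.util.Set;

        // Hypothetical wrapper: one CQC per user search, released on the next search.
        public class InstrumentPriceView {
            private ContinuousQueryCache cqc;

            public void onSearch(Set instrumentIds) {
                // Drop the CQC from the previous search (and its listener)
                // before materializing a new filtered view.
                if (cqc != null) {
                    cqc.release();
                }
                NamedCache prices = CacheFactory.getCache("prices");
                Filter filter = new InFilter(
                        new ReflectionExtractor("getInstrumentId"), instrumentIds);
                // The CQC keeps a local, continuously updated copy of just the
                // entries matching the filter (about 50 here), so price updates
                // arrive as events without re-querying the cluster.
                cqc = new ContinuousQueryCache(prices, filter);
            }
        }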

  • How does a query read an aggregate?

    Experts!
    I have a problem with my aggregate design.
    I have created an aggregate on one of my cubes. When I try to check it using RSRT in debug mode, it looks like the query is not hitting the aggregate I have just created; it goes to another aggregate.
    Now, does that mean the query will always go to the same aggregate? Or when users pull different characteristics from the free characteristics, might my query jump to another aggregate?
    Or, whatever aggregate it hits at the beginning, will the query stick only to that one?
    How does the process work?
    thanks

    When you're not sure how to design a good aggregate, let the system propose one for you, but you have to use the cube in question for some time first. The reason is that the system needs to gather statistics before it can propose a good one for you.
    Designing an aggregate (drag and drop) is easy, but designing a good one is not as easy as it looks. It requires some skill. The good news is that skill can be learned.
    When you execute a query, the OLAP Processor will look for data (based on the criteria) in the following order:
    Local OLAP Cache
    Global OLAP Cache
    Aggregate
    Cube
    The goal is for the OLAP Processor to hit one of the first three; then bingo, a good hit. But if all of them miss, it has to go to the cube to fetch the data, which defeats the purpose of the aggregate.
    Remember, the main purpose of an aggregate is to speed up data retrieval, but there is associated overhead. You should check the ratings and delete bad aggregates.
    Cheers.
    Jen
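
    Purely as an illustration of the lookup order Jen describes (the real OLAP Processor is SAP-internal; every name below is invented):

        import java.util.HashMap;
        import java.util.Map;

        // Toy sketch of the four-step read path: local OLAP cache,
        // global OLAP cache, aggregate, and finally the cube itself.
        public class OlapReadPathSketch {
            private final Map<String, Object> localOlapCache = new HashMap<>();
            private final Map<String, Object> globalOlapCache = new HashMap<>();

            public Object read(String queryKey) {
                Object r = localOlapCache.get(queryKey);          // 1. local OLAP cache
                if (r == null) r = globalOlapCache.get(queryKey); // 2. global OLAP cache
                if (r == null) r = readFromAggregate(queryKey);   // 3. matching aggregate
                if (r == null) r = readFromCube(queryKey);        // 4. last resort: the cube
                return r;
            }

            private Object readFromAggregate(String queryKey) { return null; }    // stub
            private Object readFromCube(String queryKey) { return new Object(); } // stub
        }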

  • How do you find whether a query touches aggregates or not?

    Hi gurus
         How do you find whether a query touches aggregates or not?
    Thanks in advance
    Raj

    Hi Rajaiah,
    You can test this from transaction RSRT -> Execute and Debug -> Display Aggregate Found.
    Hope it helps.
    BR
    Stefan

  • How to write an SQL query and apply aggregate functions to it

    Hello experts,
    I've a task to write an SQL query on three tables and do an inner join on them. I've accomplished this using transaction SQVI. However, now I need to write a query and apply SQL aggregate functions to it, i.e. SUM, COUNT, MAX, MIN, etc. Please can someone tell me how I can write an SQL query with aggregate functions in SAP?
    Thanks a lot in advance

    Hi Mr. Cool,
    You can see the code below for using aggregate functions,
    where ARTIST and SEATSOCCU are the field names on which you want to perform these functions.
    DATA: TOTAL_ENTRIES TYPE I,
          TOTAL_ATT     TYPE I,
          MAX_ATT       TYPE I,
          AVG_ATT       TYPE F. " AVG requires a floating-point (type F) target field
    SELECT COUNT( DISTINCT ARTIST )
           SUM( SEATSOCCU )
           MAX( SEATSOCCU )
           AVG( SEATSOCCU )
           FROM YCONCERT
           INTO (TOTAL_ENTRIES, TOTAL_ATT, MAX_ATT, AVG_ATT).
    Thanks
    Lalit Gupta

  • Right time to do aggregates/query caching

    Hi
    When is the right time to use aggregates/query caching? What is the business scenario?
    thanks

    Hi Jack,
    Refer to the document link below for details.
    https://websmp106.sap-ag.de/~sapidb/011000358700004339892004
    Hope it helps.
    Cheers
    Anurag

  • Query Cache to Derive Summary Data - Advice?

    Good Evening, I am faced with a problem and I was hoping to get a nudge in the right direction.
         I have a cache that is storing the status of orders and the respective times they took to process in each status going through our system. I need to retrieve some average times from that. I need to calculate the average time it took for an order to go from status "A" (endtime) to status "B" (starttime). So essentially I will need to retrieve the difference between variables of two different statuses for every order, then take the average of those. I know how to query the cache to get my result set and then iterate across it and calculate the average, but I was wondering/hoping there was a more elegant solution.
         I have been reading through the user guide (http://wiki.tangosol.com/display/COH32UG/Provide+a+Data+Grid) and reading up on Aggregators and Agents, but there are no complete examples for me to view, and I am unsure whether these approaches would help with my goal.
         Knowing my task, is there an approach I should focus on, and if so, is there documentation anywhere to support it?
         Thanks in Advance!
         -- Grant
         Our data structure looks something like this:
         class ClientReport {
             public long lasttimeaccessed;
             public long clientid;
             // ...some other stuff....
             Map orders; // map of Order objects, keyed on orderid
         }
         class Order {
             public long orderID;
             // ...some other stuff....
             Map statuses; // map of status changes, keyed on status
         }
         class Status {
             public String status;
             public long starttime;
             public long endtime;
         }

    Hi Grant,
         first of all, you might want to create an index on the status changes so that you can prefilter on them, but for that you would need to implement a custom ValueExtractor (because the status is nested two maps deep) and use it to create the index.
         You can add the index with that ValueExtractor via the addIndex(ValueExtractor, boolean, Comparator) method of the cache.
         The ValueExtractor would need to collect the status strings from each status change in each order and return them as a collection.
         You can then use this index to filter for those report objects which have at least one status change ending up in the expected state.
         You could also create a similar ValueExtractor (let's name it ReportStatusObjectExtractor) which returns all Status objects from all orders in a report, and create an index with this ValueExtractor as well.
         Please be aware that the ValueExtractors need to implement the equals() and hashCode() methods properly. In this case, because the extractors have no parameters, you can implement equals() so that it returns true for all objects of the same class, and hashCode() can return 1.
         This index can then be used to speed up evaluation of the aggregator, because you would not need to deserialize the entire report instance.
         You can now create your own InvocableMap.EntryAggregator implementation, which should also implement InvocableMap.ParallelAwareAggregator.
         The getParallelAggregator() method should return an instance of a Serializable class again implementing EntryAggregator; in its aggregate method it should call the extract method on each entry with an instance of the second mentioned ValueExtractor (kept as a constant) to extract the status objects belonging to the report instance from the index:
          public class ReportStatusStringExtractor implements ValueExtractor, Serializable {
              public static final ReportStatusStringExtractor INSTANCE = new ReportStatusStringExtractor();
              public Object extract(Object oTarget) {
                  ClientReport report = (ClientReport) oTarget;
                  // extract each status string from each order object and
                  // return all of them in a collection
                  return null; // placeholder for the collecting logic
              }
              public boolean equals(Object other) {
                  return other != null && other.getClass().equals(this.getClass());
              }
              public int hashCode() {
                  return 1;
              }
          }

          public class ReportStatusObjectExtractor implements ValueExtractor, Serializable {
              public static final ReportStatusObjectExtractor INSTANCE = new ReportStatusObjectExtractor();
              public Object extract(Object oTarget) {
                  ClientReport report = (ClientReport) oTarget;
                  // extract each Status object from each order object and
                  // return all of them in a collection
                  return null; // placeholder for the collecting logic
              }
              public boolean equals(Object other) {
                  return other != null && other.getClass().equals(this.getClass());
              }
              public int hashCode() {
                  return 1;
              }
          }

          public class OrderAverageAggregator implements InvocableMap.ParallelAwareAggregator {
              public static final OrderAverageAggregator INSTANCE = new OrderAverageAggregator();
              public InvocableMap.EntryAggregator getParallelAggregator() {
                  return OrderAverageParallelAggregator.INSTANCE;
              }
              public Object aggregateResults(Collection collResults) {
                  // join results from all servers; the object returned from this
                  // method is what the cache.aggregate call returns
                  return null; // placeholder for the joining logic
              }
              public Object aggregate(Set setEntries) {
                  // this method is invoked instead of the parallel path
                  // if the aggregation runs on a replicated cache
                  return aggregateResults(
                      Collections.singletonList(
                          OrderAverageParallelAggregator.INSTANCE.aggregate(setEntries)));
              }
          }

          public class OrderAverageParallelAggregator implements InvocableMap.EntryAggregator, Serializable {
              public static final OrderAverageParallelAggregator INSTANCE = new OrderAverageParallelAggregator();
              // aggregate method of the parallel aggregator
              public Object aggregate(Set setEntries) {
                  Object result = null;
                  Iterator iter = setEntries.iterator();
                  while (iter.hasNext()) {
                      InvocableMap.Entry entry = (InvocableMap.Entry) iter.next();
                      Collection statusObjects = (Collection)
                          entry.extract(ReportStatusObjectExtractor.INSTANCE);
                      // ... do averaging ...
                  }
                  // the objects returned from this method on each storage-enabled node
                  // are passed in a collection to the aggregateResults method
                  // of the OrderAverageAggregator class
                  return (Serializable) result;
              }
          }

          // code doing the querying
          NamedCache cache = CacheFactory.getCache(...);
          Object result = cache.aggregate(
              new ContainsFilter(ReportStatusStringExtractor.INSTANCE, "expectedStatus"),
              OrderAverageAggregator.INSTANCE);

          // code adding the indexes; this needs to run only once after the cluster is started
          NamedCache cache = CacheFactory.getCache(...);
          cache.addIndex(ReportStatusStringExtractor.INSTANCE, false, null);
          cache.addIndex(ReportStatusObjectExtractor.INSTANCE, false, null);

         The class files for the two extractor classes and the OrderAverageParallelAggregator class must reside in the classpath of the storage-enabled cache JVMs.
         I hope this helps,
         Robert
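
    As a purely illustrative aside (not part of Robert's answer): one simple way to fill in the "... do averaging ..." placeholder is to have each storage node return a partial (sum, count) pair, which the non-parallel side then combines. The long[] partial-result convention below is an assumption:

          // Hypothetical convention: each node's aggregate(...) returns
          // new long[] {sumOfDurations, sampleCount}, accumulating per entry:
          //     sum   += statusB.starttime - statusA.endtime;
          //     count += 1;
          // OrderAverageAggregator.aggregateResults then combines the pairs:
          public Object aggregateResults(Collection collResults) {
              long sum = 0;
              long count = 0;
              for (Object o : collResults) {
                  long[] partial = (long[]) o;
                  sum += partial[0];
                  count += partial[1];
              }
              // average time from status "A" (endtime) to status "B" (starttime)
              return count == 0 ? null : Double.valueOf((double) sum / count);
          }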

  • Query cache, query monitor

    Hi
    What is the purpose of the query monitor and the query cache? Can you please explain?
    Points assured.
    Regards
    Rekha

    As the verbiage indicates, the query monitor is for monitoring the runtime performance of BW queries; it is one of the tools in BW to monitor query performance. The transaction to run the query monitor is RSRT.
    In RSRT, you can execute queries in various modes and you can, to some extent, force a query to be executed along a certain path; for example, you can simulate the execution of a query without using an aggregate, without using the cache, etc.
    In the monitor you can also view how the query is being executed and diagnose the possible causes of why a query is running slow.
    Caching stores the query results in the memory of the BW system's application server. If you cache a query, the runtime performance will improve considerably, because the result set is stored in memory, and each time the query is run, the OLAP engine does not have to read the database to fetch the records.
    Query caching has some limitations: if the query result changes, the cache will not help, because the new result set has to be read from the database again and presented.
    You can get more on this at help.sap.com
    Ravi Thothadri

  • How to improve the query performance or tune query from Explain Plan

    Hi
    The following is my explain plan for an SQL query (the plan was generated by Toad v9.7). How can I fix the query?
    SELECT STATEMENT ALL_ROWSCost: 4,160 Bytes: 25,296 Cardinality: 204                                         
         8 NESTED LOOPS Cost: 3 Bytes: 54 Cardinality: 1                                    
              5 NESTED LOOPS Cost: 2 Bytes: 23 Cardinality: 1                               
                   2 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 13 Cardinality: 1                          
                        1 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                     
                   4 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_CUST_ACCOUNTS Cost: 1 Bytes: 10 Cardinality: 1                          
                        3 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 1 Cardinality: 1                     
              7 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_PARTIES Cost: 1 Bytes: 31 Cardinality: 1                               
                   6 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_PARTIES_U1 Cost: 1 Cardinality: 1                          
         10 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                                    
              9 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                               
         15 NESTED LOOPS Cost: 2 Bytes: 29 Cardinality: 1                                    
              12 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                               
                   11 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                          
              14 TABLE ACCESS BY INDEX ROWID TABLE ONT.OE_ORDER_HEADERS_ALL Cost: 1 Bytes: 17 Cardinality: 1                               
                   13 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Cardinality: 1                          
         21 FILTER                                    
              16 TABLE ACCESS FULL TABLE ONT.OE_TRANSACTION_TYPES_TL Cost: 2 Bytes: 1,127 Cardinality: 49                               
              20 NESTED LOOPS Cost: 2 Bytes: 21 Cardinality: 1                               
                   18 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                          
                        17 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                     
                   19 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Bytes: 9 Cardinality: 1                          
         23 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                                    
              22 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                               
         45 NESTED LOOPS Cost: 4,160 Bytes: 25,296 Cardinality: 204                                    
              42 NESTED LOOPS Cost: 4,150 Bytes: 23,052 Cardinality: 204                               
                   38 NESTED LOOPS Cost: 4,140 Bytes: 19,992 Cardinality: 204                          
                        34 NESTED LOOPS Cost: 4,094 Bytes: 75,850 Cardinality: 925                     
                             30 NESTED LOOPS Cost: 3,909 Bytes: 210,843 Cardinality: 3,699                
                                  26 PARTITION LIST ALL Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18          
                                       25 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_HEADERS Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18     
                                            24 INDEX SKIP SCAN INDEX XLA.XLA_AE_HEADERS_N1 Cost: 264 Cardinality: 1,398,115 Partition #: 29 Partitions accessed #1 - #18
                                  29 PARTITION LIST ITERATOR Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32           
                                       28 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_LINES Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32      
                                            27 INDEX RANGE SCAN INDEX (UNIQUE) XLA.XLA_AE_LINES_U1 Cost: 1 Cardinality: 1 Partition #: 32
                             33 PARTITION LIST ITERATOR Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35                
                                  32 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_DISTRIBUTION_LINKS Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35           
                                       31 INDEX RANGE SCAN INDEX XLA.XLA_DISTRIBUTION_LINKS_N3 Cost: 1 Cardinality: 1 Partition #: 35      
                        37 PARTITION LIST SINGLE Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 38                     
                             36 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_EVENTS Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 39 Partitions accessed #2               
                                  35 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_EVENTS_U1 Cost: 1 Cardinality: 1 Partition #: 40 Partitions accessed #2          
                   41 PARTITION LIST SINGLE Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 41                          
                        40 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_TRANSACTION_ENTITIES Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 42 Partitions accessed #2                    
                             39 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_TRANSACTION_ENTITIES_U1 Cost: 1 Cardinality: 1 Partition #: 43 Partitions accessed #2               
              44 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_CODE_COMBINATIONS Cost: 1 Bytes: 11 Cardinality: 1                               
                   43 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_CODE_COMBINATIONS_U1 Cost: 1 Cardinality: 1

    damorgan wrote:
    Tuning is NOT about reducing the cost of i/o.
    i/o is only one of many contributors to cost and only one of many contributors to waits.
    Any time you would like to explore this further run this code:
    SELECT 1 FROM dual
    WHERE regexp_like(' ','^*[ ]*a');
    but not on a production box, because you are going to experience an extreme tuning event with zero i/o.
    And when I say "extreme" I mean "EXTREME!"
    You've been warned.
    I think you just need a faster server.
    SQL> set autotrace traceonly statistics
    SQL> set timing on
    SQL> select 1 from dual
      2  where
      3  regexp_like   (' ','^*[ ]*a');
    no rows selected
    Elapsed: 00:00:00.00
    Statistics
              1  recursive calls
              0  db block gets
              0  consistent gets
              0  physical reads
              0  redo size
            243  bytes sent via SQL*Net to client
            349  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed
    Repeated from an Oracle 10.2.0.x instance:
    SQL> SELECT DISTINCT SID FROM V$MYSTAT;
           SID
           310
    SQL> ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    Session altered.
    SQL> select 1 from dual
      2  where
      3  regexp_like   (' ','^*[ ]*a');
    The session is hung. Wait a little while and connect to the database using a different session:
    COLUMN STAT_NAME FORMAT A35 TRU
    SET PAGESIZE 200
    SELECT
      STAT_NAME,
      VALUE
    FROM
      V$SESS_TIME_MODEL
    WHERE
      SID=310;
    STAT_NAME                                VALUE
    DB time                                   9247
    DB CPU                                    9247
    background elapsed time                      0
    background cpu time                          0
    sequence load elapsed time                   0
    parse time elapsed                        6374
    hard parse elapsed time                   5997
    sql execute elapsed time                  2939
    connection management call elapsed        1660
    failed parse elapsed time                    0
    failed parse (out of shared memory)          0
    hard parse (sharing criteria) elaps          0
    hard parse (bind mismatch) elapsed           0
    PL/SQL execution elapsed time               95
    inbound PL/SQL rpc elapsed time              0
    PL/SQL compilation elapsed time              0
    Java execution elapsed time                  0
    repeated bind elapsed time                  48
    RMAN cpu time (backup/restore)               0
    Seems to be using a bit of time for the hard parse (hard parse elapsed time). Wait a little while, then re-execute the query:
    STAT_NAME                                VALUE
    DB time                                   9247
    DB CPU                                    9247
    background elapsed time                      0
    background cpu time                          0
    sequence load elapsed time                   0
    parse time elapsed                        6374
    hard parse elapsed time                   5997
    sql execute elapsed time                  2939
    connection management call elapsed        1660
    failed parse elapsed time                    0
    failed parse (out of shared memory)          0
    hard parse (sharing criteria) elaps          0
    hard parse (bind mismatch) elapsed           0
    PL/SQL execution elapsed time               95
    inbound PL/SQL rpc elapsed time              0
    PL/SQL compilation elapsed time              0
    Java execution elapsed time                  0
    repeated bind elapsed time                  48
    RMAN cpu time (backup/restore)               0
    The session is not reporting additional CPU usage or parse time.
    Let's check one of the session's statistics:
    SELECT
      SS.VALUE
    FROM
      V$SESSTAT SS,
      V$STATNAME SN
    WHERE
      SN.NAME='consistent gets'
      AND SN.STATISTIC#=SS.STATISTIC#
      AND SS.SID=310;
         VALUE
           163
    Not many consistent gets after 20+ minutes.
    Let's take a look at the plan:
    SQL> SELECT SQL_ID,CHILD_NUMBER FROM V$SQL WHERE SQL_TEXT LIKE 'select 1 from dual%';
    SQL_ID        CHILD_NUMBER
    04mpgrzhsv72w            0
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('04mpgrzhsv72w',0,'TYPICAL'))
    select 1 from dual where regexp_like   (' ','^*[ ]*a')
    NOTE: cannot fetch plan for SQL_ID: 04mpgrzhsv72w, CHILD_NUMBER: 0
          Please verify value of SQL_ID and CHILD_NUMBER;
          It could also be that the plan is no longer in cursor cache (check v$sql_plan)
    No plan...
    Let's take a look at the 10053 trace file:
    Registered qb: SEL$1 0x19157f38 (PARSER)
      signature (): qb_name=SEL$1 nbfros=1 flg=0
        fro(0): flg=4 objn=258 hint_alias="DUAL"@"SEL$1"
    Predicate Move-Around (PM)
    PM: Considering predicate move-around in SEL$1 (#0).
    PM:   Checking validity of predicate move-around in SEL$1 (#0).
    CBQT: Validity checks failed for 7uqx4guu04x3g.
    CVM: Considering view merge in query block SEL$1 (#0)
    CBQT: Validity checks failed for 7uqx4guu04x3g.
    Subquery Unnest
    SU: Considering subquery unnesting in query block SEL$1 (#0)
    Set-Join Conversion (SJC)
    SJC: Considering set-join conversion in SEL$1 (#0).
    Predicate Move-Around (PM)
    PM: Considering predicate move-around in SEL$1 (#0).
    PM:   Checking validity of predicate move-around in SEL$1 (#0).
    PM:     PM bypassed: Outer query contains no views.
    FPD: Considering simple filter push in SEL$1 (#0)
    FPD:   Current where clause predicates in SEL$1 (#0) :
              REGEXP_LIKE (' ','^*[ ]*a')
    kkogcp: try to generate transitive predicate from check constraints for SEL$1 (#0)
    predicates with check contraints:  REGEXP_LIKE (' ','^*[ ]*a')
    after transitive predicate generation:  REGEXP_LIKE (' ','^*[ ]*a')
    finally:  REGEXP_LIKE (' ','^*[ ]*a')
    apadrv-start: call(in-use=592, alloc=16344), compile(in-use=37448, alloc=42256)
    kkoqbc-start
                : call(in-use=592, alloc=16344), compile(in-use=38336, alloc=42256)
    kkoqbc-subheap (create addr=000000001915C238)
    Looks like the query never had a chance to start executing - it is still parsing after 20 minutes.
    I am not sure that this is a good example - the query either executes very fast or never gets a chance to start executing. But it might still make your point: physical I/O is not always the problem when performance problems are experienced.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • How to find the database query

    Hi all,
    How do I find the physical query from a report? Here I'm checking it like this: Administrator -> Manage Sessions -> Report -> View Log
    +++Administrator:3220000:3220011:----2011/01/03 02:51:43
    -------------------- SQL Request:
    +++Administrator:3220000:3220011:----2011/01/03 02:51:43
    -------------------- General Query Info:
    Repository: Star, Subject Area: AAA, Presentation: AAA
    +++Administrator:3220000:3220011:----2011/01/03 02:51:43
    -------------------- Cache Hit on query:
    Matching Query:     Created by:     Administrator
    +++Administrator:3220000:3220011:----2011/01/03 02:51:43
    -------------------- Query Status: Successful Completion
    +++Administrator:3220000:3220011:----2011/01/03 02:51:43
    -------------------- Physical Query Summary Stats: Number of physical queries 1, Cumulative time 0, DB-connect time 0 (seconds)
    +++Administrator:3220000:3220011:----2011/01/03 02:51:43
    -------------------- Rows returned to Client 6
    +++Administrator:3220000:3220011:----2011/01/03 02:51:43
    -------------------- Logical Query Summary Stats: Elapsed time 0, Response time 0, Compilation time 0 (seconds)
    But here I am not able to find the database query. How do I find that query? Please help.
    Edited by: Sonal on Jan 3, 2011 5:29 AM

    Hi,
    As Daan said, set the variable in the Advanced tab, but not the log level,
    because our problem is to bypass the BI Server cache.
    Do this:
    1. Go to the Advanced tab.
    2. In the Prefix field (scroll down to see this field), enter this:
    SET VARIABLE DISABLE_CACHE_HIT = 1;
    Note: Make sure the statement above ends with a semicolon, and do not click the Set SQL option.
    3. Save the report, then run it.
    4. After you see the query, I recommend you take out that prefix and save the report again.

  • SAP BW 3.5 Query Cache - no data for queries

    Dear experts,
    we have a problem with the SAP BW query cache (BW 3.5). Sometimes, after the queries have been pre-calculated using web templates, no figures are available when running a web report, or when doing a drilldown or filter navigation. One way to solve the issue is to delete the cache for that query; however, that solution is not reasonable or workable. The problem occurs non-reproducibly for different queries.
    Is this a "normal" error of SAP BW that we have to live with, or is there a solution? Any hints are greatly appreciated.
    Thanks in advance & kind regards,
    daniel

    Hi Daniel,
    Try working without the cache for those queries.
    In any case, you should check how the cache option is configured for those queries.
    You can see that in transaction RSRV.
    Hope this helps

  • Finding query access frequency or how many times a query has been executed?

    Dear Experts
    I need to find the total access frequency of individual queries that are requested by the users, say at a particular time.
    Say there are 20 distinct queries requested within a window of 3 hours. All 20 queries, or some of them, may be requested more than 2 or 3 times in that window by other users. So if, say, query Q1 is requested 5 times in that window, its access frequency (the number of times it was executed) is 5.
    From where, and how, can I get this count of query access frequency, i.e. how many times a query has been executed at a particular time or in a session?
    Normally we know there are SQL history dynamic performance views, or, if it is possible to query the shared pool library cache for the SQL area, it may be possible to find the total number of executions for a query. But how exactly? If anyone knows, please help me with this.
    Regards-
    Engr. A.N.M. Bazlur Rashid
    OCP DBA

    That's one of the stats reported by Statspack - assuming that your query does sufficient work to meet the thresholds for the standard report. Executions is of course one of the columns of v$sql, so you might just wish to sample that yourself. Finally, if you are on 11g, the SQL you are interested in is relatively low in resource consumption, and you are licensed for AWR, then you can use the slightly madly named "colored SQL" feature, which ensures that a specific statement will always be sampled for AWR.
    Niall Litchfield
    http://www.orawin.info/
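
    If you do sample v$sql yourself, a minimal sketch along these lines might do (assumptions: JDBC access with privileges on V$SQL; the connection string, credentials, and the /* Q1 */ comment used to tag the query are placeholders):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        // Samples the cumulative EXECUTIONS counter from V$SQL; run it twice
        // and subtract to get the executions within a time window.
        public class SqlExecutionSampler {
            public static void main(String[] args) throws SQLException {
                try (Connection con = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
                     PreparedStatement ps = con.prepareStatement(
                         "SELECT sql_id, executions FROM v$sql WHERE sql_text LIKE ?")) {
                    ps.setString(1, "%/* Q1 */%"); // tag queries with a comment to find them
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getString("sql_id")
                                    + " executed " + rs.getLong("executions") + " times");
                        }
                    }
                }
            }
        }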

  • 10G NEW FEATURE-HOW TO FLUSH THE BUFFER CACHE

    Product: ORACLE SERVER
    Date written: 2004-05-25
    10G NEW FEATURE - HOW TO FLUSH THE BUFFER CACHE
    ===============================================
    PURPOSE
    This note describes the Oracle 10g new feature that lets you flush the
    buffer cache manually.
    Explanation
    Introduced as a new feature in Oracle 10g, all data in the buffer cache
    within the SGA can be cleared with a single command.
    The "alter system" privilege is required for this operation.
    The command to flush the buffer cache is shown below.
    Caution: this operation can affect database performance, so use it with care.
    SQL > alter system flush buffer_cache;
    Example
    Query x$bh to verify what is present in the buffer cache.
    The x$bh view exposes the buffer cache header information.
    First, as a test, create a table and insert rows into it, then
    query x$bh for the DBARFIL column (relative file number of the block) and FILE#.
    1) Create the test table
    SQL> Create table Test_buffer (a number)
    2 tablespace USERS;
    Table created.
    2) Insert into the test table
    SQL> begin
    2 for i in 1..1000
    3 loop
    4 insert into test_buffer values (i);
    5 end loop;
    6 commit;
    7 end;
    8 /
    PL/SQL procedure successfully completed.
    3) Check the object ID
    SQL> select OBJECT_id from dba_objects
    2 where object_name='TEST_BUFFER';
    OBJECT_ID
    42817
    4) Query x$bh for the DBARFIL (file number of block) entries currently held in the buffer cache.
    SQL> select ts#,file#,dbarfil,dbablk,class,state,mode_held,obj
    2 from x$bh where obj= 42817;
    TS# FILE# DBARFIL DBABLK CLASS STATE MODE_HELD OBJ
    9 23 23 1297 8 1 0 7
    9 23 23 1298 9 1 0 7
    9 23 23 1299 4 1 0 7
    9 23 23 1300 1 1 0 7
    9 23 23 1301 1 1 0 7
    9 23 23 1302 1 1 0 7
    9 23 23 1303 1 1 0 7
    9 23 23 1304 1 1 0 7
    8 rows selected.
    5) Flush the buffer cache as follows, then re-run the query above.
    SQL > alter system flush buffer_cache ;
    SQL> select ts#,file#,dbarfil,dbablk,class,state,mode_held,obj
    2 from x$bh where obj= 42817;
    6) Check that the STATE column in x$bh is now 0.
    0 means a free buffer. By confirming that STATE is 0 after the flush,
    you can verify that the flush was carried out manually via the command.
    Reference Documents
    <NOTE. 251326.1>

    I am also having the same issue. Can this be addressed, or does BEA provide 'almost'
    working code for the bargain price of $80k/cpu?
    "Prashanth " <[email protected]> wrote:
    >
    Hi ALL,
    I am using the wl:cache tag for caching purposes. My requirement is such that I have to
    flush the cache based on user activity.
    I have tried all the combinations, but could not achieve the desired result.
    Can somebody guide me on how we can flush the cache?
    TIA, Prashanth Bhat.

  • Query with aggregates over collection of trans. instances throws an error

    Hi, I'm executing a query with aggregates and it throws an exception with the following message: "Queries with aggregates or projections using variables currently cannot be executed in-memory. Either set the javax.jdo.option.IgnoreCache property to true, set IgnoreCache to true for this query, set the kodo.FlushBeforeQueries property to true, or execute the query before changing any instances in the transaction."
    The offending query was on type "class Pago" with filter "productosServicios.contains(item)".
    The class Pago has the field productosServicios, which is a List of Pago$ItemMonto. The relevant code is:
    // pagos is a list of transient instances of type Pago
    KodoQuery query = (KodoQuery) pm.newQuery(Pago.class, pagos);
    query.declareVariables("Pago$ItemMonto item");
    query.setFilter("productosServicios.contains(item)");
    query.setGrouping("item.id");
    query.setResult("item.id as idProductoServicio, sum(montoTotal) as montoTotal");
    query.setResultClass(PagoAgrupado.class);
    where the class PagoAgrupado has the corresponding fields idProductoServicio and montoTotal.
    In other words, I want to aggregate the id field of class ItemMonto over the instances contained in the productosServicios field of class Pago.
    I have set the ignoreCache and kodo.FlushBeforeQueries flags to true in the kodo.properties file and on the instances of the PM and the query, but it has not worked. What can be wrong?
    I'm using Kodo 3.2.4, MySQL 5.0
    Thanks,
    Jaime.
    Message was edited by:
    jdelajaraf

    Thanks, you nailed it! I tried comparing the two files myself, but Bridge told me that the 72.009 dpi document was 72 dpi.
    I have no idea why the resolution mess things up, but as long as I know how to avoid the bug, things are grand!

  • How to clear Web Dynpro ABAP cache?

    Hi,
    Please advise me on how to clear Web Dynpro ABAP cache data.
    The scenario, as most of you might know:
    1. We retrieve data from the table using a query statement, for example when creating an employee.
    2. When we want to update/terminate the same employee on the same day, we get a short dump.
    We checked in the backend that the employee was successfully onboarded and offboarded/updated without any dump.
    I guess it has something to do with clearing the cache/memory on each operation (create/update/delete).
    I found transaction code SWFVISU, where we can maintain the cache configuration, but I'm not sure whether we can use it.
    Is there any way we can avoid the cache issue?
    Thanks
    Praveen

    Hi, Praveen kumar Kadi. Maybe this function module will help you: IQS1_REFRESH_ALL
