OLAP engine poor performance

I created my analytic workspace and populated it with data from the SH sample schema according to the <<OLAP DML Guide>>, and defined an AGGMAP as:
DEFINE AGG.SALES AGGMAP
AGGMAP
     RELATION DIM_PRODUCT.PARENT PRECOMPUTE (DIM_PRODUCT PROD_SUBCATEGORY)
     RELATION DIM_TIME.PARENT PRECOMPUTE (DIM_TIME TIME_MONTH)
     RELATION DIM_CUSTOMER.PARENT PRECOMPUTE (DIM_CUSTOMER CUST_STATE)
     RELATION DIM_PROMO.PARENT PRECOMPUTE (DIM_PROMO PROMO_SUBCATEGORY)
     RELATION DIM_CHANNEL.PARENT PRECOMPUTE (DIM_CHANNEL CHANNEL_CLASS)
END
When I execute the following command, it fails with an ORA-03113 error after two hours; the temp tablespace has grown to 3200M by the time the error occurs. I have tried several times, with the same result each time.
The exported EIF file is 100M in size, while the same cube built on Microsoft Analysis Services is only 25M.
Can anyone help me?
My machine:
IBM Thinkpad T23
PIII 1 X 1.1G
MEMORY 512M
Oracle config:
SHARED_POOL 32M
JAVA_POOL 32M
DB_CACHE 64M
OLAP_CACHE 128M

DEFINE PROD_ID DIMENSION NUMBER (6,0)
DEFINE PROD_SUBCATEGORY DIMENSION TEXT
DEFINE PROD_CATEGORY DIMENSION TEXT
DEFINE PROD_ALL DIMENSION TEXT
DEFINE DIM_PRODUCT DIMENSION CONCAT (PROD_ALL PROD_SUBCATEGORY PROD_CATEGORY PROD_ID)
DEFINE DIM_PRODUCT.HIERARCHIES DIMENSION TEXT
MAINTAIN DIM_PRODUCT.HIERARCHIES ADD 'STANDARD'
DEFINE DIM_PRODUCT.LEVELDIM DIMENSION TEXT
MAINTAIN DIM_PRODUCT.LEVELDIM ADD 'ID' 'SUBCATEGORY' 'CATEGORY' 'ALL'
DEFINE CHANNEL_ID DIMENSION TEXT
DEFINE CHANNEL_CLASS DIMENSION TEXT
DEFINE CHANNEL_ALL DIMENSION TEXT
DEFINE DIM_CHANNEL DIMENSION CONCAT (CHANNEL_ALL CHANNEL_CLASS CHANNEL_ID)
DEFINE DIM_CHANNEL.HIERARCHIES DIMENSION TEXT
MAINTAIN DIM_CHANNEL.HIERARCHIES ADD 'STANDARD'
DEFINE DIM_CHANNEL.LEVELDIM DIMENSION TEXT
MAINTAIN DIM_CHANNEL.LEVELDIM ADD 'ID' 'CLASS' 'ALL'
DEFINE PROMO_ID DIMENSION NUMBER (6,0)
DEFINE PROMO_SUBCATEGORY DIMENSION TEXT
DEFINE PROMO_CATEGORY DIMENSION TEXT
DEFINE PROMO_ALL DIMENSION TEXT
DEFINE DIM_PROMO DIMENSION CONCAT (PROMO_ALL PROMO_CATEGORY PROMO_SUBCATEGORY PROMO_ID)
DEFINE DIM_PROMO.HIERARCHIES DIMENSION TEXT
MAINTAIN DIM_PROMO.HIERARCHIES ADD 'STANDARD'
DEFINE DIM_PROMO.LEVELDIM DIMENSION TEXT
MAINTAIN DIM_PROMO.LEVELDIM ADD 'ID' 'SUBCATEGORY' 'CATEGORY' 'ALL'
DEFINE TIME_DAY DIMENSION TEXT
DEFINE TIME_MONTH DIMENSION TEXT
DEFINE TIME_QUATER DIMENSION TEXT
DEFINE TIME_YEAR DIMENSION TEXT
DEFINE DIM_TIME DIMENSION CONCAT (TIME_YEAR TIME_QUATER TIME_MONTH TIME_DAY)
DEFINE DIM_TIME.HIERARCHIES DIMENSION TEXT
MAINTAIN DIM_TIME.HIERARCHIES ADD 'STANDARD'
DEFINE DIM_TIME.LEVELDIM DIMENSION TEXT
MAINTAIN DIM_TIME.LEVELDIM ADD 'DAY' 'MONTH' 'QUATER' 'YEAR'
DEFINE CUST_ID DIMENSION NUMBER (8,0)
DEFINE CUST_CITY DIMENSION TEXT
DEFINE CUST_STATE DIMENSION TEXT
DEFINE CUST_COUNTRY DIMENSION TEXT
DEFINE CUST_SUBREGION DIMENSION TEXT
DEFINE CUST_REGION DIMENSION TEXT
DEFINE CUST_WORLD DIMENSION TEXT
DEFINE DIM_CUSTOMER DIMENSION CONCAT (CUST_WORLD CUST_REGION CUST_SUBREGION CUST_COUNTRY CUST_STATE CUST_CITY -
CUST_ID)
DEFINE DIM_CUSTOMER.HIERARCHIES DIMENSION TEXT
MAINTAIN DIM_CUSTOMER.HIERARCHIES ADD 'STANDARD'
DEFINE DIM_CUSTOMER.LEVELDIM DIMENSION TEXT
MAINTAIN DIM_CUSTOMER.LEVELDIM ADD 'ID' 'CITY' 'STATE' 'COUNTRY' 'SUBREGION' 'REGION' 'WORLD'
DEFINE DIM_SALES COMPOSITE <DIM_PRODUCT DIM_CUSTOMER DIM_TIME DIM_CHANNEL DIM_PROMO>
DEFINE DIM_COSTS COMPOSITE <DIM_PRODUCT DIM_TIME>
DEFINE DIM_PRODUCT.PARENTREL RELATION DIM_PRODUCT <DIM_PRODUCT DIM_PRODUCT.HIERARCHIES>
DEFINE DIM_PRODUCT.LEVELREL RELATION DIM_PRODUCT.LEVELDIM <DIM_PRODUCT DIM_PRODUCT.HIERARCHIES>
DEFINE DIM_CHANNEL.PARENTREL RELATION DIM_CHANNEL <DIM_CHANNEL DIM_CHANNEL.HIERARCHIES>
DEFINE DIM_CHANNEL.LEVELREL RELATION DIM_CHANNEL.LEVELDIM <DIM_CHANNEL DIM_CHANNEL.HIERARCHIES>
DEFINE DIM_PROMO.PARENTREL RELATION DIM_PROMO <DIM_PROMO DIM_PROMO.HIERARCHIES>
DEFINE DIM_PROMO.LEVELREL RELATION DIM_PROMO.LEVELDIM <DIM_PROMO DIM_PROMO.HIERARCHIES>
DEFINE DIM_TIME.PARENTREL RELATION DIM_TIME <DIM_TIME DIM_TIME.HIERARCHIES>
DEFINE DIM_TIME.LEVELREL RELATION DIM_TIME.LEVELDIM <DIM_TIME DIM_TIME.HIERARCHIES>
DEFINE DIM_CUSTOMER.PARENTREL RELATION DIM_CUSTOMER <DIM_CUSTOMER DIM_CUSTOMER.HIERARCHIES>
DEFINE DIM_CUSTOMER.LEVELREL RELATION DIM_CUSTOMER.LEVELDIM <DIM_CUSTOMER DIM_CUSTOMER.HIERARCHIES>
DEFINE V_SALES_AMOUNT VARIABLE DECIMAL <DIM_SALES <DIM_PRODUCT DIM_CUSTOMER DIM_TIME DIM_CHANNEL DIM_PROMO>>
DEFINE V_SALES_QUANTITY VARIABLE DECIMAL <DIM_SALES <DIM_PRODUCT DIM_CUSTOMER DIM_TIME DIM_CHANNEL DIM_PROMO>>
DEFINE V_COSTS_COST VARIABLE DECIMAL <DIM_COSTS <DIM_PRODUCT DIM_TIME>>
DEFINE V_COSTS_PRICE VARIABLE DECIMAL <DIM_COSTS <DIM_PRODUCT DIM_TIME>>
DEFINE AGG.SALES AGGMAP
AGGMAP
RELATION DIM_PRODUCT.PARENT PRECOMPUTE (DIM_PRODUCT PROD_SUBCATEGORY)
RELATION DIM_TIME.PARENT PRECOMPUTE (DIM_TIME TIME_MONTH)
RELATION DIM_CUSTOMER.PARENT PRECOMPUTE (DIM_CUSTOMER CUST_STATE)
RELATION DIM_PROMO.PARENT PRECOMPUTE (DIM_PROMO PROMO_SUBCATEGORY)
RELATION DIM_CHANNEL.PARENT PRECOMPUTE (DIM_CHANNEL CHANNEL_CLASS)
END
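
A typical way to run the aggregation over these objects is sketched below (an illustration only, not necessarily the exact command used in the original post; the UPDATE/COMMIT flushes changed pages so uncommitted changes do not keep accumulating in the temporary tablespace):

```
" Aggregate the sales variables using the aggmap defined above,
" then flush and commit so changed pages leave temporary storage.
AGGREGATE V_SALES_AMOUNT V_SALES_QUANTITY USING AGG.SALES
UPDATE
COMMIT
```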

Similar Messages

  • "Check point not complete"  & Shared pool performance

    Hi all,
    1. I am getting the error "Checkpoint not complete" in the alert log file in Oracle 11g. It is running in a Data Guard environment. Can you please help us with this?
    2. How do I check shared pool usage and shared pool free space, and how do I improve shared pool performance?
    Thanks in advance.

    934413 wrote:
    Hi all,
    1. I am getting the error "Checkpoint not complete" in the alert log file in Oracle 11g. It is running in a Data Guard environment. Can you please help us with this?
    post Error Code & Message to support this claim
    2. How do I check shared pool usage and shared pool free space, and how do I improve shared pool performance?
    post SQL & results that lead you to conclude the SGA size needs to be changed.
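
    For reference, a common starting point for checking shared pool free space is the V$SGASTAT view (a sketch; run as a privileged user):

```sql
-- Free memory currently unallocated in the shared pool
SELECT pool, name, ROUND(bytes / 1024 / 1024, 1) AS mb
FROM   v$sgastat
WHERE  pool = 'shared pool'
  AND  name = 'free memory';
```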

  • Do we need Oracle OLAP Engine for OBIEE

    Hi experts
    Do we need the Oracle OLAP engine for OBIEE?
    Thanks in advance
    Regards
    Frnds

    Short answer: No.
    Long answer: No.

  • After review of Aggregation vs OLAP Engine: clarification of a statement

    Hi,
    2. I read the following:.
    "During aggregation, the OLAP Engine in BI proceeds as follows:
           1.      First, standard aggregation is executed. Possible aggregation types include summation (SUM), minimum (MIN) and maximum (MAX). Minimum and maximum can, for example, be used for date key figures.
           2.      Aggregation using a selected characteristic occurs after standard aggregation (exception aggregation). The available exception aggregation types include average, counter, first value, last value, minimum, maximum, no aggregation, standard deviation, summation, and variance.
    Application cases for exception aggregation include warehouse stock, for example, that cannot be totaled over time, or counters that count the number of characteristic values for a certain characteristic.
          3.      Lastly, aggregation using currencies and units is executed. A * is output when two numbers that are not equal to zero are aggregated with different currencies or units."
    i. Where is this taking place? Is it in the process of producing the report output in BEx Analyzer or Web Analyzer? Or is it about when the cube is being loaded with data?
    ii. Is it an "if not step 1 then go to step 2" case? Or do all three steps get executed each time?
    Can you give an example to walk me through a case where the "OLAP Engine in BI" goes through all three steps in order?
    iii. Are these steps, in regard to the "OLAP processor", applicable only in the case of a "non-cumulative" key figure?
    I ask this because I read in another document that
    "... Before you can even think about interpreting the result of a query on an InfoCube with a non-cumulative key figure, you need to know the sequence of the aggregations..."  (referring to sequence 1, 2, 3 above)
    If applicable only in the case of a "non-cumulative" key figure, then what happens in the case of cumulative key figures?
    Any example to clarify this?
    Thanks
    Edited by: AmandaBaah on Sep 25, 2009 10:19 PM
    Edited by: AmandaBaah on Sep 29, 2009 2:54 PM

    N/A
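
    The aggregation order quoted above can be sketched with a toy example (invented data and names; SUM as the standard aggregation and AVG as the exception aggregation over a time characteristic - an illustration only, not actual BW internals):

```java
import java.util.*;

public class AggregationOrderDemo {
    // Step 1: standard aggregation (SUM) within each characteristic value,
    // Step 2: exception aggregation (AVG) across the chosen characteristic.
    public static double sumThenAvg(Map<String, List<Double>> byMonth) {
        // Standard aggregation: total per month
        List<Double> monthlyTotals = new ArrayList<>();
        for (List<Double> values : byMonth.values()) {
            double sum = 0;
            for (double v : values) sum += v;
            monthlyTotals.add(sum);
        }
        // Exception aggregation: average across months
        double total = 0;
        for (double t : monthlyTotals) total += t;
        return total / monthlyTotals.size();
    }

    public static void main(String[] args) {
        Map<String, List<Double>> stock = new LinkedHashMap<>();
        stock.put("JAN", Arrays.asList(10.0, 20.0)); // monthly total 30
        stock.put("FEB", Arrays.asList(40.0));       // monthly total 40
        System.out.println(sumThenAvg(stock));       // 35.0
    }
}
```

    Note that averaging the raw values directly would give (10+20+40)/3 ≈ 23.3 instead of 35.0, which is why the sequence of aggregations matters for non-cumulative key figures such as stock.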

  • Pools Performance

    Hello,
    I did a test JSP page to see the DB Pools performance, and I've notice that if I use DriverManager class or PooledConnection class, the performance is the same.
    Here is my code:
    <%@ page import="java.sql.*" %>
    <%@ page import="javax.sql.*" %>
    <%@ page import="oracle.jdbc.driver.*" %>
    <%@ page import="oracle.jdbc.pool.*" %>
    <%@ page import="java.util.Date" %>
    <%
    try {
        String sDbDriver = "oracle.jdbc.driver.OracleDriver";
        Class.forName(sDbDriver);
        out.println("Iniciando accesos con DriverManager class ("+(new Date()).toString()+")");
        for (int t = 0; t < 100; t++) {
            Connection conn = DriverManager.getConnection("url","usr","pssw");
            conn.close();
        }
        out.println("Iniciando accesos con Pools ("+(new Date()).toString()+")");
        for (int t = 0; t < 100; t++) {
            OracleConnectionPoolDataSource ocpds = new OracleConnectionPoolDataSource();
            ocpds.setURL("url");
            ocpds.setUser("usr");
            ocpds.setPassword("pssw");
            PooledConnection pc = ocpds.getPooledConnection();
            Connection conn = pc.getConnection();
            conn.close();
        }
        out.println("Listo ("+(new Date()).toString()+")");
    } catch (Exception e) {
        out.println(e.toString());
    }
    %>
    The result was:
    Iniciando accesos con DriverManager class (Mon Apr 30 12:32:19 GMT-03:00 2001)
    Iniciando accesos con Pools (Mon Apr 30 12:32:42 GMT-03:00 2001)
    Listo (Mon Apr 30 12:33:04 GMT-03:00 2001)
    Neither approach was faster than the other.
    Am I missing something? Is there something wrong?
    Do I have to make some configuration changes in the Database?
    Thanks for your help.
    Gustavo.

    Get the PooledConnection outside of the loop then use that to get a Connection inside the loop so you are reusing the Connection.
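
    The effect of moving pool creation out of the loop can be sketched with an invented stand-in (FakePool/FakeConnection are illustrative counters, not a real JDBC API): creating a pool per iteration pays the physical connection cost every time, while one pool reused across iterations pays it once.

```java
public class PoolDemo {
    // Count "physical" opens - the expensive part of a real
    // DriverManager.getConnection() call.
    static int physicalOpens = 0;

    static class FakeConnection {
        FakeConnection() { physicalOpens++; }
    }

    static class FakePool {
        // One physical connection opened when the pool is created...
        private final FakeConnection physical = new FakeConnection();
        // ...and handed out cheaply as a logical handle thereafter.
        FakeConnection get() { return physical; }
    }

    static int[] demo() {
        // Anti-pattern from the JSP above: a new pool per iteration,
        // so every iteration still pays for a physical open.
        physicalOpens = 0;
        for (int t = 0; t < 100; t++) {
            new FakePool().get();
        }
        int perIteration = physicalOpens;

        // Corrected: one pool outside the loop, reused inside.
        physicalOpens = 0;
        FakePool pool = new FakePool();
        for (int t = 0; t < 100; t++) {
            pool.get();
        }
        int reused = physicalOpens;
        return new int[] { perIteration, reused };
    }

    public static void main(String[] args) {
        int[] r = demo();
        System.out.println(r[0] + " physical opens vs " + r[1]);
    }
}
```

    The same restructuring applies to the JSP above: create the OracleConnectionPoolDataSource and the PooledConnection once before the loop, and only call pc.getConnection() inside it.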

  • Connection pool performance

     

    Barry,
    You should post this to the Performance newsgroups (please don't cross-post).
    This actually depends on how many execute threads you have - if you have 101
    clients / applications requesting connections concurrently, only the ones
    that can get an Execute Thread will actually be able to use a connection and
    complete. The rest of the application requests will be queued up.
    sree
    "Barry Myles" <[email protected]> wrote in message
    news:40b47cdd$1@mktnews1...
    >
    Hi there, I wonder if anyone could shine some light on the subject of connection
    pooling configuration for me.
    In the connection pool, if I have
    initial capacity = 50
    Maximum Capacity = 100
    and then get a period with 101 concurrent applications all wanting to write information
    to the backend database at the same time, what happens to that one application?
    Is it forced to wait until a thread becomes available, does it just block until
    this happens, or does it give up right away? (If it does, what exception
    is generated, if any?)
    Also, I'm trying to pin down some performance issues with the use of connection
    pooling. Can someone guide me as to what a typically underperforming configuration
    for a connection pool would be.

  • Buffer table pool performance with return code 64 in ST05

    Hey guys, I have been facing an issue causing poor performance in my SAP system. For a fully buffered table, I traced it with ST12 and ST05 and found it is perfectly buffered in the AS (shown as blue-background entries in the ST05 trace list), but there were entries with a high duration and return code 64 (which I know means no records found); in contrast, the return-code-0 entries worked fine and their duration was pretty low. I had learned from IBM/SAP documents (we're running DB2 on AIX) that if you are just checking for the existence of some records of a certain, frequently accessed table, you may buffer it in the AS, but for now it seems to be going in the opposite direction, which does make sense given the long durations. Detailed information is in the attachments.

    Hi Yunze,
    this performance issue can be resolved by adding 2 more steps in the code.
    Step 1 is required so that we will not lose any data; otherwise you can go ahead with your code by adding the field TZONE for both the sort and the delete-adjacent-duplicates, along with step 2.
    1. Pass the data from table IT_TZONE into a dummy table.
        SORT the dummy table by TZONE and DELETE ADJACENT DUPLICATES from the dummy table comparing the field TZONE.
    2. Check whether the dummy table is initial before the select query on ZTMT102.
        If it is not empty then process the select query; otherwise skip the select query.
    You did not mention whether the field ZTRZONE is a key field in the table ZTMY102 or not.
    If it is not a key field and a huge amount of data has to be fetched, then create an index.
    The index will add some load on the database, but it will improve the performance of the select query.
    Hope this helps you resolve your issue, and let me know in case of any questions.
    Regards,
    Phani

  • Discoverer support Oracle 9i OLAP engine

    Hi,
    Does anyone know whether Discoverer supports Oracle 9i OLAP?
    Please advise.
    Thanks
    Winson

    Hi,
    Oracle OLAP is a multidimensional database on top of the Oracle 9i database. It works like Oracle Express (a MOLAP server).
    In any case, you don't need to install the Oracle OLAP option when you use Discoverer.
    Regards,
    Sangeun Ju

  • BAPI_BUS2054_CREATE_MULTI poor performance

    hi!
        When I use BAPI_BUS2054_CREATE_MULTI to create WBS elements, it reads table PRPS.
        My project definition is big; it now contains a lot of WBS elements.
        Creating a new WBS element through FM BAPI_BUS2054_CREATE_MULTI is very slow. I traced the SQL using ST05:
        FM BAPI_BUS2054_CREATE_MULTI reads all WBS information from the DB.
        How can I improve the performance?
    regards.

    Search for notes.
    Rob

  • SAP BW OLAP Universe performance issue

    Hi,
    Based on BO recommendations, I created a universe on top of a big BEx query which included all characteristics and key figures from a cube. I then created a Webi report with 5 characteristics and 1 key figure from this universe. I also created the same report in BEx Query Designer (same number of objects). I ran both. The BEx query completed in under a minute and the Webi report took more than 12 minutes to run. I did a bunch of other tests with different object combinations and saw a similar differential in query performance.
    I did a trace using 'sm50' and saw the open SQL submitted to BW from BEx was very different from what got submitted from the webi report. Here is what I saw in pseudo code.
    Bex:
    select dimension1.blah, dimension2.blah, dimension3.blah, dimension..... sum(measure)
    from dimension1, dimension2, dimension3, dimension..... factTable
    where dimension1.SID = factTable.d1SID
    and dimension2.SID = factTable.d2SID
    and ......
    and (query filters)
    OLAP Universe
    select dimension1,blah, dimension1.SID
    from dimension1, factTable
    where dimension1.SID = factTable.d1SID
    select dimension2,blah, dimension2.SID
    from dimension2, factTable
    where dimension2.SID = factTable.d2SID
    select dimension3,blah, dimension3.SID
    from dimension3, factTable
    where dimension3.SID = factTable.d3SID
    It seems the OLAP universe was querying the BW cube one dimension at a time and then somehow piecing the result sets together to form the final result set. Dimension tables joining to the fact table without any filter definitely cause performance problems. Besides, I have no idea why the query requests are submitted to the BW system like that.
    I looked in various forums and found no similar issues posted by others. Has anyone had performance problems with their OLAP universe? Is this a matter of configuration?
    My envrionment:
    SAP BW 3.5
    BOXI 3.0
    ORACLE DB (version ??)

    Hi,
    You cannot compare a BEx query and a universe query just by comparing the trace.
    A BEx query makes ABAP calls, while a universe query makes MDX calls.
    Moreover, there is only one MDX call sent to SAP; what you have noticed is an additional MDX call to retrieve the members of a given characteristic when a prompt has been set on it.
    Lastly, Web Intelligence consumes only flattened data (row sets), whereas BEx consumes data sets.
    That means there is a flattening operation between the SAP OLAP engine and the data sent to the Web Intelligence document.
    A fix has been developed for a customer to improve performance of Web Intelligence queries; this fix will be available for all customers in SP2, planned for June 09.
    Here is a brief summary of what the fix provides:
      -  Provide data directly from the SAP server to the Web Intelligence document, avoiding unnecessary conversion steps: row set --> data set --> row set
      -  Eliminate redundant sort operations
      -  Other optimization operations
    Didier
    Edited by: Didier Mazoue on Nov 20, 2008 8:38 AM

  • Adapter Engine: Performance Monitoring?

    Dear Experts
    I am assessing the PI 7.1 Advanced Adapter Engine and executing a performance test against a standard
    scenario using Adapter Engine processing.
    I would like to understand and see the 'performance boost' mentioned by SAP, but I am not able to find
    a tool which gives me this view of performance (e.g. message processing time) in the Adapter Engine.
    The Runtime Workbench tab 'Performance Monitoring' seems to be limited to the ABAP Integration Server.
    Are there any settings I can use to get this view? Or is there any other metric which could be used to
    validate the actual performance improvement from the AAE?
    Thanks for your ideas.
    Mathias.

    Hi,
    You use the Advanced Adapter Engine to connect the Integration Engine to SAP systems (RFC adapter) and non-SAP systems. You use the various adapters in the Adapter Engine to convert XML- and HTTP-based messages to the specific protocol and format required by these systems, and the other way around.
    Using Integrated Configuration you can define a local Message Processing on the Advanced Adapter Engine. The Advanced Adapter Engine provides mapping and routing for this locally. Message Processing is only executed on the Advanced Adapter Engine from one adapter to another without the involvement of the Integration Engine. You can gain a great improvement in performance.
    You can implement all adapters on the Adapter Engine for a local Message Processing on the Advanced Adapter Engine, apart from the RNIF adapters and the CIDX adapters.
    If you do not use Integrated Configuration, Message Processing will always be performed by the Integration Engine to execute the routing and mapping there.
    The Advanced Adapter Engine is a separate software component that is automatically installed on the Integration Server. In this case, it is the central Advanced Adapter Engine. However, you can also install the Advanced Adapter Engine separately on another host. This is then a non-central Advanced Adapter Engine.
    Please reward points if it helps
    Thanks
    Vikranth

  • How to improve the performance for integrating third party search engine

    hi,
    I have been working on integrating the Verity search engine with KM. The performance of retrieving search results depends entirely on how many results are returned; for example, if there are fewer than 10 records it takes only 3 seconds, but if there are 200 records it takes about 3 minutes. Is that normal? Any way to improve it? Thanks!
    T.J.

    Thilo,
    thanks for the response. Could you recommend some documentation for configuring the KM cache service? I changed the memory cache and also the dynamic web repository; what else is out there that I can change? Right now I have one instance (EP6.4 SP11) that works well; it returns 200 records from Stellent within 6s. But when I put this KM global service on EP6.0 SP2 (our current system) it takes about 15s. I am not sure whether this is because of the different EP version or something else. I have tried my best to slim down the SOAP component from Stellent; I don't think anything else can be done on that side anymore. Before I changed the SOAP, it took about 60s. I just wonder what else I can do on the KM side to improve its performance. Thanks!
    T.J.

  • Error while performing Risk Analysis at user level for a cross system user

    Dear All,
    I am getting the below error, while performing the risk analysis at user level for a cross system (Oracle) user.
    The error is as follows:
    "ResourceException in method ConnectionFactoryImpl.getConnection(): com.sap.engine.services.connector.exceptions.BaseResourceException: Cannot get connection for 120 seconds. Possible reasons: 1) Connections are cached within SystemThread(can be any server service or any code invoked within SystemThread in the SAP J2EE Engine), 2) The pool size of adapter "SAPJ2EDB" is not enough according to the current load of the system or 3) The specified time to wait for connection is not enough according to the pool size and current load of the system. In case 1) the solution is to check for cached connections using the Connector Service list-conns command, in case 2) to increase the size of the pool and in case 3) to increase the time to wait for connection property. In case of application thread, there is an automatic mechanism which detects unclosed connections and unfinished transactions.RC:1
    Can anyone please help.
    Regards,
    Gurugobinda

    Hi..
    Check SAP Note 1121978.
    SAP Note 1121978 - Recommended settings to improve performance of risk analysis.
    Check for the following...
    CONFIGTOOL>SERVER>MANAGERS>THREADMANAGER
    ChangeThreadCountStep =50
    InitialThreadCount= 100
    MaxThreadCount =200
    MinThreadCount =50
    Regards
    Gangadhar

  • Performance issues of SQL access to AW

    Hi Experts:
    I wonder whether there are performance issues when using SQL to access an AW. When using SQL to access cubes in an AW, the SQL queries the relational views for AW objects, and the views are based on the OLAP_TABLE function. We know that views based on any table function are not able to make use of an index; that is, to query a subset of the data of such a view, we would have to full-scan the view and then apply the filter. Such query plans always lead to bad performance.
    I want to know: when I use SQL to retrieve a small part of the data in an AW cube, will the Oracle OLAP engine retrieve all the data in the cube and then apply the filter? If the Oracle OLAP engine retrieves only the data needed from the AW, how does it do it?
    Thanks.

    For most requests the OLAP_TABLE function can reduce the amount of data it produces by examining the rowsource tree, or WHERE clause. The data in Oracle OLAP is highly indexed. There are steps a user can take to optimize the index use. Specifically, pin down the dimension(s) defined in the OLAP_TABLE function LIMITMAP via (NOT) IN lists on the dimension, parent, level or GID columns, and use valuesets for the INHIER object instead of a boolean object.
    In 10g, WHERE clauses like SALES > 50 are also processed prior to sending data out.
    For large requests (thousands of rows) performance can be a problem because the data is being sent through the object layer. In 10g this can be ameliorated by wrapping the OLAP_TABLE function call with a SQL MODEL clause. The SQL MODEL clause knows a bit more about the OLAP option and does not require us to pipe the data through the object layer.
    SQL MODEL example (note: no ADT definition, using the automatic ADT). This can be wrapped in a CREATE VIEW statement:
    select * from olap_table('myaw duration session', null, null, 'measure sales as number from aw_sales_obj dimension d1 as varchar2(10) from geog ...rest of dimensions')
    sql model dimension by (d1, d2, d3, d4) measures (sales, any attributes, parent columns etc...) unique single reference rules update sequential order ()
    Example of WHERE clause with above select.
    SELECT *
    FROM (select * from olap_table('myaw duration session', null, null, 'measure sales as number from aw_sales_obj dimension d1 as varchar2(10) from geog ...rest of dimensions')
    sql model dimension by (d1, d2, d3, d4) measures (sales, any attributes, parent columns etc...) unique single reference rules update sequential order ()))
    WHERE GEOG NOT IN ('USA', 'CANADA')
    and GEOG_GID = 1
    and TIME_PARENT IN ('2004')
    and CHANNEL = 'CATALOG'
    and SALES > 50000;

  • Oracle 11g OLAP & SQL

    Hi All
    Our company is in the process of doing a POC warehouse where we are using Oracle OLAP extensively for summary management. I have been tasked with porting all our existing reports (Cognos) from an Informix backend to Oracle. The OLAP team has created some cube views for me, but I'm struggling to get my head around how I'm going to use them for reporting purposes.
    Example
    1) I'm using the following SQL (abbreviated) to get my data:
    select
    v_product.product_description,
    v_product.level_name,
    v_sales.sales,
    v_sales.calc_measure
    from v_product, v_sales, v_location, v_time
    where ... all the joins....
    v_product.level_name in ('DEPARTMENT', 'CLASS')
    and v_location.level_name = 'TOTAL'
    and v_time.level_name = 'TOTAL'
    2) This brings back data that looks like:
    product_description level_name sales calc_measure
    MEAT DEPARTMENT 232323 23.56
    POULTRY DEPARTMENT 43444 35.23
    BEEF CLASS 232323 23.56
    CHICKEN CLASS 67455 35.23
    LAMB CLASS 73444 23.56
    PORK CLASS 55555 35.23
    3) I need to create a list report that's grouped by department and, for each department, shows all the classes, but off the data above this is very difficult. I cannot just select all the class values and then do the aggregation in the report, as there is a calculated measure, so I need to select the value for that level from the cube view. Is it possible in one SQL statement, or will I need more?
    Thanks for any ideas

    Dave thanks for your reply. Please excuse my poor example this was my first day using cube views and I cannot login to my work setup from home so going by memory alone.
    To answer your question
    1. We are using 11g AW
    2. I don't remember the exact cube view names but not relevant to my question (I think)
    3. Alas the oracle forums don't support much formatting else I would have provided an ascii example. I have uploaded the sample report output here http://i279.photobucket.com/albums/kk145/angusgoosemiller/sample.gif
    Better Example
    1. From what I can gather, if you query more than one level of the same dimension from a cube view, you get the results denormalized as rows. So effectively, for my report I want the department and class levels from the product hierarchy, where class is a child of department, plus some relevant measures, one of which is calculated. If I select this from the cube view I get results in the form:
    DEPARTMENT LEVEL ... row values
    DEPARTMENT LEVEL ... row values
    CLASS LEVEL ... row values
    CLASS LEVEL ... row values
    2) My report is a list report that is grouped by department, and for each department all the class records are displayed with the measures. There must also be a department total for every department and a grand total for the report. If the calculated measure were not included I could just return all the class records, since there is a department attribute defined that is also in the cube view, and calculate all the department values dynamically in the report. However, due to the calculated measure, and probably as a best practice from a performance/redundancy perspective, I only want to select from the cube view in its aggregated form, as is currently happening.
    3) From a report design perspective this presents some challenges, as relationally, hierarchy levels are normally modeled as columns, and we used to process calculated measures dynamically in the report. Going forward we would like all the calculations etc. to happen in the OLAP engine.
    4) So basically the way I see it I need the following from the cube:
    4.1) The department records
    4.2) The class records
    4.3) the department total records
    4.4) the grand total record for all departments
    Can I get that in one SQL statement in such a manner that I can produce a report? How would an Oracle-based reporting solution get the data - via SQL, or directly from the cube via the OLAP API?
    Thanks for your help I really appreciate any advice!
    Cheers
    Angus
