BAPI_BUS2054_CREATE_MULTI poor performance

Hi!
    When I use BAPI_BUS2054_CREATE_MULTI to create WBS elements, it reads table PRPS.
    My project definition is large and already contains a lot of WBS elements.
    Creating new WBS elements through FM BAPI_BUS2054_CREATE_MULTI is very slow. I traced the SQL with ST05:
    FM BAPI_BUS2054_CREATE_MULTI reads ALL WBS information from the database.
    How can I improve the performance?
Regards.

Search for notes.
Rob

Similar Messages

  • "Checkpoint not complete" & shared pool performance

    Hi all,
    1. I am getting the error "Checkpoint not complete" in the alert log file in Oracle 11g. It's running in a Data Guard environment. Can you please help us with this?
    2. How do I check shared pool usage and free space, and how do I improve shared pool performance?
    Thanks in advance.

    934413 wrote:
    Hi all,
    1. I am getting the error "Checkpoint not complete" in the alert log file in Oracle 11g. It's running in a Data Guard environment. Can you please help us with this?
    Post the error code and message to support this claim.
    2. How do I check shared pool usage and free space, and how do I improve shared pool performance?
    Post the SQL and results that lead you to conclude the SGA size needs to be changed.

  • Pools Performance

    Hello,
    I wrote a test JSP page to measure DB pool performance, and I've noticed that whether I use the DriverManager class or the PooledConnection class, the performance is the same.
    Here is my code:
    <%@ page import="java.sql.*" %>
    <%@ page import="javax.sql.*" %>
    <%@ page import="oracle.jdbc.driver.*" %>
    <%@ page import="oracle.jdbc.pool.*" %>
    <%@ page import="java.util.Date" %>
    <%
    try {
        String sDbDriver = "oracle.jdbc.driver.OracleDriver";
        Class.forName(sDbDriver);
        out.println("Iniciando accesos con DriverManager class (" + (new Date()).toString() + ")");
        for (int t = 0; t < 100; t++) {
            Connection conn = DriverManager.getConnection("url", "usr", "pssw");
            conn.close();
        }
        out.println("Iniciando accesos con Pools (" + (new Date()).toString() + ")");
        for (int t = 0; t < 100; t++) {
            OracleConnectionPoolDataSource ocpds = new OracleConnectionPoolDataSource();
            ocpds.setURL("url");
            ocpds.setUser("usr");
            ocpds.setPassword("pssw");
            PooledConnection pc = ocpds.getPooledConnection();
            Connection conn = pc.getConnection();
            conn.close();
        }
        out.println("Listo (" + (new Date()).toString() + ")");
    } catch (Exception e) {
        out.println(e.toString());
    }
    %>
    The result was:
    Iniciando accesos con DriverManager class (Mon Apr 30 12:32:19 GMT-03:00 2001)
    Iniciando accesos con Pools (Mon Apr 30 12:32:42 GMT-03:00 2001)
    Listo (Mon Apr 30 12:33:04 GMT-03:00 2001)
    There's no speed difference between the two tests.
    Am I missing something? Is there something wrong?
    Do I have to make some configuration changes in the Database?
    Thanks for your help.
    Gustavo.

    Get the PooledConnection outside of the loop, then use it to get a Connection inside the loop so you are reusing the physical connection.
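The fix can be sketched without a database. Below, a toy pool (hypothetical class and method names, not Oracle's API) stands in for OracleConnectionPoolDataSource, just to show why creating the pool once outside the loop means only the first iteration pays the physical connect cost:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy stand-in for a pooling data source (hypothetical, not Oracle's API):
// close() returns a connection to the pool instead of destroying it.
class TinyPool {
    private final Deque<Object> idle = new ArrayDeque<>();
    int physicalConnects = 0; // how many expensive "real" connects happened

    Object getConnection() {
        if (idle.isEmpty()) {       // nothing to reuse: do a "real" connect
            physicalConnects++;
            return new Object();
        }
        return idle.pop();          // reuse an idle physical connection
    }

    void close(Object conn) { idle.push(conn); }
}

public class PoolDemo {
    public static void main(String[] args) {
        TinyPool pool = new TinyPool();   // created ONCE, outside the loop
        for (int t = 0; t < 100; t++) {
            Object conn = pool.getConnection();
            pool.close(conn);             // back to the pool, not destroyed
        }
        // 100 iterations, but only one physical connect.
        System.out.println(pool.physicalConnects); // prints 1
    }
}
```

Gustavo's original loop constructs a fresh pool per iteration, so every iteration pays the full connect cost, which is why his two timings came out the same.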

  • Connection pool performance

     

    Barry,
    You should post this to the Performance newsgroup (please don't cross-post).
    This actually depends on how many execute threads you have: if you have 101
    clients/applications requesting connections concurrently, only the ones
    that can get an Execute Thread will actually be able to use a connection and
    complete. The rest of the requests will be queued up.
    sree
    "Barry Myles" <[email protected]> wrote in message
    news:40b47cdd$1@mktnews1...
    >
    Hi there. I wonder if anyone could shine some light on the subject of connection
    pooling configuration for me.
    In the connection pool, if I have
    initial capacity = 50
    Maximum Capacity = 100
    and then get a period with 101 concurrent applications all wanting to write information
    to the backend database at the same time, what happens to that one application?
    Is it forced to wait until a thread becomes available, does it just block until
    this happens, or does it give up right away? (And if so, what exception
    is generated, if any?)
    Also, I'm trying to pin down some performance issues with the use of connection
    pooling. Can someone tell me what a typical under-performing configuration
    for a connection pool would look like?
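Barry's 101st-client question can be modeled with a bounded queue (a sketch of the general behavior, not WebLogic's implementation): with all 100 connections checked out, the extra requester blocks until a timeout elapses and then gives up, which is roughly what a pool's reserve-timeout setting does before throwing a resource-unavailable exception.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class CapacityDemo {
    public static void main(String[] args) throws InterruptedException {
        // Pool at maximum capacity = 100; fill it with idle connections.
        BlockingQueue<Object> idle = new ArrayBlockingQueue<>(100);
        for (int i = 0; i < 100; i++) idle.offer(new Object());

        // 100 concurrent clients each take one; the pool is now empty.
        for (int i = 0; i < 100; i++) idle.take();

        // Client 101: waits up to 100 ms for a freed connection, then gives up.
        Object conn = idle.poll(100, TimeUnit.MILLISECONDS);
        System.out.println(conn == null ? "timed out" : "got connection");
    }
}
```

A real pool would either block indefinitely, block up to a configured timeout, or fail immediately; which one depends on the pool's configuration, exactly as sree's answer implies.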

  • Buffer table poor performance with return code 64 in ST05

    Hey guys, I've been facing an issue causing poor performance in my SAP system. For a fully buffered table, I traced it with ST12 and ST05. The table is properly buffered on the application server (shown as blue-background entries in the ST05 trace list), but there were entries with a high duration and return code 64 (which I know means no records found); in contrast, the return-code-0 entries worked fine, with pretty low durations. I had learned from IBM/SAP documents (we're running DB2 on AIX) that if you are just checking for the existence of records in a frequently accessed table, you may buffer it on the application server. For now it seems to be going in the opposite direction, which would explain the long durations. See the attachments for details.

    Hi Yunze,
    This performance issue can be resolved by adding two more steps to the code.
    Step 1 is required so that we do not lose any data; otherwise you can go ahead with your code by adding the field TZONE for both SORT and DELETE ADJACENT DUPLICATES, plus step 2.
    1. Pass the data from table IT_TZONE into a dummy table.
        SORT the dummy table by TZONE and DELETE ADJACENT DUPLICATES from the dummy table using the field TZONE.
    2. Check whether the dummy table is initial before the SELECT query on ZTMT102.
        If it is not empty, run the SELECT query; otherwise skip it.
    You did not mention whether the field ZTRZONE is a key field of table ZTMY102.
    If it is not a key field and a huge amount of data has to be fetched, create an index.
    The index will add load on the database, but it will improve the performance of the SELECT query.
    Hope this helps you resolve your issue; let me know if you have any questions.
    Regards,
    Phani
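Phani's two steps are a general pattern, independent of ABAP: deduplicate the driver keys, then guard the SELECT so it never runs with an empty key set (empty or redundant probes are what show up as the expensive return-code-64 reads). A sketch in Java, with a counting dummy lookup in place of the real SELECT:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class DedupeGuard {
    // Stand-in for the database lookup; counts how often it runs.
    static int lookups = 0;
    static List<String> selectByZone(Set<String> zones) {
        lookups++;
        return new ArrayList<>(zones); // dummy result
    }

    public static void main(String[] args) {
        List<String> itTzone = List.of("Z1", "Z1", "Z2", "Z2", "Z1");

        // Step 1: SORT + DELETE ADJACENT DUPLICATES ~ build a unique key set.
        Set<String> zones = new LinkedHashSet<>(itTzone);

        // Step 2: only query when there is something to look up.
        if (!zones.isEmpty()) {
            selectByZone(zones);
        }
        System.out.println(zones.size() + " keys, " + lookups + " lookup(s)");
    }
}
```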

  • OLAP engine has poor performance

    I created my analytic workspace and populated data from the SH sample schema according to the <<OLAP DML Guide>>, and defined an AGGMAP as:
    DEFINE AGG.SALES AGGMAP
    AGGMAP
         RELATION DIM_PRODUCT.PARENT PRECOMPUTE (DIM_PRODUCT PROD_SUBCATEGORY)
         RELATION DIM_TIME.PARENT PRECOMPUTE (DIM_TIME TIME_MONTH)
         RELATION DIM_CUSTOMER.PARENT PRECOMPUTE (DIM_CUSTOMER CUST_STATE)
         RELATION DIM_PROMO.PARENT PRECOMPUTE (DIM_PROMO PROMO_SUBCATEGORY)
         RELATION DIM_CHANNEL.PARENT PRECOMPUTE (DIM_CHANNEL CHANNEL_CLASS)
    END
    When I execute the following command, it results in an ORA-03113 error after two hours; the temp tablespace had grown to 3200M when the error occurred. I have tried several times, with the same result.
    The exported EIF file is about 100M in size, while the same cube built on Microsoft Analysis Services is only 25M.
    Can anyone help me?
    My machine:
    IBM Thinkpad T23
    PIII 1 X 1.1G
    MEMORY 512M
    Oracle config:
    SHARED_POOL 32M
    JAVA_POOL 32M
    DB_CACHE 64M
    OLAP_CACHE 128M

    DEFINE PROD_ID DIMENSION NUMBER (6,0)
    DEFINE PROD_SUBCATEGORY DIMENSION TEXT
    DEFINE PROD_CATEGORY DIMENSION TEXT
    DEFINE PROD_ALL DIMENSION TEXT
    DEFINE DIM_PRODUCT DIMENSION CONCAT (PROD_ALL PROD_SUBCATEGORY PROD_CATEGORY PROD_ID)
    DEFINE DIM_PRODUCT.HIERARCHIES DIMENSION TEXT
    MAINTAIN DIM_PRODUCT.HIERARCHIES ADD 'STANDARD'
    DEFINE DIM_PRODUCT.LEVELDIM DIMENSION TEXT
    MAINTAIN DIM_PRODUCT.LEVELDIM ADD 'ID' 'SUBCATEGORY' 'CATEGORY' 'ALL'
    DEFINE CHANNEL_ID DIMENSION TEXT
    DEFINE CHANNEL_CLASS DIMENSION TEXT
    DEFINE CHANNEL_ALL DIMENSION TEXT
    DEFINE DIM_CHANNEL DIMENSION CONCAT (CHANNEL_ALL CHANNEL_CLASS CHANNEL_ID)
    DEFINE DIM_CHANNEL.HIERARCHIES DIMENSION TEXT
    MAINTAIN DIM_CHANNEL.HIERARCHIES ADD 'STANDARD'
    DEFINE DIM_CHANNEL.LEVELDIM DIMENSION TEXT
    MAINTAIN DIM_CHANNEL.LEVELDIM ADD 'ID' 'CLASS' 'ALL'
    DEFINE PROMO_ID DIMENSION NUMBER (6,0)
    DEFINE PROMO_SUBCATEGORY DIMENSION TEXT
    DEFINE PROMO_CATEGORY DIMENSION TEXT
    DEFINE PROMO_ALL DIMENSION TEXT
    DEFINE DIM_PROMO DIMENSION CONCAT (PROMO_ALL PROMO_CATEGORY PROMO_SUBCATEGORY PROMO_ID)
    DEFINE DIM_PROMO.HIERARCHIES DIMENSION TEXT
    MAINTAIN DIM_PROMO.HIERARCHIES ADD 'STANDARD'
    DEFINE DIM_PROMO.LEVELDIM DIMENSION TEXT
    MAINTAIN DIM_PROMO.LEVELDIM ADD 'ID' 'SUBCATEGORY' 'CATEGORY' 'ALL'
    DEFINE TIME_DAY DIMENSION TEXT
    DEFINE TIME_MONTH DIMENSION TEXT
    DEFINE TIME_QUATER DIMENSION TEXT
    DEFINE TIME_YEAR DIMENSION TEXT
    DEFINE DIM_TIME DIMENSION CONCAT (TIME_YEAR TIME_QUATER TIME_MONTH TIME_DAY)
    DEFINE DIM_TIME.HIERARCHIES DIMENSION TEXT
    MAINTAIN DIM_TIME.HIERARCHIES ADD 'STANDARD'
    DEFINE DIM_TIME.LEVELDIM DIMENSION TEXT
    MAINTAIN DIM_TIME.LEVELDIM ADD 'DAY' 'MONTH' 'QUATER' 'YEAR'
    DEFINE CUST_ID DIMENSION NUMBER (8,0)
    DEFINE CUST_CITY DIMENSION TEXT
    DEFINE CUST_STATE DIMENSION TEXT
    DEFINE CUST_COUNTRY DIMENSION TEXT
    DEFINE CUST_SUBREGION DIMENSION TEXT
    DEFINE CUST_REGION DIMENSION TEXT
    DEFINE CUST_WORLD DIMENSION TEXT
    DEFINE DIM_CUSTOMER DIMENSION CONCAT (CUST_WORLD CUST_REGION CUST_SUBREGION CUST_COUNTRY CUST_STATE CUST_CITY -
    CUST_ID)
    DEFINE DIM_CUSTOMER.HIERARCHIES DIMENSION TEXT
    MAINTAIN DIM_CUSTOMER.HIERARCHIES ADD 'STANDARD'
    DEFINE DIM_CUSTOMER.LEVELDIM DIMENSION TEXT
    MAINTAIN DIM_CUSTOMER.LEVELDIM ADD 'ID' 'CITY' 'STATE' 'COUNTRY' 'SUBREGION' 'REGION' 'WORLD'
    DEFINE DIM_SALES COMPOSITE <DIM_PRODUCT DIM_CUSTOMER DIM_TIME DIM_CHANNEL DIM_PROMO>
    DEFINE DIM_COSTS COMPOSITE <DIM_PRODUCT DIM_TIME>
    DEFINE DIM_PRODUCT.PARENTREL RELATION DIM_PRODUCT <DIM_PRODUCT DIM_PRODUCT.HIERARCHIES>
    DEFINE DIM_PRODUCT.LEVELREL RELATION DIM_PRODUCT.LEVELDIM <DIM_PRODUCT DIM_PRODUCT.HIERARCHIES>
    DEFINE DIM_CHANNEL.PARENTREL RELATION DIM_CHANNEL <DIM_CHANNEL DIM_CHANNEL.HIERARCHIES>
    DEFINE DIM_CHANNEL.LEVELREL RELATION DIM_CHANNEL.LEVELDIM <DIM_CHANNEL DIM_CHANNEL.HIERARCHIES>
    DEFINE DIM_PROMO.PARENTREL RELATION DIM_PROMO <DIM_PROMO DIM_PROMO.HIERARCHIES>
    DEFINE DIM_PROMO.LEVELREL RELATION DIM_PROMO.LEVELDIM <DIM_PROMO DIM_PROMO.HIERARCHIES>
    DEFINE DIM_TIME.PARENTREL RELATION DIM_TIME <DIM_TIME DIM_TIME.HIERARCHIES>
    DEFINE DIM_TIME.LEVELREL RELATION DIM_TIME.LEVELDIM <DIM_TIME DIM_TIME.HIERARCHIES>
    DEFINE DIM_CUSTOMER.PARENTREL RELATION DIM_CUSTOMER <DIM_CUSTOMER DIM_CUSTOMER.HIERARCHIES>
    DEFINE DIM_CUSTOMER.LEVELREL RELATION DIM_CUSTOMER.LEVELDIM <DIM_CUSTOMER DIM_CUSTOMER.HIERARCHIES>
    DEFINE V_SALES_AMOUNT VARIABLE DECIMAL <DIM_SALES <DIM_PRODUCT DIM_CUSTOMER DIM_TIME DIM_CHANNEL DIM_PROMO>>
    DEFINE V_SALES_QUANTITY VARIABLE DECIMAL <DIM_SALES <DIM_PRODUCT DIM_CUSTOMER DIM_TIME DIM_CHANNEL DIM_PROMO>>
    DEFINE V_COSTS_COST VARIABLE DECIMAL <DIM_COSTS <DIM_PRODUCT DIM_TIME>>
    DEFINE V_COSTS_PRICE VARIABLE DECIMAL <DIM_COSTS <DIM_PRODUCT DIM_TIME>>
    DEFINE AGG.SALES AGGMAP
    AGGMAP
    RELATION DIM_PRODUCT.PARENT PRECOMPUTE (DIM_PRODUCT PROD_SUBCATEGORY)
    RELATION DIM_TIME.PARENT PRECOMPUTE (DIM_TIME TIME_MONTH)
    RELATION DIM_CUSTOMER.PARENT PRECOMPUTE (DIM_CUSTOMER CUST_STATE)
    RELATION DIM_PROMO.PARENT PRECOMPUTE (DIM_PROMO PROMO_SUBCATEGORY)
    RELATION DIM_CHANNEL.PARENT PRECOMPUTE (DIM_CHANNEL CHANNEL_CLASS)
    END

  • Checklist for the SHARED POOL

    Product: ORACLE SERVER
    Date written: 2003-08-26
    Checklist for the SHARED POOL
    ======================
    PURPOSE
    The following are performance-related checks for the shared pool.
    Explanation
    1. Literal SQL statements
    SELECT      substr(sql_text,1,40) "SQL", count(*) ,
    sum(executions) "TotExecs"
    FROM      v$sqlarea
    WHERE      executions < 5
         GROUP BY substr(sql_text,1,40)
         HAVING count(*) > 30
    ORDER BY 2
    Among the shared SQL statements, find the literal SQL statements and convert them to use bind variables where possible.
    The Oracle cost-based optimizer can determine a better-optimized execution plan for SQL using literal values than for SQL using bind variables,
    but excessive literal SQL causes frequent hard parsing and drives up utilization of the library cache and the dictionary cache.
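The literal-SQL point can be demonstrated without a database: each distinct literal text becomes a separate cursor in the shared pool (one hard parse each), while a bind variable keeps a single shared statement. A sketch (the statement texts are made up for illustration):

```java
import java.util.HashSet;
import java.util.Set;

// Why literal SQL floods the shared pool: each literal value produces a
// distinct statement text (one hard parse each), while a bind variable
// keeps one shared cursor for all executions.
public class LiteralVsBind {
    public static void main(String[] args) {
        Set<String> literalCursors = new HashSet<>();
        Set<String> bindCursors = new HashSet<>();
        for (int id = 0; id < 100; id++) {
            literalCursors.add("SELECT * FROM emp WHERE empno = " + id);
            bindCursors.add("SELECT * FROM emp WHERE empno = :1");
        }
        System.out.println(literalCursors.size() + " vs " + bindCursors.size());
    }
}
```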
    2. Library cache hit ratio
    SELECT to_char(trunc(sum(reloads)/sum(pins)*100, 5),'99.99999')||
    '%(less than 1%)' "Library Cache MISS RATIO"
    FROM v$librarycache
    If the miss ratio is greater than 1%, work on reducing library cache misses:
    for example, use an appropriately sized shared pool, or reduce the use of dynamic (literal) SQL.
    3. Checking hash chain lengths
    SELECT hash_value, count(*)
    FROM v$sqlarea
    GROUP BY hash_value
    HAVING count(*) > 5
    This query should return no rows. If many SQL statements share the same HASH_VALUE,
    use the following query to examine the statements managed by that hash chain.
    In most cases this problem is caused by literal SQL statements.
    SELECT sql_text FROM v$sqlarea WHERE hash_value= <XXX>;
    4. Checking for high version counts
         SELECT      address, hash_value,
         version_count ,
    users_opening ,
    users_executing ,
    sql_text
    FROM v$sqlarea
    WHERE version_count > 10
    Versions of a SQL statement are statements whose text matches exactly but which reference different objects.
    If a row has an inexplicably high version count, contact Oracle Korea technical support.
    5. Finding statements which use lots of shared pool memory
    SELECT      substr(sql_text,1,40) "Stmt", count(*),
    sum(sharable_mem) "Mem",
    sum(users_opening) "Open",
    sum(executions) "Exec"
    FROM v$sql
    GROUP BY substr(sql_text,1,40)
    HAVING sum(sharable_mem) > <MEMSIZE>
    6. Allocations causing shared pool memory to be 'aged' out
    SELECT      *
    FROM      sys.x$ksmlru
    WHERE      ksmlrnum>0
    x$ksmlru is a list of shared pool objects holding the number of objects aged out during allocations;
    it is useful for tracing slow response times or latch contention caused by aging out.
    From 8i this table can be queried only as the SYS user, and it is reset once queried.
    (x$ksmlru.ksmlrnum : Number of items flushed from the shared pool)

  • LARGE POOL: concept and usage

    Product: ORACLE SERVER
    Date written: 2004-05-03
    LARGE POOL: concept and usage
    1. Concept and purpose
    Oracle 8.0 introduced a new SGA component called the large pool. The large pool
    is similar to the shared pool, but it is not carved out of the shared pool;
    it is allocated directly in the SGA, and the operations that may use it are restricted.
    In Oracle8 the large pool serves two main purposes:
    - to allocate session information (the UGA) for MTS connections
    - to buffer sequential file I/O
    (for example, recovery when multiple I/O slaves are used)
    When an MTS connection uses the large pool, the part called the fixed UGA still
    uses the shared pool, and the rest of the session memory (UGA) is allocated
    from the large pool. If the large pool is not big enough, ORA-4031 is raised.
    2. Usage
    To configure the large pool, use the large_pool_size parameter in the init.ora
    file; the large_pool_min_alloc parameter sets the minimum memory chunk size
    allocated from the large pool. The larger large_pool_min_alloc is,
    the less the large pool fragments.
    If large_pool_size is not set, the large pool is not used. However, if any of
    the following is set, Oracle automatically computes and allocates the large
    pool it needs, even without explicit large pool parameters:
    - parallel_automatic_tuning = true
    - parallel_min_servers=n
    - dbwr_io_slaves=n
    The computed large pool size is determined by the following parameters:
    - parallel_max_servers
    - parallel_threads_per_cpu
    - parallel_server_instances
    - mts_dispatchers
    - dbwr_io_slaves
    If the large pool computed from these defaults is too big, performance problems
    may occur or the database may fail to start. In that case, set an appropriate
    large_pool_size explicitly and restart.
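Putting the parameters above together, an explicit large pool setting in init.ora might look like this (placeholder values for illustration, not tuning recommendations):

```
# init.ora -- explicit large pool configuration (placeholder values)
large_pool_size      = 32M   # total size of the large pool in the SGA
large_pool_min_alloc = 16K   # minimum chunk size; a larger value reduces fragmentation
```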

  • IAS Enterprise Manager App Pool Statistics - available for Tomcat?

    Is the Enterprise Manager feature that manages / displays statistics for the application module pooling performance available for Tomcat?
    How can you monitor application module pooling performance (critical for any production application) in a non-iAS environment?
    Thanks

    The strange part is that even Metalink has nothing on such 10.1.2.3.0 server problems.

  • Custom report region (Conditionally split over two rows)

    I have a request to take an existing report that displays data like this:
    Pool Performing vs
    Non-Performing Current Balance Low Value High Value Value Cumulative Cumulative
    WA Low WA High Loss Low Loss High
    AA Performing $XXX,XXX,XXX $XXX,XXX,XXX $XXX,XXX,XXX XX.XX% XX.XX% XX.XX% XX.XX%
    Non-Performing $XXX,XXX,XXX $XXX,XXX,XXX $XXX,XXX,XXX XX.XX% XX.XX% XX.XX% XX.XX%
    Sub-Total $XXX,XXX,XXX $XXX,XXX,XXX $XXX,XXX,XXX XX.XX% XX.XX% XX.XX% XX.XX%
    and to have it instead display like this:
    AA
    Performing $XXX,XXX,XXX $XXX,XXX,XXX $XXX,XXX,XXX XX.XX% XX.XX% XX.XX% XX.XX%
    Non-Performing $XXX,XXX,XXX $XXX,XXX,XXX $XXX,XXX,XXX XX.XX% XX.XX% XX.XX% XX.XX%
    Sub-Total $XXX,XXX,XXX $XXX,XXX,XXX $XXX,XXX,XXX XX.XX% XX.XX% XX.XX% XX.XX%
    I tried building a custom Named Column (row) template, but ran into the issue that ALL rows were being offset, not just the ones with a value for the Pool.
    Is it possible to make a conditional row template that splits a row ONLY if there is a value in the column?
    Thank you,
    Tony Miller
    Dallas, TX

    TexasApexDeveloper wrote:
    Paul,
    SO glad to hear from you again.... Here is the query that I am using:
    SELECT ASET_CAT_POOL_FST_TXT Pool,
    DECODE(Display_Order, 0, ASET_PERF_STAT_TXT, 1, 'Sub-Total', 2, 'Grand Total') AS Perf_Non_Perf,
    DECODE(Display_Order, 0, 'Normal', 1, 'Bold', 2, 'Bold') AS Css_Tag,
    CurrBal,
    LowVal,
    HighVal,
    LowPercent,
    HighPercent,
    LossHighPercent,
    LossLowPercent,
    Display_Order
    FROM (
    SELECT Grouping(ASET_CAT_POOL_FST_TXT) + Grouping(ASET_PERF_STAT_TXT) AS Display_Order
    , ASET_CAT_POOL_FST_TXT
    , ASET_PERF_STAT_TXT
    , Round(Sum(ASET_CUR_BAL_AMT), 2) AS CurrBal
    , Round(Sum(ASET_LW_VAL_AMT), 2) AS LowVal
    , Round(Sum(ASET_HIGH_VAL_AMT), 2) AS HighVal
    , Round(100 * DECODE(Sum(ASET_CUR_BAL_AMT), 0, 0, (Sum(ASET_LW_VAL_AMT) / Sum(ASET_CUR_BAL_AMT))), 2) AS LowPercent
    , Round(100 * DECODE(Sum(ASET_CUR_BAL_AMT), 0, 0, (Sum(ASET_HIGH_VAL_AMT) / Sum(ASET_CUR_BAL_AMT))), 2) AS HighPercent
    , Round(100 * DECODE(Sum(ASET_CUR_BAL_CALC_CUM_LOS_AMT), 0, 0, (Sum(ASET_CALC_CUM_LOS_HIGH_AMT) / Sum(ASET_CUR_BAL_CALC_CUM_LOS_AMT))), 2) AS LossHighPercent
    , Round(100 * DECODE(Sum(ASET_CUR_BAL_CALC_CUM_LOS_AMT), 0, 0, (Sum(ASET_CALC_CUM_LOS_LW_AMT) / Sum(ASET_CUR_BAL_CALC_CUM_LOS_AMT))), 2) AS LossLowPercent
    FROM ASET_VLTN_RVW_INSTSMRY
    WHERE USR_RPT_PBLSH_IND = 'Y' AND Etl_Src_File_Typ_Ind = 'C'
    GROUP BY ROLLUP (ASET_CAT_POOL_FST_TXT, ASET_PERF_STAT_TXT)
    )
    ORDER BY POOL,
    Display_Order,
    Perf_Non_Perf DESC
    As you can see, I am doing a roll-up to get my sub-totaling. I wouldn't anticipate any problems in going with a conditional named column template with that.
    In the report I am using a standard report, with breaks on columns 1 & 2 (Pool, Perf_Non_Perf). Have you tried using the inscrutable Break Formatting options in the report attributes? Something like:
    Break Columns: First and Second Columns
    Break Column Style: Repeat Headings on Break
    For repeat heading breaks use this format, use #COLUMN_VALUE# subs:
    <tbody><tr><th colspan="9" class="t17ReportHeader" align="left">#COLUMN_VALUE# #COLUMN_VALUE#</th></tr>(Match up the <tt>colspan</tt> and <tt>class</tt> with your displayed columns and theme.)
    Posting data MIGHT be an issue since it is proprietary data; I can try posting bogus data to my hosted area on Oracle's site though. That and access to the workspace would help, or post a <tt>CREATE TABLE</tt> script for <tt>ASET_VLTN_RVW_INSTSMRY</tt> and some inserts for the bogus data here.

  • ORA-12853 and ORA-04031 simultaneously

    Hello,
    I am getting the following errors when I try to start Oracle
    ORA-12853: insufficient memory for PX buffers: current 0K, max needed 2640K
    ORA-04031: unable to allocate 21544 bytes of shared memory ("large pool","unknown object","large pool","PX msg pool")
    My init.ora file is as follows -
    EDA.__db_cache_size=1174405120
    EDA.__java_pool_size=33554432
    EDA.__large_pool_size=0
    EDA.__shared_pool_size=1174405120
    EDA.__streams_pool_size=0
    *._OPTIM_PEEK_USER_BINDS=FALSE
    *.background_dump_dest='/oracle/EDA/saptrace/background'
    *.compatible='10.2.0'
    *.control_file_record_keep_time=30
    *.control_files='/oracle/EDA/origlogA/cntrl/cntlrEDA.dbf','/oracle/EDA/origlogB/cntrl/cntrlEDA.dbf','/oracle/EDA/sapdata1/cntrl/cntrlEDA.dbf'
    *.core_dump_dest='/oracle/EDA/saptrace/background'
    *.db_block_size=8192
    *.db_cache_size=1159641169
    *.db_files=254
    *.db_name='EDA'
    *.dml_locks=4000
    *.event='10191 trace name context forever, level 1'
    *.FILESYSTEMIO_OPTIONS='setall'
    *.job_queue_processes=1
    *.log_archive_dest='/oracle/EDA/oraarch/EDAarch'
    *.log_buffer=1048576
    *.log_checkpoint_interval=0
    *.log_checkpoints_to_alert=true
    *.open_cursors=800
    *.optimizer_features_enable='10.2.0.1'
    *.pga_aggregate_target=1546188226
    *.processes=80
    *.recyclebin='off'
    *.remote_login_passwordfile='exclusive'
    *.remote_os_authent=true
    *.sessions=96
    *.sga_max_size=2319282339
    *.sga_target=2399141888
    *.shared_pool_reserved_size=115964116
    *.shared_pool_size=1159641169
    *.sort_area_retained_size=0
    *.sort_area_size=2097152
    *.statistics_level='typical'
    *.undo_management='AUTO'
    *.undo_retention=43200
    *.undo_tablespace='PSAPUNDO'
    *.user_dump_dest='/oracle/EDA/saptrace/usertrace'
    *.workarea_size_policy='AUTO'
    As far as I can tell, everything looks fine. Kindly help me sort out this error.
    Thanks in advance.

    1. Play with the Shared Pool....
    a. The following query determines the memory available for SHARED_POOL_SIZE in the Oracle SGA:
    select sum (bytes)/1024/1024 from v$sgastat where pool='shared pool'
    b. The following query determines the total memory used by the shared pool in the Oracle SGA:
    select sum (bytes)/1024/1024 from v$sgastat where pool='shared pool' and name not in ('free memory')
    c. This is the most important query:
    select
    sum(a.bytes)/(1024*1024) shared_pool_used,
    max(b.value)/(1024*1024) shared_pool_size,
    (max(b.value)/(1024*1024))-
    (sum(a.bytes)/(1024*1024)) shared_pool_avail,
    ((sum(a.bytes)/(1024*1024))/(max(b.value)/(1024*1024)))*100
    pct_shared_pool_used
    from v$sgastat a, v$parameter b
    where (a.pool='shared pool'
    and a.name not in ('free memory'))
    and
    b.name='shared_pool_size'
    You need to continuously monitor the shared pool with the above query at different times, during peak and non-peak hours, to get a picture of shared pool usage in the Oracle database.
    If pct_shared_pool_used crosses 95%, then I think you should consider increasing shared_pool_size.
    2. There are many ways to improve shared pool performance:
    a. Ask the ABAPers to write more generic and reusable code.
    b. Use the right block size.
    c. Proper design of the database.
    Comment on ORA-04031:
    This error should not appear in any of the application logs, the alert log, or any trace files. Do not depend on ORA-04031 errors being written to the alert log, as 4031 errors only appear there if they affect background process operations (such as PMON activities). 4031s are not internal errors and so could be trapped and handled by the application (this is not recommended).
    From 10gR1 onwards, a 4031 trace file is written to the user_dump_dest (or background_dump_dest) directory; this trace file is useful in diagnosing the nature of the problem.
    Hope this will help you.
    Regards,
    SK
    OCP DBA - 9i, 10g

  • NNMi monitoring of APs

    Hi there. We are monitoring various Cisco APs through NNMi. All was fine until a few days ago, when a threshold was set and all the APs of a certain model started showing alarms and not clearing.
    The model in question is mainly the 1262, and NNMi has started monitoring these OIDs:
    hajap177-0204:
    Policy: Cisco Memory Pool Performance Monitoring
    Target: 10.230.4.84
    Poller: NnmSnmpPoller, Target Responding: true, Poll Successful: true, Poll Duration: 15 mSec
    sysUpTime 520969360
    Object    | memoryUtil | freeMemory | ciscoMemoryPoolUsed | ciscoMemoryPoolFree
    ----------|------------|------------|---------------------|--------------------
    I/O.8     | 0.51       | 16300.51   | 85496               | 16691720          
    Processor | 39.87      | 40440.8    | 27457288            | 41411380          
    I/O.2     | 85.94      | 4608.1     | 28835736            | 4718696    
    hajap217-0601:
    Policy: Cisco Memory Pool Performance Monitoring
    Target: 10.230.4.124
    Poller: NnmSnmpPoller, Target Responding: true, Poll Successful: true, Poll Duration: 34 mSec
    sysUpTime 1470244695
    Object    | memoryUtil | freeMemory | ciscoMemoryPoolUsed | ciscoMemoryPoolFree
    ----------|------------|------------|---------------------|--------------------
    Processor | 39.81      | 40483.46   | 27413604            | 41455064          
    I/O.2     | 85.94      | 4608.1     | 28835736            | 4718696           
    I/O.8     | 0.51       | 16300.51   | 85496               | 16691720    
    Namely, it's I/O.2 that is causing the problem, as the threshold is set at 85% and it's sitting at 85.9%. I've looked at several APs; they all show the same memory usage, and it doesn't change hour to hour,
    which leads me to believe that this is how it's meant to behave, there's nothing to worry about, and we should just set our threshold to 90%. The problem lies in the fact that we are a big, cumbersome company, and the NOC won't change the threshold without documentary evidence that it should be changed (they can't tell me what information they based the decision to choose 85% for this particular OID on...).
    Can anyone tell me what the difference between I/O.2 and I/O.8 is? And is there a Cisco-recommended baseline to monitor against?
    Cheers, Al.

    My team has been working a similar issue where our application compliance was slowly changing from success to error using SCCM 2012 (no SP1). After 3 weeks of
    working with Microsoft we finally resolved it. We are using task sequences to deploy application stacks. There is a known bug that causes issues
    when changing a task sequence to update the revision of an application in it (allegedly fixed in SP1). The TS was still referencing an old version of the application despite our updates, and from the console you can't tell. When a machine runs its policy evaluation
    and hits one of these apps, it fails the evaluation of all applications. So App "A" has the issue in the TS; you deploy App "B" and initial compliance looks good until the App Policy runs, hits the issue for App "A", and
    reports an error for your App "B" deployment.
    CIAgent.log
    (The prod version of this app is "12"; below you can see the TS was still using version "10".)
    CIAgentJob({BB81C59E-0D68-4F80-8C26-F3672FDA6993}): CAgentJob::VersionInfoTimedOut for ModelName
    ScopeId_FBAD85CC-8425-4A82-9A2A-A69D6941A909/RequiredApplication_826f927c-b7b4-446b-9767-9e03529677ff,
    version 10 not available.
    CCIInfo::SetError - Setting CI level error to (0x87d00314). <- this repeated many times
    VersionInfo ModelName=ScopeId_FBAD85CC-8425-4A82-9A2A-A69D6941A909/RequiredApplication_826f927c-b7b4-446b-9767-9e03529677ff,
    Version=10, IsVersionSpecific=TRUE timed out
    The results of this SQL query give you a link to the XML that reveals the
    friendly name of the offending application.
    select
    top 10 *
    from CI_ConfigurationItems where CI_UniqueID
    like '%/Application_826f927c-b7b4-446b-9767-9e03529677ff%'
    Once we had this, we removed it from the task sequence and then added it back in. Once all the clients checked in and ran the App Policy again, they saw the update, and magically all the errors
    for unrelated applications changed from error to success.

  • UCS C24M3 with UCS-RAID9270CV-8I battery missing ???

    Hello,
    We bought 5 UCS C24M3 standalone servers for one of our customers with the RAID controller UCS-RAID9270CV-8I, but the Supercap (battery) seems to be missing, so only the write-through cache policy is possible (instead of write-back) ==> result: poor performance on VMware VMFS...
    How should I contact Cisco about this problem (I suspect an omission during factory assembly of the servers)?
    Thank you for your feedback.
    Best Regards,
    Boris
    MegaRAID9270 CV with 8
    internal SAS/SATA ports
    with Supercap

    Hi Boris, 
    Could you please unicast me your contact information along with the Sales Order Number? I had the platform product manager reach out to me about this already and I can put you in contact with her. 
    My contact e-mail is: jeffoste at cisco dot com
    Jeff

  • Need class to create task

    Hi experts:
    I need some classes to create tasks in cProjects 4.5.
    I used a BAPI to create tasks, but there are multiple round trips to the DB, hence poor performance.
    I asked Simon, from SAP headquarters in Germany, about this; he advised me to use a class to create tasks, since this mechanism reduces round trips to the DB.
    But I cannot find a suitable class; in other words, I need an example of creating a task through a class.
    Please help me.


  • Need some help understanding the way materialized views are applied through

    Hi, I need some help understanding the way materialized views are applied through adpatch.
    In patch 1, we have an MV with build mode IMMEDIATE. When applying it, PTS hangs due to poor performance of the MV refresh.
    So we provided patch 2 with that MV's build mode set to DEFERRED, hoping it would go through. But patch 2 hung on the same MV too.
    How does this work? Is it that, because the MV already exists in the database with BUILD IMMEDIATE, patch 2 forces a refresh first before changing the build mode? How can we get past this?
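    For reference, the difference between the two build modes can be sketched as a toy model (Python; an illustration of the documented semantics, not Oracle internals): BUILD IMMEDIATE populates the MV when it is created, while BUILD DEFERRED only stores the definition and leaves the MV empty until the first complete refresh pays the cost:

```python
class ToyMaterializedView:
    """Toy model of materialized-view build modes (illustration only)."""

    def __init__(self, query_fn, build="IMMEDIATE"):
        self.query_fn = query_fn     # stands in for the defining query
        self.rows = None             # None = never populated
        if build == "IMMEDIATE":
            self.refresh_complete()  # populated (and paid for) at create time

    def refresh_complete(self):
        # A complete refresh re-runs the defining query; this is the
        # expensive step that a DEFERRED build merely postpones.
        self.rows = list(self.query_fn())

    @property
    def usable(self):
        return self.rows is not None

base_table = [1, 2, 3]

deferred = ToyMaterializedView(lambda: base_table, build="DEFERRED")
print(deferred.usable)        # False: creation was cheap, no data yet

deferred.refresh_complete()   # first refresh pays the full cost
print(deferred.usable)        # True
```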
    Thanks,
    Wei

    Hi Hussein,
    Thank you for the response.
    Application release is 11.5.10.
    Patch 1 is MSC11510: 8639586 ASCP ENGINE RUP#38 PATCH FOR 11.5.10 BRANCH
    Patch 2 is MSC11510: 9001833 APCC MSC_PHUB_CUSTOMERS_MV WORKER IS STUCK ON "DB FILE SEQUENTIAL READ" 12 HOURS
    The MV is APPS.MSC_PHUB_CUSTOMERS_MV
    This happens in the customer environment but is not reproducible in our internal environment, as our testing data is much smaller.
    Taking a closer look at the logs, I saw that when applying both patch 1 and patch 2, the MV doesn't exist in the database, so it seems my earlier assumption was wrong. Still, it is strange that patch 2 contains only one file, the MV .xdf, yet it ran for 7 hours and finally got killed.
    -- patch 1 log
    Materialized View Name is MSC_PHUB_CUSTOMERS_MV
    Materialized View does not exist in the target database
    Executing create Statement
    Create Statement is
    CREATE MATERIALIZED VIEW "APPS"."MSC_PHUB_CUSTOMERS_MV"
    ORGANIZATION HEAP PCTFREE 10 PCTUSED 40 INITRANS 10 MAXTRANS 255 LOGGING
    STORAGE(INITIAL 4096 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 4 BUFFER_POOL DEFAULT)
    TABLESPACE "APPS_TS_SUMMARY"
    BUILD IMMEDIATE
    USING INDEX
    REFRESH FORCE ON DEMAND
    WITH ROWID USING DEFAULT LOCAL ROLLBACK SEGMENT
    DISABLE QUERY REWRITE
    AS select distinct
    from
    dual
    AD Worker error:
    The above program failed. See the error messages listed
    above, if any, or see the log and output files for the program.
    Time when worker failed: Tue Feb 02 2010 10:01:46
    Manager says to quit.
    -- patch 2 log
    Materialized View Name is MSC_PHUB_CUSTOMERS_MV
    Materialized View does not exist in the target database
    Executing create Statement
    Create Statement is
    CREATE MATERIALIZED VIEW "APPS"."MSC_PHUB_CUSTOMERS_MV"
    ORGANIZATION HEAP PCTFREE 10 PCTUSED 40 INITRANS 10 MAXTRANS 255 LOGGING
    STORAGE(INITIAL 4096 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 4 BUFFER_POOL DEFAULT)
    TABLESPACE "APPS_TS_SUMMARY"
    BUILD DEFERRED
    USING INDEX
    REFRESH COMPLETE ON DEMAND
    WITH ROWID USING DEFAULT LOCAL ROLLBACK SEGMENT
    DISABLE QUERY REWRITE
    AS select distinct
    from dual
    Start time for statement above is Tue Feb 02 10:05:06 GMT 2010
    Exception occured ORA-00028: your session has been killed
    ORA-00028: your session has been killed
    ORA-06512: at "APPS.AD_MV", line 116
    ORA-06512: at "APPS.AD_MV", line 258
    ORA-06512: at line 1
    java.sql.SQLException: ORA-00028: your session has been killed
    ORA-00028: your session has been killed
    ORA-06512: at "APPS.AD_MV", line 116
    ORA-06512: at "APPS.AD_MV", line 258
    ORA-06512: at line 1
    Exception occured :No more data to read from socket
    AD Run Java Command is complete.
    Copyright (c) 2002 Oracle Corporation
    Redwood Shores, California, USA
    AD Java
    Version 11.5.0
    NOTE: You may not use this utility for custom development
    unless you have written permission from Oracle Corporation.
    AD Worker error:
    The above program failed. See the error messages listed
    above, if any, or see the log and output files for the program.
    Time when worker failed: Tue Feb 02 2010 19:51:27
    Start time for statement above is Tue Feb 02 12:44:52 GMT 2010
    End time for statement above is Tue Feb 02 19:51:29 GMT 2010
    Thanks,
    Wei
