Performance problem on query

As part of a large batch process, we load data sent from a client into our systems. The data represents the client's organization structure (departments, users, etc.). I need to identify all users currently in the existing system that are not part of the new file. The second step of the process just won't perform, so I'm looking for suggestions. Here is the pseudocode and then the SQL:
1. Load all existing users for the client into a global temporary table.
2. Delete from the temp table all users that are on the file (so only users missing from the file remain).
3. Delete all top-level users from the temp table (they can't be deleted).
4. Delete all users from the temp table that already have the deleted role.
5. Any remaining rows are the users that need to be deleted.
DDL to create the table
--------------------------------
CREATE GLOBAL TEMPORARY TABLE DELETE_USERS_TEMP (
  EMPLOYEE_ID VARCHAR2 (100),
  USER_ID     VARCHAR2 (100)
) ON COMMIT DELETE ROWS;
1. insert into DELETE_USERS_TEMP
   select employee_id, user_id
     from cadreadmin.orderentry_usr_data
    where user_id like ? and employee_id is not null;

2. delete from DELETE_USERS_TEMP
    where employee_id in (select emplid from client_dataset_detail where cd_id = ?);

3. delete from DELETE_USERS_TEMP
    where user_id in (select user_id from CADREADMIN.DPS_USER_ORG
                       where user_id like ? and organization = ?);

4. delete from DELETE_USERS_TEMP
    where user_id in (select user_id from cadreadmin.dps_user_roles
                       where atg_role = ?
                         and user_id in (select user_id from DELETE_USERS_TEMP));
I have also tried the following for step 2:

delete from DELETE_USERS_TEMP T
 where exists (select D.EMPLID
                 from CLIENT_DATASET_DETAIL D
                where D.CD_ID = 170
                  and D.EMPLID = T.EMPLOYEE_ID);
Here is the explain plan for the original step 2 SQL; the plan for the EXISTS version just above is about the same:
"Optimizer"     "Cost"     "Cardinality"     "Bytes"     "Partition Start"     "Partition Stop"     "Partition Id"     "ACCESS PREDICATES"     "FILTER PREDICATES"
"UPDATE STATEMENT"     "ALL_ROWS"     "7"     "1"     "76"<br/>     ""     ""     ""     ""     ""
"UPDATE FILEUPLOAD.DELETE_USERS_TEMP"<br/>     ""     ""     ""     ""     ""     ""     ""     ""     ""
"NESTED LOOPS(SEMI)"     ""     "7"     "1"     "76"<br/>     ""     ""     ""     ""     ""
"TABLE ACCESS(FULL) FILEUPLOAD.DELETE_USERS_TEMP"     "ANALYZED"     "2"     "1"     "65"<br/>     ""     ""     ""     ""     ""
"TABLE ACCESS(BY INDEX ROWID) FILEUPLOAD.CLIENT_DATASET_DETAIL"     "ANALYZED"     "5"     "10770"     "118470"<br/>     ""     ""     ""     ""     ""D"."CD_ID"=170"
"INDEX(RANGE SCAN) FILEUPLOAD.CLIENT_DATASET_DETAIL_INX2"     "ANALYZED"     "2"     "3"<br/>     ""     ""     ""     ""     ""D"."EMPLID"="T"."EMPLOYEE_ID""     ""

Please take a look at the thread "When your query takes too long ...".
Also, in step 1 you can avoid copying the top-level users and the users that already have the deleted role. With a smaller table, the response time of step 2 can be reduced.
Regards,
Miguel
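
Following Miguel's suggestion, steps 3 and 4 can be folded into the step 1 insert so the temp table never holds rows that would only be deleted again. A minimal sketch built from the tables and filters already posted; it assumes the LIKE pattern in step 3 only restricted rows to this client, so a straight correlation on user_id is enough:

-- Sketch: load only real deletion candidates, so old steps 3 and 4
-- disappear and step 2 runs against a smaller temp table.
insert into DELETE_USERS_TEMP (employee_id, user_id)
select u.employee_id, u.user_id
  from cadreadmin.orderentry_usr_data u
 where u.user_id like ?
   and u.employee_id is not null
   -- old step 3: skip top-level users
   and not exists (select null
                     from cadreadmin.dps_user_org o
                    where o.user_id = u.user_id
                      and o.organization = ?)
   -- old step 4: skip users that already have the deleted role
   and not exists (select null
                     from cadreadmin.dps_user_roles r
                    where r.user_id = u.user_id
                      and r.atg_role = ?);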

Similar Messages

  • Performance problem with query on bkpf table

    hi good morning all,
    I have a performance problem with the below query on the BKPF table.
    SELECT bukrs
           belnr
           gjahr
      FROM bkpf
      INTO TABLE ist_bkpf_temp
     WHERE budat IN s_budat.
    Is there any possibility to improve the performance by using an index?
    Please help me.
    Thanks in advance,
    regards,
    srinivas

    hi,
    If you can add bukrs as an input field, or if you have bukrs in another internal table you can use to filter the data, you can use, for example:
    SELECT bukrs
           belnr
           gjahr
      FROM bkpf
      INTO TABLE ist_bkpf_temp
     WHERE budat IN s_budat
       AND bukrs IN s_bukrs.
    or
    SELECT bukrs
           belnr
           gjahr
      FROM bkpf
      INTO TABLE ist_bkpf_temp
      FOR ALL ENTRIES IN itab
     WHERE budat IN s_budat
       AND bukrs = itab-bukrs.
    Just see if it is possible to do one of the above; it has to be verified against your requirement.

  • Interesting Performance problem..Query runs fast in TOAD and Reports 6i..

    Hi All,
    I have a query which runs within 4 minutes in TOAD and also in Reports 6i. But when run through Applications it takes 3 to 4 hours to complete. This report fetches a huge amount of data; could that be the reason for the poor performance? I am unable to figure it out. I was able to avoid full table scans on the tables used, but I still have the problem.
    Any suggestions please.
    Thank you in advance.
    Prathima

    If you want to have a look at the query: this report gives a way to monitor the receipts entered for pay-on-receipt POs.
    SELECT
    hou.name "OPERATING_UNIT_NAME"
    ,glc.segment1 "UEC"
    ,glc.segment2 "DEPT"
    ,pov.vendor_name "VENDOR_NAME"
    ,msi.SEGMENT1 "ITEM_NUM"
    ,rcvs.receipt_num "RECEIPT_NUM"
    ,poh.segment1 "PO_NUMBER"
    ,pol.line_num "PO_LINE_NUM"
    ,por.RELEASE_NUM "RELEASE_NUMBER"
    ,poll.shipment_num "SHIPMENT_NUM"
    ,hrou.name "SHIP_TO_ORGANIZATION"
    ,trunc(rcv.transaction_date) "TRANSACTION_DATE"
    ,decode (transaction_type,'RECEIVE', 'ERS', 'RETURN TO VENDOR','RTS') "RECEIPT_TYPE"
    ,decode (rcv.transaction_type,'RECEIVE', 1, 'RETURN TO VENDOR', -1)* rcv.quantity "RECEIPT_QTY"
    ,rcv.po_unit_price "PO_UNIT_PRICE"
    ,decode (rcv.transaction_type,'RECEIVE', 1, 'RETURN TO VENDOR', -1)*rcv.quantity*po_unit_price "RECEIPT_AMOUNT"
    ,rcvs.packing_slip "PACKING_SLIP"
    ,poll.quantity "QUANTITY_ORDERED"
    ,poll.quantity_received "QUANTITY_RECEIVED"
    ,poll.quantity_accepted "QUANTITY_ACCEPTED"
    ,poll.quantity_rejected "QUANTITY_REJECTED"
    ,poll.quantity_billed "QUANTITY_BILLED"
    ,poll.quantity_cancelled "QUANTITY_CANCELLED"
    ,(poll.quantity_received - (poll.quantity - poll.quantity_cancelled)) "QUANTITY_OVER_RECEIVED"
    ,(poll.quantity_received - (poll.quantity - poll.quantity_cancelled))*po_unit_price "OVER_RECEIVED_AMOUNT"
    ,poh.currency_code "CURRENCY_CODE"
    ,perr.full_name "RECEIVER"
    ,perb.full_name "BUYER"
    FROM
    po.po_vendors pov
    ,po.po_headers_all poh
    ,po.po_lines_all pol
    ,po.po_line_locations_all poll
    ,po.po_distributions_all pod
    ,po.po_releases_all por
    ,hr.hr_all_organization_units hou
    ,hr.hr_all_organization_units hrou
    ,po.rcv_transactions rcv
    ,po.rcv_shipment_headers rcvs
    ,gl.gl_code_combinations glc
    ,hr.per_all_people_f perr
    ,hr.per_all_people_f perb
    ,inv.mtl_system_items_b msi
    where
    poh.org_id = hou.organization_id
    and pov.vendor_id (+) = poh.vendor_id
    and pod.po_header_id = poh.po_header_id
    and pod.po_line_id = pol.po_line_id
    and pod.line_location_id = poll.line_location_id
    and poll.po_header_id = poh.po_header_id
    and poll.po_line_id = pol.po_line_id
    and pol.po_header_id = poh.po_header_id
    and poh.pay_on_code like 'RECEIPT'
    and pod.po_header_id = rcv.po_header_id
    and pod.po_line_id = rcv.po_line_id
    and pod.po_release_id = rcv.po_release_id
    and pod.po_release_id = por.po_release_id
    and por.po_header_id = poh.po_header_id
    and hrou.organization_id = poll.ship_to_organization_id
    and pod.line_location_id = rcv.po_line_location_id
    and pod.po_distribution_id = rcv.po_distribution_id
    and rcv.transaction_type in ('RECEIVE','RETURN TO VENDOR')
    and rcv.shipment_header_id = rcvs.shipment_header_id (+)
    and pod.code_combination_id = glc.code_combination_id
    and rcvs.employee_id = perr.person_id
    and por.agent_id = perb.person_id (+)
    and perr.person_type_id = 1
    and perb.person_type_id = 1
    and msi.organization_id = 1 --poll.ship_to_organization_id
    and msi.inventory_item_id = pol.item_id
    and poh.type_lookup_code = 'BLANKET'
    and hou.organization_id = nvl(:p_operating_unit,hou.organization_id)
    and trunc(rcv.transaction_date) between :p_transaction_date_from and :p_transaction_date_to
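
    One thing worth flagging in the query above: trunc(rcv.transaction_date) on the column side of the predicate prevents an ordinary index on transaction_date from being used for the date range. A sketch of the usual rewrite, assuming the two binds hold whole dates with no time component:

    -- Sketch: keep TRUNC off the indexed column so a range scan on
    -- rcv_transactions.transaction_date stays possible.
    and rcv.transaction_date >= :p_transaction_date_from
    and rcv.transaction_date <  :p_transaction_date_to + 1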

  • Performance problem with query

    Hi,
    I have a query.
    SELECT
    CDW_DIM_DATUM_VW.DATUM
    ,sum(CDW_FT031_DISCO_VW.RATE_CHARGEABLE_DURATION/60)
    FROM
    CDW_FT031_DISCO_VW,
    CDW_DIM_DATUM_VW
    WHERE CDW_DIM_DATUM_VW.DATUM >= to_date('20110901','yyyymmdd')
    AND CDW_DIM_DATUM_VW.DATUM < to_date('20110911','yyyymmdd')
    AND CDW_DIM_DATUM_VW.DATUM_KEY=CDW_FT031_DISCO_VW.DATUM_KEY
    GROUP BY
    CDW_DIM_DATUM_VW.DATUM
    This query normally takes 2 hours on production when it should take about 2 minutes.
    On checking the explain plan I understood that it is doing a full table scan on cdw_ft031, which is the table used in the view (cdw_ft031_disco_vw) and which has got 30 lakh records.
    I analyzed the tables and indexes, rebuilt all the indexes, and used hints, but even then it is doing a full table scan.
    There are indexes created on the columns referred to in the WHERE clause.
    Kindly help me.

    As the sources are views, your post is not even showing the full code (besides not showing the tables, indexes, etc.).
    "which has got 30 lakh records": from Wikipedia, a lakh is a unit in the Indian numbering system equal to one hundred thousand. So if you want us to understand what your problem is, give us as much information as you have, in a language the world understands.
    I guess CDW_DIM_DATUM_VW is a "time dimension", so it has 1 row per date? Why is it a view?
    Then the only join is to CDW_FT031_DISCO_VW, where "FT" stands for "fact"? (Why is it a view?)
    What you want is for it to pick the index (which it has on DATUM_KEY??) instead of a full table scan?
    Is cdw_ft031 partitioned? (You often partition a fact table by date, and then "full table scan" really stands for "full partition scan".)
    So your fact table has 3 million rows and the query takes 2 hours? My laptop would do that in less time without any index... something is completely wrong.
    I discourage joining views, but based on what you gave us it is hard to give advice.
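
    For a question like this, the actual plan says more than the cost figure alone. A minimal sketch of capturing it with standard Oracle tooling, reusing the query from the post:

    -- Sketch: capture and display the execution plan for the slow query.
    explain plan for
    select d.datum, sum(f.rate_chargeable_duration / 60)
      from cdw_ft031_disco_vw f, cdw_dim_datum_vw d
     where d.datum >= to_date('20110901', 'yyyymmdd')
       and d.datum <  to_date('20110911', 'yyyymmdd')
       and d.datum_key = f.datum_key
     group by d.datum;

    select * from table(dbms_xplan.display);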

  • Performance Problem with Query load

    Hello,
    after the upgrade to SPS 23 we have some problems with loading a query. Before the upgrade the query ran in 1-3 minutes, and now it takes more than 40 minutes.
    Does anyone have an idea?
    Regards
    Marco

    Hi,
    Suggest executing the query in the RSRT transaction by choosing the option 'Execute + Debugger' to further analyze where exactly the query is taking time.
    Make sure to choose the appropriate 'Query Display' option (List/BEx Analyzer/HTML) before executing the query in debugger mode, since the display option also affects the query runtime.
    Hope this info helps!
    Bala Koppuravuri

  • Performance problem of query in 9i DB

    We upgraded our database from 8i to 9i recently. We have the same set of synonyms in both the 8i and 9i databases, created on data warehouse tables, and we have queries that return data by joining several of those synonyms. Each synonym returns the same number of rows in either database. Our problem is that the queries take much longer in the 9i database than in the 8i database. Has anybody met the same problem? Any idea where the problem comes from? Please give me some help.
    Thanks a lot in Advance.

    How was the upgrade done? Export/import? If so, have you rebuilt the indexes after the upgrade? Which version did you upgrade to, 9.2.0.3 or 9.2.0.5? 9.2.0.3 has a lot of optimizer problems, and the optimizer behaves very differently in 9i anyway. Try to apply patch set 9.2.0.5.
    Set optimizer_features_enable = 8.1.7 to force the optimizer to behave like 8i.
    What about the SGA, pga_aggregate_target and the other important parameters you set? Are they the same as in 8i or not?
    If none of those things works, then define the following hidden parameters in your init.ora file:
    _complex_view_merging = false
    _unnest_subquery = false
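
    To test the optimizer_features_enable suggestion safely, it can be tried for a single session before touching init.ora (standard ALTER SESSION syntax):

    -- Sketch: try the 8i optimizer behavior for this session only, then
    -- re-run the slow query and compare plans and runtime.
    alter session set optimizer_features_enable = '8.1.7';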

  • A performance problem about query bseg, help,help

    thanks.
    We have a new report that uses the following SQL, which runs for a long time:
    SELECT bukrs gjahr bzdat anln1 bschl dmbtr
        INTO CORRESPONDING FIELDS OF TABLE bsegtab
        FROM bseg
       WHERE gjahr = gjahr
         AND bschl IN ('70' , '75')
         AND anln1 IN anln1
         AND meins = ''
         AND anln1 LIKE 'G%'
         AND hkont LIKE '1502%'
         AND bzdat >= dat
         AND bzdat <= dat1.
    In SE11 I can see that the BSEG primary key is MANDT, BUKRS, BELNR, GJAHR, BUZEI, and that it is a cluster table.
    Can I create an index on this cluster table?
    Could you give me some advice?
    The environment is ECC 6.0 and Oracle 10g.

    Hello,
    The link below will be helpful.
    http://wiki.sdn.sap.com/wiki/pages/viewpage.action?pageId=180356001
    Regards,
    Yoganand.V

  • Performance problem querying multiple CLOBS

    We are running Oracle 8.1.6 Standard Edition on Sun E420r, 2 X 450Mhz processors, 2 Gb memory
    Solaris 7. I have created Oracle Text indexes on several columns of a large table, including VARCHAR2 and CLOB columns. I am simulating search-engine queries where the user chooses to find matches on the exact phrase, all of the words (AND), or any of the words (OR). I am hitting performance problems when querying on multiple CLOBs using OR, e.g.
    select count(*) from articles
    where contains (abstract , 'matter OR dark OR detection') > 0
    or contains (subject , 'matter OR dark OR detection') > 0
    Columns abstract and subject are CLOBs. However, this query works fine with AND:
    select count(*) from articles
    where contains (abstract , 'matter AND dark AND detection') > 0
    or contains (subject , 'matter AND dark AND detection') > 0
    The explain plan gives a cost of 2157 for OR and 14.3 for AND.
    I realise that multiple contains are not a good thing, but the AND returns sub-second, and the OR is taking minutes! The indexes are created thus:
    create index article_abstract_search on article(abstract)
    INDEXTYPE IS ctxsys.context parameters ('STORAGE mystore memory 52428800');
    The data and index tables are on separate tablespaces.
    Can anyone suggest what is going on here, and any alternatives?
    Many thanks,
    Geoff Robinson

    Thanks for your reply, Omar.
    I have read the performance FAQ already, and it points out single CONTAINS clauses are preferred, but I need to check 2 columns. Also, I don't just want a count(*), I will need to select field values. As you can see from my 2 queries, the first has multiple CLOB columns using OR, and the second AND, with the second taking that much longer. Even with only a single CONTAINS, the cost estimate is 5 times more for OR than for AND.
    Add an extra CONTAINS and it becomes 300 times more costly!
    The root table is 3 million rows, the 2 token tables have 6.5 and 3 million rows respectively. All tables have been fully analyzed.
    Regards
    Geoff
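
    For the record, the usual way around multiple CONTAINS clauses is to index both columns as a single document so one CONTAINS (and one index) covers them. A sketch using a MULTI_COLUMN_DATASTORE; this preference may require a release newer than 8.1.6 (on older versions a USER_DATASTORE can assemble the same combined document):

    -- Sketch: one Oracle Text index over both columns so a single CONTAINS
    -- replaces the OR of two CONTAINS clauses.
    begin
      ctx_ddl.create_preference('article_mcds', 'MULTI_COLUMN_DATASTORE');
      ctx_ddl.set_attribute('article_mcds', 'COLUMNS', 'abstract, subject');
    end;
    /
    create index article_text_search on articles(abstract)
      indextype is ctxsys.context
      parameters ('DATASTORE article_mcds STORAGE mystore MEMORY 52428800');

    -- one CONTAINS now searches both columns
    select count(*) from articles
     where contains(abstract, 'matter OR dark OR detection') > 0;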

  • Performance of the query

    Hello Guys,
    I am having a performance problem with a query. When I run the query with the initial variables it displays the report quickly, but when I drill down with filter values it takes 10 minutes to display the report. Can anybody suggest possible solutions for improving the performance?
    Regards
    Priya

    Hi Priya,
    First, you have to check what is causing the performance issue. You can do this by running the query in transaction RSRT. Execute the query in debug mode with the option "Display Statistics Data". You can navigate the query as you would normally. After that, check the statistics information and see what causes the performance issue. My guess is that you need to build an aggregate.
    If the Data Manager time is high (a large % of the total runtime) and the ratio of the number of records selected vs. the number of records transferred is high (e.g. > 10), then try to build an aggregate to help the performance. To check for aggregate suggestions, run RSRT again with the option "Display Aggregates Found". It will show you what characteristics and characteristic selections would help (note that the suggestion might not always be the optimal one).
    If OLAP Data Transfer time is high, then try optimizing the query design (e.g. try reducing the amount of restricted KFs or try calculating some KFs during the data flow instead of calculating them in the query).
    Hope this helps.

  • Performance Problems on Faces Navigation Diagram and Hyperthreading query

    Am I the only one having performance problems when dealing with faces-config diagrams of about 35 JSPs displayed on the sheet, using JDev 10.1.3? It's taking my workstation about a full minute and a half to update the name of an arrow. The most stressed component during this task seems to be the CPU.
    And just another question: has anybody investigated how the performance of JDev is affected by enabling or disabling hyperthreading? In my case my CPU usage manages to reach only 50%. I'm tempted to switch HT off to let JDev use all the CPU power, if that would help.

    Hello Diego,
    you mentioned that you compared a BEx query with the Web Intelligence report. Could you provide more details here?
    - what are the elements in the rows, columns and free characteristics in the BEx query?
    - was the query executed as designed in the BEx Query Designer with BEx web reporting?
    - what are the elements in the Web Intelligence query panel?
    thanks
    Ingo

  • Performance Problem in Select query

    Hi,
    I have a performance problem in the following SELECT query:
    SELECT VBELN POSNR LFIMG VRKME VGBEL VGPOS
      FROM LIPS INTO CORRESPONDING FIELDS OF TABLE GT_LIPS
       FOR ALL ENTRIES IN GT_EKPO1
       WHERE VGBEL = GT_EKPO1-EBELN
         AND VGPOS = GT_EKPO1-EBELP.
    As per the trace, I have analyzed that it is doing a complete table scan of the LIPS table, and the table contains almost 3 lakh (300,000) records.
    Kindly suggest what we can do to optimize this query.
    Regards,
    Harsh

    TYPES: BEGIN OF line,
             vbeln TYPE lips-vbeln,
             posnr TYPE lips-posnr,
             lfimg TYPE lips-lfimg,
             vrkme TYPE lips-vrkme,
             vgbel TYPE lips-vgbel,
             vgpos TYPE lips-vgpos,
           END OF line.
    DATA: itab TYPE STANDARD TABLE OF line,
          wa   TYPE line.

    * Guard against an empty driver table: with FOR ALL ENTRIES, an empty
    * GT_EKPO1 drops the WHERE clause and selects every row in LIPS.
    IF gt_ekpo1[] IS NOT INITIAL.
      SELECT vbeln posnr lfimg vrkme vgbel vgpos
        FROM lips INTO TABLE itab
        FOR ALL ENTRIES IN gt_ekpo1
        WHERE vgbel = gt_ekpo1-ebeln
          AND vgpos = gt_ekpo1-ebelp.
    ENDIF.

  • Query performance problem

    I am having performance problems executing a query.
    System:
    Windows 2003 EE
    Oracle 9i version 9.2.0.6
    DETAIL table with 120 million rows, partitioned into 19 partitions by the SD_DATEKEY field.
    We are trying to retrieve the info for an account (SD_KEY) ordered by date (SD_DATEKEY). This account has about 7000 rows, and it takes about 1 minute to return the first 100 rows ordered by SD_DATEKEY. This time should be around 5 seconds to be acceptable.
    There is a partitioned index by SD_KEY and SD_DATEKEY.
    This is the query:
    SELECT * FROM DETAIL WHERE SD_KEY = 'xxxxxxxx' AND ROWNUM < 101 ORDER BY SD_DATEKEY
    The problem is that all 7000 rows are read before being ordered. I think it should not be necessary for the optimizer to access all the partitions to read all the rows, because only the first 100 are needed and the partitions are bounded by SD_DATEKEY.
    Any idea to accelerate this query? I know that including a WHERE clause on SD_DATEKEY would improve the performance, but I need the first 100 rows and I don't know the date to limit the query with.
    Does anybody know if this time is a normal response time for this query, or should it be improved?
    Thanks to all in advance for the future help.

    Thanks to all for the replies.
    - We have computed statistics, and there is no change in the response time.
    - We are discussing restricting the query to some partitions, but for the moment this is not the best solution because we don't know where the latest 100 rows are.
    - The query from Maurice had more or less the same response time:
    select * from
    (SELECT * FROM DETAIL WHERE SD_KEY = 'xxxxxxxx' ORDER BY SD_DATEKEY)
    where ROWNUM < 101
    - We have a local index on SD_DATEKEY. Do we need another one on SD_KEY? Should it be created as a BITMAP index?
    I can't test your suggestions immediately because this is a problem at one of our customers. In our test system (which has only 10 million records) the indexes accelerate the query, but it is not the same in the customer's system. I think the problem is the total number of records in the table.
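
    Maurice's form is also the semantically correct one: in the original statement ROWNUM is applied before ORDER BY, so Oracle keeps an arbitrary 100 rows for the account and sorts only those. A sketch of pushing the optimizer toward an early-stopping plan, assuming the partitioned index is a composite on (SD_KEY, SD_DATEKEY); the index name in the hint is made up:

    -- Sketch: FIRST_ROWS(n) asks for a plan that returns the first rows fast.
    -- With an index ordered by (SD_KEY, SD_DATEKEY) the inner ORDER BY can be
    -- answered by walking the index in order and stopping after ~100 rows.
    select *
      from (select /*+ first_rows(100) index(d detail_key_date_ix) */ *
              from detail d
             where sd_key = 'xxxxxxxx'
             order by sd_datekey)
     where rownum < 101;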

  • Query Performance Problem!! Oracle 25 minutes || SQLServer 3 minutes

    Hi all,
    I'm having a performance problem with the query below. It runs in 3 minutes on SQL Server and 25 minutes on Oracle.
    SELECT
    CASE WHEN (GROUPING(a.estado) = 1) THEN 'TOTAL'
    ELSE ISNULL(a.estado, 'UNKNOWN')
    END AS estado,
    CASE WHEN (GROUPING(m.id_plano) = 1) THEN 'GERAL'
    ELSE ISNULL(m.id_plano, 'UNKNOWN')
    END AS id_plano,
    sum(m.valor_2s_parcelas) valor_2s_parcelas,
    convert(decimal(15,2),convert(int,sum(convert(int,(m.valor_2s_parcelas+.0000000001)*100)*
    isnull(e.percentual,0.0))/100.0+.0000000001))/100 BB_Educar
    FROM
    movimento_dco m ,
    evento_plano e,
    agencia_tb a
    WHERE
    m.id_plano = e.id_plano
    AND m.agencia *= a.prefixo
    --AND  m.id_plano LIKE     'pm60%'
    AND m.data_pagamento >= '20070501'
    AND m.data_pagamento <= '20070531'
    AND m.codigo_retorno = '00'
    AND m.id_parcela > 1
    AND m.valor_2s_parcelas > 0.
    AND e.id_evento = 'BB-Educar'
    AND a.banco_id = '001'
    AND a.ordem = '00'
    group by m.id_plano, a.estado WITH ROLLUP
    order by a.estado, m.id_plano DESC
    Can anyone help me with this query?

    What version of Oracle, what version of SQL Server? Are the tables exactly the same size? Are they both indexed the same? Are you running on the same or similar hardware? Are the Oracle parameters similar, like SGA size and PGA_AGGREGATE_TARGET? Did you gather statistics in Oracle?
    Did you compare execution plans in SQL Server vs. Oracle to see if SQL Server's plan is superior to the one Oracle is trying to use (most likely stale statistics)?
    There are many variables, and we need more information than just the query : ).
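
    Also worth noting: the query as posted is in T-SQL/Sybase dialect (*=, ISNULL, CONVERT, WITH ROLLUP), so whatever ran for 25 minutes on Oracle must already have been a translation. A partial sketch of the idiomatic Oracle equivalents, assuming data_pagamento is a DATE; the remaining filters and the BB_Educar arithmetic are omitted:

    -- Sketch: Oracle counterparts for the T-SQL constructs above (abbreviated).
    select case when grouping(a.estado) = 1 then 'TOTAL'
                else nvl(a.estado, 'UNKNOWN')          -- NVL replaces ISNULL
           end as estado,
           case when grouping(m.id_plano) = 1 then 'GERAL'
                else nvl(m.id_plano, 'UNKNOWN')
           end as id_plano,
           sum(m.valor_2s_parcelas) as valor_2s_parcelas
      from movimento_dco m
      join evento_plano  e on e.id_plano = m.id_plano
      left join agencia_tb a on a.prefixo = m.agencia  -- replaces m.agencia *= a.prefixo
     where m.data_pagamento >= date '2007-05-01'
       and m.data_pagamento <  date '2007-06-01'
     group by rollup (m.id_plano, a.estado)            -- replaces WITH ROLLUP
     order by a.estado, m.id_plano desc;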

  • Performance problems - query on the copy of a table is faster than the orig

    Hi all,
    I have SQL (SELECT) performance problems with a specific table (cost 7800) (Oracle 10.2.0.4.0, Linux).
    So I copied the table 1:1 (same structure, same data, same indexes, etc.) in the same environment (database, user, tablespace, etc.) and gathered the table stats with the same parameters. The same query on this copied table is faster, and the cost is just 3600.
    Why on earth is the query on this new table faster?
    I appreciate any idea.
    Thank you!
    Fibo

    "Could you please share more information or a link that elaborates the significance of the SHRINK clause. If it is so useful and can reclaim unused space, why hasn't my database architect suggested it :). Any disadvantages?"
    It moves the high water mark back to the lowest position, therefore full table scans work faster in some cases. It can also reduce the number of migrated rows and the number of used blocks.
    The disadvantage is that it involves row movement, so operations that rely on rowids are not safe while the shrink is running. I think it is even better to stop all operations on the table when shrinking and to disable all triggers. Another problem is that this process can take a long time.
    Gurus, please correct me if I'm mistaken.
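
    A minimal sketch of the shrink being discussed (10g+ syntax; it needs an ASSM tablespace, and the table name is a placeholder):

    -- Sketch: reclaim space below the high water mark. Row movement must be
    -- enabled because shrink relocates rows (their rowids change).
    alter table my_table enable row movement;
    alter table my_table shrink space;
    -- or: alter table my_table shrink space cascade;  (also shrinks the indexes)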

  • Query performance problem - events 2505-read cache and 2510-write cache

    Hi,
    I am experiencing severe performance problems with a query, specifically with events 2505 (Read Cache) and 2510 (Write Cache), which went up to 11,000 seconds on some executions. Data Manager (400 s), OLAP data selection (90 s) and OLAP user exit (250 s) are the other events with noticeable times. All other events are very quick.
    The query settings (RSRT) are:
    - persistent cache across each app server -> cluster table
    - update cache in delta process is checked -> group on InfoProvider type
    - use cache despite virtual characteristics/key figures is checked (one InfoCube has 1 virtual key figure which should have a static result for a day)
    => Do you know how I can get more details than what's in 0TCT_C02 to break down the read and write cache event times, or do you have any recommendation?
    I have checked, and no data loads were in progress on the InfoProviders and no master data loads (change run). Overall system performance was acceptable for other queries.
    Thanks

    Hi,
    Looks like you're using BDB, not BDB JE, and this is the BDB JE forum. Could you please repost here?:
    Berkeley DB
    Thanks,
    mark
