Delete Index step taking more than 30 minutes

Hi Experts,
I am trying to delete the index for one of my cubes through a process chain, but every time it takes more than 30 minutes, even though we don't have much data in the cube.
Job Details:
Job started
Step 001 started (program RSPROCESS, variant &0000000003309, user ID ALEREMOTE)
Start process DROPINDEX ZCSDC303_DEL_IDX in run DBWIM5VFE7YNJOOGV1YK6KTCQ of chain ZCSD_PWR_CONS_CON
Please help me out with this.
Thanks in advance
David

Hello David
As you said, you have very little data in that cube, which means you are not loading much data into it on a regular basis.
Deleting and recreating the index is not an absolute necessity. You just need to check that loading without deleting the index is not noticeably slower.
If it isn't, simply drop the index-deletion step; you can save your 30 minutes of deletion and recreation.
By the way, what is your database? On MS SQL Server we do not delete the index before loading data for some cubes, which saves a lot of time.
You can also check the database statistics, and please run an RSRV check on this cube to look for any inconsistency.
Regards
Anindya

Similar Messages

  • While using the Status for Object button it takes more than 15 mins to open

    Hi Gurus,
    We are trying to attach documents to ZBOS and OR sales document types. While opening the Status for Object button of the sales order it takes more than 15 minutes to open; once it is open, it works normally.
    Can you please let us know whether this is standard system behavior, or whether the problem is with something else?
    Please also let us know whether this is a system-impacting process.
    We are using 4.6C.
    Thank You,
    Boyeni.

    Hi Syed ,
    Greetings!!!...
    Thank you very much for your swift response!
    Could you kindly let me know which program needs to be refreshed?
    Thank you once again for your assistance.
    Boyeni.

  • T-SQL query taking more than 2 min

    Hi Friends,
    SELECT
    DATEPART(YEAR, SaleDate) AS [PrevYear],
    DATENAME(MONTH, SaleDate) AS [PrevMonth],
    SaleDate as SaleDate,
    Sum(Amount) as PrevAmount
    FROM TableA A
    WHERE SaleDate >= DATEADD(yy, DATEDIFF(yy, 0, GETDATE()) - 1, 0)
    AND SaleDate <= DATEADD(dd, -1, DATEADD(yy, DATEDIFF(yy, 0, GETDATE()), 0))
    -- resolves to '2013-12-31 00:00:00.000'
    GROUP BY
    SaleDate
    This query takes more than 2 minutes to pull the results. Basically, I am passing last year's first and last dates (they should be derived from GETDATE()).
    If I pass static values like this: WHERE SaleDate >= '2013-01-01 00:00:00.000'
          AND SaleDate <= '2013-12-31 00:00:00.000'
    then it pulls the results in a fraction of a second.
    Note: I am keeping this code in a view, and I have to use only a view (I know we can write a stored procedure for this, but I don't want an SP; I need only a view).
    Any idea how to improve my view's performance?
    Thanks,
    RK
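For what it's worth, the two DATEADD/DATEDIFF expressions in the WHERE clause resolve to the first and last day of the previous calendar year. A quick sanity check of that logic, sketched in Python purely for illustration (the view itself must of course stay in T-SQL):

```python
from datetime import date

def prev_year_bounds(today):
    """Mimic the T-SQL boundary expressions:
    DATEADD(yy, DATEDIFF(yy, 0, GETDATE()) - 1, 0)           -> Jan 1 of last year
    DATEADD(dd, -1, DATEADD(yy, DATEDIFF(yy, 0, GETDATE()), 0)) -> Dec 31 of last year
    """
    return date(today.year - 1, 1, 1), date(today.year - 1, 12, 31)

# For a "today" in 2014 this gives exactly the static literals the poster tried:
print(prev_year_bounds(date(2014, 6, 15)))
# (datetime.date(2013, 1, 1), datetime.date(2013, 12, 31))
```

Since both expressions are computed once (they do not wrap SaleDate in a function), the predicate is still sargable; the slowdown is more likely a cardinality-estimation issue than a missing index on the expression.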

    Do you have an index on the SaleDate column? If so, is it nonclustered (NCI) or clustered (CI)? How much data does the query return? Can you show the execution plan of the query?
    Best Regards, Uri Dimant, SQL Server MVP

  • Problems with failover  which is taking more than 5 min

    Hi,
    I'm new to the WebLogic world and am facing some problems with failover.
    I have the plugin configuration below.
    In the httpd.conf file
    # Configure Performance settings
    Timeout 45
    ListenBacklog 1000
    KeepAlive On
    MaxKeepAliveRequests 0
    KeepAliveTimeout 15
    <IfModule mpm_winnt.c>
    ThreadLimit 500
    ServerLimit 6
    StartServers 2
    MaxClients 3000
    MinSpareThreads 500
    MaxSpareThreads 3000
    ThreadsPerChild 500
    MaxRequestsPerChild 0
    </IfModule>
    # EnableMMAP off
    # EnableSendfile off
    # Configure DoS Settings
    # Max Body size in HTTP Request of 100Mb
    # Max HTTP headers set to 50
    # Max HTTP header size to 8Kb
    LimitRequestLine 8190
    LimitRequestFields 50
    LimitRequestFieldSize 8096
    LimitRequestBody 104857600
    RLimitCPU 300 300
    RLimitMem 8192000 8192000
    #RLimitNPROC 20 20
    # EnableMMAP off
    # EnableSendfile off
    The configuration below is for the WebLogic plugin; we have separated it out and included it in httpd.conf.
    <IfModule mod_weblogic.c>
    WebLogicCluster ********************************
    KeepAliveEnabled ON
    KeepAliveSecs 30
    CookieName PSJSESSION_ID
    DebugConfigInfo ON
    WLIOTimeoutSecs 4000
    # Debug ALL
    # WLLogFile /u01/PWCapache/logs/staging-orbit-emea.pwcinternal.com/wlproxy_http.log
    </IfModule>
    Assuming servers 01 and 02: we stopped one side (01) and tried failing over. Requests go to the 02 side absolutely fine, but the plugin is still sending requests to the 01 side, and it takes more than 5 minutes to route to the available server (02, which is up).
    My thought is to reduce WLIOTimeoutSecs from 4000 to 300 and to introduce DynamicServerList (at the moment this value is not set; I'm not sure how Apache behaves when it is not set, any suggestions on this please?), as well as to introduce ConnectTimeoutSecs (10) and ConnectRetrySecs (2).
    Could somebody please point me to a solution for this?
    Edited by: user2659864 on 14-Dec-2010 01:55

    Hi,
    If you are completely shutting down server-01, then once server-02 sends a response back to the proxy, the proxy should have a fresh list of which servers are running. So if the proxy is still sending requests to server-01, it means either the Apache processes have not shared the dynamic server list with each other, or there is a bug in the WLS plugin.
    Suggestion: try using the latest WLS plugin and see if that solves the issue; most of the time it is not actually an issue with the WLS plugin.
    And if you are suspending server-01, then the link below gives a better idea of how things work.
    Topic: 404 error with Apache and Suspended Weblogic managed server
    http://middlewaremagic.com/weblogic/?p=4567
    Secondly, by default the value of WLIOTimeoutSecs is 300 seconds, so if you don't set it, the plugin waits 300 seconds. This works together with Idempotent, which is "ON" by default: if the server is not able to send the response, the plugin fails over to the second server after waiting 300 seconds. Thus, decreasing WLIOTimeoutSecs would help in your case. More information can be found below.
    Topic: General Parameters for Web Server Plug-Ins
    http://download.oracle.com/docs/cd/E12840_01/wls/docs103/plugins/plugin_params.html#wp1143055
    Hope that would answer all your questions :)
    Regards,
    Ravish Mody
    http://middlewaremagic.com/weblogic/
    Come, Join Us and Experience The Magic…
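Putting the thread's suggestions together, the plugin stanza might end up looking something like this. This is a sketch only: the timeout values are the poster's own proposals plus the 300-second default discussed above, and nothing here has been verified against a live cluster.

```apache
<IfModule mod_weblogic.c>
    WebLogicCluster ********************************
    KeepAliveEnabled ON
    KeepAliveSecs 30
    CookieName PSJSESSION_ID
    DebugConfigInfo ON
    # Fail over sooner: 300 s matches the documented default; 4000 s was far too long
    WLIOTimeoutSecs 300
    # Let the plugin refresh its list of live servers from the cluster
    DynamicServerList ON
    # Give up on a dead server quickly and retry another one
    ConnectTimeoutSecs 10
    ConnectRetrySecs 2
</IfModule>
```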

  • Procedure is taking more than 25 hours for execution

    Hi,
    The procedure below takes more than 25 hours to execute.
    The table CA.CR_L_D has around 15 crore (150 million) records.
    Please suggest some ways to reduce the execution time.
    CREATE OR REPLACE PROCEDURE NM.L_de_pro
    IS
    Type W_Bk_1 Is Table of Number Index By Pls_Integer;
    Type W_Bk_2 Is Table of W_Bk_1 Index By Pls_Integer;
    Type W_Bk_Ct Is Table of W_Bk_2 Index By Pls_Integer;
    Type Lo_Ac Is Table of Number Index By Pls_Integer;
    Type Lo_Ac_2 Is Table of Lo_Ac Index By Pls_Integer;
    Wo_BK_Co W_Bk_Ct;
    L_L_Ac Lo_Ac_2;
    Begin
    Delete From NM.L_WO_C_B;
    For Sim in 1..10 Loop
    For j in 1..17 Loop
    Select /*+ FIRST_ROWS */ CS.LAL+CS.LNAL
    BULK COLLECT INTO Wo_BK_Co(Sim)(j)
    from CA.CR_L_D CS, NM.CR_L_D_PD PD
    Where CS.INS = PD.INS_NBR
    and PD.C_B_N <> j
    and CS.Sc = Sim;
    End Loop;
    End Loop;
    For Sim in 1..10 Loop
    For j in 1..17 Loop
    L_L_Ac(Sim)(j) := 0;
    For i in 1..Wo_BK_Co(Sim)(j).Last Loop
    L_L_Ac(Sim)(j) := L_L_Ac(Sim)(j) + Wo_BK_Co(Sim)(j)(i);
    End Loop;
    --DBMS_Output.Put_Line(L_L_Ac(Sim)(j));
    End Loop;
    Insert Into NM.L_WO_C_B
    (Sc, W_Bk_1, W_Bk_2, WO_Bkt_3, WO_Bkt_4,
    WO_Bkt_5, WO_Bkt_6, WO_Bkt_7, WO_Bkt_8, WO_Bkt_9, W_Bk_10, W_Bk_11, W_Bk_12, W_Bk_13,
    W_Bk_14, W_Bk_15, W_Bk_16, W_Bk_17)
    Select Sim, L_L_Ac(Sim)(1), L_L_Ac(Sim)(2), L_L_Ac(Sim)(3),
    L_L_Ac(Sim)(4), L_L_Ac(Sim)(5), L_L_Ac(Sim)(6),
    L_L_Ac(Sim)(7), L_L_Ac(Sim)(8), L_L_Ac(Sim)(9),
    L_L_Ac(Sim)(10), L_L_Ac(Sim)(11), L_L_Ac(Sim)(12),
    L_L_Ac(Sim)(13), L_L_Ac(Sim)(14), L_L_Ac(Sim)(15),
    L_L_Ac(Sim)(16), L_L_Ac(Sim)(17) From Dual;
    Commit;
    End Loop;
    End;
    /

    Well...
    No guarantees and completely untested, as I don't have your tables or data, and I don't know what indexes you have on the table, or even whether I've understood the purpose of what you are trying to do...
    CREATE OR REPLACE PROCEDURE NM.L_de_pro IS
    BEGIN
      INSERT INTO NM.L_WO_C_B
                (Sc
                ,W_Bk_1
                ,W_Bk_2
                ,WO_Bkt_3
                ,WO_Bkt_4
                ,WO_Bkt_5
                ,WO_Bkt_6
                ,WO_Bkt_7
                ,WO_Bkt_8
                ,WO_Bkt_9
                ,W_Bk_10
                ,W_Bk_11
                ,W_Bk_12
                ,W_Bk_13
                ,W_Bk_14
                ,W_Bk_15
                ,W_Bk_16
                ,W_Bk_17)
      WITH sim AS (select rownum sim from dual connect by rownum <= 10)
          ,j   AS (select rownum j from dual connect by rownum <= 17)
      SELECT sim.sim
            ,SUM(DECODE(j.j,1,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,2,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,3,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,4,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,5,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,6,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,7,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,8,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,9,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,10,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,11,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,12,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,13,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,14,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,15,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,16,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,17,cs.lal + cs.lnal))
      FROM   CA.CR_L_D CS JOIN NM.CR_L_D_PD PD ON (CS.INS = PD.INS_NBR)
                          JOIN sim ON (CS.SC = sim.sim)
                          JOIN j ON (PD.C_B_N != j.j)
      GROUP BY sim.sim;
      COMMIT;
    END;
    My understanding is that your PL/SQL code was loading a list of numbers (lal + lnal) into a 2D array (effectively making it a 3D array), then processing that array to add up the list of numbers in each location of the 2D array to store in another 2D array, and then looping through that array, inserting the records.
    Hopefully, I've got the same calculation achieved in just SQL. ;)
    Edited by: BluShadow on Oct 3, 2008 9:53 AM
    forgot some commas
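To make that loop-versus-set-based equivalence concrete, here is the same idea sketched in Python with made-up stand-in data (the column names follow the post; the values are purely illustrative):

```python
from collections import defaultdict

# Stand-in rows for the CS/PD join: (CS.Sc, PD.C_B_N, CS.LAL + CS.LNAL)
rows = [(1, 2, 10.0), (1, 3, 5.0), (1, 2, 7.5), (2, 1, 4.0)]
SIMS, BUCKETS = range(1, 3), range(1, 4)  # the post uses 1..10 and 1..17

# Loop version (what the PL/SQL does): collect a list of amounts per
# (sim, j) cell where C_B_N <> j, then sum each cell afterwards.
cells = defaultdict(list)
for sim in SIMS:
    for j in BUCKETS:
        for sc, c_b_n, amt in rows:
            if sc == sim and c_b_n != j:
                cells[(sim, j)].append(amt)
loop_totals = {cell: sum(vals) for cell, vals in cells.items()}

# Set-based version (what the single SQL statement does): one pass over
# the joined rows, accumulating a grouped sum directly.
grouped = defaultdict(float)
for sc, c_b_n, amt in rows:
    for j in BUCKETS:
        if c_b_n != j:
            grouped[(sc, j)] += amt

assert loop_totals == dict(grouped)  # same totals either way
```

The second form never materializes the intermediate lists, which is exactly the memory and time win of pushing the work into a single SQL statement.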

  • Webi query created on olap UNV 3.1,taking more than 20 min of time to execute

    While creating the universe in 3.1:
    1. All measures use the database-delegated function.
    2. I hid the L00 objects.
    The Webi query takes a long time while being created. What exactly could the error be? (I am working in Web Intelligence Rich Client.)
    Please help me out with a performance-tuning document and the steps.

    Check this thread. I think you asked almost the same question there as well.
    http://scn.sap.com/thread/3268487

  • Firefox is taking more than 10 mins or never loads the home page on Mac OS X 10.6.3 (10D573). Please advise, thanks

    Firefox never loads the home page on Mac OS X 10.6.3 (10D573),
    or the connection times out.
    I have no problem with Safari or IE; they are fast, practically instant. Thanks!
    == This happened ==
    Every time Firefox opened
    == Few months back ==
    == User Agent ==
    Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_3; en-us) AppleWebKit/531.22.7 (KHTML, like Gecko) Version/4.0.5 Safari/531.22.7

    I encountered the same type of problem: Firefox running terribly slowly and slowing down my entire machine (Core i5 with 256 GB SSD). Searching the forums, I found a couple of things about troubleshooting performance issues, one of which was hardware acceleration, which is on by default. It was turned on on my PC, so I tried deactivating it, and it worked!
    So doing the exact opposite of what Mozilla support said solved the problem. It is really a pain now to work with Firefox. I'm using it because I have no choice, but I'd recommend IE and Chrome over Firefox... Whatever, the market will decide once Firefox has become too crappy...

  • Query taking more than 1/2 hour for 80 million rows in fact table

    Hi All,
    I am stuck on this query, as it takes more than 35 minutes to execute for 80 million rows. My SLA is less than 30 minutes for 160 million rows, i.e. double that number.
    Below is the query and the Execution Plan.
    SELECT txn_id AS txn_id,
    acntng_entry_src AS txn_src,
    f.hrarchy_dmn_id AS hrarchy_dmn_id,
    f.prduct_dmn_id AS prduct_dmn_id,
    f.pstng_crncy_id AS pstng_crncy_id,
    f.acntng_entry_typ AS acntng_entry_typ,
    MIN (d.date_value) AS min_val_dt,
    GREATEST (MAX (d.date_value),
    LEAST ('07-Feb-2009', d.fin_year_end_dt))
    AS max_val_dt
    FROM Position_Fact f, Date_Dimension d
    WHERE f.val_dt_dmn_id = d.date_dmn_id
    GROUP BY txn_id,
    acntng_entry_src,
    f.hrarchy_dmn_id,
    f.prduct_dmn_id,
    f.pstng_crncy_id,
    f.acntng_entry_typ,
    d.fin_year_end_dt
    Execution Plan is as:
    11 HASH JOIN Cost: 914,089 Bytes: 3,698,035,872 Cardinality: 77,042,414      
                                                                                    9 TABLE ACCESS FULL TABLE Date_Dimension Cost: 29 Bytes: 94,960 Cardinality: 4,748
                                                                                    10 TABLE ACCESS FULL TABLE Position_Fact Cost: 913,693 Bytes: 2,157,187,592 Cardinality: 77,042,414
    Kindly suggest how to make it faster.
    Regards,
    Sid

    The above is just the part of the query that takes the most time.
    Kindly find the entire query and the plan below:
    WITH MIN_MX_DT
    AS
    ( SELECT
    TXN_ID AS TXN_ID,
    ACNTNG_ENTRY_SRC AS TXN_SRC,
    F.HRARCHY_DMN_ID AS HRARCHY_DMN_ID,
    F.PRDUCT_DMN_ID AS PRDUCT_DMN_ID,
    F.PSTNG_CRNCY_ID AS PSTNG_CRNCY_ID,
    F.ACNTNG_ENTRY_TYP AS ACNTNG_ENTRY_TYP,
    MIN (D.DATE_VALUE) AS MIN_VAL_DT,
    GREATEST (MAX (D.DATE_VALUE), LEAST (:B1, D.FIN_YEAR_END_DT))
    AS MAX_VAL_DT
    FROM
    proj_PSTNG_FCT F, proj_DATE_DMN D
    WHERE
    F.VAL_DT_DMN_ID = D.DATE_DMN_ID
    GROUP BY
    TXN_ID,
    ACNTNG_ENTRY_SRC,
    F.HRARCHY_DMN_ID,
    F.PRDUCT_DMN_ID,
    F.PSTNG_CRNCY_ID,
    F.ACNTNG_ENTRY_TYP,
    D.FIN_YEAR_END_DT),
    SLCT_RCRDS
    AS (
    SELECT
    M.TXN_ID,
    M.TXN_SRC,
    M.HRARCHY_DMN_ID,
    M.PRDUCT_DMN_ID,
    M.PSTNG_CRNCY_ID,
    M.ACNTNG_ENTRY_TYP,
    D.DATE_VALUE AS VAL_DT,
    D.DATE_DMN_ID,
    D.FIN_WEEK_NUM AS FIN_WEEK_NUM,
    D.FIN_YEAR_STRT AS FIN_YEAR_STRT,
    D.FIN_YEAR_END AS FIN_YEAR_END
    FROM
    MIN_MX_DT M, proj_DATE_DMN D
    WHERE
    D.HOLIDAY_IND = 0
    AND D.DATE_VALUE >= MIN_VAL_DT
    AND D.DATE_VALUE <= MAX_VAL_DT),
    DLY_HDRS
    AS (
    SELECT
    S.TXN_ID AS TXN_ID,
    S.TXN_SRC AS TXN_SRC,
    S.DATE_DMN_ID AS VAL_DT_DMN_ID,
    S.HRARCHY_DMN_ID AS HRARCHY_DMN_ID,
    S.PRDUCT_DMN_ID AS PRDUCT_DMN_ID,
    S.PSTNG_CRNCY_ID AS PSTNG_CRNCY_ID,
    SUM (
    DECODE (
    PNL_TYP_NM,
    :B5, DECODE (NVL (F.PSTNG_TYP, :B2),
    :B2, NVL (F.PSTNG_AMNT, 0) * (-1),
    NVL (F.PSTNG_AMNT, 0)),
    0))
    AS MTM_AMT,
    NVL (
    LAG (
    SUM (
    DECODE (
    PNL_TYP_NM,
    :B5, DECODE (NVL (F.PSTNG_TYP, :B2),
    :B2, NVL (F.PSTNG_AMNT, 0) * (-1),
    NVL (F.PSTNG_AMNT, 0)),
    0)))
    OVER (
    PARTITION BY S.TXN_ID,
    S.TXN_SRC,
    S.HRARCHY_DMN_ID,
    S.PRDUCT_DMN_ID,
    S.PSTNG_CRNCY_ID
    ORDER BY S.VAL_DT),
    0)
    AS YSTDY_MTM,
    SUM (
    DECODE (
    PNL_TYP_NM,
    :B4, DECODE (NVL (F.PSTNG_TYP, :B2),
    :B2, NVL (F.PSTNG_AMNT, 0) * (-1),
    NVL (F.PSTNG_AMNT, 0)),
    0))
    AS CASH_AMT,
    SUM (
    DECODE (
    PNL_TYP_NM,
    :B3, DECODE (NVL (F.PSTNG_TYP, :B2),
    :B2, NVL (F.PSTNG_AMNT, 0) * (-1),
    NVL (F.PSTNG_AMNT, 0)),
    0))
    AS PAY_REC_AMT,
    S.VAL_DT,
    S.FIN_WEEK_NUM,
    S.FIN_YEAR_STRT,
    S.FIN_YEAR_END,
    NVL (TRUNC (F.REVSN_DT), S.VAL_DT) AS REVSN_DT,
    S.ACNTNG_ENTRY_TYP AS ACNTNG_ENTRY_TYP
    FROM
    SLCT_RCRDS S,
    proj_PSTNG_FCT F,
    proj_ACNT_DMN AD,
    proj_PNL_TYP_DMN PTD
    WHERE
    S.TXN_ID = F.TXN_ID(+)
    AND S.TXN_SRC = F.ACNTNG_ENTRY_SRC(+)
    AND S.HRARCHY_DMN_ID = F.HRARCHY_DMN_ID(+)
    AND S.PRDUCT_DMN_ID = F.PRDUCT_DMN_ID(+)
    AND S.PSTNG_CRNCY_ID = F.PSTNG_CRNCY_ID(+)
    AND S.DATE_DMN_ID = F.VAL_DT_DMN_ID(+)
    AND S.ACNTNG_ENTRY_TYP = F.ACNTNG_ENTRY_TYP(+)
    AND SUBSTR (AD.ACNT_NUM, 0, 1) IN (1, 2, 3)
    AND NVL (F.ACNT_DMN_ID, 1) = AD.ACNT_DMN_ID
    AND NVL (F.PNL_TYP_DMN_ID, 1) = PTD.PNL_TYP_DMN_ID
    GROUP BY
    S.TXN_ID,
    S.TXN_SRC,
    S.DATE_DMN_ID,
    S.HRARCHY_DMN_ID,
    S.PRDUCT_DMN_ID,
    S.PSTNG_CRNCY_ID,
    S.VAL_DT,
    S.FIN_WEEK_NUM,
    S.FIN_YEAR_STRT,
    S.FIN_YEAR_END,
    TRUNC (F.REVSN_DT),
    S.ACNTNG_ENTRY_TYP,
    F.TXN_ID)
    SELECT
    D.TXN_ID,
    D.VAL_DT_DMN_ID,
    D.REVSN_DT,
    D.TXN_SRC,
    D.HRARCHY_DMN_ID,
    D.PRDUCT_DMN_ID,
    D.PSTNG_CRNCY_ID,
    D.YSTDY_MTM,
    D.MTM_AMT,
    D.CASH_AMT,
    D.PAY_REC_AMT,
    MTM_AMT + CASH_AMT + PAY_REC_AMT AS DLY_PNL,
    SUM (
    MTM_AMT + CASH_AMT + PAY_REC_AMT)
    OVER (
    PARTITION BY D.TXN_ID,
    D.TXN_SRC,
    D.HRARCHY_DMN_ID,
    D.PRDUCT_DMN_ID,
    D.PSTNG_CRNCY_ID,
    D.FIN_WEEK_NUM || D.FIN_YEAR_STRT || D.FIN_YEAR_END
    ORDER BY D.VAL_DT)
    AS WTD_PNL,
    SUM (
    MTM_AMT + CASH_AMT + PAY_REC_AMT)
    OVER (
    PARTITION BY D.TXN_ID,
    D.TXN_SRC,
    D.HRARCHY_DMN_ID,
    D.PRDUCT_DMN_ID,
    D.PSTNG_CRNCY_ID,
    D.FIN_YEAR_STRT || D.FIN_YEAR_END
    ORDER BY D.VAL_DT)
    AS YTD_PNL,
    D.ACNTNG_ENTRY_TYP AS ACNTNG_PSTNG_TYP,
    'EOD ETL' AS CRTD_BY,
    SYSTIMESTAMP AS CRTN_DT,
    NULL AS MDFD_BY,
    NULL AS MDFCTN_DT
    FROM
    DLY_HDRS D
    Plan
    SELECT STATEMENT ALL_ROWSCost: 11,950,256 Bytes: 3,369,680,886 Cardinality: 7,854,734
    25 WINDOW SORT Cost: 11,950,256 Bytes: 3,369,680,886 Cardinality: 7,854,734
    24 WINDOW SORT Cost: 11,950,256 Bytes: 3,369,680,886 Cardinality: 7,854,734
    23 VIEW Cost: 10,519,225 Bytes: 3,369,680,886 Cardinality: 7,854,734
    22 WINDOW BUFFER Cost: 10,519,225 Bytes: 997,551,218 Cardinality: 7,854,734
    21 SORT GROUP BY Cost: 10,519,225 Bytes: 997,551,218 Cardinality: 7,854,734
    20 HASH JOIN Cost: 10,296,285 Bytes: 997,551,218 Cardinality: 7,854,734
    1 TABLE ACCESS FULL TABLE proj_PNL_TYP_DMN Cost: 3 Bytes: 45 Cardinality: 5
    19 HASH JOIN Cost: 10,296,173 Bytes: 2,695,349,628 Cardinality: 22,841,946
    5 VIEW VIEW index$_join$_007 Cost: 3 Bytes: 84 Cardinality: 7
    4 HASH JOIN
    2 INDEX FAST FULL SCAN INDEX (UNIQUE) proj_ACNT_DMN_PK Cost: 1 Bytes: 84 Cardinality: 7
    3 INDEX FAST FULL SCAN INDEX (UNIQUE) proj_ACNT_DMN_UNQ Cost: 1 Bytes: 84 Cardinality: 7
    18 HASH JOIN RIGHT OUTER Cost: 10,293,077 Bytes: 68,925,225,244 Cardinality: 650,237,974
    6 TABLE ACCESS FULL TABLE proj_PSTNG_FCT Cost: 913,986 Bytes: 4,545,502,426 Cardinality: 77,042,414
    17 VIEW Cost: 7,300,017 Bytes: 30,561,184,778 Cardinality: 650,237,974
    16 MERGE JOIN Cost: 7,300,017 Bytes: 230,184,242,796 Cardinality: 650,237,974
    8 SORT JOIN Cost: 30 Bytes: 87,776 Cardinality: 3,376
    7 TABLE ACCESS FULL TABLE proj_DATE_DMN Cost: 29 Bytes: 87,776 Cardinality: 3,376
    15 FILTER
    14 SORT JOIN Cost: 7,238,488 Bytes: 25,269,911,792 Cardinality: 77,042,414
    13 VIEW Cost: 1,835,219 Bytes: 25,269,911,792 Cardinality: 77,042,414
    12 SORT GROUP BY Cost: 1,835,219 Bytes: 3,698,035,872 Cardinality: 77,042,414
    11 HASH JOIN Cost: 914,089 Bytes: 3,698,035,872 Cardinality: 77,042,414
    9 TABLE ACCESS FULL TABLE proj_DATE_DMN Cost: 29 Bytes: 94,960 Cardinality: 4,748
    10 TABLE ACCESS FULL TABLE proj_PSTNG_FCT Cost: 913,693 Bytes: 2,157,187,592 Cardinality: 77,042,414

  • Data Load taking more than 24 hrs

    Hi All,
    We are in the process of loading data from R/3 to BW. It takes more than 24 hours to load one year of data, because of the complexity of the ABAP code. We have tried every possible way to improve the performance of the code, but no luck.
    If the same thing happens in the production system, how should I proceed with the data load, given that it takes more than 24 hours for one year of data?
    I am planning to run an init without data transfer first and then a full load.
    Please correct me if I am wrong.
    Thanks,
    RS.

    Hi,
    Where is your ABAP code complexity located, in R/3 or in BW?
    Are you talking about loading into an empty cube taking a long time? Extraction time?
    Analyze the different steps in your monitor and tell us where the bottleneck is.
    If you already know the above and have performed all the tunings (e.g. number-range buffering when filling an empty cube), then you're correct: init without data transfer and then full loads.
    As suggested, you could segment your full loads and even run them in parallel.
    Hope this helps...
    Olivier.

  • Update table command is taking more than 40 seconds

    When I fire an update table command, it takes more than 40 seconds.
    In the explain plan, the physical reads are around 250,000.
    Thanks,
    Mohammed

    If you update a billion rows, it can appear fast. If you update one row, it can appear slow.
    Without seeing the explain plan, or the query either, it's difficult to offer valuable advice.
    Nicolas.

  • HT201274 My iPhone 4 is taking more than 5 hours to erase all the data and is still in process; how much more time do I have to wait for my mobile to turn on?

    My iPhone 4 is taking more than 5 hours to erase all the data and is still in process. How much more time do I have to wait for my mobile to turn on?

    I'm having this EXACT same problem with my iPhone 4, and I have the same computer stats (I have a Samsung Series 7)

  • Bought an iPod nano 6th generation and it is not playing for more than 2 min?

    Bought an iPod nano 6th generation and it is not playing for more than 2 minutes?

    The most likely problem is that the headset is not inserted fully. When you insert the headset, you should feel and hear a "Click" to let you know it is inserted all the way.
    The problem is that if it is not inserted all the way, the nano does not detect it. If it is not detected, it will pause playback when the screen goes dark. This is to keep your battery from draining when there is nothing connected to listen to the music.

  • On my 3rd-generation iPad, even though the battery is at 100%, the screen goes black after sitting for more than 15 mins. The only way to get it to come back is by resetting the iPad. Is there a fix for this? It all started after the last two updates.

    On my 3rd-generation iPad, even though the battery is at 100%, the screen goes black after sitting for more than 15 minutes. The only way to get it to come back is by resetting the iPad. Is there a fix for this? It all started after the last two updates.

    Try a Reset...
    Press and hold the Sleep/Wake button and the Home button at the same time for at least ten seconds, until the Apple logo appears. Release the Buttons.
    If no joy... Try a Restore...
    1: Connect the device to your computer and open iTunes.
    2: If the device appears in iTunes, select and click Restore on the Summary pane.
    3: If the device doesn't appear in iTunes, try using the steps in this article to force the device into recovery mode.
    From Here
    Unresponsive iPad
    http://support.apple.com/kb/TS3281

  • 6 million + records, query takes more than 50 min to execute.

    Hi
    I am trying to get records from a table which has more than 6 million records.
    The values in the column IND can be:
    NULL
    '0'
    '1'
    and other values like 'A', 'B', '6'.
    The data type of IND is VARCHAR.
    I want all the records where the value is anything other than NULL, '0', or '1'.
    I tried this simple query
    SELECT ID, IND
    FROM tablename
    WHERE
    IND IS NOT NULL
    AND IND <> '0'
    AND IND <> '1'
    Now this query takes more than 30-40 minutes. Is there a way I can speed up this query? Also, I can't index the column.
    Any suggestions?
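One small aside on the predicate itself: under SQL's three-valued logic, IND <> '0' already filters out NULLs (NULL <> '0' evaluates to UNKNOWN, not TRUE), so the IS NOT NULL test is redundant, though harmless. A quick demonstration using an in-memory SQLite table (SQLite here purely for illustration; the poster's database is unknown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, ind TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, None), (2, '0'), (3, '1'), (4, 'A'), (5, '6')])

# With and without the explicit NULL check: same rows come back,
# because NULL <> '0' is UNKNOWN and UNKNOWN rows are not returned.
with_null_check = conn.execute(
    "SELECT id FROM t WHERE ind IS NOT NULL AND ind <> '0' AND ind <> '1'"
).fetchall()
without_null_check = conn.execute(
    "SELECT id FROM t WHERE ind <> '0' AND ind <> '1'"
).fetchall()
print(with_null_check, without_null_check)
# [(4,), (5,)] [(4,), (5,)]
```

Dropping the redundant condition won't change the full-scan cost, but it keeps the intent of the query clearer.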

    I don't know anything about your tables or hardware (nor your Oracle version, because you didn't post it), but 30-40 minutes seems excessive for a full table scan of only 6 million rows.
    On my lowly test instance, this full table scan takes a little over a minute:
    | Id  | Operation          | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |                       |     1 |     4 |  4800   (2)| 00:01:11 |
    |   1 |  SORT AGGREGATE    |                       |     1 |     4 |            |          |
    |   2 |   TABLE ACCESS FULL| ATLAS_SALES_HISTORY   |  6618K|    25M|  4800   (2)| 00:01:11 |
    Statistics
            631  recursive calls
              0  db block gets
          55740  consistent gets
          55609  physical reads
              0  redo size
            415  bytes sent via SQL*Net to client
            346  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
             15  sorts (memory)
              0  sorts (disk)
               1  rows processed
    Are you pulling all 6 million rows across the network to your client machine? (And waiting for the rows to scroll?)

  • Sync and Create project operation from DTR is taking more than one hour

    Hi All.
    Recently the Basis team implemented the track for the ESS/MSS application, so when we import the track into NWDS it shows 500 DCs.
    I have successfully done the Sync and Create Project operations from DTR for 150 DCs, and they take about 5 minutes per DC.
    However, after that, when I try to sync a DC or create a project from DTR, the operation takes more than 3 hours per DC. This should not be the case, because for the first 150 DCs the Sync and Create Project operations from DTR took hardly 5 minutes per DC. As the operation was taking so much time, I finally closed NWDS to stop it.
    I am using NWDS 2.0.15, an EP 7.0 portal at SP15, and NWDI 7.0.
    Can anybody tell me how to solve this issue, so that I can Sync and Create Project from DTR for a DC within 5 minutes?
    Thanks
    Susmita

    Hi Susmita,
    If the DCs are fine in the CBS build, then I feel there is no need to test all of them locally in NWDS.
    You can verify certain applications in these DCs: sync and create projects for those DCs, then test-run the applications.
    As I understand it, you only need to check them (no changes will be made), so yes, you can verify them in small groups (say 20-25 DCs per group) in different workspaces, so that no workspace is overloaded.
    But why keep a copy of them locally when you are not making any changes? You can Unsync and Remove these projects once they are verified, and use the same workspace to work on the next set of DCs.
    Hope this clarifies your concerns.
    Kind Regards,
    Nitin
    Edited by: Nitin Jain on Apr 23, 2009 1:55 PM
