Procedure is taking too long to execute

Hi,
when I execute the procedure below, it takes a long time to run.
Can you please suggest possible ways to tune the query?
PROCEDURE sp_sel_cntr_ri_fact (
   po_cntr_ri_fact_cursor OUT t_cursor
)
IS
BEGIN
   -- Returns every country RI factor plus a Yes/No flag showing whether
   -- it is referenced by any of the analysis/calibration tables.
   OPEN po_cntr_ri_fact_cursor FOR
      SELECT c_ri_fact_id, c_ri_fact_code, c_ri_fact_nme,
             CASE
                WHEN EXISTS (SELECT 'x' FROM A_CRF_PARAM_CALIB t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id) THEN 'Yes'
                WHEN EXISTS (SELECT 'x' FROM A_EMPI_ERV_CALIB_DETAIL t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id) THEN 'Yes'
                WHEN EXISTS (SELECT 'x' FROM A_IC_CNTRY_IC_CRF_MPG_DTL t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id) THEN 'Yes'
                WHEN EXISTS (SELECT 'x' FROM A_IC_CRF_CNTRYIDX_MPG_DTL t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id) THEN 'Yes'
                WHEN EXISTS (SELECT 'x' FROM A_IC_CRF_RESI_COR t WHERE t.x_axis_c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id) THEN 'Yes'
                WHEN EXISTS (SELECT 'x' FROM A_IC_CRF_RESI_COR t WHERE t.y_axis_c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id) THEN 'Yes'
                WHEN EXISTS (SELECT 'x' FROM A_PAR_MACRO_GAMMA_PRIME_CALIB t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id) THEN 'Yes'
                WHEN EXISTS (SELECT 'x' FROM D_ANALYSIS_FACT t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id) THEN 'Yes'
                WHEN EXISTS (SELECT 'x' FROM D_CALIB_CNTRY_RI_FACTOR t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id) THEN 'Yes'
                WHEN EXISTS (SELECT 'x' FROM E_BUSI_PORT_DTL t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id) THEN 'Yes'
                WHEN EXISTS (SELECT 'x' FROM E_CNTRY_LOSS_DIST_RSLT t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id) THEN 'Yes'
                WHEN EXISTS (SELECT 'x' FROM E_CNTRY_LOSS_RSLT t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id) THEN 'Yes'
                WHEN EXISTS (SELECT 'x' FROM E_CRF_BUS_PORTFOL_CRITERIA t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id) THEN 'Yes'
                WHEN EXISTS (SELECT 'x' FROM E_CRF_CORR_RSLT t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id) THEN 'Yes'
                WHEN EXISTS (SELECT 'x' FROM E_HYPO_PORTF_DTL t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id) THEN 'Yes'
                ELSE 'No'
             END used_analysis_ind,
             creation_date, datetime_stamp, user_id
      FROM A_IC_CNTR_RI_FACT
      ORDER BY c_ri_fact_id_nme DESC;
END sp_sel_cntr_ri_fact;

[When your query takes too long...|http://forums.oracle.com/forums/thread.jspa?messageID=1812597]
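
One pattern that often helps here, as a sketch only (the table and column names are taken from the post above; verify them against your schema, and note the assumption that the probed columns are not yet indexed): first make sure every child table has an index on its c_ri_fact_id column, then collapse the fifteen correlated EXISTS probes into a single semi-join over a UNION ALL, so each outer row is checked by one subquery instead of up to fifteen:

-- One index per probed column, e.g. (repeat for each child table):
CREATE INDEX ix_crf_param_calib_fact ON A_CRF_PARAM_CALIB (c_ri_fact_id);

-- Single semi-join replacing the fifteen-branch CASE:
SELECT f.c_ri_fact_id, f.c_ri_fact_code, f.c_ri_fact_nme,
       CASE WHEN EXISTS
            (SELECT NULL
             FROM (SELECT c_ri_fact_id FROM A_CRF_PARAM_CALIB
                   UNION ALL
                   SELECT c_ri_fact_id FROM A_EMPI_ERV_CALIB_DETAIL
                   UNION ALL
                   SELECT x_axis_c_ri_fact_id FROM A_IC_CRF_RESI_COR
                   UNION ALL
                   SELECT y_axis_c_ri_fact_id FROM A_IC_CRF_RESI_COR
                   -- ... one UNION ALL branch per remaining child table ...
                  ) u
             WHERE u.c_ri_fact_id = f.c_ri_fact_id)
            THEN 'Yes' ELSE 'No'
       END used_analysis_ind,
       f.creation_date, f.datetime_stamp, f.user_id
FROM A_IC_CNTR_RI_FACT f
ORDER BY f.c_ri_fact_id_nme DESC;

Compare the execution plans of both versions with DBMS_XPLAN before adopting the rewrite; if most rows match on the first probe, the original short-circuiting CASE can actually be cheaper.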

Similar Messages

  • ME2O taking more time for execution

    It has been found that ME2O sometimes takes a very long time to execute.
    If you provide the component number, the search time is somewhat shorter. But execution can still take longer than expected, because the standard SAP program searches all deliveries, even those that are already completed.
    SAP Note 1815460 (ME2O: Selection of delivery very slow) will help to improve the execution time a lot.
    Regards,
    Krishnendu.

    Thanks for sharing this information.

  • We are running a report; it is taking a long time to execute. What steps can we take?

    We are running a report and it is taking a long time to execute. What steps can we take to reduce the execution time?

    Hi,
    Performance can be improved in many ways:
    First, try to select based on the key fields if it is a very large table.
    If that is not possible, create a secondary index for the selection.
    Don't perform SELECTs inside a loop; use FOR ALL ENTRIES IN instead.
    Try to perform many operations in one loop rather than running several loops over the same internal table.
    All of these and many more steps can be implemented to improve performance.
    We would need to look at your code to see how it can be improved in your case.
    Regards,
    Vivek Shah

  • ADF application taking more time the first time and less the second time

    Hi Experts,
    We are using ADF 11.1.1.2.
    Our application contains 5 JSP pages, 10-12 task flows, and 50 JSFF pages.
    The first time in the day that we use the application, some actions take more than 60 seconds.
    From then onwards they take 5 to 6 seconds.
    The same thing happens daily.
    Can anyone tell me why this application takes more time the first time and less time from the second time onwards?
    Regards
    Gayaz

    Hi,
    If you don't restart your WLS every day, then you should read about tuning Application Module pools and connection pools:
    http://docs.oracle.com/cd/E15523_01/web.1111/b31974/bcampool.htm#sm0301
    Pay attention to the parameters Maximum Available Size and Minimum Available Size:
    http://docs.oracle.com/cd/E15523_01/web.1111/b31974/bcampool.htm#sm0314
    and adjust them to suit your needs.

  • Self Service Password Registration Page taking more time for loading in FIM 2010 R2

    Hi,
    I have successfully installed FIM 2010 R2 SSPR and it is working fine,
    but the Self Service Password Registration page takes a long time to load when I provide Windows credentials: approximately 50 to 60 seconds in FIM 2010 R2.
    This is a very urgent requirement.
    Regards
    Anil Kumar

    Double check that the objectSid, accountname and domain are populated for the users in the FIM portal, and that each user is connected to their AD counterpart.
    Check here for more info:
    http://social.technet.microsoft.com/wiki/contents/articles/20213.troubleshooting-fim-sspr-error-3003-the-current-user-account-is-not-recognized-by-forefront-identity-manager-please-contact-your-help-desk-or-system-administrator.aspx

  • Taking more time for retrieving data from nested table

    Hi,
    We have two databases, db1 and db2; db2 contains a number of nested tables.
    There is a database link between the two databases, and whenever we fire any query in db1 it internally accesses the nested tables in db2.
    Fetching records takes much more time even though there are few records in the tables. What could be the reason?
    Please help; we are facing this problem daily.

    Please avoid duplicate threads:
    quaries taking more time
    Nicolas.
    +< mod. action : thread locked>+

  • Taking more time for loading Real Cost estimates

    Dear Experts,
    It is taking more time to load data into the cube CO-PC: Product Cost Planning - Released Cost Estimates (0COPC_C09). The update mode is "Full Update". There are only 105607 records; other areas have more records than this, yet they load easily.
    I have this problem only with 0COPC_C09. Could anybody guide me?
    Rgds
    ACE

    suresh.ratnaji wrote:
    NAME                                 TYPE        VALUE
    _optimizer_cost_based_transformation string      OFF
    filesystemio_options                 string      asynch
    object_cache_optimal_size            integer     102400
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.4
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      choose
    optimizer_secure_view_merging        boolean     TRUE
    plsql_optimize_level                 integer     2
    please let me know why it is taking more time with an INDEX RANGE SCAN compared to the full table scan?

    Suresh,
    Any particular reason why you have a non-default value for a hidden parameter, _optimizer_cost_based_transformation?
    On my 10.2.0.1 database, its default value is "linear". What happens when you reset the hidden parameter to its default?
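    One way to test this, sketched below (underscore parameters should normally only be changed under Oracle Support's guidance; scoping the change to the session keeps it reversible):
       -- Restore the session to the 10.2 default, then re-check the plan:
       ALTER SESSION SET "_optimizer_cost_based_transformation" = 'linear';
       EXPLAIN PLAN FOR SELECT ...;   -- substitute the slow statement here
       SELECT * FROM TABLE(dbms_xplan.display);
    If the plan flips back to the full table scan and runs faster, the hidden parameter was the cause.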

  • Recover Database is taking more time for first archived redo log file

    Hi,
    Environment Used :
    Hardware : IBM p570 machine with P6 processor Lpar of .5 CPU and 2.5 GB Ram
    OS : AIX 5.3 ML 07
    Cluster: HACMP 5.4.1.2
    Oracle Version: 9.2.0.4 RAC
    SAN : DS8100 from IBM
    I used the flash copy option to copy the database from production to the test machine, then tried to recover the database to a consistent state using the command "recover automatic database until cancel". The system was taking a long time, and the alert log showed that while applying the first archived redo log it reads all the datafiles, at about 3 seconds per datafile. Since I have more than 500 datafiles, applying the first archived redo log file takes nearly 25 minutes. All other log files are applied immediately without any delay. Any suggestion to improve the speed would be highly appreciated.
    Regards
    Sridhar

    After changing the LPAR settings to 2 CPUs and 5 GB RAM, the problem was solved.

  • Query on DSO is taking more time for current month

    Hi All,
    I have a query built on a DSO that runs fine for every month; the problem is only with the current month. For April it takes more than 15 minutes to run even though it has fewer records; for other months it takes less than one second. Could you help me resolve the issue?
    Regards,
    J B

    Hi JB,
    We are facing the same problem. Last month users ran it for 01.03.2011 to 15.03.2011 around 17.03.2011; even though the data volume was small, it ran slowly or sometimes did not return at all. This month it is slow for 01.04.2011 to 11.04.2011.
    It also seems odd to me that this started happening after the latest patch implementation; I am not exactly sure why it happens.
    The fix we applied was to restrict the date in the backend to 01.01.1900 - 31.12.9999 and then allow a filter on the same date again. Now the same date range runs fine.
    Also try entering 01.01.2011 - 31.12.2011 and check whether it runs fast; for us it runs normally.
    Regards
    vamsi

  • Re : BSP App. taking More Time for Processing

    Hi,
    We are using WAS 620 and created a BSP application to create trips through BDCs.
    In our development system it runs fine, but in our quality system posting takes more than 10 minutes.
    When we check via SM04, the ICM session and ICM request session are present for a long time.
    Is there any ICM setting relevant to executing BDCs/posting programs?
    Looking for help from you all to solve this issue.
    Thanks and Regards,
    K Vijayasekar.

    Check out this weblog to get you started:
    /people/mark.finnern/blog/2003/09/24/bsp-in-depth-confusion-between-stateless-stateful-and-authentication

  • Taking more time for job execution

    Hi experts,
    There is one job that executes daily.
    Generally it takes around 5 minutes,
    but yesterday it took 50 minutes.
    Can anybody tell me why this happened on that particular day?

    Hi,
    You cannot tell exactly why it happened; it depends on several factors:
    1. the server may have been down
    2. the data volume may have been larger
    Also, even with the same data the elapsed time can differ; it depends on the load on the work processes. If a work process is free the job may finish quickly, otherwise it may be delayed.
    You can check the work process overview in SM50.
    <REMOVED BY MODERATOR>
    Edited by: Alvaro Tejada Galindo on Jun 12, 2008 4:12 PM

  • Every 3rd data package taking long time for execution

    Hi everyone,
    We are facing a strange situation. Our scenario involves a full load from a DSO to a cube.
    The start routines are not very database-intensive, and care has been taken to write them in an optimized way.
    But strangely, every 3rd data package takes exceptionally longer than the other data packages.
    a) The DTP has 3 parallel processes.
    b) The time spent in extraction, rules, and update is constant for every data package.
    c) The start routine time is larger for every 3rd data package and keeps increasing, e.g. 5 min, 10 min, 24 min, 33 min for each successive 3rd package.
    I tried to analyze the data that took so much time but found no difference between the normal and the slow data packages (i.e. there was no logical difference in the data that would make the start routine behave like this).
    I wonder what the possible reasons could be; perhaps some external system factors are responsible. Any help in this regard would be highly appreciated.

    Hi Hemanth,
    In your start routine, are you by any chance adding or multiplying records in SOURCE_PACKAGE? Something like copying the source package into an internal table, adding records to the internal table, and then copying it back to the source package? If logic of this sort is in your start routine, you need to refresh the internal table; otherwise its records keep accumulating with every data package, so the processing time increases as the load progresses. This is one common mistake I have seen. Please check your code for something like that, refresh the internal tables, and see if it makes any difference.
    Thanks and Regards
    Subray Hegde

  • Query using progressive relaxation takes more time for execution

    Hi gurus,
    I am creating a query using a context index and progressive relaxation.
    I started using progressive relaxation after getting inputs from the forum thread {thread:id=2333942}. With progressive relaxation, every query takes more than 7 seconds. Is there any way we can improve the performance of the query?
    create table test_sh4 (text1 clob,text2 clob,text3 clob);
    begin
       ctx_ddl.create_preference ('nd_mcd', 'multi_column_datastore');
       ctx_ddl.set_attribute
          ('nd_mcd',
           'columns',
           'replace (text1, '' '', '''') nd1,
            text1 text1,
            replace (text2, '' '', '''') nd2,
            text2 text2');
       ctx_ddl.create_preference ('test_lex1', 'basic_lexer');
       ctx_ddl.set_attribute ('test_lex1', 'whitespace', '/\|-_+');
       ctx_ddl.create_section_group ('test_sg', 'basic_section_group');
       ctx_ddl.add_field_section ('test_sg', 'text1', 'text1', true);
       ctx_ddl.add_field_section ('test_sg', 'nd1', 'nd1', true);
       ctx_ddl.add_field_section ('test_sg', 'text2', 'text2', true);
       ctx_ddl.add_field_section ('test_sg', 'nd2', 'nd2', true);
    end;
    create index IX_test_sh4 on test_sh4 (text3)   indextype is ctxsys.context   parameters    ('datastore     nd_mcd   lexer test_lex1 section group     test_sg') ;
    alter index IX_test_sh4 REBUILD PARAMETERS ('REPLACE SYNC (ON COMMIT)') ;-- sync index on every commit.
    SELECT SCORE(1) score,t.* FROM test_sh4 t WHERE CONTAINS (text3,  '
    <query>
    <textquery>
    <progression>
    <seq>{GIFT GRILL STAPLES CARD} within text1</seq>
    <seq>{GIFTGRILLSTAPLESCARD} within nd1</seq>
    <seq>{GIFT GRILL STAPLES CARD} within text2</seq>
    <seq>{GIFTGRILLSTAPLESCARD} within nd2</seq>
    <seq>((%GIFT% and %GRILL% and %STAPLES% and %CARD%)) within text1</seq>
    <seq>((%GIFT% and %GRILL% and %STAPLES% and %CARD%)) within text2</seq>
    <seq>((%GIFT% and %GRILL% and %STAPLES%) or (%GRILL% and %STAPLES% and %CARD%) or (%GIFT% and %STAPLES% and %CARD%) or (%GIFT% and %GRILL% and %CARD%)) within text1</seq>
    <seq>((%GIFT% and %GRILL% and %STAPLES%) or (%GRILL% and %STAPLES% and %CARD%) or (%GIFT% and %STAPLES% and %CARD%) or (%GIFT% and %GRILL% and %CARD%)) within text2</seq>
    <seq>((%STAPLES% and %CARD%) or (%GIFT% and %GRILL%) or (%GRILL% and %CARD%) or (%GIFT% and %CARD%) or (%GIFT% and %STAPLES%) or (%GRILL% and %STAPLES%)) within text1</seq>
    <seq>((%STAPLES% and %CARD%) or (%GIFT% and %GRILL%) or (%GRILL% and %CARD%) or (%GIFT% and %CARD%) or (%GIFT% and %STAPLES%) or (%GRILL% and %STAPLES%)) within text2</seq>
    <seq>((%GIFT% , %GRILL% , %STAPLES% , %CARD%)) within text1</seq>
    <seq>((%GIFT% , %GRILL% , %STAPLES% , %CARD%)) within text2</seq>
    <seq>((!GIFT and !GRILL and !STAPLES and !CARD)) within text1</seq>
    <seq>((!GIFT and !GRILL and !STAPLES and !CARD)) within text2</seq>
    <seq>((!GIFT and !GRILL and !STAPLES) or (!GRILL and !STAPLES and !CARD) or (!GIFT and !STAPLES and !CARD) or (!GIFT and !GRILL and !CARD)) within text1</seq>
    <seq>((!GIFT and !GRILL and !STAPLES) or (!GRILL and !STAPLES and !CARD) or (!GIFT and !STAPLES and !CARD) or (!GIFT and !GRILL and !CARD)) within text2</seq>
    <seq>((!STAPLES and !CARD) or (!GIFT and !GRILL) or (!GRILL and !CARD) or (!GIFT and !CARD) or (!GIFT and !STAPLES) or (!GRILL and !STAPLES)) within text1</seq>
    <seq>((!STAPLES and !CARD) or (!GIFT and !GRILL) or (!GRILL and !CARD) or (!GIFT and !CARD) or (!GIFT and !STAPLES) or (!GRILL and !STAPLES)) within text2</seq>
    <seq>((!GIFT , !GRILL , !STAPLES , !CARD)) within text1</seq>
    <seq>((!GIFT , !GRILL , !STAPLES , !CARD)) within text2</seq>
    <seq>((?GIFT and ?GRILL and ?STAPLES and ?CARD)) within text1</seq>
    <seq>((?GIFT and ?GRILL and ?STAPLES and ?CARD)) within text2</seq>
    <seq>((?GIFT and ?GRILL and ?STAPLES) or (?GRILL and ?STAPLES and ?CARD) or (?GIFT and ?STAPLES and ?CARD) or (?GIFT and ?GRILL and ?CARD)) within text1</seq>
    <seq>((?GIFT and ?GRILL and ?STAPLES) or (?GRILL and ?STAPLES and ?CARD) or (?GIFT and ?STAPLES and ?CARD) or (?GIFT and ?GRILL and ?CARD)) within text2</seq>
    <seq>((?STAPLES and ?CARD) or (?GIFT and ?GRILL) or (?GRILL and ?CARD) or (?GIFT and ?CARD) or (?GIFT and ?STAPLES) or (?GRILL and ?STAPLES)) within text1</seq>
    <seq>((?STAPLES and ?CARD) or (?GIFT and ?GRILL) or (?GRILL and ?CARD) or (?GIFT and ?CARD) or (?GIFT and ?STAPLES) or (?GRILL and ?STAPLES)) within text2</seq>
    <seq>((?GIFT , ?GRILL , ?STAPLES , ?CARD)) within text1</seq>
    <seq>((?GIFT , ?GRILL , ?STAPLES , ?CARD)) within text2</seq>
    </progression>
    </textquery>
    <score datatype="FLOAT" algorithm="default"/>
    </query>',1) >0 ORDER BY score(1) DESC

    Progressive relaxation works best when you're only selecting a limited number of rows. If you fetch ALL the rows which satisfy the query, then all the steps in the relaxation will have to run regardless.
    If you fetch, say, the first 10 results, and the first step of the relaxation provides 10 results, there is no need to execute the next step (in fact, due to internal buffering, that won't be exactly true, but it's conceptually correct).
    The simplest way to do this is to reword the query as
    SELECT * FROM (
      SELECT SCORE(1) score, t.* FROM test_sh4 t WHERE CONTAINS (text3, '
    <query>
    <textquery>
    <progression>
    ... the same <seq> steps as above ...
    </progression>
    </textquery>
    <score datatype="FLOAT" algorithm="default"/>
    </query>',1) > 0 ORDER BY score(1) DESC
    )
    WHERE ROWNUM <= 10
    You've discovered that leading wildcards don't work too well unless you use SUBSTRING_INDEX. I would encourage you to avoid them altogether if possible, or push them down much lower in the progressive relaxation. Usually GIFT% is a useful expression (it matches GIFTS, GIFTED, etc.); %GIFT% is generally no more effective.
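    For reference, SUBSTRING_INDEX is enabled on a wordlist preference, roughly like this (a sketch; test_wl is a made-up preference name, and a substring index makes the index larger and slower to maintain, so test before adopting it):
    begin
       ctx_ddl.create_preference ('test_wl', 'BASIC_WORDLIST');
       ctx_ddl.set_attribute ('test_wl', 'SUBSTRING_INDEX', 'TRUE');
    end;
    -- then add "wordlist test_wl" to the index's parameters string and rebuild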
    There are a lot of steps in your progressive relaxation. If you wanted to reduce the number of steps, you could change:
    <seq>((%GIFT% and %GRILL% and %STAPLES% and %CARD%)) within text1</seq>
    <seq>((%GIFT% and %GRILL% and %STAPLES% and %CARD%)) within text2</seq>
    to
    <seq>((%GIFT% and %GRILL% and %STAPLES% and %CARD%)*2) within text1 ACCUM ((%GIFT% and %GRILL% and %STAPLES% and %CARD%)) within text2</seq>
    I don't know if this would have any performance benefit, but it's worth trying to see.

  • What's the reason for taking more time

    Hi,
    I'm in production support. Previously all loads ran fine; for the past month many InfoPackages have been taking more time for the same number of records. The normal wait time is 1 hour, but they are taking 2 to 3 hours to complete.
    Can anybody tell me the reasons for this? Any resolutions would be a great help.
    Thanks in advance.
    Siddhu

    Hi,
    Try to analyse the cubes for which loading takes more time with transaction RSRV; especially see whether any dimension table is more than 20% of the fact table size (a quick way to check this is sketched below).
    Another reason might be a tablespace problem; also ask your Basis team about redo log management, and whether there is enough space while peak loading is going on.
    Regards,
    Vijay.
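    On an Oracle-based system, one rough way to check that 20% ratio, as a sketch (ZSALES is a hypothetical InfoCube name; /BIC/F and /BIC/E are the fact tables and /BIC/D... the dimension tables of a custom cube; run as the SAP schema owner or use dba_segments):
       -- Compare fact vs. dimension table segment sizes for one InfoCube
       SELECT segment_name,
              ROUND (bytes / 1024 / 1024) AS size_mb
       FROM   dba_segments
       WHERE  segment_name IN ('/BIC/FZSALES', '/BIC/EZSALES')  -- fact tables
          OR  segment_name LIKE '/BIC/DZSALES%'                 -- dimension tables
       ORDER  BY segment_name;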

  • PGI taking more time, approximately 30 to 45 minutes

    Dear Sir,
    While doing post goods issue against a delivery document, the system takes a lot of time. This issue is very urgent; can anyone resolve it or provide a suitable solution?
    We create approximately 160 sales orders/deliveries every day and post goods issue against them using transaction VL06O, and the system takes a long time for the PGI.
    Kindly provide a suitable solution.
    Regards,
    Vijay Sanguri

    Hello Vijay,
    I've just found SAP Note 1459217, which definitely refers to your issue. Please have a look at it (the relevant SAP Note text is below).
    In case you have questions, let me know!
    Best Regards,
    Marcel Mizt
    Symptom
    Long runtimes occur when using transaction VL06G or VL06O in order to post goods issue (PGI) deliveries.
    Poor response times occur when using transaction VL06G or VL06O in order to PGI deliveries.
    Poor performance occurs with transaction VL06G / VL06O.
    Performance issues occur with transaction VL06G / VL06O.
    Environment
    SAP R/3 All Release Levels
    Reproducing the Issue
    Execute transaction VL06O.
    Choose "For Goods Issue". (Transaction VL06G).
    Long runtimes occur.
    Cause
    There are too many documents in the database that need to be accessed.
    The customising settings in the activity "set updating of partner index" are not activated.
    (IMG -> Logistics Execution -> Shipping -> Delivery List -> Set Updating Of Partner Index).                                                                               
    Resolution
    If there are too many documents in the database to access, archiving them improves the performance of VL06G.
    The customising settings in the activity "set updating of partner index" can be updated to improve the performance of VL06G (IMG -> Logistics Execution -> Shipping -> Delivery List -> Set Updating Of Partner Index). In this transaction, check the entries for transaction group 6 (= delivery). The effect of these settings is that the table VLKPA (SD index: deliveries by partner functions) is only filled with entries based on the partner functions listed (for example WE = ship-to party). In transaction VL06O the system checks this customising in order to access the table VBPA or VLKPA.
    If you change the settings of the activity "updating of partner index", run the report RVV05IVB to reorganize the index, selecting only the partner index in the delivery section of the screen (see note 128947).
    Flag the checkbox "display forwarding agent" (available in the display options section of the selection screen). When the list is generated, use the "set filter" functionality (menu path: Edit -> Set Filter) to select the deliveries corresponding to one forwarding agent.
