SYS_CONTEXT usage taking time for first run

Hi all,
we have a search screen with conditions such as make, manufacturing date, region, etc. (up to 20 conditions). The user can restrict the results based on the filters specified. Because there are so many parameters, a static query was taking a lot of time to return the result.
We went with a dynamic query (adding only the conditions that the user has specified on the screen) and, to avoid hard parsing, we introduced SYS_CONTEXT.
We found the result amazing, with subsequent runs taking very little time (3-10 seconds at most) to fetch the results. But the first run (which has to hard parse) takes quite a lot of time (close to 10 minutes).
We are using Oracle 11g.
Though this query is used on a regular basis, the SQL_ID sometimes gets aged out of the cache by the LRU algorithm. I also sometimes see more than one SQL_ID generated for the same set of parameters.
We cannot use the SQL pin option, as we would need to pin the SQL for every combination of parameters (around 30).
Could anyone suggest a way of reducing the first run time?
Thanks in advance..
Regards,
Ela

>
We cannot use the SQL pin option, as we would need to pin the SQL for every combination of parameters (around 30).
>
Then try specifying 'something' for every condition and using '%' for the context values that you don't care about.
If you don't care about 'empno', use '%' for the context value:

select * from emp where empno like '%'

If you do care, use an actual value:

select * from emp where empno like '7369'

The 'like' will be treated as '=' in the second case, and in the first case it should be optimized out in the actual query.
That way every query has the same 30 placeholders but the '%' will optimize out the ones you don't want to use.
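A minimal sketch of that idea in PL/SQL, assuming a hypothetical SEARCH_CTX context and a CARS table (both names are placeholders, not from the original post): the statement text always contains every filter, so one cursor can be shared, and '%' neutralises the filters the user left blank.

-- Hypothetical context, package and table names; shown only to illustrate the technique.
CREATE OR REPLACE PACKAGE search_ctx_pkg AS
  PROCEDURE set_filter(p_name VARCHAR2, p_value VARCHAR2);
END search_ctx_pkg;
/
CREATE OR REPLACE PACKAGE BODY search_ctx_pkg AS
  PROCEDURE set_filter(p_name VARCHAR2, p_value VARCHAR2) IS
  BEGIN
    -- '%' means "the user did not restrict on this column"
    DBMS_SESSION.SET_CONTEXT('search_ctx', p_name, NVL(p_value, '%'));
  END set_filter;
END search_ctx_pkg;
/
CREATE OR REPLACE CONTEXT search_ctx USING search_ctx_pkg;

-- One statement text for every combination of filters, so only one hard parse:
SELECT *
FROM   cars
WHERE  make   LIKE SYS_CONTEXT('search_ctx', 'make')
AND    region LIKE SYS_CONTEXT('search_ctx', 'region');

One caveat: LIKE '%' still excludes rows where the column is NULL, so nullable filter columns may need an NVL() around the column or an OR column IS NULL branch.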

Similar Messages

  • Interface is taking lots of time for first time execution

    Hello all,
    I have one inbound interface in which I am updating T-codes CO01, CO15, MIGO and MB1A respectively,
    depending upon the condition in the incoming file.
    Whenever I run this interface for the first time it takes a lot of time to execute, but when I run the same interface again with the same file it takes about half the time of the first execution.
    I am not able to understand why it takes so long on the first execution and why, after that, the execution time is reduced so much with the same file.
    Kindly help
    Thanks
    Sachin Yadav

    Thank you, Santhosh, for your reply.
    As this program is in the production system, it takes 3 to 4 hours to execute.
    Previously it was OK, but for the last 2 to 3 months it has been taking more time.
    When I debug the code it works fine; I didn't find any point at which it takes more time.
    I am not familiar with buffers and SAP memory. Can you please help?
    Do you have any idea why this is happening,
    or how I can rectify the problem?
    Thanks
    Sachin

  • ADF application taking more time for first time and less from second time

    Hi Experts,
    We are using ADF 11.1.1.2.
    Our application contains 5 jsp pages, 10 - 12 taskflows, and 50 jsff pages.
    The first time in the day we use the application, it takes more than 60 seconds on some actions.
    From the next time onwards it takes 5 to 6 seconds.
    The same thing happens daily.
    Can anyone tell me why this application takes more time the first time and less time from the second time onwards?
    Regards
    Gayaz

    Hi,
    If you don't restart your WLS every day, then you should read about Tuning Application Module Pools and Connection Pools:
    http://docs.oracle.com/cd/E15523_01/web.1111/b31974/bcampool.htm#sm0301
    And pay attention to the parameters Maximum Available Size and Minimum Available Size:
    http://docs.oracle.com/cd/E15523_01/web.1111/b31974/bcampool.htm#sm0314
    And adjust them to suit your needs.

  • I installed mountain lion over snow leopard and my macbook pro 13" taking time for login and logout,

    I installed Mountain Lion over Snow Leopard and my MacBook Pro 13" is taking a long time to log in and log out. Any solution?

    Hi JoeyR. Well, according to this link at the Apple Store, OS X Mountain Lion became available in July and I downloaded it for $19.99. I figured I would do that before renewing my Norton security SW. Are we talking about the same thing?
    http://www.apple.com/osx/

  • How to know which sql query is taking time for concurrent program

    Hi sir,
    I am running a concurrent program that is taking time to execute. I want to know which SQL query is causing the performance problem.
    Thanks,
    Sreekanth

    Hi,
    My Learning: Diagnosing Oracle Applications Concurrent Programmes - 11i/R12
    How to run a Trace for a Concurrent Program? (Doc ID 415640.1)
    FAQ: Common Tracing Techniques in Oracle E-Business Applications 11i and R12 (Doc ID 296559.1)
    How To Get Level 12 Trace And FND Debug File For Concurrent Programs (Doc ID 726039.1)
    How To Trace a Concurrent Request And Generate TKPROF File (Doc ID 453527.1)
    Regards
    Yoonas

  • Application takes lot of time for first time..

    Hai guys,
    we built an application in a third-party tool which connects to SAP via RFC.
    The application updates an Oracle database after getting the relevant information from SAP.
    For information on the SAP side we use RFCs.
    For updates on Oracle we write SQL.
    The odd thing is that this application takes a lot of time when used for the first time.
    Once the first use is over, it takes much less time for any subsequent use (by any user).
    I suspect the SQL statements (especially the UPDATEs) take a lot of time the first time.

    I suspect the SQL statements (especially the UPDATEs) take a lot of time the first time.
    Why don't you trace the call and be sure?  Don't guess.
    Once the first use is over, it takes much less time for any subsequent use (by any user).
    Because the statement is in the cursor cache - clear the cache each time and the statement will run long every time.  Again, trace the call and determine the issue - most likely you need an index or have an improperly coded SELECT statement.
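    A hedged sketch of one way to trace it from SQL*Plus, assuming you can enable tracing in the session that issues the SQL (the identifier string is arbitrary):

    -- Enable extended SQL trace before the first call, run the workload, then disable it.
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'first_run_test';
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);
    -- ... run the first (slow) execution and a second (fast) execution here ...
    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;

    The resulting trace file, formatted with tkprof, will show whether the extra time on the first run goes to parsing, physical reads, or something else entirely.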

  • Initial download taking time for CTParts in syclo inventory manager 3.2

    Hi All,
    While doing the initial download in Syclo Inventory Manager 3.2, we have observed that it takes a lot of time to fetch the data from the complex table CTParts.
    In the Agentry diagram the CTParts complex table shows nine fields; of these, a few fields such as UOM, BatchIndicator, etc. do not have any dependency. So can I delete those fields?
    If yes, what will be the impact on the application after deleting those fields?
    Thanks for your help
    -Garima
    Tags edited by: Michael Appleby

    Garima,
    You need to analyze a couple of things before making any program changes:
    a) Can you check whether you have set a filter for the CTParts MDO object in SAP? If the MDO filter for plant points to the user parameter 'WRK', look at the value of WRK in SU3. Make sure that you have a plant value maintained for the WRK parameter.
    b) If the WRK value is indeed maintained, go to MARC and check the number of materials that exist for the WRK plant. If there are too many, do you really need all those materials downloaded to the mobile device? Check whether you can maintain other filter values to restrict the material records downloaded, such as material type, material group, etc.
    c) Check where the bottleneck is: (i) whether it takes more time to execute the query in SAP, or (ii) whether it takes time to transfer the data from SAP to the Java layer. If it is the latter, try increasing the Java heap size.
    d) Also look at the MDO field selections for CTParts in SAP. Only select the fields that you actually need.
    e) Did you create additional indexes for the CTParts complex table?
    f) Finally, if nothing works, look at the option of replacing the output structure in the BAPI that returns CTParts with a Z structure containing only the nine required fields, which also requires Z Java code changes for the CTParts complex table.
    Thanks
    Manju.

  • Recover Database is taking more time for first archived redo log file

    Hai,
    Environment Used :
    Hardware : IBM p570 machine with P6 processor Lpar of .5 CPU and 2.5 GB Ram
    OS : AIX 5.3 ML 07
    Cluster: HACMP 5.4.1.2
    Oracle Version: 9.2.0.4 RAC
    SAN : DS8100 from IBM
    I used the flash copy option to copy the database from production to the test machine, then tried to recover the database to a consistent state using the command "recover automatic database until cancel". The system takes a long time, and from the alert log it was found that, for the first archived redo log file only, it reads all the datafiles, taking about 3 seconds per datafile. Since I have more than 500 datafiles, it takes nearly 25 minutes to apply the first archived redo log file. All other log files are applied immediately without any delay. Any suggestion to improve the speed will be highly appreciated.
    Regards
    Sridhar

    After changing the LPAR settings to 2 CPUs and 5 GB RAM, the problem was solved.

  • Z30 New Handset is taking long time for first boot

    Hold the power button down for 30 seconds to reboot it.

    Hi, I purchased a Z30 handset yesterday and tried switching it on. It has taken 15 hours to boot and is still showing 99%. Is this issue specific to my handset or is it BB10-related?

  • Broadband over-usage - being charged for first mon...

    Hi there,
    Last month I went over my broadband usage on option 2, but was informed at the time that I wouldn't be charged for this as you only get charged if you go over again. I've now upgraded to option 3 so this won't happen again. However, I received an email this morning saying that I will have to pay £40 for going over during that first month period.
    The chap I spoke to on the phone said not to worry as I've already been told I won't be charged but I would really like confirmation that this is the case as I cannot afford to pay this money. As far as I'm concerned, that was first month usage and I shouldn't be charged.
    Many thanks, Adam.

    BarnabyMoo wrote:
    Hi there,
    Last month I went over my broadband usage on option 2, but was informed at the time that I wouldn't be charged for this as you only get charged if you go over again. I've now upgraded to option 3 so this won't happen again. However, I received an email this morning saying that I will have to pay £40 for going over during that first month period.
    The chap I spoke to on the phone said not to worry as I've already been told I won't be charged but I would really like confirmation that this is the case as I cannot afford to pay this money. As far as I'm concerned, that was first month usage and I shouldn't be charged.
    Many thanks, Adam.
    Hi Adam. You are correct, you should not be charged the first month:
    "If you exceed your usage allowance, you'll be charged for additional usage in units of five gigabytes (GB), at £5 per 5GB. Charges will apply from the second month you exceed your allowance and will be shown on your BT bill."
    You can read all about it here:
    http://bt.custhelp.com/app/answers/detail/a_id/10495/~/broadband-usage-policy
    toekneem
    http://www.no2nuisancecalls.net
    (EASBF)

  • Consolidation taking time for Specific POV

    When users are running Consol for the following Entity Structure with Entity A and Contribution Total in POV
    - Entity A
      - Entity B
        - Entity C
          - Entity D
    (Here B is a child of A, C is a child of B, and D is a child of C.)
    It takes around 16 minutes.
    However, when the POV is changed to Entity B and Contribution Total,
    it takes 4 minutes. Can anyone let me know why there is such a huge difference in consolidation timing with a change in POV?
    Edited by: Mugdha Shidhore on May 24, 2012 8:18 PM
    Edited by: Mugdha Shidhore on May 24, 2012 8:23 PM

    The first thing to understand is that running a consolidation on the Contribution Total member not only consolidates data to that entity, it also consolidates data to the <Entity Currency> member of the parent of that entity, which means that the siblings of that entity are also consolidated. In your example, by selecting Contribution Total on Entity A you would consolidate all data to the parent of Entity A, including any siblings of Entity A. You have not indicated whether Entity A has any siblings.
    The difference in your consolidation time could, therefore, be explained by the parent of Entity A having a much larger number of descendants than the number of descendants of just Entity A.
    If you wanted to consolidate data only up to Entity A, then you should choose <Entity Currency> or <Entity Curr Total> of Entity A. That should give you a clearer picture of the difference in consolidation times.
    There are also other possibilities such as rules that only run on certain entities which could also be related.
    Brian Maguire

  • Query takes long time for first time

    I have a table with 100 million records and another table with many rows.
    When I run a query it takes about 1 minute to complete, but when I run it again it takes less than 1 second to complete.
    Why is this happening?
    Thanks,
    Tz.

    Welcome to the forum.
    When you post a question always provide your 4 digit Oracle version. Different versions have different functionality and this can affect your results and the advice you need.
    For performance tuning questions see the FAQ (upper right corner of page) for the information needed for tuning requests.
    >
    When I ran a query
    >
    How did you run it? Did you use sql*plus, sqldeveloper, some other tool?
    What command did you enter?
    Using sql*plus you can get an execution plan for the query by
    SQL> set serveroutput on
    SQL> set autotrace traceonly
    SQL> select * from emp;
    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 3956160932
    ---------------------------------------------------------------------------
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    ---------------------------------------------------------------------------
    |   0 | SELECT STATEMENT  |      |    14 |   546 |     3   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| EMP  |    14 |   546 |     3   (0)| 00:00:01 |
    ---------------------------------------------------------------------------
    SQL>
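    As a follow-up, a rough sketch of comparing the two executions through V$SQL (assuming you have access to the view; 'big_table' is a placeholder, substitute the start of your own statement text). High DISK_READS relative to BUFFER_GETS on the first run usually points at physical I/O rather than parsing:

    -- Compare parse and I/O work for the statement across executions
    -- (replace the LIKE pattern with the start of your own query text).
    SELECT sql_id, child_number, executions, parse_calls,
           disk_reads, buffer_gets,
           ROUND(elapsed_time / 1e6, 2) AS elapsed_secs
    FROM   v$sql
    WHERE  sql_text LIKE 'select * from big_table%';

    The statistics are cumulative per cursor, so run the query once after the first execution and again after the second; the difference between the two snapshots is what the second run actually cost.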

  • Execution time for first transaction

    Hi,
    We recently implemented SCM SNC 5.1- Work Order Collaboration module. In the process we built several custom display screens to act as real time reports since SNC does not have a reporting framework of its own.
    However, it appears that the performance of the custom screens is really poor the first time they are executed after logging in to the web UI. Thereafter they seem to work well.
    Is the system filling the buffer or cache when executing for the first time?
    It is observed that all the work-order-related standard web transactions also perform in a similar fashion.
    Is there a parameter that controls this?
    Please advice.
    Thanks,
    Kedar

    Hi Kedar,
    For the standard transactions, the SGEN transaction is used to generate all the programs in the system.
    As you say yours is a custom development and it works fine after the first execution, there is no option to make that first run faster as such. Try to optimize your custom screen generation, for example:
    1. Avoid extracting all the screen data when the first screen is displayed.
    2. Avoid unnecessary data reads during screen generation.
    Regards,
    Saravanan V

  • Can you purchase face time for first generation ipads?

    I recently purchased a first-generation iPad; can FaceTime be installed on this device?

    Thanks to all who responded; it was great.
    I did not realize it did not have a camera, but that's not a deal breaker for me.
    Again, I appreciate all of you who took the time to answer what was obviously a dumb question.

  • Adpatch 9239089 taking time for Updating Snapshot Tables

    Hi,
    I am upgrading Apps R12.1.1 to Apps R12.1.3, applying patch 9239089 as a prerequisite of patch 9239090.
    This patch is taking a lot of time updating the snapshot tables, as shown below:
    No of records processed =205032 Updating Snapshot Tables...Start time:Sun Nov 20 2011 11:04:15
    Done Updating Snapshot Tables for the above rows...End Time:Sun Nov 20 2011 11:04:17
    No of records processed =210033 Updating Snapshot Tables...Start time:Sun Nov 20 2011 11:06:37
    Done Updating Snapshot Tables for the above rows...End Time:Sun Nov 20 2011 11:06:38
    No of records processed =215033 Updating Snapshot Tables...Start time:Sun Nov 20 2011 11:08:57
    Done Updating Snapshot Tables for the above rows...End Time:Sun Nov 20 2011 11:08:58
    No of records processed =220034 Updating Snapshot Tables...Start time:Sun Nov 20 2011 11:11:17
    Done Updating Snapshot Tables for the above rows...End Time:Sun Nov 20 2011 11:11:18
    No of records processed =225034 Updating Snapshot Tables...Start time:Sun Nov 20 2011 11:13:36
    Done Updating Snapshot Tables for the above rows...End Time:Sun Nov 20 2011 11:13:37
    No of records processed =230035 Updating Snapshot Tables...Start time:Sun Nov 20 2011 11:15:56
    Done Updating Snapshot Tables for the above rows...End Time:Sun Nov 20 2011 11:15:57
    No of records processed =235035 Updating Snapshot Tables...Start time:Sun Nov 20 2011 11:18:16
    Done Updating Snapshot Tables
    Please see if there is any issue, or how I can avoid updating the snapshot tables.
    Regards,
    Raj

    Hi Raj,
    I am using a shared appl_top on an NFS file system, and this patch is using the NFS file system, so this issue could be the NFS file system.
    When I perform any write-intensive operation on the NFS file system it takes a huge amount of time; the adadmin utility also takes much time to invoke. I also applied the same patch on a shared APPL_TOP but never had any performance issue.
    So I need some Oracle recommendation on using an NFS file system, or on how the performance of an NFS file system can be tuned. Have you tried maintaining the snapshot via adadmin before applying the patch to see how it behaves?
    Oracle Applications Maintenance Utilities
    http://www.oracle.com/technetwork/documentation/applications-167706.html
    Thanks,
    Hussein
