Performance issue related to OWM? Oracle version is 10.2.0.4

The optimizer picks a hash join instead of a nested loop for queries against OWM tables, which causes full table scans everywhere. I wonder if this happens in your databases as well, or just ours. If you have seen it and know how to solve it, it would be greatly appreciated! I did log an SR with Oracle, but it usually takes months to reach a solution.
Thanks for any possible answers!

Ha, sounded like you knew what I was talking about :)
I thought the issue must have had something to do with OWM because some complicated queries have no performance issue when they run against regular tables. There's a batch job that used to take an hour to run; now it takes 4.5 hours. I rewrote the job to move the queries from OWM to regular tables, and it takes 20 minutes. However, today when I tried to get explain plans for some queries involving regular tables with large amounts of data, I got the same full-table-scan problem with hash joins. So I'm convinced it probably is not OWM. But the patch that removes the bug fix didn't help the situation here either.
I was hoping that other companies might have this problem and had found a way to work around it. If it's not OWM, I'm surprised this only happens in our system.
Thanks for the reply anyway!

Similar Messages

  • Performance issues -- related to printing

    Hi All,
I am having production system performance issues related to printing. End users are telling me that printing is slow for almost all printers. We have more than 40 to 50 printers in the landscape.
As per my initial investigation, I didn't find any issues in the TSP01 & TSP02 tables. But I can see that the TST01 and TST03 tables have a large number of entries (more than a lakh). I don't have any idea about these tables. Is there anything related to these tables where printing causes slowness, or are there other factors that make this printing issue? Please advise.
Thanks in advance

    Hai,
    Check the below link...
    http://help.sap.com/saphelp_nw70/helpdata/en/c1/1cca3bdcd73743e10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/fc/04ca3bb6f8c21de10000000a114084/frameset.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/86/1ccb3b560f194ce10000000a114084/content.htm
    TemSe cannot administer objects that require more than two gigabytes of storage space, regardless of whether the objects are stored in the database or in the file system. Spool requests of a size greater than two gigabytes must therefore be split into several smaller requests.
    It is enough if you perform the regular background jobs and Temse consistency checks for the tables.
    This will help in controlling the capacity problems.
If you change the profile parameter rspo/store_location to the value 'G', this will improve performance. The disadvantages are that TemSe data must be backed up and restored separately from the database using operating system tools, and in the event of problems it can be difficult to restore consistency between the data held in files and the TemSe's object management in the database. You also have to take care of the hard disk requirements, because in some cases spool data may occupy several hundred megabytes of disk storage. If you use the G option, you must ensure that enough free disk space is available for spool data.
    Regards,
    Yoganand.V

  • Performance issues related to logging (ForceSingleTraceFile option)

    Dear SDN members,
    I have a question about logging.
I would like to place the logs/traces for every application in different log files. To do this, you have to set the ForceSingleTraceFile option to NO (in the config tool).
    But in a presentation of SAP, named SAP Web Application Server 6.40; SAP Logging and Tracing API, is stated:
    - All traces by default go to the default trace file.
         - Good for performance
              - On production systems, this is a must!!!
    - Hard to find your trace messages
    - Solution: Configure development systems to pipe traces and logs for applications to their own specific trace file
But I also want the logs/traces in separate files at our customers' (production) systems. So my question is:
What performance issues do we face if we turn the ForceSingleTraceFile option to NO at our customers?
and
If we turn ForceSingleTraceFile to NO, will the logs/traces of the SAP applications also go to different files? If so, I can imagine it will be difficult to find the logs of the different SAP applications.
    I hope that someone can clarify the working of the ForceSingleTraceFile setting.
    Kind regards,
    Marinus Geuze

    Dear Marinus,
The performance issues with extensive logging are related to high memory usage (for concatenation/generation of the messages which are written to the log files) and, as a result, increased garbage collection frequency, as well as high disk I/O and CPU overhead for the actual logging.
Writing to the same trace file can become a bottleneck if logging is extensive.
Anyway, it is not related to whether you write the logs to the default trace or to a custom location. I believe the recommendation in the documentation is just about using the standard logging APIs of the SAP Java Server, because they are well optimized.
    Best regards,
    Sylvia
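Sylvia's point about the concatenation cost can also be reduced in application code by deferring message construction until the log level is known to be enabled. A minimal sketch using java.util.logging (independent of the SAP logging API; the message-building method here is just a stand-in for an expensive concatenation):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class GuardedLogging {
    private static final Logger LOG = Logger.getLogger(GuardedLogging.class.getName());

    static int buildCount = 0; // counts how often the expensive message is actually built

    static String expensiveMessage() {
        buildCount++;
        return "state dump: " + System.nanoTime(); // stands in for costly concatenation
    }

    public static void main(String[] args) {
        LOG.setLevel(Level.INFO); // FINE messages are disabled

        // Eager: the message string is built even though FINE is off.
        LOG.log(Level.FINE, expensiveMessage());

        // Lazy: the Supplier is only invoked if FINE is enabled, so nothing is built here.
        LOG.log(Level.FINE, () -> expensiveMessage());

        System.out.println("messages built: " + buildCount);
    }
}
```

Only the eager call pays the construction cost; the supplier-based call is skipped entirely when the level is disabled, which is the main saving when logging is extensive.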

  • Performance Issue: Retrieving records from Oracle Database

While retrieving data from an Oracle database we are facing performance issues.
The query returns 890 records, and while displaying them on the JSP page, the page takes almost 18 minutes to display the records.
I have observed that CPU usage is 100% while processing the request.
Could anyone advise what methods, at the DB end or the Java end, we can consider to avoid such issues?
    Thanks
    R.

    passion_for_java wrote:
    Will it make any difference if I select columns instead of ls.*
Possibly, especially if there's a lot of data being returned.
Less data over the wire means a faster response.
    You may also want to look at your database, is that outer join really needed? Does it perform? Are your indexes good?
    A bad index (or a missing one) can kill query performance (we've seen performance of queries drop from seconds to hours when indexes got corrupted).
    A missing index can cause full table scans, which of course kill performance if the table is large.
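Besides the database side, 890 rows taking 18 minutes at 100% CPU often points at the rendering code rather than the query. One classic culprit (an assumption here, since the JSP code isn't shown) is building the page with `String +=` in a loop, which is quadratic in the output size; `StringBuilder` keeps it linear. A minimal sketch with fabricated row data:

```java
public class RowRendering {
    // Quadratic: each += copies the entire accumulated string so far.
    static String renderWithConcat(String[] rows) {
        String html = "";
        for (String row : rows) {
            html += "<tr><td>" + row + "</td></tr>";
        }
        return html;
    }

    // Linear: StringBuilder appends in place without recopying.
    static String renderWithBuilder(String[] rows) {
        StringBuilder html = new StringBuilder();
        for (String row : rows) {
            html.append("<tr><td>").append(row).append("</td></tr>");
        }
        return html.toString();
    }

    public static void main(String[] args) {
        String[] rows = new String[890];
        for (int i = 0; i < rows.length; i++) rows[i] = "record " + i;
        // Both produce identical output; only the cost differs as row count grows.
        System.out.println(renderWithConcat(rows).equals(renderWithBuilder(rows)));
    }
}
```

Profiling the request (as suggested for the indexes) would confirm whether time is spent in the database or in page generation.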

  • Adobe Flash crash when I try to upload some photos to create an album on Facebook . I think this issue related to the new version of Mozilla Firefox that I had installed 3 days ago and cause my system not able to open the internet . But I have sent a rep

Adobe Flash crashes when I try to upload photos to create an album on Facebook. Is this related to the new version of Mozilla Firefox that I installed a few days ago?
    == This happened ==
    Not sure how often
    == This morning July 18th 2010

    Did you reinstall CS3 after CC?
    For that matter, doing an in-place upgrade on the OS is always a gamble with Adobe programs. Reinstalling all the versions you need, in order, would probably solve your problem.
    And you shouldn't need to save as IDML after opening the .inx in CC.

  • Performance Issue Crystal Report and Oracle

    Hello,
We have one procedure that takes 3 input parameters and returns a cursor that is used to design the report. There is no calculation involved here, and the cursor is opened dynamically. We are using an Oracle Native connection.
When we click on the preview button it takes a lot of time (>10 minutes) to show the complete data, while when we call the same procedure in the application and generate an HTML report using the returned cursor it is done in <15 seconds. Can someone point me to where to look to improve the performance of the Crystal Report?
    DB: Oracle 10G
    CR: Version XI

    Hi Vadiraja
    The performance of a report is related to:
    External factors:
    1. The amount of time the database server takes to process the SQL query.
    ( Crystal Reports send the SQL query to the database, the database process it, and returns the data set to Crystal Reports. )
    2. Network traffics.
    3. Local computer processor speed.
    ( When Crystal Reports receives the data set, it generates a temp file to further filter the data when necessary, as well as to group, sort, process formulas, ... )
4. The number of records returned
( If a SQL query returns a large number of records, it will take longer to format and display than if it were returning a smaller data set. )
    Report design:
    1. Where is the Record Selection evaluated.
Ensure your Record Selection Formula can be translated into SQL, so the data can be filtered down on the server; otherwise the filtering will be done in a temp file on the local machine, which is much slower.
There are many functions that cannot be translated into SQL because there may be no standard SQL equivalent for them.
For example, a control structure like IF THEN ELSE cannot be translated into SQL; it will always be evaluated in Crystal Reports. If you use an IF THEN ELSE on a parameter, the result of the condition will be converted to SQL, but as soon as the conditions use database fields it will not be translated into SQL.
2. How many subreports the report contains and in which sections they are located.
Minimise the number of subreports used, or avoid using subreports if possible, because subreports are reports within a report. If you have a subreport in a details section and the report returns 100 records, the subreport will be evaluated 100 times, so it will query the database 100 times. It is often the biggest factor in why a report takes a long time to preview.
    3. How many records will be returned to the report.
    Large number of records will slow down the preview of the reports.
Ensure you return only the necessary data to the report, by creating a Record Selection Formula, or by basing your report on a Stored Procedure or a Command Object that only returns the desired data set.
    4. Do you use the special field "Page N of M", or "TotalPageCount"
When the special field "Page N of M" or "TotalPageCount" is used on a report, it has to generate every page of the report before it can display the first page, therefore it will take more time to display the first page.
If you want to improve the speed of a report, remove the special field "Page N of M" or "Total Page Count", or any formula that uses the function "TotalPageCount". If those aren't used, when you view a report it only formats the page requested.
It won't format the whole report.
    5. Link tables on indexed fields whenever possible.
    6. Remove unused tables, unused formulas, unused running totals from the report.
    7. Suppress unnecessary sections.
    8. For summaries, use conditional formulas instead of running totals when possible.
    9. Whenever possible, limit records through selection, not suppression.
    10. Use SQL expressions to convert fields to be used in record selection instead of using formula functions.
For example, if you need to concatenate 2 fields together, instead of doing it in a formula you can create a SQL Expression Field. It will concatenate the fields on the database server instead of doing it in Crystal Reports. SQL Expression Fields are added to the SELECT clause of the SQL query sent to the database.
11. Using one Command object as the data source can be faster if the SQL query is written to return only the desired data set.
12. Perform grouping on the server.
This is only relevant if you need to return only the summary to your report, not the details. It will be faster, as less data will be returned to the report.
    Regards
    Girish Bhosale

  • Performance ISSUE related to AGGREGATE

hi gems, can anybody give me a list of issues we can face related to AGGREGATE maintenance in a support project?
It's very urgent, so please respond to my issue; it's an urgent request.
Any link or anything, please send it to me.
    my mail id is
        [email protected]

    Hi,
    Try this.
    "---" sign is the valuation of the aggregate. You can say -3 is the valuation of the aggregate design and usage. ++ means that its compression is good and access is also more (in effect, performance is good). If you check its compression ratio, it must be good. -- means the compression ratio is not so good and access is also not so good (performance is not so good).The more is the positives...more is useful the aggregate and more it satisfies the number of queries. The greater the number of minus signs, the worse the evaluation of the aggregate. The larger the number of plus signs, the better the evaluation of the aggregate.
    if "-----" then it means it just an overhead. Aggregate can potentially be deleted and "+++++" means Aggregate is potentially very useful.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
Use the program RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    Check   SE11 > table RSDDAGGRDIR . You can find the last callup in the table.
    Generate Report in RSRT 
    http://help.sap.com/saphelp_nw04/helpdata/en/74/e8caaea70d7a41b03dc82637ae0fa5/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    /people/juergen.noe/blog/2007/12/13/overview-important-bi-performance-transactions
    /people/prakash.darji/blog/2006/01/26/query-optimization
    Cube Performance
    /thread/785462 [original link is broken]
    Thanks,
    JituK

  • Performance issue related to EP 6.0 SP2 Patch 5

We have implemented SAP EP 6.0 SP2 Patch 5. We have also configured IIS 6.0 to access our portal from the internet.
When we access the portal from the internet, it is very slow. Sometimes pages take 5-10 minutes to load.
I am using the caching technique for the iView. I wanted to know whether it is a good idea to use caching, because it is taking a lot of time to load the iView.
I would really appreciate any comments or suggestions.

    Paritosh,
I think you need to analyze the issue step by step, as the response time seems to be very high. Here are a few suggestions. A high response time could be due to many factors: server side, network, and client browser settings. Let us analyze the case step by step.
1) Do a basic test accessing the EP within the same network (LAN), to eliminate the network and verify that everything works fine within the LAN.
2) If performance is not acceptable within the LAN, then accessing over WAN or internet will not be any better. If LAN performance is not acceptable (this requires you to know the acceptable response time, say 5 seconds or so), you need to find out whether you have large contents in the page you are accessing. You need to know how many iViews you have on the page. What kind of iViews are they; are they going to a backend system? If they are going to the backend, how are they connecting? Are they using ITS or JCo-RFC? If it goes through ITS, how about accessing the same page directly via ITS; do you get the same problem? If you are going via JCo, have you monitored RFC traffic (size of data and number of round trips, using ST05)?
There could be many other potential issues. Have you done proper tuning of EP for JVM parameters, threads, etc.? Are you using keep-alive settings in the dispatcher, firewall, and load balancer (if any)? Do you have compression enabled in the J2EE server? Do you use content expiration at the J2EE server? How are your browser's cache settings?
In summary, we'd like to start with the EP landscape with all components. We need to make sure that response time is acceptable within the LAN. If we are happy there, we can look into the network part for WAN/internet performance.
Hope this gives you a few starting points. Once you provide more information, we can follow up.
    Thanks,
    Swapan

  • Performance issue related to Wrapper and variable value retrievel

If I have an array of int (a primitive array) and, on the other hand, an array of its corresponding wrapper class, is there any performance difference between these two cases? If in my code I am doing a conversion from primitive to wrapper object, does that affect my performance, given that there is already the concept of auto-boxing?
Another issue: if I access the value of a variable name (defined in the superclass) in a subclass by 'this.getName()' rather than 'this.name', is there any performance difference between the two cases?

"If I have an array of int (a primitive array) and, on the other hand, an array of its corresponding wrapper class, is there any performance difference between these two cases? If in my code I am doing a conversion from primitive to wrapper object, does that affect my performance, given that there is already the concept of auto-boxing?"
I'm sure there is. It's probably not worth worrying about until you profile your application and determine it's actually an issue.
"Another issue: if I access the value of a variable name (defined in the superclass) in a subclass by 'this.getName()' rather than 'this.name', is there any performance difference between the two cases?"
Probably, but that also depends on what precisely getName() is doing, doesn't it? This is a rather silly thing to be worrying about.
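To make the first answer concrete, here is a minimal sketch of the primitive vs. boxed difference. The results are identical; the boxed array just stores references to heap-allocated Integer objects, so it pays for allocation at fill time and unboxing at read time, which is exactly the kind of cost profiling should decide whether to care about:

```java
public class BoxingCost {
    static long sumPrimitive(int[] a) {
        long s = 0;
        for (int v : a) s += v; // values read directly from the array
        return s;
    }

    static long sumBoxed(Integer[] a) {
        long s = 0;
        for (Integer v : a) s += v; // each element is unboxed before the add
        return s;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        int[] prim = new int[n];
        Integer[] boxed = new Integer[n];
        for (int i = 0; i < n; i++) {
            prim[i] = i;
            boxed[i] = i; // auto-boxing: allocates an Integer (small values come from a cache)
        }
        // Identical results; the boxed version costs extra allocation and indirection.
        System.out.println(sumPrimitive(prim) == sumBoxed(boxed));
    }
}
```
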

  • Performance Issue related to SAP EP

    Hi All,
Performance-wise, I would like to know which one is better: SAP EP or SAP GUI?
Also, how good is SAP EP at handling large-scale data entry transactions and printing jobs?


  • VC - Compile and Deploy performance issues related to UserID

    Dear Guru's,
    I'm currently working at a customer where a small team of 4 is working with VC 7.0.
    One user has very long Compile and Deploy times. We first thought that it was related to his workstation.
Then one of the other guys logged in on his PC and ran the compile + deploy, and then it suddenly took seconds again.
So we created a new user ID for this user who has the issues, "<oldUI>+test", and suddenly all was back to normal for him.
But now here it comes: we deleted his old user ID and created it again, but the issue is still there.
So my assumption is that there is some kind of faulty record or index or some other strange thing linked to his user ID.
    What can this be and how can we solve it?
    Thanks in advance!
    Benjamin

    Hi Anja,
    We use VC on 7.0 and we do not have any integration with the DTR.
    So in other words we use the default way of working with VC.
The user had his models in his Personal folder, then moved them to the Public folder so that other colleagues could see/try them as well. It doesn't matter where the model is stored (public or personal); as long as this specific UID is used, compiling/deploying goes very slowly. The log files do not give much info on why this happens...
    Cheers,
    Benjamin

  • Display issues related to Mac OS version

I have a late 2009 Mac Mini connected to a Samsung LN40B630 (40 in. HDTV) using a mini DVI to HDMI connector. When I first hooked up the system last January everything worked perfectly. Then, sometime 4-6 months later, the display no longer fit the screen no matter what I tried (settings and various different connectors). Recently, reading the chat boards, there was an indication that the OS version and (I suspect) the resulting display driver updates might be the issue. I did an erase and install, which corrected the issue, and now the display fits and looks perfect. I've allowed all the software updates to bring the system current except for the OS update; I'm still running 10.6.2. I'm wondering if anyone has experienced the same or a similar issue. Also, does anyone know if there is an improved display driver update? For now, I'm not going to allow the OS update; however, I don't see this as a long-term solution.
    thanks

Hi - I tried changing settings on both the Samsung TV and the Mac Mini. My TV has a setting called "screen fit" which should do the trick, but for some reason it has no effect on the Mac Mini signal. I've tried "overscan" on and off as well. The only thing that has worked is to reload the starter OS disk that came with the machine using the erase and install function; hooking up my wife's MacBook Pro also works fine. I'm still guessing the OS update is what makes the Mac Mini output not fit the screen. I've backed up the recent reload to Time Machine and will try the OS update to confirm this is what breaks it. Any other ideas?

  • Performance issue related to BSIS table:pls help

There's a select statement which fetches data from the BSIS table.
As the only key field used in the WHERE clause is BUKRS, it's consuming more time. Below is the code.
Could you please tell me how to improve this piece of code?
I tried to fetch first from the BKPF table, based on the selection screen parameter t001-bukrs, and then for all entries in BKPF fetched from BSIS. But it didn't work.
Your help would be very much appreciated. Thanks in advance.
      SELECT bukrs waers ktopl periv
             FROM t001
             INTO TABLE i_ccode
             WHERE bukrs IN s_bukrs.
    SELECT bukrs hkont gjahr belnr buzei bldat waers blart monat bschl
    shkzg mwskz dmbtr wrbtr wmwst prctr kostl
               FROM bsis
               INTO TABLE i_bsis
               FOR ALL ENTRIES IN i_ccode
               WHERE bukrs EQ i_ccode-bukrs
               AND   budat IN i_date.
    Regards
    Akmal
    Moved by moderator to the correct forum
    Edited by: Matt on Nov 6, 2008 4:10 PM

Don't go for FOR ALL ENTRIES; it will not help in this case. Do it like below, and you can see a lot of performance improvement.
    SELECT bukrs waers ktopl periv
             FROM t001
             INTO TABLE i_ccode
             WHERE bukrs IN s_bukrs.
    sort i_ccode by bukrs.
    LOOP AT i_ccode.
SELECT bukrs hkont gjahr belnr buzei bldat waers blart monat bschl shkzg mwskz dmbtr wrbtr wmwst prctr kostl
             FROM bsis
            APPENDING TABLE i_bsis
            WHERE bukrs EQ i_ccode-bukrs
            AND   budat IN i_date.
      ENDLOOP.
I don't know why performance is better for the above query than for "bukrs IN s_bukrs", but this will help, I'm sure; this approach helped me.
    Edited by: Karthik Arunachalam on Nov 6, 2008 8:52 PM

  • Performance Issue related to RFC

    Hi All,
I am moving attachments from CRM to R/3, and for this I am using an RFC. If I am attaching multiple files at a time, do I need to call the RFC in a loop, or should I call it once for all attachments? Which gives better performance?
One more thing: if I call the RFC in SYNCHRONOUS mode, what happens if the server on the other side is down for two to three days?
If I call the RFC in ASYNCHRONOUS mode, I need to work with the return values of the RFC. How do I handle this situation?
Please reply as early as possible.
    Thanks,
    Saritha

    Hi,
If an RFC channel already exists between the client and server, the same channel will be used between the systems. Hence, even calling in a loop should not be a problem, but the data then goes through the channel one item at a time. Try to send the attachments in a table, as this goes as one chunk of data.
In the case of ASYNCHRONOUS calls, if you want to receive results then the called system must be up as well. For this the syntax is:
CALL FUNCTION 'FM' STARTING NEW TASK <task>
  DESTINATION <dest>
  PERFORMING <form> ON END OF TASK
  EXPORTING
    ...
  EXCEPTIONS
    ...
FORM <form> USING taskname.
  RECEIVE RESULTS FROM FUNCTION 'FM'
    IMPORTING
      ...
ENDFORM.
    But in any case, the called system should be open for connections.
If possible, try tRFC calls.
    Regards,
    Goutham
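Goutham's advice to send the attachments in a table rather than calling per item can be illustrated language-neutrally. The RemoteService class below is a hypothetical stand-in, not a real RFC API; the point is simply that the saving is in round trips over the channel:

```java
import java.util.Arrays;
import java.util.List;

public class RfcBatching {
    // Hypothetical stand-in for a remote destination; each method call is one round trip.
    static class RemoteService {
        int roundTrips = 0;

        void sendOne(String attachment) { roundTrips++; }          // one trip per item

        void sendMany(List<String> attachments) { roundTrips++; }  // whole table in one trip
    }

    public static void main(String[] args) {
        List<String> attachments = Arrays.asList("a.pdf", "b.pdf", "c.pdf");

        RemoteService perItem = new RemoteService();
        for (String a : attachments) perItem.sendOne(a);   // 3 round trips

        RemoteService batched = new RemoteService();
        batched.sendMany(attachments);                     // 1 round trip

        System.out.println(perItem.roundTrips + " vs " + batched.roundTrips);
    }
}
```

With per-item calls the latency cost grows linearly with the number of attachments; the batched call pays it once, which is why passing a table is usually faster even though the same channel is reused.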

  • Query on Performance issues relating to a report

    Hi Group,
I have an issue while running a report which creates business partners (both the company and the contact person, as well as the relationship between them).
This report calls a BAPI (for creating business partners) and also creates relationships, and the report's response time is too long.
I was thinking the reason is the BAPI calls, but I want to know from you whether that is the real cause or whether it might be something else.
So please kindly give me your inputs on this.
    thanks in advance.
    Regards,
    Vishnu.

    Hi
I think it's always better to use the provided standard FMs and BAPIs to make changes to data in the system, instead of writing directly to the tables.
One thing you can do is try to use parallel processing. E.g. if 10,000 BPs should be created, schedule 4 jobs to create the BPs instead of 1 job creating the whole lot.
    Kind regards, Rob Dielemans
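Rob's suggestion of splitting the load across parallel jobs can be sketched outside ABAP. The createPartner method below is a hypothetical stand-in for the BAPI call; the sketch just shows partitioning 10,000 creations across 4 workers:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelBpLoad {
    static final AtomicInteger created = new AtomicInteger();

    // Hypothetical stand-in for the BAPI that creates one business partner.
    static void createPartner(int id) {
        created.incrementAndGet();
    }

    public static void main(String[] args) {
        int total = 10_000, jobs = 4;
        ExecutorService pool = Executors.newFixedThreadPool(jobs);

        // Split the ID range into equal slices, one per job.
        int slice = total / jobs;
        for (int j = 0; j < jobs; j++) {
            int from = j * slice;
            int to = (j == jobs - 1) ? total : from + slice;
            pool.submit(() -> {
                for (int id = from; id < to; id++) createPartner(id);
            });
        }

        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES); // wait for all slices to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println("created " + created.get());
    }
}
```

The same idea applies to background jobs: give each job a disjoint slice of the input so the jobs never contend for the same records.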
