Performance issue related to BSIS table: please help

There's a SELECT statement which fetches data from the BSIS table.
As the only key field used in the WHERE clause is BUKRS, it is consuming a lot of time. The code is below.
Could you please tell me how to improve this piece of code?
I tried to fetch first from the BKPF table, based on the selection screen parameter t001-bukrs, and then fetch from BSIS for all entries in BKPF, but it didn't work.
Your help would be very much appreciated. Thanks in advance.
  SELECT bukrs waers ktopl periv
         FROM t001
         INTO TABLE i_ccode
         WHERE bukrs IN s_bukrs.

* guard: an empty i_ccode would make FOR ALL ENTRIES select every row in BSIS
  IF i_ccode[] IS NOT INITIAL.
    SELECT bukrs hkont gjahr belnr buzei bldat waers blart monat bschl
           shkzg mwskz dmbtr wrbtr wmwst prctr kostl
           FROM bsis
           INTO TABLE i_bsis
           FOR ALL ENTRIES IN i_ccode
           WHERE bukrs EQ i_ccode-bukrs
           AND   budat IN i_date.
  ENDIF.
Regards
Akmal

Don't go for FOR ALL ENTRIES; it will not help in this case. Do it like below and you will see a lot of performance improvement.
SELECT bukrs waers ktopl periv
  FROM t001
  INTO TABLE i_ccode
  WHERE bukrs IN s_bukrs.

SORT i_ccode BY bukrs.

LOOP AT i_ccode.
  SELECT bukrs hkont gjahr belnr buzei bldat waers blart monat bschl
         shkzg mwskz dmbtr wrbtr wmwst prctr kostl
    FROM bsis
    APPENDING TABLE i_bsis
    WHERE bukrs EQ i_ccode-bukrs
    AND   budat IN i_date.
ENDLOOP.
I don't know why performance is better for the above query than for "bukrs IN s_bukrs". This will help, I'm sure; this approach helped me.

Similar Messages

  • Performance issue in ECC 6.0, please help!

    Dear all,
    I have a join statement which works fine with the same data in 4.6 C but in the upgraded system it is too slow.
    SELECT PA0105~USRID PA0001~BUKRS PA0001~GSBER
      INTO TABLE i_pa0001
      FROM PA0105 INNER JOIN PA0001 ON PA0001~PERNR = PA0105~PERNR
      FOR ALL ENTRIES IN i_apqi
      WHERE PA0001~ENDDA EQ '99991231'
      AND PA0001~BUKRS IN s_bukrs
      AND PA0001~GSBER IN s_gsber
      AND PA0105~SUBTY EQ '0001'
      AND PA0105~USRID = i_apqi-creator
      AND PA0105~ENDDA EQ '99991231'.
    Table i_apqi has a list of user IDs. This is taking around 6 minutes, which is too much. Any ideas to improve this?
    Will be happy to reward points for the answers
    Regards
    Veena

    Hi,
    I think you could use this code.
    IF i_apqi[] IS NOT INITIAL.
      SELECT PA0105~USRID PA0001~BUKRS PA0001~GSBER
        INTO TABLE i_pa0001
        FROM PA0105 INNER JOIN PA0001 ON PA0001~PERNR = PA0105~PERNR
        FOR ALL ENTRIES IN i_apqi
        WHERE PA0105~USRID = i_apqi-creator
        AND PA0001~ENDDA EQ '99991231'
        AND PA0001~BUKRS IN s_bukrs
        AND PA0001~GSBER IN s_gsber
        AND PA0105~SUBTY EQ '0001'
        AND PA0105~ENDDA EQ '99991231'
        %_HINTS ORACLE 'FIRST_ROWS'.
    ENDIF.
    But your internal table i_apqi must not be empty (hence the IS NOT INITIAL check) and must contain the field creator.
    Brgds
    Julien

  • Performance issues -- related to printing

    Hi All,
    I am having production system performance issues related to printing. End users are telling me that printing is slow on almost all printers. We have more than 40 to 50 printers in the landscape.
    As per my primary investigation I didn't find any issues in the TSP01 & TSP02 tables. But I can see that the TST01 and TST03 tables have a large number of entries (more than a lakh). I don't have any idea about these tables. Is there anything related to them that could cause the printing slowness, or are there other factors that could cause this printing issue? Please advise.
    thanks in advance

    Hi,
    Check the below links:
    http://help.sap.com/saphelp_nw70/helpdata/en/c1/1cca3bdcd73743e10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/fc/04ca3bb6f8c21de10000000a114084/frameset.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/86/1ccb3b560f194ce10000000a114084/content.htm
    TemSe cannot administer objects that require more than two gigabytes of storage space, regardless of whether the objects are stored in the database or in the file system. Spool requests of a size greater than two gigabytes must therefore be split into several smaller requests.
    It is enough if you perform the regular background jobs and TemSe consistency checks for the tables.
    This will help in controlling the capacity problems.
    If you change the profile parameter rspo/store_location to the value 'G', this will improve performance. The disadvantages are that TemSe data must then be backed up and restored separately from the database using operating system tools, and that in the event of problems it can be difficult to restore consistency between the data held in files and the TemSe's object management in the database. You also have to take care of the hard disk requirements, because in some cases spool data may occupy several hundred megabytes of disk storage. If you use the 'G' option, you must ensure that enough free disk space is available for spool data.
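    As a minimal sketch, the change is a single line in the instance profile; only the parameter name comes from this thread, the path and comments are illustrative:

      # Instance profile (e.g. /usr/sap/<SID>/SYS/profile/<instance_profile>)
      # 'G' = store spool data as files in the global directory instead of
      # in the TemSe database tables (TST01/TST03)
      rspo/store_location = G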
    Regards,
    Yoganand.V

  • Performance issue with joins on tables VBAK, VBEP, VBKD and VBAP

    Hi all,
    I have a report with a join on all four tables: VBAK, VBEP, VBKD and VBAP.
    The report is giving performance issues because of this join.
    All the key fields are used for the join, but some non-key fields like vbap-vstel, vbap-abgru and vbep-wadat are also part of the select query and are getting filled.
    Because of these there is a performance issue.
    Is there any way I can improve the performance of the join select query?
    I am trying the "for all entries" clause...
    Kindly provide an alternative if possible.
    Thanks.

    Hi,
    Please perform some of the below steps as applicable for performance improvement (a sketch of steps (a) and (b) follows the list):
    a) Remove the join on all the tables and join only header and item (VBAK & VBAP).
    b) The code should have separate selects for VBEP and VBKD.
    c) Remove the non-key fields from the where clause. Once you retrieve data from the database into the internal table, sort the table and delete the entries which do not match the non-key fields like vstel, abgru and wadat.
    d) The last option is to create an index on the VBAP & VBEP tables for the fields vstel, abgru & wadat (not advisable).
    e) Buffering of the database tables is also a possibility.
    f) Select only the fields into the internal table that are needed for the processing logic, and list the field names in the select query in the same order as in the database table.
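    A minimal sketch of steps (a) and (b), assuming standard field names; the report skeleton, the internal table names and the selection criterion s_vbeln are made up for illustration:

      REPORT zsd_join_sketch.

      TABLES: vbak.
      SELECT-OPTIONS: s_vbeln FOR vbak-vbeln.

      TYPES: BEGIN OF ty_item,
               vbeln TYPE vbap-vbeln,
               posnr TYPE vbap-posnr,
               vstel TYPE vbap-vstel,
               abgru TYPE vbap-abgru,
             END OF ty_item,
             BEGIN OF ty_sched,
               vbeln TYPE vbep-vbeln,
               posnr TYPE vbep-posnr,
               wadat TYPE vbep-wadat,
             END OF ty_sched.
      DATA: lt_item  TYPE STANDARD TABLE OF ty_item,
            lt_sched TYPE STANDARD TABLE OF ty_sched.

      " (a) join only header and item
      SELECT p~vbeln p~posnr p~vstel p~abgru
        INTO TABLE lt_item
        FROM vbak AS k INNER JOIN vbap AS p ON k~vbeln = p~vbeln
        WHERE k~vbeln IN s_vbeln.

      " (b) separate select for the schedule lines, guarded against an empty table
      IF lt_item[] IS NOT INITIAL.
        SELECT vbeln posnr wadat
          INTO TABLE lt_sched
          FROM vbep
          FOR ALL ENTRIES IN lt_item
          WHERE vbeln = lt_item-vbeln
          AND   posnr = lt_item-posnr.
      ENDIF.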
    Hope this helps.
    Regards
    JLN

  • Performance issue related to OWM? Oracle version is 10.2.0.4

    The optimizer picks a hash join instead of a nested loop for queries involving OWM tables, which causes full table scans everywhere. I wonder if this happens in your databases as well, or just ours. If you have seen this and know how to solve it, that would be greatly appreciated! I did log an SR with Oracle, but it usually takes months to reach a solution.
    Thanks for any possible answers!

    Ha, it sounded like you knew what I was talking about :)
    I thought the issue must have had something to do with OWM, because some complicated queries have no performance issue when they are on regular tables. There's a batch job which used to take an hour to run; now it takes 4.5 hours. I rewrote the job to move the queries from OWM to regular tables, and it takes 20 minutes. However, today when I tried to get explain plans for some queries involving regular tables with large amounts of data, I got the same full-table-scan problem with hash joins. So I'm convinced that it probably is not OWM. But the patch removing the bug fix didn't help with the situation here.
    I was hoping that other companies might have had this problem and found a way to work around it. If it's not OWM, I'm surprised that this only happens in our system.
    Thanks for the reply anyway!
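    A minimal sketch of how one of the affected queries could be checked; the table and column names are placeholders, and the USE_NL hint merely asks the optimizer for a nested-loop plan so the two plans can be compared:

      -- Placeholder names; compare this plan with the default (hash join) one.
      EXPLAIN PLAN FOR
      SELECT /*+ USE_NL(d) */ m.id, d.detail_col
        FROM master_tab m
        JOIN detail_tab d ON d.master_id = m.id
       WHERE m.created_date > SYSDATE - 1;

      SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);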

  • Experiencing strange performance issues after a hard drive failure - Help!

    I bought my mid-2012 i5 Macbook Pro in December of 2012. I realized when shopping for computers that I wanted an SSD installed, but that it would be a lot cheaper if I bought the SSD and installed it rather than customizing it in the Apple Store. So I bought a nice Samsung 128GB SSD (820 or 840 - can't remember which) and did the installation. I went ahead and installed two 4GB sticks of RAM while I was at it. Everything was just dandy: my boot time was just under 9 seconds, and all of my data-heavy apps booted in no-time at all. Then all **** broke loose.
    About two weeks ago, I opened my computer and I got the dreaded "? File Folder" notification with a gray screen. I immediately thought hard drive failure. No matter how many times I tried to boot, the computer just would not talk to the SSD anymore. I used Internet Recovery to get into my Disk Utility, and the entire partition was gone. I assumed the worst but wanted to be sure - I bought a hard drive enclosure and hooked the SSD up to an older Macbook, and lo and behold: it worked perfectly. I was not only able to recover data, but I could write data to the drive. Nothing appeared wrong with the drive when I plugged it into the old Macbook, but my newer Macbook still would not recognize it. Even my fiance's Windows 7 PC recognized the drive as "?" (since it was formatted for Mac, but hey - it recognized that it existed!).
    I decided to re-install the original HDD that came with the 2012 Macbook Pro (the one I removed in favor of the SSD). I was able to re-install the OS and I can boot up at will, but everything is different. The performance issues are extremely noticeable. I can't have more than two programs running at one time without the spinning wheel of death appearing. My boot time went from 9 seconds to 2 minutes. I know that SSDs increase performance, so some slight performance downgrade is to be expected since I am using a mechanical drive now -- but these are not normal issues. Sometimes I can't even type a web address into Safari without the wheel appearing. iTunes, and specifically the App Store, take minutes to open - and I have no media on iTunes.
    Here's the thing: I have tried just about anything to fix this problem that Google can pull up. I've verified the HDD, I've booted into Safe Mode, reset RAM and cache, run benchmarks and other performance tests, entered all sorts of weird language into Command Prompt, and studied Activity Monitor - I can't find a single red flag that would indicate anything being wrong. It appears to be a perfectly functioning, updated computer.
    I'm thinking a piece of hardware failed that triggered the error with the SSD. I'm not really sure though since all of my performance tests indicate perfectly functioning hardware. I'm a little afraid to take it to the Apple store because I know they'll tell me it's my fault for opening the computer and replacing the hard drive in the first place.
    Any ideas? At this point anything to salvage this computer would be helpful.

    Spin Cycle,
    were those other computers which were able to recognize your SSD in its external enclosure also Macs? Do you know if your SSD has its most recent firmware revision installed? (If it doesn’t, its installer can be downloaded from the Samsung SSD firmware page for burning onto a bootable DVD.) I haven’t used the 830 myself, so I don’t know what its reputation is with Macs. I have an 840 PRO in my MacBook Pro, which has been trouble-free for me, but my understanding is that the 840 EVO has had trouble with Macs in its earlier firmware revisions — so I’m wondering if the 830 has a known track record with Macs, good or bad.

  • Performance issues related to logging (ForceSingleTraceFile option)

    Dear SDN members,
    I have a question about logging.
    I would like to place the logs/traces for every application in different log files. To do this you have to set the ForceSingleTraceFile option to NO (in the config tool).
    But in an SAP presentation named 'SAP Web Application Server 6.40: SAP Logging and Tracing API', it is stated:
    - All traces by default go to the default trace file.
         - Good for performance
              - On production systems, this is a must!!!
    - Hard to find your trace messages
    - Solution: Configure development systems to pipe traces and logs for applications to their own specific trace file
    But I want the logs/traces to go to separate files at our customers (production systems) as well. So my questions are:
    What performance issues do we face if we set the ForceSingleTraceFile option to NO at our customers?
    and
    If we set ForceSingleTraceFile to NO, will the logs/traces of the SAP applications also go to different files? If so, I can imagine that it will be difficult to find the logs of the different SAP applications.
    I hope that someone can clarify the working of the ForceSingleTraceFile setting.
    Kind regards,
    Marinus Geuze

    Dear Marinus,
    The performance issues with extensive logging are related to high memory usage (for the concatenation/generation of the messages which are written to the log files), and as a result increased garbage collection frequency, as well as high disk I/O and CPU overhead for the actual logging.
    Writing to the same trace file can become a bottleneck if logging is extensive.
    In any case, this is not related to whether you write the logs to the default trace or to a separate location. I believe that the recommendation in the documentation is just about using the standard logging APIs of the SAP Java Server, because they are well optimized.
    Best regards,
    Sylvia

  • Performance issue in a custom table

    Hi All,
    I have a ztable used in a program where I suspect a performance issue in the selection. It looks like:
        SELECT ship_no invoice_no
          INTO TABLE it_ship_no_hist
          FROM zco_cust_hist
          FOR ALL ENTRIES IN it_freight
          WHERE ship_no = it_freight-tknum.
    There are 7 key fields in this table, out of which one (tknum) is used in the where condition. The table is without any secondary index.
    For performance purposes, should I create an index with just that field? Can I do that, or should an index be created only along with non-key fields?

    Hi,
    a table has - besides a few exceptions - always at least one index: the primary key. Its fields are the key fields, in the same order as in the table.
    The primary key is always there and is therefore not displayed under the button 'Indexes'.
    Is tknum a key field? What are the key fields, in the correct order? If it is in the key, and especially if it is the first key field, then it does not make sense to create an index.
    Siegfried

  • Performance issue related to AGGREGATEs

    Hi gems, can anybody give me a list of issues which we can face related to AGGREGATE maintenance in a support project?
    It's very urgent, so please respond.
    Any link or anything, please send it to my mail id:
        [email protected]

    Hi,
    Try this.
    "---" sign is the valuation of the aggregate. You can say -3 is the valuation of the aggregate design and usage. ++ means that its compression is good and access is also more (in effect, performance is good). If you check its compression ratio, it must be good. -- means the compression ratio is not so good and access is also not so good (performance is not so good).The more is the positives...more is useful the aggregate and more it satisfies the number of queries. The greater the number of minus signs, the worse the evaluation of the aggregate. The larger the number of plus signs, the better the evaluation of the aggregate.
    if "-----" then it means it just an overhead. Aggregate can potentially be deleted and "+++++" means Aggregate is potentially very useful.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you whether it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Use the program RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    Check table RSDDAGGRDIR in SE11; you can find the last call-up of each aggregate there.
    Generate Report in RSRT 
    http://help.sap.com/saphelp_nw04/helpdata/en/74/e8caaea70d7a41b03dc82637ae0fa5/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    /people/juergen.noe/blog/2007/12/13/overview-important-bi-performance-transactions
    /people/prakash.darji/blog/2006/01/26/query-optimization
    Cube Performance
    /thread/785462 [original link is broken]
    Thanks,
    JituK

  • View object performance issue with Oracle seeded tables

    While writing a view object on Oracle seeded tables like MTL_PARAMETERS, it takes a long time to display in the OAF page. I am trying to display all of the view object's columns in the detail disclosure of an advanced table. My application takes more than two minutes to display the view columns of a query which returns just 200 rows. Please help me improve performance when my query uses seeded tables.
    This issue happens only with R12 view objects and advanced tables.

    Hi All,
    Here is the architecture of my application:
    The Java application creates XML from the screen values and then inserts that XML into a framework (separate DB schema) table. Java then calls a stored procedure in the same framework DB, and in the SP we have the following steps:
    1. It fetches the XML from the XML-type table and inserts it into a screen-specific XMLTYPE table in the framework DB schema. This table has a trigger which parses the XML and then inserts the XML values into GTTs which are created in separate product schemas.
    2. It calls the product SP, which contains the business logic. The product SP does the execution and then inserts the response into a response GTT.
    3. The response XML is created using an XML generation function and the response GTT.
    I hope you will understand my architecture this time. Now let me know whether GTTs are good in this scenario or not. Also please note that I need the data in the GTTs only during execution and not after that; I don't want to do explicit deletes, which I would have to do if I were using normal tables.
    Regards,
    Vikas Kumar
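    A minimal sketch of the kind of GTT this describes, with illustrative names; ON COMMIT DELETE ROWS gives exactly the 'no explicit delete' behaviour mentioned above:

      -- Rows in a GTT are private to the session and, with ON COMMIT
      -- DELETE ROWS, disappear at the end of the transaction.
      CREATE GLOBAL TEMPORARY TABLE response_gtt (
        request_id NUMBER,
        payload    XMLTYPE
      ) ON COMMIT DELETE ROWS;

      -- Use ON COMMIT PRESERVE ROWS instead if the data must survive until
      -- the session ends rather than until the end of the transaction.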

  • Performance issue when selecting from the LIPS table in a program

    Hi experts,
    I have created a pending sales order report, and I am facing a performance problem with the selection from the LIPS table.
    I have tried to use the VLPMA table, but performance has not improved. So, is there any need to create a secondary index, and if yes, which fields of the LIPS table should I include in the index?
    Please reply.
    Regards,
    Jyotsna

    >
    UmaDave wrote:
    > Hi,
    > 1. Please make use of PACKAGE SIZE in your select query; it will definitely improve the performance.
    > 2. Please use the primary index by passing the fields in the where clause in the order in which they appear in the LIPS table.
    > 3. You can also create a secondary index with the fields which you are using in the where clause of the select query, and maintain the fields in the same sequence (where clause and secondary index).
    > 4. If there are many inner joins (more than 3), then reduce them, use a few more select queries instead, and make use of for all entries.
    >
    > This will definitely improve the performance to a great extent.
    >
    > Hope this is helpful.
    > Regards,
    > Uma
    Please do some more research before offering advice:
    PACKAGE SIZE is for memory management, not performance.
    Creating a secondary index is using a hammer to swat a fly and the order in the SELECT is not relevant.
    FAE does not improve performance over a JOIN.
    Rob
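    For reference, a minimal sketch of the PACKAGE SIZE construct being discussed; as Rob notes, it bounds memory use per batch rather than making the read faster. The report skeleton and the selection criterion s_vbeln are made up for illustration:

      REPORT zlips_package_sketch.

      TABLES: lips.
      SELECT-OPTIONS: s_vbeln FOR lips-vbeln.

      DATA: lt_lips TYPE STANDARD TABLE OF lips.

      " Each pass fills lt_lips with at most 10,000 rows, replacing the
      " previous batch, so memory stays bounded however large LIPS is.
      SELECT * FROM lips
        INTO TABLE lt_lips
        PACKAGE SIZE 10000
        WHERE vbeln IN s_vbeln.

        " ... process this batch of lt_lips here ...

      ENDSELECT.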

  • Update a table based on the min value of a column of another table. Please help.

    Dear All,
    Wishes,
    Actually I need an update statement for something like the below scenario...
    The data in the tables is like below.
    I wrote a query to fetch the data shown below (actually, each control number can have a single PO or multiple POs under it; I used RANK to get a parent-to-tree-like display of the data):
    Table: T20
    Control_no    P_no      Col3
    19950021      726473    00
    19950036      731016    00
    19950072      731990    00
                  731990    01
    19950353      734732    00
                  734732    01
    19950406      736189    00
                  736588    01
                  736588    02
                  736588    03
    Table: T30
    Control_no    P_no      Col3
    19950021      726473
    19950036      731016
    19950072      731990
                  731990
    19950353      734732
                  734732
    19950406      736189
                  736588
                  736588
                  736588
    Now the requirement is that I need to update T30's Col3 (the values exist in T20 but not in this table) in such a way that it takes MIN(Col3) from T20 and updates the related Col3.
    I can explain it better through the data format below, showing T30 after the update:
    Table: T30
    Control_no    P_no      Col3 (updated column)
    19950021      726473    00   -- min value for P_no 726473 under control no 19950021 in T20 above
    19950036      731016    00   -- min value for P_no 731016 under control no 19950036 in T20 above
    19950072      731990    00   -- both rows for this P_no get '00': the MIN of Col3 (00, 01) in T20
                  731990    00
    19950353      734732    00   -- same again: both rows get '00', the MIN of Col3 (00, 01) in T20
                  734732    00
    19950406      736189    00   -- single Col3 value in T20, so 00 goes here
                  736588    01   -- min value of Col3 for this P_no in T20 is '01' (out of 01, 02, 03)
                  736588    01
                  736588    01
    Hope I am clear in my requirement (update T30.Col3 based on the min value of Col3 of the related records in T20).
    Please suggest some update SQL for this (ideas would be great).
    I am using Oracle 10g; we will soon be migrating to 11g.
    Regards
    Prasanth

    Onenessboy wrote:
    > I am really sorry, my post looks so nonsensical..
    > I used [pre] [/pre] for the actual code and for the output I typed, but it still does not look good..
    > hmm.. thanks for your suggestion, hoek..
    > so, any ideas about my requirement...
    I would suggest spending a bit more time trying hoek's suggestion regarding {noformat}{noformat} tags instead of repeatedly asking for more help.
    Because to understand your requirement, people are going to have to read it first.
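    A minimal sketch of one way to write the requested update, assuming P_no alone is enough to correlate the T30 rows with their T20 counterparts (as in the sample data above):

      -- Correlated update: every T30 row gets the minimum Col3 that T20
      -- holds for the same P_no; the EXISTS clause leaves untouched any
      -- T30 row that has no counterpart in T20.
      UPDATE t30
         SET col3 = (SELECT MIN(t20.col3)
                       FROM t20
                      WHERE t20.p_no = t30.p_no)
       WHERE EXISTS (SELECT 1
                       FROM t20
                      WHERE t20.p_no = t30.p_no);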

  • Performance issues in tables with millions of records

    I have a scenario wherein I have some 20 tables, each with a million or more records. [Historical]
    On average I add 1,500 - 2,500 records a day, i.e. I would add roughly a million records every year.
    I am looking for archival solutions for these master tables.
    Operations on the archival tables would be limited to reads.
    Expected usage:
    The user base would be around 2,500 users in total, with 300 - 500 parallel users at the peak.
    Very limited usage of historical data, compared to operations on current data.
    Performance of operations on current data is more important than on historical data.
    Environment: Oracle 9i, migrating to Oracle 10g soon.
    Some solutions I could think of...
    [1] Put every archived record into an archival table and fetch it from there,
    i.e. clearly distinguish searches as current or archival prior to searching.
    The impact, I feel, is that the archival tables again keep growing by approximately a million records a year.
    [2] Put records into separate archival tables, one per year.
    For instance, every year I replicate the set of tables and that year's data goes into those tables.
    But then how do I do a fetch?
    Note: I do have a unique way of identifying each record in my master table; the primary key is based on a YYYYMMXXXXXXXXXX format, e.g. 2008070000562330. Will the year part help me in any way to pick the correct table?
    The major concern is that I currently have very good response times, thanks to indexing and other common measures, and I do not want this to degrade over the next year and beyond; rather, I expect to improve on the current response times and to keep them stable over time.
    Also, I don't want to change every query in my app until there is no way out..

    Hi,
    Read the Oracle documentation about Partitioning.
    Best Regards,
    Alex
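    A minimal sketch of what range partitioning by year could look like here, with illustrative names; since the optimizer prunes partitions transparently, existing queries would not need to change:

      -- Illustrative DDL: the year is derivable from the YYYYMM... key
      -- described above and stored in its own column as the partition key.
      CREATE TABLE master_hist (
        rec_id   NUMBER(16) PRIMARY KEY,   -- e.g. 2008070000562330
        rec_year NUMBER(4)  NOT NULL,      -- e.g. 2008, derived from rec_id
        payload  VARCHAR2(4000)
      )
      PARTITION BY RANGE (rec_year) (
        PARTITION p2007 VALUES LESS THAN (2008),
        PARTITION p2008 VALUES LESS THAN (2009),
        PARTITION pmax  VALUES LESS THAN (MAXVALUE)
      );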

  • Performance issue with sys.user_history$ table

    Hi,
    I am investigating performance for one of my client's databases (at 9.2.0.8), as they are experiencing intermittently poor response times. In the Statspack report (Top SQL section) I can see that a catalog table called SYS.USER_HISTORY$ is being accessed very frequently. Now, I understand that this is a result of password limits being set in users' profiles, but each time the table is accessed it appears to incur a full table scan, resulting in over 7,500 gets each time. The total buffer gets (and physical blocks read) from this table account for a high percentage of the total, and since the highest waits are buffer- and I/O-related, this must be a major factor. Here is an extract from the Statspack report:
    Buffer Gets    Executions    Gets per Exec    %Total    CPU Time (s)    Elapsed Time (s)    Hash Value
    2,327,138      316           7,364.4          7.8       190.22          4889.61             3236020785
      select password_date from user_history$ where user# = :1 order by password_date desc
    2,320,524      313           7,413.8          7.8       199.41          4278.44             3584552880
      delete from user_history$ where password_date < :1 and user# = :2
    2,272,260      308           7,377.5          7.6       169.36          3453.12             822812381
      select 1 from dual where exists (select password from user_history$ where password = :1 and user# = :2)
    Physical Reads    Executions    Reads per Exec    %Total    CPU Time (s)    Elapsed Time (s)    Hash Value
    1,448,689         316           4,584.5           20.6      190.22          4889.61             3236020785
      select password_date from user_history$ where user# = :1 order by password_date desc
    1,269,172         313           4,054.9           18.1      199.41          4278.44             3584552880
      delete from user_history$ where password_date < :1 and user# = :2
    1,206,906         308           3,918.5           17.2      169.36          3453.12             822812381
      select 1 from dual where exists (select password from user_history$ where password = :1 and user# = :2)
    Is there any way to improve access to this table? Since it's a catalog table, I presume it would not be acceptable to add an index to it, but, for example, would it be acceptable to assign it to a suitably sized KEEP buffer pool, which should at least reduce the amount of physical I/O incurred?
    Any ideas would be appreciated.
    Regards,
    Ian Brennan

    Hi,
    Here is the remaining information which I have now gathered:-
    select count(*) from dba_users;
    24681
    select count(*) from sys.user_history$;
    1258133
    select profile, limit from dba_profiles where resource_name = 'PASSWORD_REUSE_TIME';
    PROFILE LIMIT
    DEFAULT UNLIMITED
    PRS2_DEFAULT_PROFILE 365
    select bytes from dba_segments where SEGMENT_NAME='USER_HISTORY$';
    61865984
    explain plan for
    select password_date from user_history$ where user# = :1 order by password_date desc;
    SELECT STATEMENT CHOOSE  Cost: 647  Bytes: 913  Cardinality: 83
      2 SORT ORDER BY  Cost: 647  Bytes: 913  Cardinality: 83
        1 TABLE ACCESS FULL SYS.USER_HISTORY$  Cost: 638  Bytes: 913  Cardinality: 83
    Any further thoughts?
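    A minimal sketch of the KEEP-pool idea from the first post, with an illustrative pool size; altering a SYS-owned object like this should be confirmed with Oracle Support first:

      -- Size the keep pool (value illustrative; SCOPE=BOTH needs an spfile):
      ALTER SYSTEM SET db_keep_cache_size = 64M SCOPE = BOTH;

      -- Assign the table to the keep pool so its blocks stay cached:
      ALTER TABLE sys.user_history$ STORAGE (BUFFER_POOL KEEP);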

  • Performance issue related to EP 6.0 SP2 Patch 5

    We have implemented SAP EP 6.0 SP2 Patch 5. We have also configured IIS 6.0 to access our portal from the internet.
    When we access the portal from the internet, it is very slow. Sometimes pages take 5-10 minutes to load.
    I am using the caching technique for the iViews. I wanted to know whether it is a good idea to use caching, because it is taking a lot of time to load the iViews.
    I would really appreciate any comments or suggestions.

    Paritosh,
    I think you need to analyze the issue step by step, as the response time seems to be very high. Here are a few suggestions. A high response time could be due to many factors: the server side, the network, and client browser settings. Let us analyze the case step by step.
    1) Do a basic test accessing the EP within the same network (LAN), to eliminate the network and verify that everything works fine within the LAN.
    2) If performance is not acceptable within the LAN, then access over WAN or the internet will not be better anyway. If LAN performance is not acceptable (this requires that you know the acceptable response time, say 5 seconds or so), you need to find out whether you have large contents in the page you are accessing. You need to know how many iViews you have on the page. What kind of iViews are they; do they go to a backend system? If they go to the backend, how? Are they using ITS or JCo-RFC? If they go through ITS, how about accessing the same page directly via ITS; do you get the same problem? If you are using JCo, have you monitored the RFC traffic (size of data and number of round trips, using ST05)?
    There could be many other potential issues. Have you done proper tuning of the EP for JVM parameters, threads, etc.? Are you using keep-alive settings in the dispatcher, firewall, and load balancer (if any)? Do you have compression enabled in the J2EE server? Do you use content expiration at the J2EE server? How are your browser's cache settings?
    In summary, we would like to start with the EP landscape with all its components. We need to make sure that the response time is acceptable within the LAN. If we are happy with that, we can look into the network part for WAN/internet performance.
    I hope this gives you a few starting points. Once you provide more information, we can follow up.
    Thanks,
    Swapan
