PSJ - Performance enhancement

Hello there,
I have developed a function module that uses function module "LDB_PROCESS" with the logical database PSJ.
I need to retrieve data about budgets, actuals (both from COSS1 & COSSP1) and POC (from RPSQT).
When processing large projects, the retrieval takes 5 times longer (!!!) than a retrieval done the "traditional way" (by selecting directly from the tables).
I suspect I did something wrong in the way I used the mentioned function module.
Can anybody give me some clues?
Has anybody used this function module with the LDB PSJ?
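For reference, the LDB_PROCESS calling pattern being discussed looks roughly like the sketch below. The node name 'PRPS', the callback routine and the omission of selections are illustrative assumptions, not the poster's actual code; the callback interface and the real node names of PSJ should be checked in SE36 and in the LDB_PROCESS documentation. Performance with LDB_PROCESS usually depends on registering callbacks only for the nodes that are really needed and passing tight selections; otherwise the LDB reads far more than a direct SELECT would.

DATA: lt_callback TYPE TABLE OF ldbcb,
      ls_callback TYPE ldbcb.

* Register a callback only for the nodes that are really needed;
* every additional node makes the LDB read more data.
ls_callback-ldbnode = 'PRPS'.        "example node - verify against PSJ in SE36
ls_callback-get     = 'X'.
ls_callback-cb_prog = sy-repid.
ls_callback-cb_form = 'CB_PRPS'.     "hypothetical callback routine
APPEND ls_callback TO lt_callback.

CALL FUNCTION 'LDB_PROCESS'
  EXPORTING
    ldbname  = 'PSJ'
  TABLES
    callback = lt_callback
  EXCEPTIONS
    OTHERS   = 1.
IF sy-subrc <> 0.
  " handle the error (variant/selection problems, LDB errors, ...)
ENDIF.

* Called by the LDB once per record of the registered node;
* the work area type must match the node's structure.
FORM cb_prps USING iv_node TYPE ldbn-ldbnode
                   is_prps TYPE prps
                   iv_evt  TYPE c
                   iv_chk  TYPE c.
  " collect the record into an internal table for later processing
ENDFORM.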

Hi
Logical databases (LDBs) are a fairly old technique; they are not well suited to retrieving large volumes of data and often cause performance issues.
Therefore the ideal approach is a custom development that fetches exactly the data you require and uses it accordingly.
If possible, you can also view the logical database in transaction SE36 and check how the data retrieval in its program can be made more effective.
<b>SELECT ... ENDSELECT statements usually consume most of the database time. Try to avoid them as far as possible: fetch the data into internal tables once and work with it locally, instead of going back to the database server again and again.</b>
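To illustrate the point above, here is a minimal sketch. The table PRPS and the selection on the company code are just examples chosen for this thread, not taken from the original program:

PARAMETERS p_bukrs TYPE prps-pbukr.

DATA: ls_prps TYPE prps,
      lt_prps TYPE STANDARD TABLE OF prps.

* Row-by-row retrieval: every row is passed to the program individually
SELECT * FROM prps INTO ls_prps WHERE pbukr = p_bukrs.
  " process one row per pass through the SELECT loop
ENDSELECT.

* Array fetch: one trip to the database, then purely local processing
SELECT * FROM prps INTO TABLE lt_prps WHERE pbukr = p_bukrs.

LOOP AT lt_prps INTO ls_prps.
  " all further processing happens in memory
ENDLOOP.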
Ask the SAP ABAP developers in your team to suggest something from their end.
Hope this helps.
Please reward suitable points.
Regards
- Atul

Similar Messages

  • Performance Enhancements

After owning (and still owning) a herd of PowerMac G4s, I recently attained the glorious heights of Mirrored-Drive Doors with a Dual 1.25 FW800. I also just got a Dual 533 Digital Audio and that thing smokes--it's blazingly fast and nimble. By comparison, the FW800 seems more like a slightly decadent ocean liner--grand, big, loaded with expensive materials, but hardly the performance machine I expected. I have upgraded the RAM to 2GB, but don't notice much difference. I do notice the din produced by the world's loudest fan, which after an hour or so reaches an annoying level of noise (it sounds like a diesel). I want to be amazed by this massive disco-edition PowerMac--what are some performance-enhancing tweaks and adjustments I can make to release its powers? In the meantime, the Dual 533 is jumping through hoops to get my attention.

I have a MacBook Pro, which is unfondly known as the EasyBake Pro or the George Foreman Portable Grill (it is regularly 175°-200° F in the cores). Is the Mac Pro unreasonably hot? Granted, you don't come into contact with it like you would with a laptop (I mean notebook--I'd never put it on my lap). I imagine my next new/current Mac will be an Intel desktop, but in the meantime my herd of G4s adds up in resale value to only half the price of the Mac Pro. I love the server room idea and can't wait to moor the Queen Mary in here. Thx.

  • Anybody have a recommendation on a third party disk utility, performance enhancing software?

    Anybody have a recommendation on a third party disk utility, performance enhancing software?

    What performance are you wanting to enhance on your MacBook Air?
    OSX does its own routine maintenance, usually at night. You can run those scripts during the day using Onyx, Cocktail or similar utilities.
    The best performance procedure is to keep your software updated, repair permissions using Disk Utility, and not fill your hard drive to capacity (keep about 15% free).

  • Order related Billing : Performance enhancement

    Hi Billing experts,
    We use order related standard CRM Billing (Periodic) and we have the below scenario.
100 customers share 1,000 contracts with 0.5 million sales orders, which undergo billing every month. Currently it takes 30 hours to finish the billing; our objective is to finish it in 15 hours.
Please advise on how we can enhance the performance of the billing.
    Regards
    Satish kumar

    Hi Ashis,
Please follow the steps below:
1. Set up two different billing types (one for order-related billing and one for delivery-related billing) in the sales document type configuration.
2. Then set up the relevant item category determination for this sales document type, so that the proper item category is picked up at sales order level.
3. Then set up the copy control for the sales document type / billing document type at header as well as item level.
I hope this works out for your query.
    Reward points if you are satisfied.
    Regards,
    Hrishi

Redesign for performance enhancement

    Hi experts,
I have a problem which is a bit unclear at this point in time. I have a program with two subroutines and its performance is very poor, so we decided to redesign it into three individual programs: one for each subroutine, plus a 'HAT' program which calls these two programs. Please let me know if this enhances the performance and whether it is feasible.
    regards,
    anirvesh

Hi Rob,
The code is very large. In one subroutine we identify the records to be updated, and in the other subroutine we update them. The actual plan is to do parallel processing. So if we do parallel processing, does splitting the program help (see the sketch below)?
Thanks.
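Splitting the program by itself does not make anything faster; the gain only comes if the update work is actually distributed. A common way to do that in ABAP is asynchronous RFC, sketched below. The function module Z_UPDATE_PACKAGE, its IV_PACKAGE parameter and the fixed four work packages are assumptions for illustration only; the real update logic would have to be wrapped in an RFC-enabled function module.

DATA: gv_open_tasks TYPE i,
      lv_task       TYPE c LENGTH 8.

DO 4 TIMES.                               "four work packages, just as an example
  lv_task = sy-index.
  CONDENSE lv_task.
  CALL FUNCTION 'Z_UPDATE_PACKAGE'        "hypothetical RFC-enabled function module
    STARTING NEW TASK lv_task
    DESTINATION IN GROUP DEFAULT
    PERFORMING on_task_done ON END OF TASK
    EXPORTING
      iv_package = sy-index.              "tells the FM which slice of records to update
  gv_open_tasks = gv_open_tasks + 1.
ENDDO.

WAIT UNTIL gv_open_tasks = 0.             "resume only when all packages have returned

FORM on_task_done USING iv_taskname TYPE clike.
  RECEIVE RESULTS FROM FUNCTION 'Z_UPDATE_PACKAGE'
    EXCEPTIONS
      system_failure        = 1
      communication_failure = 2.
  gv_open_tasks = gv_open_tasks - 1.
ENDFORM.

Whether this actually helps depends on the update itself: if most of the time is spent waiting on database locks or in one large UPDATE, the parallel tasks will mostly wait on each other.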

  • Dynamic client performance enhancement

    Hi ,
I am using DII to invoke web services. I know there is a performance overhead with DII, but I need to use it as it suits my requirement.
How can the performance be improved when DII is used?
Are client-side stubs created when a web service is invoked using DII? (I did not see any stub classes created on my file system; are they stored in memory?) If yes, which method creates them: service.createCall() or call.invoke(params)?
I was also thinking whether the Call object, which is not serialisable, could be cached, so that I can just invoke call.invoke(different parameters) for the subsequent requests.
Please let me know.
Your help will be highly appreciated.
-Amol

You need to use DII only if the web service that is invoked is not described by a WSDL. If a WSDL is available, then clientgen can generate a stub from it. A generated stub is type-safe and easy to use.
Regards,
-manoj
http://manojc.com
    "Amol Desai" <[email protected]> wrote in message
    news:3f02d601$[email protected]..
    >
Hi Neal,
If the two approaches are similar in performance, then why/where should a stub-based call be used?
Thanks,
-Amol
    "Neal Yin" <[email protected]> wrote:
Hi Amol,
Each service.createCall() creates a new Call instance. You can reuse the same Call for multiple invocations. Did you compare the DII performance and the stub performance? There should be no significant performance difference.
Thanks,
-Neal
    "Amol Desai" <[email protected]> wrote in message
    news:3f019f3e$[email protected]..
    Hi ,
    I am using DII to invoke web services. I know there is a performanceoverhead
    with DII but I need it to use it as it suits my requirement .
    How could the performance be improved when DII is used ?
    Are client side stubs created when web service is invoked using DII( i
    did not
    see any stub classes created on my file sysyem, is it stored in
    memory).
    If yes
    which method creates it,
    service.createCall() or call.invoke(parmas).
    I was also thinking whether the call object which is not serialisablecould be
    cached.So that I can invoke just the call.inovke(different
    parameters)
    method
    for the subsequent requests.
    Please let me know.
    Your help will be highly apreciated.
    -Amol

  • Performance enhancement for parallel loops

    Hi,
I have a performance problem with the following nested loops. Please help me improve the performance of this report urgently.
LOOP AT xt_git_ekpo INTO lv_wa_ekpo.
  lv_wa_final-afnam = lv_wa_ekpo-afnam.
  LOOP AT xt_git_ekkn INTO lv_wa_ekkn WHERE ebeln = lv_wa_ekpo-ebeln
                                        AND ebelp = lv_wa_ekpo-ebelp.
    lv_wa_final-meins = lv_wa_ekpo-meins.
    READ TABLE xt_git_ekko INTO lv_wa_ekko
         WITH KEY ebeln = lv_wa_ekpo-ebeln
         BINARY SEARCH.
    IF sy-subrc IS INITIAL.
      lv_wa_final-ebeln = lv_wa_ekko-ebeln.
      lv_wa_final-ebelp = lv_wa_ekpo-ebelp.
      lv_wa_final-txz01 = lv_wa_ekpo-txz01.
      lv_wa_final-aedat = lv_wa_ekko-aedat.
      READ TABLE xt_git_lfa1 INTO lv_wa_lfa1
           WITH KEY lifnr = lv_wa_ekko-lifnr
           BINARY SEARCH.
      IF sy-subrc IS INITIAL.
        lv_wa_final-lifnr = lv_wa_lfa1-lifnr.
        lv_wa_final-name1 = lv_wa_lfa1-name1.
      ENDIF.
      LOOP AT xt_git_ekbe INTO lv_wa_ekbe WHERE ebeln = lv_wa_ekpo-ebeln
                                            AND ebelp = lv_wa_ekpo-ebelp.
Waiting for a quick reply.

    Hi
You can use a SORTED TABLE instead of a STANDARD TABLE:
DATA: xt_git_ekkn TYPE SORTED TABLE OF ekkn WITH NON-UNIQUE KEY ebeln ebelp,
      xt_git_ekbe TYPE SORTED TABLE OF ekbe WITH NON-UNIQUE KEY ebeln ebelp.

LOOP AT xt_git_ekpo INTO lv_wa_ekpo.
  lv_wa_final-afnam = lv_wa_ekpo-afnam.
  LOOP AT xt_git_ekkn INTO lv_wa_ekkn WHERE ebeln = lv_wa_ekpo-ebeln
                                        AND ebelp = lv_wa_ekpo-ebelp.
    lv_wa_final-meins = lv_wa_ekpo-meins.
    READ TABLE xt_git_ekko INTO lv_wa_ekko
         WITH KEY ebeln = lv_wa_ekpo-ebeln
         BINARY SEARCH.
    IF sy-subrc IS INITIAL.
      lv_wa_final-ebeln = lv_wa_ekko-ebeln.
      lv_wa_final-ebelp = lv_wa_ekpo-ebelp.
      lv_wa_final-txz01 = lv_wa_ekpo-txz01.
      lv_wa_final-aedat = lv_wa_ekko-aedat.
      READ TABLE xt_git_lfa1 INTO lv_wa_lfa1
           WITH KEY lifnr = lv_wa_ekko-lifnr
           BINARY SEARCH.
      IF sy-subrc IS INITIAL.
        lv_wa_final-lifnr = lv_wa_lfa1-lifnr.
        lv_wa_final-name1 = lv_wa_lfa1-name1.
      ENDIF.
      LOOP AT xt_git_ekbe INTO lv_wa_ekbe WHERE ebeln = lv_wa_ekpo-ebeln
                                            AND ebelp = lv_wa_ekpo-ebelp.
Anyway, you should consider loading into the internal tables only the records of the current document; in that case you need to move the SELECTs inside the loop:
SORT xt_git_ekpo BY ebeln ebelp.

LOOP AT xt_git_ekpo INTO lv_wa_ekpo.
  lv_wa_final-afnam = lv_wa_ekpo-afnam.
  IF lv_wa_ekkn-ebeln <> lv_wa_ekpo-ebeln.
    SELECT * FROM ekkn INTO TABLE xt_git_ekkn WHERE ebeln = lv_wa_ekpo-ebeln.
    SELECT * FROM ekbe INTO TABLE xt_git_ekbe WHERE ebeln = lv_wa_ekpo-ebeln.
  ENDIF.
  LOOP AT xt_git_ekkn INTO lv_wa_ekkn WHERE ebelp = lv_wa_ekpo-ebelp.
    lv_wa_final-meins = lv_wa_ekpo-meins.
    READ TABLE xt_git_ekko INTO lv_wa_ekko
         WITH KEY ebeln = lv_wa_ekpo-ebeln
         BINARY SEARCH.
    IF sy-subrc IS INITIAL.
      lv_wa_final-ebeln = lv_wa_ekko-ebeln.
      lv_wa_final-ebelp = lv_wa_ekpo-ebelp.
      lv_wa_final-txz01 = lv_wa_ekpo-txz01.
      lv_wa_final-aedat = lv_wa_ekko-aedat.
      READ TABLE xt_git_lfa1 INTO lv_wa_lfa1
           WITH KEY lifnr = lv_wa_ekko-lifnr
           BINARY SEARCH.
      IF sy-subrc IS INITIAL.
        lv_wa_final-lifnr = lv_wa_lfa1-lifnr.
        lv_wa_final-name1 = lv_wa_lfa1-name1.
      ENDIF.
      LOOP AT xt_git_ekbe INTO lv_wa_ekbe WHERE ebelp = lv_wa_ekpo-ebelp.
In my experience (with a very large number of records) the second solution was faster than the first one:
- Using the first solution (load all data into internal tables and use sorted tables), my job took 2 to 3 days.
- Using the second solution, my job took 1 hour.
    Max
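Since both snippets above are cut off before the loops close, here is a self-contained variant of the sorted-table approach with all declarations and closing statements added. The result structure ty_final, the target table xt_git_final and the final APPEND are assumptions (the original post never shows them); the rest follows the field assignments above, with EKKO and LFA1 also kept as sorted tables so the READs become key accesses.

TYPES: BEGIN OF ty_final,
         ebeln TYPE ekko-ebeln,
         ebelp TYPE ekpo-ebelp,
         afnam TYPE ekpo-afnam,
         meins TYPE ekpo-meins,
         txz01 TYPE ekpo-txz01,
         aedat TYPE ekko-aedat,
         lifnr TYPE lfa1-lifnr,
         name1 TYPE lfa1-name1,
       END OF ty_final.

DATA: xt_git_ekpo  TYPE STANDARD TABLE OF ekpo,
      xt_git_ekko  TYPE SORTED TABLE OF ekko WITH UNIQUE KEY ebeln,
      xt_git_lfa1  TYPE SORTED TABLE OF lfa1 WITH UNIQUE KEY lifnr,
      xt_git_ekkn  TYPE SORTED TABLE OF ekkn WITH NON-UNIQUE KEY ebeln ebelp,
      xt_git_ekbe  TYPE SORTED TABLE OF ekbe WITH NON-UNIQUE KEY ebeln ebelp,
      xt_git_final TYPE STANDARD TABLE OF ty_final,
      lv_wa_ekpo   TYPE ekpo,
      lv_wa_ekko   TYPE ekko,
      lv_wa_lfa1   TYPE lfa1,
      lv_wa_ekkn   TYPE ekkn,
      lv_wa_ekbe   TYPE ekbe,
      lv_wa_final  TYPE ty_final.

LOOP AT xt_git_ekpo INTO lv_wa_ekpo.
  CLEAR lv_wa_final.
  lv_wa_final-afnam = lv_wa_ekpo-afnam.
  lv_wa_final-meins = lv_wa_ekpo-meins.
  lv_wa_final-ebelp = lv_wa_ekpo-ebelp.
  lv_wa_final-txz01 = lv_wa_ekpo-txz01.

  "On sorted tables with a unique key, READ ... WITH TABLE KEY is a fast key access
  READ TABLE xt_git_ekko INTO lv_wa_ekko WITH TABLE KEY ebeln = lv_wa_ekpo-ebeln.
  IF sy-subrc = 0.
    lv_wa_final-ebeln = lv_wa_ekko-ebeln.
    lv_wa_final-aedat = lv_wa_ekko-aedat.
    READ TABLE xt_git_lfa1 INTO lv_wa_lfa1 WITH TABLE KEY lifnr = lv_wa_ekko-lifnr.
    IF sy-subrc = 0.
      lv_wa_final-lifnr = lv_wa_lfa1-lifnr.
      lv_wa_final-name1 = lv_wa_lfa1-name1.
    ENDIF.
  ENDIF.

  "LOOP ... WHERE on the leading key fields of a sorted table is an index access,
  "so these inner loops no longer scan the whole table
  LOOP AT xt_git_ekkn INTO lv_wa_ekkn WHERE ebeln = lv_wa_ekpo-ebeln
                                        AND ebelp = lv_wa_ekpo-ebelp.
    LOOP AT xt_git_ekbe INTO lv_wa_ekbe WHERE ebeln = lv_wa_ekpo-ebeln
                                          AND ebelp = lv_wa_ekpo-ebelp.
      APPEND lv_wa_final TO xt_git_final.   "assumed collection step
    ENDLOOP.
  ENDLOOP.
ENDLOOP.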

LabVIEW sub-VI performance enhancement is needed

    Hello,
I have two sub-VIs in an attachment to this message that I could use some help with. I've been told by a regional NI field sales engineer that they can be improved by an NI application engineer so that they will perform much faster. Is anyone able to help me out in a bind on short notice? These sub-VIs are used to generate hexadecimal code with a checksum that's used to control the functional mode of a CDMA microprocessor. The code is passed through the serial port interface of the CDMA device and it needs to operate faster.
    /BCU
    /BCU002
    Hardware Engineer•Design Reliability/Performance and Validation Group•Wavecom, Inc. - Research Triangle Park, N.C.•http://www.wavecom.com
    Attachments:
    Sub-vi.zip ‏54 KB

    Altenbach,
Your re-creation of the original sub-VI seems to be a nice fit for my application and it does seem to run faster. However, now I'm not able to figure out why the rest of the application isn't functioning properly. The rest of my application runs through all of the sub-VIs very quickly now, but once I get to the While Loop, everything inside it stops working except for the two delays I'm using as on/off timers. I've changed the VI properties of the sub-VIs that are being used in triplicate to 'reentrant', but that's not helping me like I expected. Another odd thing is that I can't get the indicators on my top-level panel to update and display the information I need from the While Loop.
I really need more help with this application because I've spent all day and all evening trying to figure it out and I'm at a loss for understanding how to fix the problems. Can you help me out of this mess?
/BCU002
    Hardware Engineer•Design Reliability/Performance and Validation Group•Wavecom, Inc. - Research Triangle Park, N.C.•http://www.wavecom.com
    Attachments:
    GTM1-DMFTC.zip ‏2 KB

  • One of the best articles on PS CS4 performance enhancements I have seen

This web site has probably already been mentioned several times, but it has to be one of the best tutorials on enhancing PS CS4 performance I have ever seen.
    Enjoy (if you have not already seen it).
    Bob
    http://kb.adobe.com/selfservice/viewContent.do?externalId=kb404439&sliceId=1

    Thank you.
    Bill

  • Is there a Mac OS X manual/best practice/performance enhancement guide?

    Hi
I just got all Mac'ed up and am pretty new to it. I've been a PC user for ages, but at least I knew what I was doing with it! Although my Mac is super fast and efficient now, I am paranoid that, like all the PCs I've ever owned, this "new car smell" state will not last forever unless some maintenance is kept up. Is there any kind of manual or maybe a website out there that can tell me about best practices with Macs? Stuff like keeping the registry clean (like on a PC), the best way to remove applications (entirely!), managing memory for best performance, etc. I'd also like to know about stuff like how using non-Apple-made/branded applications affects the integrity of the system. Basically, I just want to find out how Mac software works and how best to use it.
    Thanks in advance.

    Start with these:
    Switching from Windows to Mac OS X,
    Basic Tutorials on using a Mac,
    MacFixIt Tutorials,
    MacTips, and
    Switching to the Mac: The Missing Manual, Leopard Edition.
    For maintenance, see these:
    Macintosh OS X Routine Maintenance
    Mac OS X speed FAQ
    Maintaining OS X
    Additionally, *Texas Mac Man* recommends:
    Quick Assist.
    Welcome to the Switch To A Mac Guides, and
    A guide for switching to a Mac.

Performance enhanced with recent OS/firmware update: Benchmarks

Hey, I benchmarked my system using Xbench before and after the updates under the same conditions and got a 25% improvement! My original score was 54.36; the new one is 67.53. I do not know much about Xbench or benchmarking, but the user-interface test went from 23 to 47, which I'm guessing shows they are making the OS faster and better able to handle the new Intel processors. Has anyone else run Xbench before and after, or does anyone just want to post what they're getting for their benchmarking scores? The program Xbench is free and can be found at www.xbench.com

    They will never issue a rollback to the previous version of the Android OS.

  • Regarding performance enhancement to reduce execution time

    Hi ,
Actually, there is a piece of code (a SELECT query) in my program that causes a TIME_OUT dump. Please help me modify the code to reduce its execution time.
The code is:
DATA: at_seque_extref TYPE zprms_ord_ind OCCURS 0 WITH HEADER LINE.

SELECT (sv_fields) FROM zprms_ord_ind
       APPENDING CORRESPONDING FIELDS OF TABLE at_seque
       UP TO lv_max_sequs ROWS
       WHERE process_type    IN lr_process_type
         AND posting_date    IN lr_pos_date
         AND sol_date        IN lr_sol_date
         AND changed_at      IN lr_chg_date
         AND external_ref    NE space
         AND (ls_where)
         AND ib_comp_network IN lr_ibc_netw
         AND ib_comp_node    IN lr_ibc_node
         AND ib_comp_site    IN lr_ibc_site
       ORDER BY (iv_order_by).

CLEAR at_seque_extref.
LOOP AT at_seque.
  IF at_seque-external_ref CP is_bus_trans_search-external_ref.
    MOVE-CORRESPONDING at_seque TO at_seque_extref.
    APPEND at_seque_extref.
  ENDIF.
ENDLOOP.
CLEAR at_seque[].
* the table at_seque[] is of type zprms_ord_ind
Or please suggest checkpoints to help reduce its execution time.

DATA: at_seque_extref TYPE zprms_ord_ind OCCURS 0 WITH HEADER LINE.

SELECT (sv_fields) FROM zprms_ord_ind
       APPENDING CORRESPONDING FIELDS OF TABLE at_seque
       PACKAGE SIZE 100                    "addition: fetch in packages of 100 rows
       UP TO lv_max_sequs ROWS
       WHERE process_type    IN lr_process_type
         AND posting_date    IN lr_pos_date
         AND sol_date        IN lr_sol_date
         AND changed_at      IN lr_chg_date
         AND external_ref    NE space
         AND (ls_where)
         AND ib_comp_network IN lr_ibc_netw
         AND ib_comp_node    IN lr_ibc_node
         AND ib_comp_site    IN lr_ibc_site
       ORDER BY (iv_order_by).
ENDSELECT.                                 "addition: PACKAGE SIZE turns the SELECT into a loop

CLEAR at_seque_extref.
LOOP AT at_seque.
  IF at_seque-external_ref CP is_bus_trans_search-external_ref.
    MOVE-CORRESPONDING at_seque TO at_seque_extref.
    APPEND at_seque_extref.
  ENDIF.
ENDLOOP.
CLEAR at_seque[].
Make the necessary modifications and copy-paste the code.
    Regards,
    Pavan
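A note on the suggestion above: with APPENDING ... PACKAGE SIZE the packages still accumulate in at_seque, so memory use does not drop. If the goal is to keep memory flat, the filtering usually moves inside the SELECT ... ENDSELECT block so each package is processed while it is still small, roughly as sketched below. The variable names are taken from the original post and their declarations are assumed to exist as shown there.

SELECT (sv_fields) FROM zprms_ord_ind
       INTO CORRESPONDING FIELDS OF TABLE at_seque   "INTO replaces the package each pass
       PACKAGE SIZE 100
       UP TO lv_max_sequs ROWS
       WHERE process_type    IN lr_process_type
         AND posting_date    IN lr_pos_date
         AND sol_date        IN lr_sol_date
         AND changed_at      IN lr_chg_date
         AND external_ref    NE space
         AND (ls_where)
         AND ib_comp_network IN lr_ibc_netw
         AND ib_comp_node    IN lr_ibc_node
         AND ib_comp_site    IN lr_ibc_site
       ORDER BY (iv_order_by).

  " filter each package of 100 rows while it is still small
  LOOP AT at_seque.
    IF at_seque-external_ref CP is_bus_trans_search-external_ref.
      MOVE-CORRESPONDING at_seque TO at_seque_extref.
      APPEND at_seque_extref.
    ENDIF.
  ENDLOOP.

ENDSELECT.

Only at_seque_extref keeps growing here; whether this actually avoids the TIME_OUT still depends mainly on how selective the WHERE clause is and whether a suitable index exists on zprms_ord_ind.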

  • RapidCreator - SAP User Performance Enhancement Tool

Do you need to train your staff to use an enterprise application suite?
Are your SAP end users struggling to use new interfaces and procedures?
RapidCreator is the perfect solution that enables effortless and efficient creation, distribution and organization of procedural documents, training materials and online help, to facilitate knowledge sharing among your team members.

    Which company is offering this tool? Please provide the contact details.

  • ViewSonic 17" LED Display Eco-Friendly and Performance Enhancing $129.99

I recently attended an HP/VMware webcast where the HP host said that iLO4-equipped servers are capable of sending hardware alerts via email. I've logged into our iLO on an ML350 Gen8 server, but I don't see the option for this. Can someone please advise on how to do this?

Hi everyone, I have a user connecting to a Server 2008 R2 RDP server. He can connect no problem, but there is quite a bit of latency which I do not understand. He pings the server at around ~60 ms. I have another server on the same site (Server 2012 R2) and he does not experience the same latency. I've connected another Windows 8.1 computer to the Server 2008 R2 box and CANNOT replicate the issue. He appears to be the only one. I've temporarily disabled his AV and Windows firewall, and disabled local visualization settings (and on the server). I've configured RDP to 56k settings in the Experience tab, which changed absolutely nothing. I've also run the netsh commands everyone seems to suggest for this issue. Any recommendations or things to try?
EDIT: I've also updated his wireless drivers. I don't have the laptop in hand to try the local...

  • XML DOM Parser Performance Enhancement

    Hello,
I parsed an XML file with a DOM parser.
When I read it (which works successfully), I see in the debugging phase that there are empty elements, which probably represent a space, tab or newline, and that are counted as child nodes.
This actually costs us double the time to run through everything.
My main question is how to get rid of these unwanted child nodes so that we iterate over fewer nodes per element.
    Thank you.

    According to the documentation, you should be able to call normalize() on either the document or the root element. In practice, I still have to iterate through the empty elements. Such is life.
    - Saish
