GL CONSOLIDATION PROCESS PERFORMANCE

Product: FIN_GL
Date written: 2002-11-07
GL CONSOLIDATION PROCESS PERFORMANCE
====================================
Performance problems that can occur during GL consolidation runs
PURPOSE
Slow GL consolidation runs
Explanation
Consolidation mapping rules can be defined in two main ways.
1. Segment Mapping Rule
A rule that says a particular segment value should be converted to a particular account. Because the work proceeds segment by segment, it has little impact on performance, so we recommend running consolidation with segment rules whenever possible.
2. Account Mapping Rule
If a range of accounts must be mapped to accounts that cannot be expressed with a segment rule, use an account rule to convert that range of accounts.
Note: when using account mapping rules, keep the ranges as small as possible. In particular, Oracle recommends not using segment mapping rules and account mapping rules at the same time.
If timing the consolidation run with GL concurrent debugging (Bulletin #17744) shows that most of the time is spent in account mapping, verify that the GL_INTERFACE_N2 index on the GL_INTERFACE table contains exactly the following columns:
1. request_id
2. je_header_id
3. status
4. code_combination_id
Example
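A quick way to confirm the index columns from the data dictionary (a minimal sketch; run as a suitably privileged user):
-- Confirm GL_INTERFACE_N2 contains exactly the four columns above, in order
SELECT column_name, column_position
  FROM dba_ind_columns
 WHERE table_name = 'GL_INTERFACE'
   AND index_name = 'GL_INTERFACE_N2'
 ORDER BY column_position;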
Reference Documents
Bug No: 2632310

Thanks for the reply, Roger. I have a solution, and it's very quick (I'm in a hurry, so apologies if this doesn't read well).
Extra info first. My FLEX_VALUE column == SEGMENT2 values.
In GL_SECURITY_PKG there is a query on a table called GL_BIS_SEGVAL_INT. This contains the segment_column_name, segment_value and parent_segment that my apps user can see.
So, I'm looking for SEGMENT2 values from this table...
select * from GL_BIS_SEGVAL_INT where SEGMENT_COLUMN_NAME = 'SEGMENT2'
I join my original query to this query and, bingo, I have what I need!
SELECT
    MY_TABLE.FLEX_VALUE,
    GL_BIS_SEGVAL_INT.SEGMENT_COLUMN_NAME,
    GL_BIS_SEGVAL_INT.SEGMENT_VALUE,
    GL_BIS_SEGVAL_INT.PARENT_SEGMENT
FROM
    MY_TABLE,
    GL_BIS_SEGVAL_INT
WHERE
    GL_BIS_SEGVAL_INT.SEGMENT_VALUE = MY_TABLE.FLEX_VALUE
    AND GL_BIS_SEGVAL_INT.SEGMENT_COLUMN_NAME = 'SEGMENT2';
This returns only the FLEX_VALUE/SEGMENT2 values my apps user has access to.
regards,
Joss.

Similar Messages

  • BPM - Process Performance Indicators and Process Monitoring

    Hi,
    In SAP BPM, is there any way to get reports or a dashboard with key PPIs (Process Performance Indicators)? What we are really interested in is knowing how many times a month a process or a specific task of a process has been run, how long it took to complete the task, whether the task was completed on time, where we are in the process right now, etc.
    We have just started to look into SAP BPM, but my first impression is that it is more a modeling tool along with some task coordination. I feel like it's missing the key analytics to really drive innovation in our processes.
    Maybe I'm wrong and I've missed something. Please let me know if there is a way to do that, if there are any workarounds, or if there are any SAP partners that offer a solution we could use along with SAP BPM.
    Thanks a lot!
    Martin

    Thanks for your feedback.
    We want to implement an SAP BPM scenario in a finance process for VAT tax reporting. Basically, our accountant needs to run some SAP transactions along with some manual outside steps. Then the supervisor performs some checks, and finally the tax manager needs to approve it.
    There is some interaction with SAP, but only for a small part of the process. What we want is to be able to see the key performance indicators for our process, but at the moment this is not delivered with SAP BPM. I've heard this may come in the next release at the end of 2009, but in the meantime I'm wondering whether other people have implemented some customization in NetWeaver or found other alternatives to monitor their process adequately.
    Thanks
    Martin

  • Increase Apply Process Performance

    Dear All,
    I want to know how I can increase apply process performance in an Oracle Streams setup.
    I am using Windows 2003 and Oracle 10g R2.

    Check Metalink Note 335516.1.
    HTH...
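    One common starting point (a sketch, not taken from the note; 'MY_APPLY' is a hypothetical apply process name, substitute your own) is to raise the apply parallelism with DBMS_APPLY_ADM:
    -- Increase the number of parallel apply servers for the apply process
    BEGIN
      DBMS_APPLY_ADM.SET_PARAMETER(
        apply_name => 'MY_APPLY',
        parameter  => 'parallelism',
        value      => '4');
    END;
    /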

  • Improving ODM Process Performance

    Hi Everyone,
    I'm running several workflows in the SQL Developer Data Miner tool to create my model. My data is around 3 million rows; to monitor the process I look at Oracle Enterprise Manager.
    From what I've seen in Oracle Enterprise Manager, most of the ODM processes from my modelling do not run in parallel, and sometimes a process does not finish within a day.
    Any tips or suggestions on how we can improve ODM process performance? By enabling parallelism on each process/query, maybe?
    Thanks

    Ensure that any input table used in modeling or scoring has its PARALLEL attribute set properly. Since mining algorithms are usually CPU bound, try to utilize whatever CPU power you have. The following might be a good starting point:
    ALTER TABLE myminingtable PARALLEL 8;  -- 8 = number of physical cores on your hardware, for example
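    To verify the setting took effect (a small sketch, assuming the table name used above):
    -- DEGREE shows the parallel degree the optimizer will consider
    SELECT table_name, degree
      FROM user_tables
     WHERE table_name = 'MYMININGTABLE';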

  • Difference between ARIS Process Performance Manager and SAP BI

    Hi All,
    I am searching for an answer to the following question: when the business purpose is to measure the performance of, e.g., a call-center process, can the Process Performance Manager from ARIS replace SAP BI?
    Regards,
    Marcel

    Hi Marcel,
    If I can add to Ajay's comment:
    I think that when your goal is to measure the performance of a process and analyze the root cause of performance problems, Process Performance Manager is best suited.
    Even if SAP BI could do it as well, I think SAP BI is better suited to data analysis such as financial reports, market studies, and so on.
    I think the bottom line is that SAP BI is a broader BI solution, but ARIS PPM is better suited to process performance measurement than SAP BI.

  • Image Processing Performance Issue | JAI

    I am processing TIFF images to generate several JPG files from each one after applying image processing.
    Following are the transformations applied:
    1. Read the TIFF image from disk. The TIFF is available in the form of a PlanarImage object.
    2. Scaling
         /* Following is the code snippet */
         PlanarImage origImg;
         ParameterBlock pb = new ParameterBlock();
         pb.addSource(origImg);
         pb.add(scaleX);
         pb.add(scaleY);
         pb.add(0.0f);
         pb.add(0.0f);
         pb.add(Interpolation.getInstance(Interpolation.INTERP_BILINEAR));
         PlanarImage scaledImage = JAI.create("scale", pb);
    3. Conversion of the planar image to a buffered image. This operation is done because we need a BufferedImage.
         /* Following is the code snippet used */
         bufferedImage = planarImage.getAsBufferedImage();
    4. Cropping
         /* Following is the code snippet used */
         bufferedImage = bufferedImage.getSubimage(artcleX, artcleY, 302, 70);
    The performance bottleneck in the above algorithm is step 3, where we convert the planar image to a buffered image before carrying out the cropping.
    The operation typically takes about 1120 ms to complete, and considering the data set I am dealing with, this is a very expensive operation. Is there an alternative to the above approach?
    I presume that if I can carry out the operation mentioned under step 4 on a PlanarImage object instead of a BufferedImage, I will save considerable processing time, as step 3 (which seems to be the bottleneck) would no longer be required. I have also noticed that the processing time of the operation in step 3 is proportional to the size of the PlanarImage object.
    Any pointers around this would be appreciated.
    Thanks,
    Anurag

    It depends on whether you want to display the data or not.
    PlanarImage (the superclass of all RenderedOps) has a method that returns a Graphics object you can use to draw on the image. This allows you to do things like write on an image.
    PlanarImage also has a getAsBufferedImage method that returns a copy of the data in a format that can be used to write to Graphics objects. This is used for simply drawing processed images to a display.
    There are also widgets called ImageCanvas (and ScrollingImagePanel) shipped with JAI (although they are not a committed part of the API). These derive from awt.Canvas/Panel and know how to render RenderedImage instances. They may use less copying/memory than getting the data as a BufferedImage and drawing it via a Graphics object. I can't say for sure, though, as I have never used them.
    Another way may be to extend JComponent (or another class) and customize it to use calls to PlanarImage/RenderedOp instances directly. This can help with large tiled images when you only want to display a small portion.
    matfud

  • Question on BPEL Process Performance

    Hello,
    We have a BPEL process that reads a data file through the File Adapter and upserts it into the database using the DB Adapter. Our requirement is:
    If there are 10 records to process and two of them (records 5 and 9) fail while inserting/updating for some reason (e.g., data type mismatch, column length mismatch, etc.), then at the end of the process you should see 8 records in the destination table and two records in the error table.
    I know there are two solutions to this:
    *1) Multiple calls to DB:* Use a While loop in the BPEL process and invoke the DB Adapter for each record, with exception handling (catch-all block).
    *2) Invoke stored procedure:* To prevent multiple calls to the DB, create a stored procedure on the DB side to iterate and insert the records; the stored procedure should also return the IDs of failed records in the error response so that you can insert those failed records into a log table or log files.
    Can you suggest which solution is best in terms of performance, and why?
    Also, we need to perform some business validation (e.g., NOT NULL checks, date format checks, etc.). Where should we perform this: at the DB level or the BPEL process level, and why?
    Thanks,
    Buddhi

    BPEL is a slow performer.
    Always call a stored procedure to do complex data processing.
    Hence, go with the second approach.
    Error records: if you're going to log errors in the same database, insert the error details directly into the error table. Don't go back to BPEL.
    Application-specific validations should be handled in the application itself.
    --Prasanna
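    As a sketch of the stored-procedure approach (DEST_TABLE and STG_RECORDS are hypothetical names), Oracle's DML error logging gives exactly the "8 rows in the destination, 2 rows in the error table" behaviour in one set-based statement:
    -- One-time setup: creates ERR$_DEST_TABLE to capture failed rows
    BEGIN
      DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'DEST_TABLE');
    END;
    /
    CREATE OR REPLACE PROCEDURE load_records AS
    BEGIN
      -- Rows that violate constraints land in ERR$_DEST_TABLE
      -- instead of failing the whole INSERT
      INSERT INTO dest_table (id, col1, col2)
        SELECT id, col1, col2
          FROM stg_records
        LOG ERRORS INTO err$_dest_table ('BPEL batch') REJECT LIMIT UNLIMITED;
      COMMIT;
    END;
    /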

  • Logical Standby Apply Process Performance

    Hello,
    We are testing the SQL apply process on our logical standby database. We run batch jobs in our active database and monitor the standby database for the time it takes to bring it back in sync. These are the steps we follow:
    1) Ensure active and standby are in sync.
    2) Stop SQL apply on the standby database.
    3) Run the batch job on the active database.
    4) After completion of the job on the active database, start SQL apply on the standby.
    Following are the details of the time taken by SQL apply, based on previous runs:
    1st: 654K volume = 4 hrs (2727 records per min)
    2nd: 810K volume = 8 hrs 45 min (1543 records per min)
    3rd: 744K volume = 7 hrs 17 min (1704 records per min)
    Following are the logical standby parameters:
    MAX_SGA 100
    MAX_SERVERS 15
    PREPARE_SERVERS 4
    APPLY_SERVERS 8
    MAX_EVENTS_RECORDED 10000
    RECORD_SKIP_ERRORS TRUE
    RECORD_SKIP_DDL TRUE
    RECORD_APPLIED_DDL FALSE
    RECORD_UNSUPPORTED_OPERATIONS FALSE
    EVENT_LOG_DEST DEST_EVENTS_TABLE
    LOG_AUTO_DELETE TRUE
    LOG_AUTO_DEL_RETENTION_TARGET 1440
    PRESERVE_COMMIT_ORDER TRUE
    ALLOW_TRANSFORMATION FALSE
    Can we ensure the SQL apply process applies data at a consistent rate? Is it normal for the SQL apply process to take the same amount of time the actual batch takes on the active instance? And can we tweak the apply process further to get better performance?
    Please help.
    Thank you !!

    Hi,
    Looking at the above apply rate, the apply process is working normally and is not having issues.
    Since it's a bulk batch data update on the PRIMARY, it's quite normal that it takes time for the STANDBY database to apply the changes and get in sync with the PRIMARY.
    Still, if you need to improve performance further, look at adjusting the APPLIER and PREPARER processes (parameters APPLY_SERVERS and PREPARE_SERVERS).
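    For example (a sketch; the right values depend on CPU capacity, and MAX_SERVERS caps the total of all servers), the server counts can be changed with DBMS_LOGSTDBY while apply is stopped:
    -- Run on the logical standby
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    EXEC DBMS_LOGSTDBY.APPLY_SET('MAX_SERVERS', 25);
    EXEC DBMS_LOGSTDBY.APPLY_SET('PREPARE_SERVERS', 6);
    EXEC DBMS_LOGSTDBY.APPLY_SET('APPLY_SERVERS', 16);
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;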

  • Line Split Interface to feed Integration Process - Performance Issues

    Hi All
    We have a scenario whereby we receive an XML message from a third party through an exposed SOAP Adapter service. The XML message has multiple lines that need to be split up and processed as individual messages. We need to create a line-splitting interface to achieve this. The line-split interface would feed different Integration Processes depending on a specific payload value. The Integration Processes would then perform certain specific logic and rules, as well as transformations to specific message formats (e.g., IDoc, XML, flat file). The line-split interface also maps from an XML structure that caters for multiple lines to a flattened XML structure that contains only one line. The upper range of a message we may need to split into individual messages is 30,000 lines.
    We first used an Interface Map with SplitByValue to achieve this; however, we ran into the constraint that we could not feed the output split messages to an Integration Process - you can only feed them to adapters that reside on the J2EE engine.
    We then decided to build a separate Integration Process whose sole purpose was to split the message and route the individual messages to other Integration Processes to perform the logic, business rules, and specific transformations. However, the performance of the ccBPM line-splitting Integration Process was nowhere near the Interface Map.
    e.g., Interface Map split of 1000 lines = 13 seconds; BPM Integration Process for 1000 lines = 100 seconds.
    Does anybody have any suggestions on how we can perform the line split outside of BPM, or how we can improve the performance of the line splitting within BPM?
    Thanks for your assistance.

    hi,
    >>>We first used an Interface Map and used SplitByValue to achieve this, however we ran into the constraint that we could not feed the output split messages to an Integration Process - you can only feed it to Adapters that reside on the J2EE engine.
    the easiest (not the only) way:
    do the split as you did here and post the results to different folders (file adapter),
    then set up scenarios that will pick the files up from those folders
    (many additional objects, but it will be much, much faster and better than a BPM).
    You could also split the messages in an adapter module, but this is more advanced,
    and officially SAP does not recommend it - even though it's possible.
    Regards,
    Michal Krawczyk

  • Business Process Performance Tuning

    Hi Pals,
    I would like to request your help and inputs regarding tuning the performance of a business process for my scenario.
    I have created a synchronous process with 3 message-mapping transformation steps (in between the Sync Receive and Sync Send steps), so it's a pretty simple process.
    I am able to execute only 3500 processes per hour.
    The SAP NetWeaver server machine configuration is 2 dual-core processors with 12 GB RAM.
    Business process: without buffering, with multiple queues (content-specific).
    IE logging: no sync logging, logging level 0 (so logging is turned off).
    I have tried all the configurations mentioned in the weblogs below, but with very little improvement in my case.
    Performance Tuning Checks in SAP Exchange Infrastructure(XI): Part-III
    Performance Tuning Checks in SAP Exchange Infrastructure(XI): Part-II
    Performance Tuning Checks in SAP Exchange Infrastructure
    I think something else is choking the execution, as CPU and memory usage are not more than 10-20%.
    Please pour in your inputs.
    Thank you!
    Best Regards,
    Saravanan N

    Thank you very much Bhavesh!
    In my BPM, all the steps are set to "No New Transaction" so as to avoid any performance issue, but there is no improvement.
    I have even deleted all the work items via transaction SWWL before the test.
    From ST03N, for each process instance executed, four function modules take the maximum time:
    Function Module       No. of Calls   Execution Time/RFC Call
    TRFC_QIN_DEST_SHIP    1              995 ms
    TRFC_QIN_ACTIVATE     1              1077 ms
    ARFC_DEST_SHIP        2              280 ms
    ARFC_RUN_NOWAIT       2              402 ms
    Best Regards,
    Saravanan N

  • Process Performance monitoring Java API

    Hi,
    I am looking for Java APIs that can help me monitor performance stats like CPU utilization, memory utilization, etc., on a Windows platform. I did get a handle on a few APIs to measure memory utilization, but monitoring the CPU seems to be a problem. My requirement is to measure CPU and memory for a particular process, not for the entire system. Any pointers on this front would be of great help.
    Thanks
    Bhavin

    I don't see any great free libraries that you can just drop in to make this happen.
    There's an interesting-looking product called "SIGAR" that seems to speak directly to what you want to do here.
    Or this might be a great opportunity for you to play with JNI and some C++... awesome! ;-)

  • Can an EDQ Process perform looping?

    Hello,
    I want to create a looping process in EDQ. Here is the scenario:
    Reader based on a snapshot/data store.
    1. The process reads one record at a time from the Reader.
    2. For each record,
          a. call an external web service (inserts the record in CRMOD and returns the Row Id)**,
          b. write the record + CRMOD.Row_ID to a Writer (CSV file),
          c. return to the Reader and pick up the next record,
        until no more records remain.
    **I have already created the custom script processor for the CRMOD web-service and it works.
    Is this type of flow even possible in EDQ ?
    Thanks in advance.
    Deepak Gopal.
    eVerge, LLC.

    Hi Nick,
    Thanks for your response. Yes, I have written a process. The process has two processors. I have attached the dxi.
    1. Reader - reads 100 records from a snapshot (based on a CSV file).
    2. Script Processor - takes the input (four fields), calls the CRMOD query web service, performs a query on the four fields, and returns the CRMOD data.
    The problem is that EDQ is flooding CRMOD and maxing out the concurrent session limit - 100 records reach CRMOD so quickly that they end up creating too many concurrent sessions in CRMOD, which has a limit of 5 concurrent sessions for this particular account.
    So my thought was: don't process the next record in EDQ until the current transaction has completed (so we maintain only one concurrent session in CRMOD).
    Here is my script:
    addLibrary("http");
    function GetValue(content, name) {
        var value = "";
        var startPos = content.indexOf("<" + name + ">");
        if (startPos > -1) {
            var endPos = content.indexOf("</" + name + ">", startPos);
            value = content.substring(startPos + name.length + 2, endPos);
        }
        return value;
    }
    var result = "";
    try {
        var inputFN = input1[0];
        var inputMN = input1[1];
        var inputLN = input1[2];
        var inputEmail = input1[3];
        var inputTitle = input1[4];
        var url = "https://secure-slsomxuda.crmondemand.com/Services/Integration";
        var xmlHttp = new XMLHttpRequest();
        xmlHttp.open("POST", url, false); // false = synchronous session
        var request = "<?xml version='1.0' encoding='UTF-8' ?>" +
            " <soapenv:Envelope xmlns:soapenv='http://schemas.xmlsoap.org/soap/envelope/' xmlns:con='urn:crmondemand/ws/contact/' xmlns:con1='urn:/crmondemand/xml/contact'>" +
            " <soapenv:Header>" +
            " <wsse:Security xmlns:wsse='http://schemas.xmlsoap.org/ws/2002/04/secext'>" +
            " <wsse:UsernameToken>" +
            " <wsse:Username>xxx</wsse:Username>" +
            " <wsse:Password Type='wsse:PasswordText'>xxx</wsse:Password>" +
            " </wsse:UsernameToken>" +
            " </wsse:Security>" +
            " </soapenv:Header>" +
            " <soapenv:Body>" +
            "<con:ContactWS_ContactQueryPage_Input>" +
            "<con1:ListOfContact>" +
            "<con1:Contact>" +
            "<con1:ContactId></con1:ContactId>" +
            "<con1:ContactEmail>='" + inputEmail + "'</con1:ContactEmail>" +
            "<con1:ContactFirstName>='" + inputFN + "'</con1:ContactFirstName>" +
            "<con1:JobTitle>='" + inputTitle + "'</con1:JobTitle>" +
            "<con1:ContactLastName>='" + inputLN + "'</con1:ContactLastName>" +
            "<con1:MiddleName>='" + inputMN + "'</con1:MiddleName>" +
            "</con1:Contact>" +
            "</con1:ListOfContact>" +
            "</con:ContactWS_ContactQueryPage_Input>" +
            "</soapenv:Body>" +
            "</soapenv:Envelope>";
        xmlHttp.setRequestHeader("SOAPAction", "\"document/urn:crmondemand/ws/contact/:ContactQueryPage\"");
        xmlHttp.send(request);
        var response = "" + xmlHttp.responseXML;
        // Walk each <Contact> element and keep its ContactId
        var startPos = response.indexOf("<Contact>");
        while (startPos > -1) {
            var endPos = response.indexOf("</Contact>", startPos);
            var record = response.substring(startPos, endPos);
            result = GetValue(record, "ContactId");
            startPos = response.indexOf("<Contact>", endPos);
        }
        // CRMOD error code SBL-ODU-01003 = concurrent session limit reached
        if (response.indexOf("SBL-ODU-01003") > -1) {
            result = "Session Limit Reached";
        }
    } catch (e) {
        result = "Error: " + e.toString();
    }
    output1 = result;
    Thanks,
    Deepak.

  • Adobe Form Processing Performance Bottleneck in Portal

    Hi
    In our ECC6 -> EP7 portal, opening any Adobe form in the portal takes too much time.
    In portal monitoring, I found that this component takes the maximum time in the all-requests overview:
    com.sap.tc.webdynpro.runtime.SessionManagement.doApplicationProcessing
    Is there any parameter or anything else we can do to improve Adobe form processing in the portal, or anything related to the above-mentioned component, so that the performance issue in the portal is resolved?

    Hi Bhupinder,
    Please try the link below:
    http://help.sap.com/saphelp_nwce10/helpdata/en/6f/8e0a414f3af223e10000000a155106/content.htm
    Kind Regards,
    Manoj Durairaj

  • Adobe Form Processing Performance Problem in Portal

    Hi
    In our ECC6 -> EP7 portal, opening any Adobe form in the portal takes too much time.
    In portal monitoring, I found that this component takes the maximum time in the all-requests overview:
    com.sap.tc.webdynpro.runtime.SessionManagement.doApplicationProcessing
    Is there any parameter or anything else we can do to improve Adobe form processing in the portal, or anything related to the above-mentioned component, so that the performance issue in the portal is resolved?

    Hi Arafat, thanks for the reply.
    To answer:
    No, only the Adobe form takes time to open. All other applications are running fine.
    We are using Adobe Reader version 8, and Adobe LiveCycle Designer is also version 8, and yes, there are dropdowns getting populated in the form.
    When I save the generated form and try to open it on my machine, it opens in no time.
    Please help me out; I am stuck on this issue.
    Regards,
    Bhupinder

  • Improving SSAS Tabular Processing Performance

    Hi,
    I need to know whether it is possible to improve the full process of one or more large Tabular tables without using another processing option (e.g., not using Process Add), but by acting on SSAS instance settings.
    Any suggestions, please?

    The bad news is that processing those tables from the SSMS GUI ends up processing them in serial. The good news is that if you click the Script button instead of the OK button and then add a <Parallel> tag around both Process commands, it will process both tables in parallel. Other than that, I encourage you to read the Tabular Performance Guide mentioned above for other tips, like changing the PacketSize.
    http://artisconsulting.com/Blogs/GregGalloway
