GL posting is taking time

Hi All,
Every day we run a GL process. GL posting is taking more than 1 hour. Similarly, 4 GL postings run during the GL process, which is causing a performance delay.
I can see in the log file that the stage below is taking time:
glpibr.concurrency() 02-MAR-2013 08:13:03
glpibr.concurrency() 02-MAR-2013 08:13:03
glpidb() 02-MAR-2013 08:46:41
glpidb() 02-MAR-2013 08:49:59
glpibr() 02-MAR-2013 08:49:59
This stage itself takes more than 30 minutes; it varies from day to day, and sometimes it takes 1 hour.
insert into gl_balances
( set_of_books_id, code_combination_id, currency_code, period_name,
actual_flag, budget_version_id, encumbrance_type_id, last_update_date,
last_updated_by, period_type, period_year, period_num, period_net_dr, period_net_cr,
period_to_date_adb, quarter_to_date_dr, quarter_to_date_cr, quarter_to_date_adb, year_to_date_adb,
project_to_date_dr, project_to_date_cr, project_to_date_adb, begin_balance_dr, begin_balance_cr,
period_net_dr_beq, period_net_cr_beq, begin_balance_dr_beq, begin_balance_cr_beq, template_id, translated_flag )
select pi.set_of_books_id, pi.code_combination_id, pi.currency_code, pi.period_name,
       pi.actual_flag, pi.budget_version_id, pi.encumbrance_type_id, sysdate,
       :fnd_user_id, pi.period_type, pi.period_year, pi.period_num, 0, 0, NULL, 0, 0, NULL, NULL,
       0, 0, NULL, 0, 0, NULL, NULL, NULL, NULL, pi.template_id, pi.translated_flag
from gl_posting_interim_50130 pi
where not exists
      ( select 'X' from gl_balances b
        where b.set_of_books_id = pi.set_of_books_id
          and b.code_combination_id = pi.code_combination_id
          and b.currency_code = pi.currency_code
          and b.period_name = pi.period_name
          and b.actual_flag = pi.actual_flag
          and nvl(b.encumbrance_type_id, -1) = nvl(pi.encumbrance_type_id, -1)
          and nvl(b.budget_version_id, -1) = nvl(pi.budget_version_id, -1)
          and decode(b.translated_flag, '', '-1', 'Y', '0', 'N', '0', 'R', '1', b.translated_flag)
            = decode(pi.translated_flag, '', '-1', 'Y', '0', 'N', '0', 'R', '1', pi.translated_flag) )
The query above takes more than 30 minutes, as I can see in the AWR report. As per Metalink note ID 1096873.1, this is due to the GL_POSTING_INTERIM tables locking other GL_POSTING_INTERIM tables. I don't see any locks, but I can see lots of GL_POSTING_INTERIM tables present in the database. As far as I know, these tables should be dropped once posting has completed.
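For reference, the leftover interim tables can be listed with a dictionary query along these lines (a rough sketch; the GL schema owner and the LIKE pattern are assumptions based on the standard naming):
select owner, table_name
  from dba_tables
 where owner = 'GL'                           -- assumed GL schema owner
   and table_name like 'GL_POSTING_INTERIM%'  -- pattern assumed from the table name above
 order by table_name;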
Env details -
Apps version - 11.5.10.2
DB version - 11.2.0.1
OS - IBM AIX 6.1
Please suggest.
Thanks

Please see these docs.
R11i GLTTRN Translation Performance Issue In INSERT INTO GL_BALANCES [ID 761898.1]
Information Center: Optimizing Performance for Oracle General Ledger [ID 1489537.2]
Performance Issue With Translation Program After Upgrading To 10G Database [ID 742025.1]
Deleting Summary Accounts Has Poor Performance [ID 1088585.1]
GL Posting Performance Issue at glpip2() [ID 280641.1]
GLPPOS Performance Issue After 9i To 10g Upgrade in gluddl.lpc [ID 1262020.1]
Thanks,
Hussein

Similar Messages

  • Using a materialized view in a query .. query is taking time?

    Hi, I have a query:
    SELECT rownum AS id, u.last_name, u.first_name, u.phone, u.empid, u.supervisor_id
    FROM emp_view u -- using view
    CONNECT BY PRIOR u.empid = u.supervisor_id
    START WITH u.sbcuid = 'ph2755';
    Here emp_view is a view.
    The above query takes 3 seconds to execute.
    Then I created a materialized view emp_mv, and the MV query is the same as the emp_view query.
    After this I executed the following SQL:
    SELECT rownum AS id, u.last_name, u.first_name, u.phone, u.empid, u.supervisor_id
    FROM emp_mv u -- using materialized view
    CONNECT BY PRIOR u.empid = u.supervisor_id
    START WITH u.sbcuid = 'ph2755';
    This query takes 15 seconds to execute.
    Can anyone please tell me why the MV query is taking time?

    Hi,
    In your first case you query a view, meaning that you query the underlying tables. These probably have indexes, and their stats are up to date.
    In your second case you query a materialized view, meaning that you query the underlying base table of that mview.
    That base table probably does not have the same indexes to support the query.
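    For instance, indexes along these lines on the mview might help (a hedged sketch; the column names are taken from the query above, so verify them against the actual mview definition first):
    -- hedged sketch: support CONNECT BY PRIOR empid = supervisor_id and the START WITH filter
    CREATE INDEX emp_mv_supervisor_idx ON emp_mv (supervisor_id);
    CREATE INDEX emp_mv_sbcuid_idx ON emp_mv (sbcuid);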
    But of course, I'm just guessing based on the little information provided.
    If you want to take this further, please search for "When your query takes too long" and "How to post a tuning request".
    These two threads hold valuable information, not only on how to ask this kind of question, but also on how to start solving it on your own.
    Regards
    Peter

  • Insert statement taking time on Oracle 10g

    Hi,
    My procedure is taking time in the following statement since the database was upgraded from Oracle 9i to Oracle 10g.
    I am using Oracle version 10.2.0.4.0.
    cust_item is a materialized view used in the procedure, and it is refreshed within the procedure.
    The index is dropped before inserting data into the cust_item_tbl table, and it is recreated after the insert.
    There are almost 6 lakh (600,000) records in the MV which are inserted into the table.
    In 9i the insert statement below takes 1 hour, while in 10g it takes 2.5 hours.
    EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL QUERY';
    EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL DML';
    INSERT INTO /*+ APPEND PARALLEL */ cust_item_tbl  NOLOGGING
             (SELECT /*+ PARALLEL */
                     ctry_code, co_code, srce_loc_nbr, srce_loc_type_code,
                     cust_nbr, item_nbr, lu_eff_dt,
                     0, 0, 0, lu_end_dt,
                     bus_seg_code, 0, rt_nbr, 0, '', 0, '', SYSDATE, '', SYSDATE,
                     '', 0, ' ',
                                   case
                                 when cust_nbr in (select distinct cust_nbr from aml.log_t where CTRY_CODE = p_country_code and co_code = p_company_code)
                                 THEN
                                         case
                                            when trunc(sysdate) NOT BETWEEN trunc(lu_eff_dt) AND trunc(lu_end_dt)
                                            then NVL((select cases_per_pallet from cust_item c where c.ctry_code = a.ctry_code and c.co_code = a.co_code
                                                          and c.cust_nbr = a.cust_nbr and c.GTIN_CO_PREFX = a.GTIN_CO_PREFX and c.GTIN_ITEM_REF_NBR = a.GTIN_ITEM_REF_NBR
                                                          and c.GTIN_CK_DIGIT = a.GTIN_CK_DIGIT and trunc(sysdate) BETWEEN trunc(c.lu_eff_dt) AND trunc(c.lu_end_dt) and rownum = 1),
                                                          a.cases_per_pallet)
                                      else cases_per_pallet
                                  end
                          else cases_per_pallet
                     END cases_per_pallet,
                     cases_per_layer
                FROM cust_item a
               WHERE a.ctry_code = p_country_code ----varible passing by procedure
                 AND a.co_code = p_company_code   ----varible passing by procedure
                 AND a.ROWID =
                        (SELECT MAX (b.ROWID)
                           FROM cust_item b
                          WHERE b.ctry_code = a.ctry_code
                            AND b.co_code = a.co_code
                            AND b.ctry_code = p_country_code ----varible passing by procedure
                            AND b.co_code = p_company_code   ----varible passing by procedure
                            AND b.srce_loc_nbr = a.srce_loc_nbr
                            AND b.srce_loc_type_code = a.srce_loc_type_code
                            AND b.cust_nbr = a.cust_nbr
                            AND b.item_nbr = a.item_nbr
                             AND b.lu_eff_dt = a.lu_eff_dt));
    Explain plan from Oracle 10g:
    Plan
    INSERT STATEMENT CHOOSE  Cost: 133,310  Bytes: 248  Cardinality: 1
         5 FILTER
              4 HASH GROUP BY  Cost: 133,310  Bytes: 248  Cardinality: 1
                   3 HASH JOIN  Cost: 132,424  Bytes: 1,273,090,640  Cardinality: 5,133,430
                        1 INDEX FAST FULL SCAN INDEX MFIPROCESS.INDX_TEMP_CUST_AUTH_PERF_MV  Cost: 10,026  Bytes: 554,410,440  Cardinality: 5,133,430
                        2 MAT_VIEW ACCESS FULL MAT_VIEW MFIPROCESS.TEMP_CUST_AUTH_PERF_MV  Cost: 24,570  Bytes: 718,680,200  Cardinality: 5,133,430
    Can you please look into this issue?
    Thanks.

    According to the execution plan you posted, parallelism is not taking place: no parallel operations are listed.
    Check the hint syntax. In particular, a bare "PARALLEL" does not look right.
    Running queries in parallel can either help performance, hurt performance, or do nothing for performance. In your case a parallel index scan on MFIPROCESS.INDX_TEMP_CUST_AUTH_PERF_MV using the PARALLEL_INDEX hint, together with a PARALLEL hint naming the table for MAT_VIEW MFIPROCESS.TEMP_CUST_AUTH_PERF_MV, might help, something like (untested)
    select /*+ PARALLEL_INDEX(TEMP_CUST_AUTH_PERF_MV, INDX_TEMP_CUST_AUTH_PERF_MV) PARALLEL(TEMP_CUST_AUTH_PERF_MV) */ ...
    Is query rewrite causing the MVs to be read? If so, hinting the query will be tricky.
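    As a rough, untested sketch of where the hints would go on the original INSERT (the degree of 4 is only an example and the select list is abbreviated; note also that NOLOGGING is a table attribute, not something that can be specified inline in an INSERT):
    ALTER SESSION ENABLE PARALLEL DML;
    INSERT /*+ APPEND PARALLEL(cust_item_tbl, 4) */ INTO cust_item_tbl
       SELECT /*+ PARALLEL(a, 4) */ a.*          -- replace a.* with the original select list
         FROM cust_item a
        WHERE a.ctry_code = :p_country_code
          AND a.co_code   = :p_company_code;
    -- If reduced redo is wanted for the direct-path load, set it on the table beforehand:
    -- ALTER TABLE cust_item_tbl NOLOGGING;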

  • Selecting with rownum=1 taking time

    Hi, I am using the below script in one of my PL/SQL programs. The strange thing is that this is taking time to execute. But if we gather stats on user_scheduler_job_run_details, it executes quickly. Looks like it is doing a full table scan in the first case. Any idea why this is happening?
    SELECT *
      FROM user_scheduler_job_run_details
     WHERE ROWNUM = 1
    Thanks.

    Hi,
    1) USER_SCHEDULER_JOB_RUN_DETAILS is a view. How do you collect optimizer stats on a view?!
    2) "the strange thing is that this is taking time to execute" -- there is nothing strange about that. Everything takes time. How much time does the query take exactly?
    3) "looks like it is doing a full table scan" -- post appropriate diagnostic information for this query, e.g. SQL real-time monitor output, dbms_xplan.display_cursor output or tkprof'ed trace file.
    Best regards,
      Nikolay

  • Publishing and overwriting a universe updates new measures quickly in Business Explorer / Information Spaces, but they take time to be updated in the dashboard

    Hi gurus:
    I have a recurring problem when publishing and overwriting a universe from IDT: new measures update quickly in Business Explorer / Information Spaces, but they take time to be updated in the dashboard. It takes approximately half an hour for the update to appear when accessing the universe from dashboards.
    Regards:
    Jawad Khalid

  • One website takes too much time to load the page while another site does not; do I need to enable or change any settings?

    One website is taking time to load the page; on another PC it is not taking any time (with Internet Explorer). On my PC other websites open quickly, but this website takes too much time with Firefox.

    Zepo wrote:
    My iMac has been overwhelmed almost since I bought it new.  After some digging the Genius Bar suggested it is my Aperture library being on the same
    internal terabyte drive as my operating system.
    Having a single internal hard drive overfilled (drives slow as they fill) is very likely contributing to your problems, but IMO "my Aperture library being on the same internal terabyte drive as my operating system" is very unlikely to be contributing to your problems. In fact the Library should stay on an underfilled (roughly, for speed, I would call ~half full "underfilled") internal drive, not on the Drobo.
    Instead build a Referenced-Masters workflow with the Library and OS on an internal drive, Masters on the Drobo, OS 10.6.8 (there have been issues reported with OS 10.7 Lion). Keep Vault backup of the Library on the Drobo, and of course back up all Drobo data off site.
    No matter what you do with i/o your C2D Mac is not a strong box for Aperture performance. If you want to really rock Aperture move to one of the better 2011 Sandy Bridge Macs, install 8 GB or more of RAM and build a Referenced-Masters workflow with the Library and OS on an internal solid state drive (SSD).
    Personally I would prefer investing in a Thunderbolt RAID rather than in a Drobo but each individual makes his/her own network speed/cost decisions. The Drobo should work OK for referenced Masters even though i/o is limited by the Firewire connection.
    Do not forget the need for off site backup. And I suggest that in the process of moving to your new setup it is most important to get the data safely and redundantly copied and generally best to disregard how long it may take.
    HTH
    -Allen Wicks

  • Special G/L posting to one-time account is not defined Message no. F5265

    hi,
    Special G/L posting to one-time account is not defined
    Message no. F5265
    The above message is displayed when trying to post for a one-time customer.
    Where do I check this at the configuration level? What are the tcodes? Please let me know.
    Actually I am an ABAPer and don't know much about FI configuration.
    regards,
    Hari Priya

    Dear H Priya,
    You are posting to a one-time customer, so there is no need for a special G/L indicator (special G/L here means a down payment).
    If the customer is a one-time customer, how can you take a down payment from the customer?
    If you have any queries, please ask.
    Regards
    radha

  • BPM - to post message N times (message split)

    Hi,
    If I want to post a message N times based on an input field value (N), which step do I need to use?
    Thanks

    If I want to post a message N times based on an input field value (N), which step do I need to use?
    Depending on your requirement:
    1) You have many occurrences of a node and you want to post your message according to those:
    Use a Block step in BPM with the mode set to ForEach; for more info read this: /people/milan.thaker/blog/2008/08/05/modes-in-block-step-of-bpm
    Also read this help section for more understanding:
    http://help.sap.com/saphelp_nw70/helpdata/EN/11/13283fd0ca8443e10000000a114084/frameset.htm
    2) If your requirement is to post a message to some receiver based on a value in the input, then you can do so by specifying the condition in the receiver determination.
    Regards,
    Abhishek.

  • Procedure taking time

    Hello Experts,
    I am running a procedure which is taking too much time to execute. How can I find out which statement or query in the procedure is taking the time? Is there any package available to check this?
    Thanks & Regards,
    Manish

    Execute the query below to find the session ID of the session in which your procedure is being executed.
    SELECT s.sid
    ,s.serial#
    ,ddl.name
    ,ddl.type
    ,ddl.owner
    ,s.status
    ,s.osuser
    ,s.machine
    FROM dba_ddl_locks ddl
    ,v$session s
    WHERE ddl.session_id = s.sid;
    Once you get the SID, execute the query below to see which SQL it is executing:
    SELECT /*+ leading(s) */
    s.sid
    ,s.serial#
    ,st.piece
    ,st.sql_text
    ,s.status
    ,s.osuser
    ,s.machine
    FROM v$session s
    ,v$sqltext st
    WHERE s.sql_hash_value = st.hash_value(+)
    AND s.sid = &sid_from_above_query
    ORDER BY st.piece;
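    Another option, if you can rerun the procedure in a test session, is the PL/SQL hierarchical profiler. A hedged sketch (it assumes a directory object named PROF_DIR exists, you have EXECUTE on DBMS_HPROF, and my_procedure is only a placeholder name):
    BEGIN
      DBMS_HPROF.start_profiling(location => 'PROF_DIR', filename => 'myproc.trc');
      my_procedure;                -- placeholder for the slow procedure
      DBMS_HPROF.stop_profiling;
    END;
    /
    -- The raw trace file can then be formatted with the plshprof command-line
    -- utility, which reports elapsed time per subprogram and per SQL statement.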
    Regards
    Arun

  • JSP taking time to load

    Hi All
    I am having a strange problem. The JSP file is taking too much time to load, and IE does not respond properly even after the page is loaded. I agree that the JSP has a significant amount of Java code in it. What could be the problem? The response from the server comes back fast; the page takes time after receiving the response from the server.
    Regards
    Prashant

    "What could be the problem?" -- Slow hardware, slow code, perhaps some other app is eating your resources; it could be anything really. I think it is time for a profiler.
    "The JSP file is taking too much time to load and IE does not respond properly even after the page is loaded" -- IE is the best browser for improper behavior, but how exactly is it misbehaving in your case? It might be a bad piece of javascript that is causing it headaches.

  • Spl G/L posting to one time vendor not allowed

    Hi Gurus,
    I have a one-time vendor and want to post a special G/L transaction for it.
    Is there a way of doing this? I have used the account group CPD.
    I am also getting the error below:
    spl G/L posting to one time vendor not allowed
    Regards
    Karan

    Hi Karan,
    As rightly said by Madhu, special G/L posting to a one-time vendor is not allowed in SAP.
    I am pasting the contents of the SAP Note regarding this.
    Kamal
    SAP Note Number 19638 - Special G/L transactions on one-time
    accounts
    Note Language: English Version: 2 Validity: Valid from 11.12.1996
    Summary
    Symptom
    Special G/L transactions, such as down payments, are not supported for
    one-time accounts. One-time accounts should be used for one-time
    transactions. For down payment, bill of exchange management or similar
    transactions for a customer or vendor, you can assume that this is not a
    one-time transaction. To post this type of accounting transactions in FI,
    you have to create a master record for the business partner.
    Additional key words
    Cause and prerequisites
    Conception of the one-time accounts.
    Solution
    Do not use one-time accounts if you want to post special G/L transactions.
    Source code corrections
    Header Data
    Release Status: Released for Customer
    Released on: 10.12.1996 23:00:00
    Priority: Recommendations/additional info
    Category: Consulting
    Main Component FI Financial Accounting
    The SAP Note is release-independent
    Related Notes
    Number Short Text
    867348 Preventing down payment request for one-time customers
    814038 SAPF103: you cannot post to one-time accounts (F5265)

  • How to know which SQL query is taking time for a concurrent program

    Hi sir,
    I am running a concurrent program that is taking time to execute. I want to know which SQL query is causing the performance problem.
    Thanks,
    Sreekanth

    Hi,
    My Learning: Diagnosing Oracle Applications Concurrent Programmes - 11i/R12
    How to run a Trace for a Concurrent Program? (Doc ID 415640.1)
    FAQ: Common Tracing Techniques in Oracle E-Business Applications 11i and R12 (Doc ID 296559.1)
    How To Get Level 12 Trace And FND Debug File For Concurrent Programs (Doc ID 726039.1)
    How To Trace a Concurrent Request And Generate TKPROF File (Doc ID 453527.1)
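    In practice, once you know the SID and SERIAL# of the database session running the request (visible in v$session), one hedged way to trace it looks like this (sketch only; trace file locations and privileges depend on your instance):
    BEGIN
      DBMS_MONITOR.session_trace_enable(session_id => :sid,
                                        serial_num => :serial,
                                        waits      => TRUE,
                                        binds      => TRUE);
    END;
    /
    -- let the request run, then stop tracing and format the trace file:
    -- EXEC DBMS_MONITOR.session_trace_disable(:sid, :serial);
    -- tkprof <trace_file>.trc report.txt sort=exeela,fchela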
    Regards
    Yoonas

  • Select query taking time

    The following query is taking time. Is there a better way to write this query?
    SELECT PROGRAM_NAME_ID ,PROGRAM_NAME,sum(balance)"Unpaid Balance"
        FROM (
    SELECT DISTINCT
    PROGRAM_NAME_ID ,PROGRAM_NAME,
    t.billing_key billing_key,
    (TUFF_GENERIC_PKG.GET_TOTAL(t.billing_key,t.program_key)+
    nvl(PENALTY_INTEREST(t.billing_key,t.program_key,b.company_id,b.report_period ),0))
    -PAYMENT_AMOUNT(B.COMPANY_ID,T.PROGRAM_KEY,B.REPORT_PERIOD) Balance,
    Report_period,company_id
    FROM  BILLING B,
    PROG_SURCH T ,
    mv_program_dict P
    WHERE
    B.BILLING_KEY=T.BILLING_KEY
    AND  p.program_key= t.program_key(+)
    and company_id=:p3_hide_comp
    and b.SUBMIT_STATUS='S'
    union
    SELECT DISTINCT
    PROGRAM_NAME_ID ,PROGRAM_NAME,
    t.billing_key billing_key,
    (TUFF_GENERIC_PKG.GET_TOTAL(t.billing_key,t.program_key)+
    nvl(PENALTY_INTEREST(t.billing_key,t.program_key,b.company_id,b.report_period ),0))
    -PAYMENT_AMOUNT(B.COMPANY_ID,T.PROGRAM_KEY,B.REPORT_PERIOD) Balance,
    Report_period,company_id
    FROM  MV_BILLING B,
    MV_PROG_SURCH T ,
    mv_program_dict P
    WHERE
    B.BILLING_KEY=T.BILLING_KEY
    AND  p.program_key= t.program_key(+)
    and company_id=:p3_hide_comp
    order by report_period,program_name_id )
    where balance>=0
    GROUP BY PROGRAM_NAME_ID,PROGRAM_NAME
    ORDER BY PROGRAM_NAME_ID

    Hi,
    This is totally right:
    > Being one such call. The price for calling pl/sql functions in SQL can be quite high. I'd highly recommend you find a way to incorporate the pl/sql code into the SQL query.
    But try this query; I hope it helps you and returns the rows you want.
    SELECT   program_name_id, program_name,
             SUM (  tuff_generic_pkg.get_total (billing_key, program_key)
                  + NVL (penalty_interest (billing_key,
                                           program_key,
                                           company_id,
                                           report_period),
                         0)
                  - payment_amount (company_id, program_key, report_period)) "Unpaid Balance"
        FROM (SELECT program_name_id, program_name, t.billing_key, t.program_key,
                     b.company_id, b.report_period
                FROM billing b, prog_surch t, mv_program_dict p
               WHERE b.billing_key = t.billing_key
                 AND p.program_key = t.program_key(+)
                 AND company_id = :p3_hide_comp
                 AND b.submit_status = 'S'
              UNION
              SELECT program_name_id, program_name, t.billing_key, t.program_key,
                     b.company_id, b.report_period
                FROM mv_billing b, mv_prog_surch t, mv_program_dict p
               WHERE b.billing_key = t.billing_key
                 AND p.program_key = t.program_key(+)
                 AND company_id = :p3_hide_comp) sub
       WHERE   tuff_generic_pkg.get_total (billing_key, program_key)
             + NVL (penalty_interest (billing_key,
                                      program_key,
                                      company_id,
                                      report_period),
                    0)
             - payment_amount (company_id, program_key, report_period) >= 0
    GROUP BY program_name_id, program_name
    Obviously I cannot test this.
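    As a side note on the quoted point about PL/SQL call cost: wrapping each call in a scalar subquery lets Oracle cache results for repeated input values. A minimal, untested sketch with just one of the calls (it assumes company_id is a column of the billing table):
    SELECT b.billing_key,
           (SELECT tuff_generic_pkg.get_total(b.billing_key, t.program_key) FROM dual) AS total
      FROM billing b, prog_surch t
     WHERE b.billing_key = t.billing_key
       AND b.company_id = :p3_hide_comp;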
    HTH -- johnxjean --

  • Transfer of XML through a firewall is taking time

    Hi,
    I have following scenario:
    Publisher BPEL --> || Firewall || Subscriber BPEL
    Now, the XML being transferred through the firewall is around 9MB.
    When these two BPEL processes are deployed on the local intranet, the XML transfer is very fast and the Subscriber BPEL completes within seconds. But when the two BPEL processes are deployed over the internet with the firewall in between, the XML transfer takes about 15 minutes.
    Is there a solution to reduce the XML transfer time, or some workaround to handle this issue?
    Thanks,
    Bhavnesh.

    If it is a 9MB file, that is quite large; I think this will come down to latency and bandwidth. Test from your environment how long it takes to download a 9MB file, then test from the remote location. If that takes around 15 minutes, there is little you can do but get a better ISP.
    I doubt the issue is with the firewall.
    cheers
    James

  • Special G/L posting to a one-time A/R or A/P account

    Hello,
    Does SAP allow special G/L posting to a one-time A/P or A/R account? While trying to post, it just gives an error saying it is not possible. Any pointers on this will be useful.
    Thanks
    Kishore

    Hi Kishore,
    For that you have to do some customization. Go to OBYR or OBXR and maintain the reconciliation account of the one-time vendor or customer account along with the special G/L account.
    Maybe this information is useful to you.
