Consolidation taking time for Specific POV

When users run a consolidation for the following entity structure, with Entity A and Contribution Total in the POV:
- Entity A
- Entity B
- Entity C
- Entity D
(Here B is a child of A, C is a child of B, and D is a child of C.)
It takes around 16 minutes.
However, when the POV is changed to Entity B and Contribution Total,
it takes 4 minutes. Can anyone tell me why there is such a huge difference in consolidation timing with this change in POV?

The first thing to understand is that running a consolidation on the Contribution Total member not only consolidates data to that entity, it also consolidates data to the <Entity Currency> member of that entity's parent, which means the siblings of that entity are also consolidated. In your example, by selecting Contribution Total on Entity A you would consolidate all data to the parent of Entity A, including any siblings of Entity A. You have not indicated whether Entity A has any siblings.
The difference in your consolidation time could, therefore, be explained by the parent of Entity A having a much larger number of descendants than Entity A alone.
If you only want to consolidate data up to Entity A, then you should choose <Entity Currency> or <Entity Curr Total> of Entity A. That should give you a clearer picture of the difference in consolidation times.
There are also other possibilities, such as rules that only run on certain entities, which could also be a factor.
Brian Maguire

Similar Messages

  • I installed Mountain Lion over Snow Leopard and my MacBook Pro 13" is taking time for login and logout

    I installed Mountain Lion over Snow Leopard, and my MacBook Pro 13" is taking a long time to log in and log out. Any solution?

    Hi JoeyR.  Well, according to this link at the Apple Store, OS X Mountain Lion became available in July and I downloaded it for $19.99.  I figured I would do that before renewing my Norton security software.  Are we talking about the same thing?
    http://www.apple.com/osx/

  • How to know which SQL query is taking time for a concurrent program

    Hi sir,
    I am running a concurrent program that is taking a long time to execute. I want to know which SQL query is causing the performance problem.
    Thanks,
    Sreekanth

    Hi,
    My Learning: Diagnosing Oracle Applications Concurrent Programmes - 11i/R12
    How to run a Trace for a Concurrent Program? (Doc ID 415640.1)
    FAQ: Common Tracing Techniques in Oracle E-Business Applications 11i and R12 (Doc ID 296559.1)
    How To Get Level 12 Trace And FND Debug File For Concurrent Programs (Doc ID 726039.1)
    How To Trace a Concurrent Request And Generate TKPROF File (Doc ID 453527.1)
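    In addition to tracing, you can often spot the offending statement directly from the dynamic performance views while the request is running. A minimal sketch, assuming an 11g-style database, the standard EBS apps schema, and a hypothetical bind :req_id for the concurrent request ID:
    -- Sketch: map a running concurrent request to its current SQL statement.
    -- FND_CONCURRENT_REQUESTS.ORACLE_PROCESS_ID holds the shadow process SPID,
    -- which links to V$PROCESS, then to V$SESSION and V$SQLAREA.
    SELECT s.sid,
           s.serial#,
           sq.sql_id,
           ROUND(sq.elapsed_time / 1e6) AS elapsed_secs,
           sq.sql_text
      FROM apps.fnd_concurrent_requests r
      JOIN v$process p  ON p.spid    = r.oracle_process_id
      JOIN v$session s  ON s.paddr   = p.addr
      JOIN v$sqlarea sq ON sq.sql_id = s.sql_id
     WHERE r.request_id = :req_id;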
    Regards
    Yoonas

  • Initial download taking time for CTParts in syclo inventory manager 3.2

    Hi All,
    While doing the initial download in Syclo Inventory Manager 3.2, we have observed that it takes a lot of time to fetch the data from the complex table CTParts.
    In the Agentry diagram, the CTParts complex table shows nine fields. Of these nine fields, a few such as UOM and BatchIndicator have no dependency. Can I delete those fields?
    If yes, what will be the impact on the application after deleting those fields?
    Thanks for your help
    -Garima

    Garima,
    You need to analyze a couple of things before making any program changes:
    a) Check whether you have set a filter for the CTParts MDO object in SAP. If the MDO filter for plant points to user parameter 'WRK', look at the value of WRK in SU3. Make sure a plant value is maintained for the WRK parameter.
    b) If the WRK value is indeed maintained, go to MARC and check the number of materials that exist for the WRK plant. If there are too many, do you really need all those materials downloaded to the mobile device? Check whether you can maintain other filter values, such as material type or material group, to restrict the material records downloaded.
    c) Check where the bottleneck is: (i) whether it takes more time to execute the query in SAP, or (ii) whether it takes time to transfer the data from SAP to the Java layer. If the latter, try increasing the Java heap size.
    d) Also look at the MDO field selections for CTParts in SAP. Select only the fields that you need.
    e) Did you create additional indexes for the CTParts complex table?
    f) Finally, if nothing works, look at the option of replacing the output structure in the BAPI that returns CTParts with a Z structure containing only the required fields; this also requires Z Java code changes for the CTParts complex table.
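    If you want a quick feel for how many records point b) would pull down, here is a rough sketch of the MARC check (an illustration only: run from the database layer, e.g. via your DBA, with :plant a hypothetical bind for the WRK value; within SAP itself, SE16 on MARC does the same):
    -- Sketch: count plant-level material records for one plant.
    -- MARC is the plant data table; WERKS is the plant key.
    SELECT COUNT(*) AS material_count
      FROM marc
     WHERE werks = :plant;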
    Thanks
    Manju.

  • Displaying time for specific dept record by adding 10 minutes

    Hello,
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    create table dept(
      dept_id number(14) primary key,
      c_id number(14) constraint dept_fk references centers(c_id) NOT NULL,
      deptName varchar2(50)
    );
    create sequence dept_seq
      start with 1 increment by 1;
    create or replace trigger dept_trig
    before insert on dept
    referencing new as new
    for each row
    begin
      select dept_seq.nextval into :new.dept_id from dual;
    end;
    /
    create table shiftManager(
      s_id number(9) primary key,
      dept_id number(14) constraint shiftManager_fk references dept(dept_id) NOT NULL,
      title varchar2(110), /* e.g. Morning, Afternoon, Evening */
      startTime date,
      endTime date
    );
    create sequence shiftManager_seq
      start with 1 increment by 1;
    create or replace trigger shiftManager_trig
    before insert on shiftManager
    referencing new as new
    for each row
    begin
      select shiftManager_seq.nextval into :new.s_id from dual;
    end;
    /
    select to_char(startTime,'HH24:MI'), to_char(endTime,'HH24:MI')
      from shiftManager where dept_id=1 order by startTime asc;
    I want to select startTime, adding 10 minutes at a time until endTime, for a specific dept_id, e.g.
    dept1
    =====
    startTime=09:00 and endTime=14:00
    OUTPUT required:
    ================
    09:10
    09:20
    09:30
    09:40
    09:50
    10:00
    13:40
    13:50
    14:00
    Thanks in anticipation
    Best regards

    Use:
    select  to_char(column_value,'HH24:MI') tm
      from  shiftManager,
            table(
                  cast(
                       multiset(
                                select  least(startTime + (level - 1) / 24 / 6,endTime)
                                  from  dual
                                  connect by startTime + (level - 1) / 24 / 6 <= endTime + (600 - 1) / 3600 / 24
                               )
                       as sys.OdciDateList
                      )
                 )
      where dept_id=1
      order by column_value
    /
    For example:
    SQL> select * from shiftManager
      2  /
       DEPT_ID STARTTIME ENDTIME
             1 01-JUL-11 01-JUL-11
             2 01-JUL-11 01-JUL-11
    SQL> select  to_char(column_value,'HH24:MI') tm
      2    from  shiftManager,
      3          table(
      4                cast(
      5                     multiset(
      6                              select  least(startTime + (level - 1) / 24 / 6,endTime)
      7                                from  dual
      8                                connect by startTime + (level - 1) / 24 / 6 <= endTime + (600 - 1) /3600 / 24
      9                             )
    10                     as sys.OdciDateList
    11                    )
    12               )
    13    where dept_id=1
    14    order by column_value
    15  /
    TM
    09:30
    09:40
    09:50
    10:00
    10:10
    10:20
    10:30
    10:40
    10:50
    11:00
    11:10
    TM
    11:20
    11:30
    11:40
    11:50
    12:00
    12:10
    12:20
    12:30
    12:40
    12:45
    21 rows selected.
    SQL>
    SY.
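    As a side note on the arithmetic in that answer: in Oracle date arithmetic a day is 1, so (level - 1) / 24 / 6 advances in 10-minute steps. A minimal standalone check, with hypothetical literal times rather than the thread's table:
    -- 1/24/6 of a day = 10 minutes, so this generates 09:00 .. 10:00.
    select to_char(date '2011-07-01' + 9/24 + (level - 1) / 24 / 6, 'HH24:MI') tm
      from dual
     connect by level <= 7;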

  • Get actual time for specific time zone

    Dear experts,
    Is it possible to get the actual time for a given time zone? We need to know the time at a certain plant, and we are searching for a standard function module which calculates the time based on the plant's time zone.
    Thanks in advance,
    David

    You can use the TIME ZONE statement for this.
    DATA: time_stamp_s TYPE string,
          time_stamp   TYPE timestamp,
          tzone        TYPE timezone,
          wf_date_conv TYPE sy-datum,
          wf_time_conv TYPE sy-uzeit.
    tzone = 'CET'.
    CONCATENATE sy-datlo   "Local Date
                sy-timlo   "Local Time
           INTO time_stamp_s.
    time_stamp = time_stamp_s.
    CONVERT TIME STAMP time_stamp TIME ZONE tzone INTO DATE wf_date_conv TIME wf_time_conv.
    Otherwise the function modules IB_CONVERT_INTO_TIMESTAMP/IB_CONVERT_FROM_TIMESTAMP can be used for the same.

  • SYS_CONTEXT usage taking time for first run

    Hi all,
    We have a search screen with conditions like make, manufacturing date, region, etc., up to 20 conditions. The user can restrict the results based on the filters specified. Because there are so many parameters, a static query was taking a lot of time to return the results.
    We went with a dynamic query (adding only the conditions the user specified on the screen) and, to avoid hard parsing, we introduced SYS_CONTEXT.
    We found the results amazing, with subsequent runs taking very little time (3-10 secs max) to fetch the results. But the first run (which does the hard parse) is taking quite a lot of time (close to 10 mins).
    We are using Oracle 11g.
    Though this query is used on a regular basis, sometimes the SQL ID gets aged out of the cache by the LRU algorithm. Also, sometimes I see more than one SQL ID generated for the same set of parameters.
    We cannot use the SQL pin option, as we would need to pin the SQL for every combination of parameters (around 30).
    Could anyone suggest a way of reducing the first run time?
    Thanks in advance..
    Regards,
    Ela

    >
    We cannot use the SQL pin option, as we would need to pin the SQL for every combination of parameters (around 30).
    >
    Then try specifying 'something' for every condition and using '%' for the context values that you don't care about.
    If you don't care about 'empno' use '%' for the context value.
    select * from emp where empno like '%'
    If you do care, use an actual value:
    select * from emp where empno like '7369'
    The 'like' will be treated as '=' in the second case, and in the first case it should be optimized out of the actual query.
    That way every query has the same 30 placeholders but the '%' will optimize out the ones you don't want to use.
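    To make that concrete, here is a minimal sketch of the pattern described above (the context name search_ctx, the package search_pkg, and the emp columns are illustrative, not from the thread): every filter is always present in the one static statement, and the "don't care" attributes are set to '%' so the same cursor is reused for all parameter combinations.
    -- One-time setup: an application context bound to a trusted package.
    --   CREATE CONTEXT search_ctx USING search_pkg;
    -- Inside search_pkg, before each run, set every attribute,
    -- using '%' for the filters the user left empty:
    --   dbms_session.set_context('search_ctx', 'empno', '%');
    --   dbms_session.set_context('search_ctx', 'ename', 'SM%');
    -- The statement text never changes, so it is hard parsed only once:
    select *
      from emp
     where empno like sys_context('search_ctx', 'empno')
       and ename like sys_context('search_ctx', 'ename');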

  • Adpatch 9239089 taking time for Updating Snapshot Tables

    Hi,
    I am upgrading Apps R12.1.1 to R12.1.3, applying patch 9239089 as a prerequisite of patch 9239090.
    This patch is taking a long time updating snapshot tables, as below:
    No of records processed =205032 Updating Snapshot Tables...Start time:Sun Nov 20 2011 11:04:15
    Done Updating Snapshot Tables for the above rows...End Time:Sun Nov 20 2011 11:04:17
    No of records processed =210033 Updating Snapshot Tables...Start time:Sun Nov 20 2011 11:06:37
    Done Updating Snapshot Tables for the above rows...End Time:Sun Nov 20 2011 11:06:38
    No of records processed =215033 Updating Snapshot Tables...Start time:Sun Nov 20 2011 11:08:57
    Done Updating Snapshot Tables for the above rows...End Time:Sun Nov 20 2011 11:08:58
    No of records processed =220034 Updating Snapshot Tables...Start time:Sun Nov 20 2011 11:11:17
    Done Updating Snapshot Tables for the above rows...End Time:Sun Nov 20 2011 11:11:18
    No of records processed =225034 Updating Snapshot Tables...Start time:Sun Nov 20 2011 11:13:36
    Done Updating Snapshot Tables for the above rows...End Time:Sun Nov 20 2011 11:13:37
    No of records processed =230035 Updating Snapshot Tables...Start time:Sun Nov 20 2011 11:15:56
    Done Updating Snapshot Tables for the above rows...End Time:Sun Nov 20 2011 11:15:57
    No of records processed =235035 Updating Snapshot Tables...Start time:Sun Nov 20 2011 11:18:16
    Done Updating Snapshot Tables
    Please advise whether there is any issue, or how we can avoid updating the snapshot tables.
    Regards,
    Raj

    Hi Raj,
    >
    I am using a shared APPL_TOP on an NFS file system, and this patch is using the NFS file system, so this issue could be the NFS file system. When I perform any write-intensive operation on the NFS file system it takes a huge amount of time; the adadmin utility also takes a long time to invoke. So I need some Oracle recommendation on using an NFS file system, or on how the performance of an NFS file system can be tuned.
    >
    I also applied the same patch on a shared APPL_TOP but never had any performance issue. Have you tried maintaining the snapshot via adadmin before applying the patch to see how it behaves?
    Oracle Applications Maintenance Utilities
    http://www.oracle.com/technetwork/documentation/applications-167706.html
    Thanks,
    Hussein

  • Hyperion System 9.3.1 reports taking a long time the very first time

    We are on Hyperion System 9.3.1. The Financial Reporting reports take a long time (2 to 3 minutes) the very first time for each login. Subsequent reports run faster.
    The behaviour is the same for the Production and Development environments.
    All the reporting services have been given enough JVM heap size.
    FYI, Reporting and Workspace run on the same server, and Workspace/Reporting are clustered across two servers. The HFM application runs on a different server, HFM web on another, and Shared Services on yet another.
    Any help would be greatly appreciated.
    Thanks.

    The reason they run quicker on subsequent runs is that the data has already been cached in the system.
    You could try the usual tricks to speed the report up:
    - move items into POV
    - have children and parent in the same row
    - arrange dimensions in inverse outline order
    - remove excessive formatting
    - push report calculations back to the data source
    We have found that using lots of dynamically calculated members also slows down reports, so try and limit the number of these.
    Hope this helps. If not maybe give us an idea of how the report is created to see if other changes could be made.

  • How to get users' login/logout times for user IDs on a specific date?

    Dear All,
    There is a case where I have been asked to retrieve the user ID, user name,
    user group, user department, date, login time, and logout time for a specific date, for example 21.05.2009.
    How should I retrieve this information? The user wants to input a specific date and user group, and get back the details mentioned above.
    I tried SUIM->Users->By Logon Date and Password Change... but I can't specify the date that I want.
    I tried SM19 (Security Audit Log), but unfortunately it is not activated in my system.
    I've sought SAP's advice, and they say we need to ask an ABAPer to develop a report in order to get such details.
    Do you guys have any other methods?
    Do you guys know which tables will contain the details as mentioned above?
    Best Regards,
    Ken

    Unfortunately, without the audit log you're going to have a hard time finding this information.  As mentioned, ST03N will give you some information.  If your system's daily workload aggregation goes back to the date you require, then you'll be able to get a list of all users who logged on that day.  ST03N doesn't keep timestamps, just response times.
    My only idea is VERY labor intensive.  If your DB admin can restore a backup of the database from that day, then table USR02 will hold a little more information for you.  It will contain the last login times for that day.  If your system backup policy happened to save the contents of the folder "/usr/sap/<SID>/<instance>/data", then you potentially have access to all the data you require.  The stat file will have recorded every transaction that took place during that day.  If that file is restored, you could use program RSSTAT20 to query against it.
    Good luck and turn on the audit log as it makes your life much easier!
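    If a restored copy of the database is available, here is a rough sketch of the USR02 lookup hinted at above (an assumption, not from the thread: USR02 keeps only the last logon date TRDAT and time LTIME per user, stored as 'YYYYMMDD'/'HHMMSS' strings at the database level, so this shows each user's final logon of the day, not a full history):
    -- Sketch: last logon date/time per user from a restored copy of USR02.
    SELECT bname AS user_id,
           trdat AS last_logon_date,
           ltime AS last_logon_time
      FROM usr02
     WHERE trdat = '20090521';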

  • Generic extractor FM taking 5-6 hours for 3 months of data to BW - urgent

    Dear experts,
    I have designed an FM for generic extraction, which is taking 5-6 hours for 3 months of data, i.e. 24 lakh records, to BW up to the PSA.
    I have given the code below; please suggest any modifications to improve the performance.
    FUNCTION zhr_att_analysis.
    ""Local Interface:
    *"  IMPORTING
    *"     VALUE(I_REQUNR) TYPE  SRSC_S_IF_SIMPLE-REQUNR
    *"     VALUE(I_DSOURCE) TYPE  SRSC_S_IF_SIMPLE-DSOURCE OPTIONAL
    *"     VALUE(I_MAXSIZE) TYPE  SRSC_S_IF_SIMPLE-MAXSIZE OPTIONAL
    *"     VALUE(I_INITFLAG) TYPE  SRSC_S_IF_SIMPLE-INITFLAG OPTIONAL
    *"     VALUE(I_REMOTE_CALL) TYPE  SBIWA_FLAG DEFAULT SBIWA_C_FLAG_OFF
    *"  TABLES
    *"      I_T_SELECT TYPE  SBIWA_T_SELECT OPTIONAL
    *"      I_T_FIELDS TYPE  SBIWA_T_FIELDS OPTIONAL
    *"      E_T_DATA STRUCTURE  ZHR_ATT_MAIN OPTIONAL
    *"  EXCEPTIONS
    *"      NO_MORE_DATA
    *"      ERROR_PASSED_TO_MESS_HANDLER
    Auxiliary Selection criteria structure
      DATA: l_s_select TYPE sbiwa_s_select.
    Maximum number of lines for DB table
      STATICS: l_maxsize TYPE sbiwa_s_interface-maxsize.
    Select ranges
      RANGES: l_r_pernr FOR pa9004-pernr,
              l_r_bukrs FOR pa0001-bukrs,
              l_r_persg FOR pa0001-persg,
              l_r_begda FOR pa9004-begda,
              l_r_persk FOR pa0001-persk.
    Maximum number of lines for DB table
      STATICS: s_s_if TYPE srsc_s_if_simple,
    counter
              s_counter_datapakid LIKE sy-tabix,
    cursor
              s_cursor TYPE cursor.
    *"Declaration of store data
    TYPES : BEGIN OF ty_9004,
             pernr TYPE persno,
             endda TYPE endda,
             begda TYPE begda,
             zrs TYPE zrs,
             zstorecode TYPE zstorecode,
            END OF ty_9004.
    *"Declaration of employee data
      TYPES : BEGIN OF ty_0001,
              pernr TYPE pernr_d,
              endda TYPE endda,
              begda TYPE begda,
             AEDTM TYPE AEDAT,
              bukrs TYPE bukrs,
              persg TYPE persg,
              persk TYPE persk,
              END OF ty_0001.
    *"Declaration of expected mandays
      TYPES : BEGIN OF ty_0000,
              pernr TYPE persno,
              endda TYPE endda,
              begda TYPE begda,
              aedtm TYPE aedat,
              stat2 TYPE stat2,
              massn TYPE massn,
              END OF ty_0000.
    *"Declaration of man days swiped
      TYPES : BEGIN OF ty_teven,
              pernr TYPE pernr_d,
              ldate TYPE ldate,
              satza TYPE retyp,
              aedtm TYPE aedat,
              counter_swiped TYPE i,
              END OF ty_teven.
    *"Declaration of Man days regularized
      TYPES : BEGIN OF ty_2002,
              pernr TYPE pernr_d,
              subty TYPE subty,
              endda TYPE endda,
              begda TYPE begda,
              aedtm TYPE aedat,
              END OF ty_2002.
    *"Declaration of Man days lostdue to leave
      TYPES : BEGIN OF ty_2001,
              pernr TYPE pernr_d,
              subty TYPE subty,
              endda TYPE endda,
              begda TYPE begda,
              aedtm TYPE aedat,
              END OF ty_2001.
    *****Declaration of weekly off
      TYPES : BEGIN OF ty_2003,
              pernr TYPE pernr_d,
              subty TYPE subty,
              endda TYPE endda,
              begda TYPE begda,
              aedtm TYPE aedat,
              tprog TYPE tprog,
              END OF ty_2003.
    Auxiliary Selection criteria structure
      DATA :
            it_0001 TYPE TABLE OF ty_0001,
            wa_0001 TYPE ty_0001,
            it_0000 TYPE TABLE OF ty_0000,
            wa_0000 TYPE ty_0000,
            it_teven TYPE TABLE OF ty_teven,
            wa_teven TYPE ty_teven,
            it_2002 TYPE TABLE OF ty_2002 ,
            wa_2002 TYPE ty_2002,
            it_2001 TYPE TABLE OF ty_2001,
            wa_2001 TYPE ty_2001,
            it_2003 TYPE TABLE OF ty_2003,
            wa_2003 TYPE ty_2003,
            wa_target TYPE zhr_att_main.
      DATA : date  TYPE dats,
      doj TYPE dats,
      dol TYPE dats,
      date1 TYPE dats,
      date2 TYPE dats,
             counter(9)  TYPE n.
    Initialization mode (first call by SAPI) or data transfer mode
    (following calls) ?
      IF i_initflag = sbiwa_c_flag_on.
    Initialization: check input parameters
                    buffer input parameters
                    prepare data selection
    Check DataSource validity
        CASE i_dsource.
          WHEN 'ZHR_ATT_ANALYSIS'.
          WHEN OTHERS.
            IF 1 = 2. MESSAGE e009(r3). ENDIF.
            log_write 'E'                  "message type
                      'R3'                 "message class
                      '009'                "message number
                      i_dsource            "message variable 1
                      ' '.                 "message variable 2
            RAISE error_passed_to_mess_handler.
        ENDCASE.
        APPEND LINES OF i_t_select TO s_s_if-t_select.
    Fill parameter buffer for data extraction calls
        s_s_if-requnr    = i_requnr.
        s_s_if-dsource   = i_dsource.
        s_s_if-maxsize   = i_maxsize.
    Fill field list table for an optimized select statement
    (in case that there is no 1:1 relation between InfoSource fields
    and database table fields this may be far from beeing trivial)
        APPEND LINES OF i_t_fields TO s_s_if-t_fields.
      ELSE.                 "Initialization mode or data extraction ?
    Data transfer: First Call      OPEN CURSOR + FETCH
                   Following Calls FETCH only
    First data package -> OPEN CURSOR
        IF s_counter_datapakid = 0.
          LOOP AT s_s_if-t_select INTO l_s_select WHERE fieldnm = 'PERNR'.
            MOVE-CORRESPONDING l_s_select TO l_r_pernr.
            APPEND l_r_pernr.
          ENDLOOP.
          LOOP AT s_s_if-t_select INTO l_s_select WHERE fieldnm = 'BUKRS'.
            MOVE-CORRESPONDING l_s_select TO l_r_bukrs.
            APPEND l_r_bukrs.
          ENDLOOP.
          LOOP AT s_s_if-t_select INTO l_s_select WHERE fieldnm = 'PERSG'.
            MOVE-CORRESPONDING l_s_select TO l_r_persg.
            APPEND l_r_persg.
          ENDLOOP.
          LOOP AT s_s_if-t_select INTO l_s_select WHERE fieldnm = 'BEGDA'.
            MOVE-CORRESPONDING l_s_select TO l_r_begda.
            APPEND l_r_begda.
          ENDLOOP.
          LOOP AT s_s_if-t_select INTO l_s_select WHERE fieldnm = 'PERSK'.
            MOVE-CORRESPONDING l_s_select TO l_r_persk.
            APPEND l_r_persk.
          ENDLOOP.
          OPEN CURSOR WITH HOLD s_cursor FOR
    populate only store-code employees; exclude empty store codes
            SELECT a~pernr b~pernr b~endda b~begda b~bukrs b~persg b~persk FROM pa9004 AS a INNER JOIN pa0001 AS b
                    ON  a~pernr = b~pernr
                     WHERE a~pernr IN l_r_pernr AND
                          a~zstorecode <> ''    AND
                          b~bukrs IN l_r_bukrs  AND
                          b~persg IN l_r_persg AND
                          b~persk IN l_r_persk.
        ENDIF.
    Fetch records into interface table.
      named E_T_'Name of extract structure'.
        FETCH NEXT CURSOR s_cursor
                   APPENDING CORRESPONDING FIELDS
                   OF TABLE  it_0001
                   PACKAGE SIZE s_s_if-maxsize.
        IF sy-subrc <> 0.
          CLOSE CURSOR s_cursor.
          RAISE no_more_data.
        ELSE.
         break-point.
          IF l_r_begda-high = '00000000' AND l_r_begda-low = '00000000'.
            date1 = sy-datum - 1.
            date2 = sy-datum - 1.
          ELSE.
            date1 = l_r_begda-low .
            date2 = l_r_begda-high.
          ENDIF.
          SORT it_0001 BY pernr persg begda endda bukrs.
          DELETE it_0001 WHERE persg NE 'T' AND
                               persg NE 'K' AND
                               persg NE 'P' AND
                               persg NE 'W'.
          DELETE ADJACENT DUPLICATES FROM it_0001 COMPARING pernr begda endda bukrs.
    populate all the employees that are active in pa9004.
          IF NOT it_0001[] IS INITIAL.
            SELECT pernr endda begda aedtm massn FROM pa0000
                   INTO CORRESPONDING FIELDS OF TABLE it_0000
                   FOR ALL ENTRIES IN it_0001
                   WHERE pernr = it_0001-pernr
                     AND ( massn = 'A1' OR massn = '00' OR massn = 'A6' OR massn = 'A3' ).
            SORT it_0000 BY pernr begda DESCENDING.
          ENDIF.
    populate SWIPED MAN DAYS data
          IF NOT it_0001[] IS INITIAL.
            SELECT pernr ldate satza aedtm FROM teven
               INTO CORRESPONDING FIELDS OF  TABLE it_teven
               FOR ALL ENTRIES IN it_0001
               WHERE pernr = it_0001-pernr AND
                                 satza = 'P01'
                                 AND ldate IN l_r_begda.
            SORT it_teven BY pernr ldate.
          ENDIF.
    **populate REGULARIZATION DAYS data
          IF NOT it_0001[] IS INITIAL.
            SELECT pernr subty endda begda aedtm FROM pa2002
              INTO CORRESPONDING FIELDS OF  TABLE it_2002
               FOR ALL ENTRIES IN it_0001
               WHERE pernr = it_0001-pernr
                AND  begda >= date1
                AND endda <= date2 .
            SORT it_2002 BY pernr begda endda.
          ENDIF.
    **populate LEAVE DAYS data
          IF NOT it_0001[] IS INITIAL.
            SELECT pernr subty endda begda aedtm FROM pa2001
              INTO CORRESPONDING FIELDS OF   TABLE it_2001
               FOR ALL ENTRIES IN it_0001
               WHERE pernr = it_0001-pernr
                AND  begda >= date1
                AND endda <= date2  .
            SORT it_2001 BY pernr begda endda .
          ENDIF.
    **populate WEEKLY OFF data
          IF NOT it_0001[] IS INITIAL.
            SELECT pernr subty endda begda aedtm tprog FROM pa2003
              INTO CORRESPONDING FIELDS OF  TABLE it_2003
                 FOR ALL ENTRIES IN it_0001
                 WHERE pernr = it_0001-pernr AND
                              tprog = 'OFF'
                               AND  begda >= date1
                               AND endda <= date2  .
            SORT it_2003 BY pernr begda endda.
          ENDIF.
          date = sy-datum.
    ********added changes on 06.04.2008**************action type & date dependent extaction****
    loop over it_0001 table
         BREAK-POINT.
          LOOP AT it_0001 INTO wa_0001.
           if sy-tabix = 1.
            counter = 0.
    for expected mandays
            LOOP AT it_0000 INTO wa_0000 WHERE pernr = wa_0001-pernr .
              IF wa_0000-massn = 'A1' OR wa_0000-massn = '00' OR wa_0000-massn = 'A3'.
                doj = wa_0000-begda.
               if wa_0000-endda = '99991231'.
              date2  = sy-datum.
               else.
                dol = date2.
               endif.
              ELSEIF wa_0000-massn = 'A6'.
                dol = wa_0000-begda.
              ENDIF.
            ENDLOOP.
            IF  date1 <= wa_0001-begda AND date2 <= wa_0001-endda AND date2 >= wa_0001-begda AND date1 <= wa_0001-endda.
              counter = date2 - wa_0001-begda .
              counter = counter + 1.
              date = wa_0001-begda - 1.
            ELSEIF date1 >= wa_0001-begda  AND date2 >= wa_0001-endda AND date2 >= wa_0001-begda AND date1 <= wa_0001-endda.
              counter =  wa_0001-endda - date1.
              counter = counter + 1.
              date = date1 - 1.
            ELSEIF date1 >= wa_0001-begda AND date2 <= wa_0001-endda AND  date2 >= wa_0001-begda AND date1 <= wa_0001-endda.
              counter = date2  - date1.
              counter = counter + 1.
              date = date1 - 1.
            ELSEIF  date1 <= wa_0001-begda AND  date2 >= wa_0001-endda AND date2 >= wa_0001-begda AND date1 <= wa_0001-endda.
              counter = wa_0001-endda - wa_0001-begda.
              counter = counter + 1.
              date =  wa_0001-begda - 1.
            ELSE.
              CONTINUE.
            ENDIF.
    ********completed changes on 06.04.2008**************action type & date dependent extaction**
    split records from date of joining to till date
            DO counter  TIMES.
              CLEAR : wa_teven , wa_target.
              date = date + 1.
              wa_target-date1 = date.
              wa_target-pernr = wa_0001-pernr.
              wa_target-bukrs = wa_0001-bukrs.
              wa_target-persg = wa_0001-persg.
              wa_target-persk = wa_0001-persk.
    for expected mandays count
              IF wa_target-date1 >= doj AND wa_target-date1 <= dol.
                wa_target-expectedmandays = 1.
                wa_target-aedtm = wa_0000-aedtm.
    for swiped mandays
                READ TABLE it_teven INTO wa_teven WITH KEY pernr = wa_target-pernr
                                                           ldate = wa_target-date1 BINARY SEARCH.
                IF sy-subrc = 0.
                  wa_target-swiped_days = 1.
                  wa_target-aedtm = wa_teven-aedtm.
                ENDIF.
    for regularized days
                LOOP AT it_2002 INTO wa_2002 WHERE pernr = wa_target-pernr
                   AND  ( endda GE wa_target-date1 AND begda LE wa_target-date1 ).
                  wa_target-reg_days  = 1.
                  wa_target-subty2 = wa_2002-subty.
                  wa_target-aedtm = wa_2002-aedtm.
                ENDLOOP.
    for leave days
                LOOP AT it_2001 INTO wa_2001 WHERE pernr = wa_target-pernr
                   AND  ( endda GE wa_target-date1 AND begda LE wa_target-date1 ).
                  wa_target-leave_days  = 1.
                  wa_target-subty1 = wa_2001-subty.
                  wa_target-aedtm = wa_2001-aedtm.
                ENDLOOP.
    for weekly off days
                LOOP AT it_2003 INTO wa_2003 WHERE pernr = wa_target-pernr
                   AND  ( endda GE wa_target-date1 AND begda LE wa_target-date1 ).
                  wa_target-off_days   = 1.
                  wa_target-aedtm = wa_2003-aedtm.
                ENDLOOP.
    append work area to e_t_data
                APPEND wa_target TO  e_t_data.
              ENDIF.
            ENDDO.
          ENDLOOP.
    clear internal tables
          CLEAR :  it_0000 , it_0001 , it_2001 , it_2002 , it_2003 , it_teven.
        ENDIF.
        s_counter_datapakid = s_counter_datapakid + 1.
      ENDIF.   "Initialization mode or data extraction ?
    ENDFUNCTION.


  • Firefox has been randomly crashing for two days. I always have google and facebook running. Firefox seems to crash at random times - no specific web page loading. My Firefox is up to date.

    Firefox has been randomly crashing for two days. I always have Google and Facebook running. Firefox seems to crash at random times - no specific web page loading. My Firefox is up to date.

    There can be multiple reasons for crashing. This article would be helpful, as it lists solutions:
    http://support.mozilla.com/en-US/kb/Firefox%20crashes?s=firefox+crash&as=s#os=win&browser=fx4

  • Background job is taking a lot of time to process

    One background job, which processes IDocs, runs for more than 2000 seconds, though it does complete.
    This has been happening for the past few days. Is there any way to troubleshoot, or to find from the logs of this completed job why it was taking so long?
    Can you please tell me the steps for analyzing/troubleshooting why it takes so long every day?
    Regards,
    Satish.

    Hi Satish,
    Run DB statistics from DB13; it will help the performance.
    Check the number of IDocs. You can send them part by part instead of sending one huge batch.
    Check the SM58 logs.
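    As a rough way to quantify the "number of IDocs" point, here is a sketch against the IDoc control table (an illustration only: EDIDC with CREDAT as the creation date; the date literal is hypothetical, and the status values need interpreting for your scenario):
    -- Sketch: count IDocs created on one day, per message type and status.
    SELECT mestyp, status, COUNT(*) AS idoc_count
      FROM edidc
     WHERE credat = '20120524'
     GROUP BY mestyp, status;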
    Suman

  • How to make repricing for specific conditions at the time of billing?

    Hello
    I'm an SD pricing person. Let me ask the experts here about my concern.
    In EU countries there is a recycling fee on sales of electronics or notebook PCs with batteries, to keep our earth clean.
    So when customers buy such products, they have to pay a recycling fee on top of the invoice amount.
    My question: we want to reprice the recycling conditions when the billing document is created.
    Based on our configuration, the recycling condition is not defined as a kind of tax condition.
    So this value is just copied from the sales order.
    (The pricing type in copy control is 'G', which means repricing for tax conditions.)
    In this situation, we want to reprice those conditions in the live system.
    Is there any easy way to cover this?
    As I see it, this is not easy because the system is already live.
    Changing condition attributes is really risky. If we dare to, we have to migrate all open orders.
    So I want to keep this option as a last resort.
    For this requirement:
    1. We would have to change the condition class, category, or calculation type so the condition is repriced under pricing type 'G'.
        (e.g., set the condition category to 'I' inter-billing or 'L' always repricing.)
        But transaction data already exists, and all open orders would be affected.
    2. Creating new conditions is not easy because these conditions are mapped to CO-PA value fields and the values are posted in FI documents.
    3. Changing the pricing type in copy control is almost impossible because of the impact.
    What can I do in this situation?
    All I want is to reprice specific conditions at the time of billing when the pricing type in copy control is 'G'.
    Thank you in advance.

    Let me ask all the experts again.
    I want condition 'A' to be repriced at the time of billing.
    For this, I would have to set the condition category to 'L' (generally new when copying).
    But I do not want to do it that way, because I am maintaining a big live system.
    In addition, even if I migrated the open orders after changing the configuration to 'L', migration is almost impossible because, as a globalized system, we have more than a thousand open orders per DAY.
    That is why I am asking.
    I could simply create a new condition, but as I mentioned, there are various recycling fees, so we have already created about 10 conditions. These recycling conditions are linked to SAP's REA package, so creating another 10 conditions is not an option for us.
    Lastly, I do not want this condition to be shown only in the billing document:
    condition 'A' should be displayed in both the sales order and the billing document.
    And at the same time, when the billing document is created, if a user has changed the 'A' condition master, the new value, different from the sales order, has to be reflected in the billing document.
    Thank you in advance.

  • Query taking a long time to extract the data - more than 24 hours

    Hi ,
    The query is taking more than 24 hours to extract the data. Please find the query and explain plan details below; even though indexes are available on the tables, it goes for FULL TABLE SCANs. Please advise.
    SQL> explain plan for
    select a.account_id, round(a.account_balance,2) account_balance,
           nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
           to_char(ah.effective_start_date,'DD-MON-YYYY') transaction_date,
           to_char(nvl(i.payment_due_date,
                       to_date('30-12-9999','dd-mm-yyyy')),'DD-MON-YYYY') due_date,
           ah.current_balance - ah.previous_balance amount,
           decode(ah.invoice_id, null, 'A', 'I') transaction_type
      from account a, account_history ah, invoice i
     where a.account_id = ah.account_id
       and a.account_type_id = 1000002
       and round(a.account_balance,2) > 0
       and (ah.invoice_id is not null or ah.adjustment_id is not null)
       and ah.current_balance > ah.previous_balance
       and ah.invoice_id = i.invoice_id(+)
       and a.account_balance > 0
     order by a.account_id, ah.effective_start_date desc;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 544K| 30M| | 693K (20)|
    | 1 | SORT ORDER BY | | 544K| 30M| 75M| 693K (20)|
    |* 2 | HASH JOIN | | 544K| 30M| | 689K (20)|
    |* 3 | TABLE ACCESS FULL | ACCOUNT | 20080 | 294K| | 6220 (18)|
    |* 4 | HASH JOIN OUTER | | 131M| 5532M| 5155M| 678K (20)|
    |* 5 | TABLE ACCESS FULL| ACCOUNT_HISTORY | 131M| 3646M| | 197K (25)|
    | 6 | TABLE ACCESS FULL| INVOICE | 262M| 3758M| | 306K (18)|
    Predicate Information (identified by operation id):
    2 - access("A"."ACCOUNT_ID"="AH"."ACCOUNT_ID")
    3 - filter("A"."ACCOUNT_TYPE_ID"=1000002 AND "A"."ACCOUNT_BALANCE">0 AND
    ROUND("A"."ACCOUNT_BALANCE",2)>0)
    4 - access("AH"."INVOICE_ID"="I"."INVOICE_ID"(+))
    5 - filter("AH"."CURRENT_BALANCE">"AH"."PREVIOUS_BALANCE" AND ("AH"."INVOICE_ID"
    IS NOT NULL OR "AH"."ADJUSTMENT_ID" IS NOT NULL))
    22 rows selected.
    Index Details:
    SQL> select INDEX_OWNER, INDEX_NAME, COLUMN_NAME, TABLE_NAME from dba_ind_columns
         where table_name in ('INVOICE','ACCOUNT','ACCOUNT_HISTORY') order by 4;
    INDEX_OWNER INDEX_NAME COLUMN_NAME TABLE_NAME
    OPS$SVM_SRV4 P_ACCOUNT ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME ACCOUNT_NAME ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT CUSTOMER_NODE_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT ACCOUNT_TYPE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_ACCOUNT_TYPE ACCOUNT_TYPE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_INVOICE INVOICE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_PREVIOUS_INVOICE PREVIOUS_INVOICE_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_NAME ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 I_LAST_MODIFIED_ACCOUNT LAST_MODIFIED ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_INVOICE_ACCOUNT INVOICE_ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT ACCOUNT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT SEQNR ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_INVOICE INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA CURRENT_BALANCE ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ACCOUNT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_LMOD LAST_MODIFIED ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_PAYMENT PAYMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADJUSTMENT ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_APPLIED_DT APPLIED_DATE ACCOUNT_HISTORY
    OPS$SVM_SRV4 P_INVOICE INVOICE_ID INVOICE
    OPS$SVM_SRV4 U_INVOICE CUSTOMER_INVOICE_STR INVOICE
    OPS$SVM_SRV4 I_LAST_MODIFIED_INVOICE LAST_MODIFIED INVOICE
    OPS$SVM_SRV4 U_INVOICE_ACCOUNT ACCOUNT_ID INVOICE
    OPS$SVM_SRV4 U_INVOICE_ACCOUNT BILL_RUN_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_BILL_RUN BILL_RUN_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_INVOICE_TYPE INVOICE_TYPE_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_CUSTOMER_NODE CUSTOMER_NODE_ID INVOICE
    32 rows selected.
    Regards,
    Bathula
    Oracle-DBA

    I have some suggestions. But first, you realize that you have some redundant indexes, right? You have an index on account(account_name) and also account(account_name, account_id), and also account_history(invoice_id) and account_history(invoice_id, adjustment_id). No matter, I will suggest some new composite indexes.
    Also, you do not need two lines for these conditions:
    and round(a.account_balance, 2) > 0
    AND a.account_balance > 0
    You can just use: and a.account_balance >= 0.005 (any balance of at least 0.005 rounds to at least 0.01, so both predicates collapse into one).
    So the formatted query is:
    select a.account_id,
           round(a.account_balance, 2) account_balance,
           nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
           to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
           to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
                   'DD-MON-YYYY') due_date,
           ah.current_balance - ah.previous_balance amount,
           decode(ah.invoice_id, null, 'A', 'I') transaction_type
      from account a, account_history ah, invoice i
    where a.account_id = ah.account_id
       and a.account_type_id = 1000002
       and (ah.invoice_id is not null or ah.adjustment_id is not null)
       and ah.CURRENT_BALANCE > ah.previous_balance
       and ah.invoice_id = i.invoice_id(+)
       AND a.account_balance >= .005
     order by a.account_id, ah.effective_start_date desc;
    You will probably want to select:
    1. From ACCOUNT first (your smaller table), for which you supply a literal on account_type_id. That should limit the accounts retrieved from ACCOUNT_HISTORY
    2. From ACCOUNT_HISTORY. We want to limit the records as much as possible on this table because of the outer join.
    3. INVOICE we want to access last because it seems to be least restricted, it is the biggest, and it has the outer join condition so it will manufacture rows to match as many rows as come back from account_history.
    Try the query above after creating the following composite indexes. The order of the columns is important:
    create index account_composite_i on account(account_type_id, account_balance, account_id);
    create index acct_history_comp_i on account_history(account_id, invoice_id, adjustment_id, current_balance, previous_balance, effective_start_date);
    create index invoice_composite_i on invoice(invoice_id, payment_due_date);
    All the columns used in the where clause will be indexed, in a logical order suited to the needs of the query. Plus each selected column is indexed as well, so we should not need to touch the tables at all to satisfy the query.
    Try the query after creating these indexes.
    A final suggestion is to try larger sort and hash area sizes and a manual workarea policy:
    alter session set workarea_size_policy = manual;
    alter session set sort_area_size = 2147483647;
    alter session set hash_area_size = 2147483647;
