Time Management: reading dates from the ZES table

Hi experts!
I need help with a dates problem.
I have to read cluster B2, specifically table ZES, and I want to print the data for three different weeks that I have on the selection screen. When I display the cluster contents with report RPCLSTB2 I can see the time type and the date in table ZES, but when I read ZES in my program I only get the day number. How can I derive the full date?
Let me explain. The selection screen looks like this:
SELECTION-SCREEN:
  Week1 (select-options)
  Week2
  Week3
Now I need to go to the cluster for each of the weeks and pick up the data that fall within Week1, Week2 and Week3, but when I access the table I only have the day, not the month. How can I do this?
Thanks a lot!
Regards,
Rebeca

Hey Rebeca,
Have a look at the information below; if it's useful for you, that's great.
DATA: T_ZL TYPE PC2BF OCCURS 0 WITH HEADER LINE.   "time wage types (ZL) from cluster B2

DATA: BEGIN OF TGETBUFF OCCURS 0,                  "raw cluster buffer
        X(10),
      END OF TGETBUFF.

* Read the time evaluation results (cluster B2) for one period
CALL FUNCTION 'HR_TIME_RESULTS_GET'
  EXPORTING
    GET_PERNR             = PERNR   "personnel number
    GET_PABRJ             = YEAR    "payroll year
    GET_PABRP             = MONTH   "payroll period (the month)
    GET_CLTYP             = '1'
  TABLES
    GET_TBUFF             = TGETBUFF
    GET_ZL                = T_ZL
  EXCEPTIONS
    NO_PERIOD_SPECIFIED   = 1
    WRONG_CLUSTER_VERSION = 2
    NO_READ_AUTHORITY     = 3
    CLUSTER_ARCHIVED      = 4
    TECHNICAL_ERROR       = 5
    OTHERS                = 6.
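A note on the original question: the ZES entries in cluster B2 carry only the day number within the evaluated period (field REDAY), so the full calendar date has to be rebuilt from the period begin date. A minimal sketch, assuming ZES was also retrieved (HR_TIME_RESULTS_GET has a GET_ZES table parameter as well) and that BEGDA holds the first day of the period, e.g. read via HR_PAYROLL_PERIODS_GET / T549Q:

DATA: LV_DATE TYPE D.

LOOP AT T_ZES.
* Period begin date + REDAY - 1 gives the full calendar date
  LV_DATE = BEGDA + T_ZES-REDAY - 1.
* Now the date can be checked against the week select-options
  IF LV_DATE IN WEEK1 OR LV_DATE IN WEEK2 OR LV_DATE IN WEEK3.
*   ...process this day's balance...
  ENDIF.
ENDLOOP.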
Regards,
Lakshman.Annamaneni

Similar Messages

  • Create a Time Management Custom Function in PE04

    Hi Experts
    Please give me the steps to create a custom function in Time Management using PE04.
    What is the purpose of using an infotype in PE04? What are the input and output parameters? Which internal tables are available in a custom function? How can we test the custom function?
    Thanks and regards
    Renjith MP

    No idea on how to set it up step by step.
    But in our case, we had a requirement to calculate GOSI for our Bahraini employees, and we did it by creating a function ZGOSI.
    We did not give it any input or output parameters.
    We used a Z copy of the payroll driver RPCALCX0 (ZSH_GOSI_CALC); the calculations are done in this program and the output is displayed in the RT table as wage types.
    For more information, check functions that work with infotypes such as P0014 and P0015; that will give you some idea.
    Before going ahead with the above, you need to maintain the parameter list for control.
    Mostly the input parameters are given as objects, like the WPBP table in our payroll process, and the output parameters are the results displayed via the RT object.
    Analyze it further and you will get the picture.
    Good luck.
    Regards
    Team Member.
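    For orientation: a function created in PE04 ultimately corresponds to an ABAP form routine named FU<name> in the driver's source; PE04 generates the empty skeleton, and the include it lands in depends on the function class and country grouping. A rough, purely illustrative sketch for the ZGOSI case described above (the wage type names and the rate are made up):

    FORM FUZGOSI.                        "skeleton generated by PE04
      DATA: LS_RT TYPE PC207.            "results table (RT) line

      LOOP AT IT WHERE LGART = '1000'.   "hypothetical base wage type
        LS_RT-LGART = '9GOS'.            "hypothetical GOSI contribution WT
        LS_RT-BETRG = IT-BETRG * '0.01'. "hypothetical contribution rate
        APPEND LS_RT TO RT.
      ENDLOOP.
    ENDFORM.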

  • What are the major challenges for the end-user in using Positive Time Management

    Hi Seniors,
    Can anybody please provide me some information on this?
    Thanks.

    Hi,
    Positive time management is somewhat more difficult to implement than negative time management.
    In negative time management, you use planned working times + deviations in time evaluation.
    In positive time management, you record actual times through time recording system. You have to bring these times to SAP through an interface and then evaluate the actual times against the planned times and accordingly pay them out. You would need to pay attention regarding the integration between SAP R/3 and time recording system.
    Building rules for positive time management is slightly more challenging compared to negative time management.
    Kindly go through the documents below on positive time management; they will give you a better idea:
    Integration of Time Recording Terminals with SAP R/3
    Determination of First Clock In & Last Clock Out in Positive Time Evaluation
    Late Coming, Early Going and Unauthorized absence
    Prorated Grant of Absence Quota for Contract Period in Time Evaluation
    Splitting of Overtime Hours after X Hours in Time Evaluation
    Rounding off Overtime Hours Generated via T510S table in Time Evaluation
    Public Holiday Calendar and Work Schedule Rules
    I hope these are helpful to you.
    Thanks and regards,
    Vivek Barnwal

  • Managing statistics for object collections used as table types in SQL

    Hi All,
    Is there a way to manage statistics for collections used as table types in SQL.
    Below is my test case
    Oracle Version :
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    SQL>
    Original query:
    SELECT
         9999,
         tbl_typ.FILE_ID,
         tf.FILE_NM ,
         tf.MIME_TYPE ,
         dbms_lob.getlength(tfd.FILE_DATA)
    FROM
         TG_FILE tf,
         TG_FILE_DATA tfd,
         (
              SELECT *
              FROM
                   TABLE(
                        SELECT
                             CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                             OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                        FROM
                             dual
                   )
         )     tbl_typ
    WHERE
         tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:02.90
    Execution Plan
    Plan hash value: 3970072279
    | Id  | Operation                                | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                         |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  1 |  HASH JOIN                               |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  2 |   HASH JOIN                              |              |  8168 |   287K|   695   (3)| 00:00:09 |
    |   3 |    VIEW                                  |              |  8168 |   103K|    29   (0)| 00:00:01 |
    |   4 |     COLLECTION ITERATOR CONSTRUCTOR FETCH|              |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   5 |      FAST DUAL                           |              |     1 |       |     2   (0)| 00:00:01 |
    |   6 |    TABLE ACCESS FULL                     | TG_FILE      |   565K|    12M|   659   (2)| 00:00:08 |
    |   7 |   TABLE ACCESS FULL                      | TG_FILE_DATA |   852K|   128M|  3863   (1)| 00:00:47 |
    Predicate Information (identified by operation id):
       1 - access("TF"."FILE_ID"="TFD"."FILE_ID" AND "TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       2 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
    Statistics
              7  recursive calls
              0  db block gets
          16783  consistent gets
          16779  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed
    Indexes are present in both tables (TG_FILE, TG_FILE_DATA) on column FILE_ID.
    select
         index_name,blevel,leaf_blocks,DISTINCT_KEYS,clustering_factor,num_rows,sample_size
    from
         all_indexes
    where table_name in ('TG_FILE','TG_FILE_DATA');
    INDEX_NAME                     BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR     NUM_ROWS SAMPLE_SIZE
    TG_FILE_PK                          2        2160        552842             21401       552842      285428
    TG_FILE_DATA_PK                     2        3544        852297             61437       852297      852297
    Ideally the view should have used NESTED LOOPS, so that the indexes are used, since the number of rows coming from the object collection is only 2.
    But it takes the default of 8168 rows, leading to a HASH join between the tables and hence to full table scans.
    So my question is: is there any way to correct the statistics when using collections in SQL?
    I can use hints to force the indexes, but I am trying to avoid that for now. Also, the time shown in the explain plan is currently not accurate.
    Modified query with hints :
    SELECT    
        /*+ index(tf TG_FILE_PK ) index(tfd TG_FILE_DATA_PK) */
        9999,
        tbl_typ.FILE_ID,
        tf.FILE_NM ,
        tf.MIME_TYPE ,
        dbms_lob.getlength(tfd.FILE_DATA)
    FROM
        TG_FILE tf,
        TG_FILE_DATA tfd,
        (
            SELECT *
            FROM
                TABLE(
                    SELECT
                         CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                         OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                    FROM
                         dual
                )
        ) tbl_typ
    WHERE
        tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:00.01
    Execution Plan
    Plan hash value: 1670128954
    | Id  | Operation                                 | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                          |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   1 |  NESTED LOOPS                             |                 |       |       |            |          |
    |   2 |   NESTED LOOPS                            |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   3 |    NESTED LOOPS                           |                 |  8168 |  1363K| 16379   (1)| 00:03:17 |
    |   4 |     VIEW                                  |                 |  8168 |   103K|    29   (0)| 00:00:01 |
    |   5 |      COLLECTION ITERATOR CONSTRUCTOR FETCH|                 |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   6 |       FAST DUAL                           |                 |     1 |       |     2   (0)| 00:00:01 |
    |   7 |     TABLE ACCESS BY INDEX ROWID           | TG_FILE_DATA    |     1 |   158 |     2   (0)| 00:00:01 |
    |*  8 |      INDEX UNIQUE SCAN                    | TG_FILE_DATA_PK |     1 |       |     1   (0)| 00:00:01 |
    |*  9 |    INDEX UNIQUE SCAN                      | TG_FILE_PK      |     1 |       |     1   (0)| 00:00:01 |
    |  10 |   TABLE ACCESS BY INDEX ROWID             | TG_FILE         |     1 |    23 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       8 - access("TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       9 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
           filter("TF"."FILE_ID"="TFD"."FILE_ID")
    Statistics
              0  recursive calls
              0  db block gets
             16  consistent gets
              8  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed
    Thanks,
    B

    Thanks Tubby,
    While searching I had found that the CARDINALITY hint can be used to set the cardinality for a TABLE function.
    But I preferred not to mention it, as it is an undocumented hint. In hindsight I should have mentioned it in my first post.
    http://www.oracle-developer.net/display.php?id=427
    That article describes four approaches for setting the statistics:
    1) CARDINALITY (undocumented)
    2) OPT_ESTIMATE (undocumented)
    3) DYNAMIC_SAMPLING (documented)
    4) Extensible Optimiser
    I tried the different hints and they work as expected:
    cardinality and opt_estimate take the value set in the hint,
    while the dynamic_sampling hint provides the most accurate estimate of the rows (which is 2 in this particular case).
    With CARDINALITY hint
    SELECT
        /*+ cardinality( e, 5) */ *
    FROM
         TABLE(
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     5 |    10 |    29   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With OPT_ESTIMATE hint
    SELECT
         /*+ opt_estimate(table, e, scale_rows=0.0006) */ *
    FROM
         TABLE(
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e;
    Execution Plan
    Plan hash value: 4043204977
    | Id  | Operation                              | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                       |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   1 |  VIEW                                  |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   2 |   COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   3 |    FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With DYNAMIC_SAMPLING hint
    SELECT
        /*+ dynamic_sampling( e, 5) */ *
    FROM
         TABLE(
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     2 |     4 |    11   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     2 |     4 |    11   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    Note
       - dynamic sampling used for this statement (level=2)
    I will be testing the last option, the Extensible Optimizer, and will post my findings here.
    I hope Oracle improves statistics gathering for collections used in DML in future releases, rather than just deriving the default from the block size.
    By the way, do you know why it uses a default based on the block size? Is it because that is the smallest granular unit Oracle provides?
    Regards,
    B

  • How can we see status of all Requests at a time using any tables?

    How can we see the status of all requests in an InfoCube or ODS at one time, using any table? The issue I am facing is that when I run a process chain, red requests remain in some data targets, and because of this the other requests are not available for reporting.
    Is there a T-code or table to view all requests in a single shot?
    Thanks in advance,
    Raj

    Hi, go for table RSMONICDP: give the timestamp and you'll have it.
    Please assign points if useful.
    Edited by: Aadil on Jan 21, 2008 7:43 AM

  • Named query using the same table multiple times with an alias

    Hi all,
    I'm having some trouble with a named query. I'm trying to use the following named query in TopLink to retrieve some data out of a database.
    select proj.id
    , proj.code
    , proj.name
    , proj.budget
    , proj.status
    , proj.startdate
    , proj.enddate
    , proj.mdr_id projleader_id
    , med_leader.name projleader
    , proj.mdr_id_valt_onder promanager_id
    , med_promanager.name promanager
    , proj.mdr_id_is_account_from accmanager_id
    , med_accmanager.name accmanager
    from uur_projecten proj
    , uur_medewerkers med_leader
    , uur_medewerkers med_promanager
    , uur_medewerkers med_accmanager
    where ( #p_name is not null or #p_search_string is not null )
    and med_leader.id = proj.mdr_id
    and ( proj.mdr_id = nvl( #p_name, proj.mdr_id )
    or proj.mdr_id_valt_onder = nvl( #p_name, proj.mdr_id )
    or proj.mdr_id_is_account_van = nvl( #p_name, proj.mdr_id ))
    and (( #p_status is not null
    and substr( proj.status, 1, 1 ) = upper( #p_status ))
    or ( #p_status is null ))
    and ( upper( proj.code ) like upper( '%' || #p_search_string || '%' )
    or upper( proj.name ) like upper( '%' || #p_search_string || '%' ))
    and med_promanager.id = proj.mdr_id_valt_onder
    and med_accmanager.id = proj.mdr_id_is_account_van
    order by decode( substr( proj.status, 1, 1 )
    , 'A', 2, 'T', 3, 'F', 4, 1 ), proj.code desc
    As you can see, the table uur_medewerkers is used three times to determine the name for the corresponding ID. I have a Java class with fields for the results and created a TopLink descriptor to map the fields to the database fields.
    The problem is that the projleader, promanager and accmanager fields come back null. The reason is probably that TopLink doesn't recognize the fields because of the aliases on the tables.
    Is there a way to make this work?
    Greets, René
    Greets, René

    Post Author: quafto
    CA Forum: .NET
    Your query is not too clear so I'll do my best to answer it broadly.
    You mentioned that you have a .NET web application where your users enter data on one screen and then may retrieve it on another. If the data is written in real time to a database then you can create a standard Crystal Report by adding multiple tables. The tables should be linked together using the primary and foreign keys in order to optimize the database query and give you a speedy report. Using unlinked tables is not recommended and requires the report engine to index the tables (it is quite slow).
    You also mentioned you have a "PropID" to be used in a WHERE clause. This is a great place to use a parameter in your report. This parameter can then be used in your record selection formula inside Crystal Reports. The report engine will actually create the WHERE clause for you based on the parameter value. This is helpful because it allows you to simply concentrate on your code rather than keeping track of SQL queries.
    Now, what Crystal does not do well with is uncertainty. When you design a report with X number of tables the report engine expects X number of tables to be available at processing time. You should not surprise the print engine with more or less tables because you could end up with processing errors or incorrect data. You may need to design multiple reports for specific circumstances.
    Regarding the group expert question. I'm not sure how you would/could use the group expert to group a table? A table is a collection of fields and cannot be compared to another table without a complex algorithm. The group expert is used to group and sort records based on a field in the report. Have a look at the group expert section of the help file for more information.
    Hopefully my comments have given you a few ideas.

  • Use of TM04 in negative time Management

    Dear All,
    What is the use of the TM04 schema in negative time management? For absences and attendances we use factoring, and the required parameters for factoring, like TKDIVI or TKSOLL, are available from function PARTT (partial-period parameters).
    Can you please explain?
    Regards
    Prasad

    Hi Sowmya,
    TM04 is the schema used for negative time management; it evaluates planned working hours. Negative time management captures only the deviations from the work schedule, i.e. overtime, absences, etc.
    The time management status is required for the integration with payroll:
    status 9 is for integration with payroll,
    and if you use positive time management (actual working hours) you use time management status 1 or 2.
    Hope this gives you an idea.
    Reward points if helpful.
    Regards
    Srinivas

  • Use of CATS in a negative time management scenario

    Hi Friends,
    I need your guidance on the following points:
    1) Is there any relevance in using CATS in negative time management?
    2) If we have to use CATS in negative time management, what should the time management status be, 0 or 7?
    3) If we transfer the data to HR, does it update infotype 2002?
    Hope for your reply, please.
    Thanks in advance
    Regards
    Rajesh Rai

    Rajesh,
    1. CATS is very much relevant in negative time management for payment of overtime and for recording absences and attendances. In CATS, the employee enters time using activity types, which need to be mapped to the relevant attendance/absence types. After approval, the times charged in the time sheets are transferred to infotypes 2001 / 2002 / 2010 (the last can be used for overtime).
    You can use the data transfer report RPTEXTPT for the Human Resources components.
    Please refer to the link below for more information:
    http://help.sap.com/saphelp_47x200/helpdata/en/dc/f9783b05efdb18e10000000a114084/frameset.htm
    2. You can use time management status (TMSTA) "9" - time evaluation of planned times - while using CATS.
    3. CATS data can very much be transferred to infotype 2002:
    http://help.sap.com/saphelp_47x200/helpdata/en/dc/f9783b05efdb18e10000000a114084/frameset.htm
    Hope this information helps
    Regards
    Raj

  • How to Calculate Overtime Using Negative Time Management

    Hi Experts,
    We have PA, OM & PY in our project scope. We have just configured TM up to the work schedule.
    But how do we calculate overtime when we are using negative time management?
    We are having International Payroll Driver.
    Regards,
    Irfan

    Hi,
    Thanks for your prompt response.
    But what if the overtime is based on basic pay, HRA and other wage components?
    How does the configuration of the wage type take place?
    Regards,
    Irfan

  • Cannot use duplicate table name error in AMDP

    Hi Experts,
    I'm developing an application which has the following architecture
    SAP UI5->Gateway->ABAP Managed DB procedures(AMDP)->HANA SP(Stored Procedure)
    We are having a very peculiar problem where the gateway service works inconsistently for the same input data:
    if I execute the same service n times, I get the results successfully, say, n-3 times, but the other 3 times I get an "RFC Error: Error while executing Database procedure".
    There is no change in the input data or DB table data during all the executions.
    Running the stored procedure stand alone in HANA studio works all the time.
    We tried executing the AMDP from SE24 and the same behavior occurred.
    When troubleshooting, we found that the exception occurs inside the AMDP when the CALL statement for the database procedure is executed.
    Could you please advise what the possible reason for this could be: a tuning parameter, or unavailability of DB connections?
    Or could you please suggest some other troubleshooting mechanism by which we can zero in on the cause?
    When I go into debug mode I can see the exact error:
    cannot use duplicate table name:  [288] SAPXXX.ZCL_PLAN_MAINTENANCE=>GET_COMP_TYPE#stub#20140905055908: line 15 col 3 (at pos 492): cannot use duplicate table name exception:
    where ZCL_PLAN_MAINTENANCE=>GET_COMP_TYPE is the AMDP class and method, and SAPXXX is the schema.
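    For reference, the call chain described above ends in an AMDP method whose SQLScript body is wrapped in a generated #stub procedure, which is the level at which the error is raised. A minimal, purely illustrative sketch of such a class; the table type, database table and parameter names are invented, and only the AMDP plumbing itself (marker interface, BY DATABASE PROCEDURE syntax) is standard:

    CLASS zcl_plan_maintenance DEFINITION PUBLIC FINAL CREATE PUBLIC.
      PUBLIC SECTION.
        INTERFACES if_amdp_marker_hdb.   "marks the class as AMDP-capable
        TYPES tt_comp TYPE STANDARD TABLE OF zcomp_type WITH EMPTY KEY. "hypothetical structure
        CLASS-METHODS get_comp_type
          IMPORTING VALUE(iv_plan_id) TYPE char10  "hypothetical key
          EXPORTING VALUE(et_comp)    TYPE tt_comp.
    ENDCLASS.

    CLASS zcl_plan_maintenance IMPLEMENTATION.
      METHOD get_comp_type BY DATABASE PROCEDURE FOR HDB
                           LANGUAGE SQLSCRIPT
                           OPTIONS READ-ONLY
                           USING zcomp_table.      "hypothetical DB table
        -- the generated ...#stub#... procedure wraps this body; nested
        -- procedure CALLs also execute at this level
        et_comp = SELECT * FROM zcomp_table
                   WHERE plan_id = :iv_plan_id;
      ENDMETHOD.
    ENDCLASS.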

    I reported this to Adobe customer support on 11/29, and here is their response:
    Wednesday, December 5, 2007 12:51:27 PM PST
    Hello John,
    Thank you for contacting Adobe® Web Support for assistance with Adobe
    Photoshop Elements® 6.0.
    I understand that images are deleted if you accidentally try to move
    them to a folder that already includes a file of the same name.
    Thank you for bringing this to our attention. I was able to replicate
    this behavior as well. The best method to report errors of this nature
    is using the following form on our website:
    http://www.adobe.com/cfusion/mmform/index.cfm?name=wishform
    I will report this to the product team through my channels. You may want to submit this issue through the web form as it goes directly to the product development team.
    I hope this information helps to resolve your issue. If you require
    further assistance with this issue, please update your web case with
    complete details, including what steps you have applied and any error
    messages you are receiving.
    You may also call Technical Support at (800) 642-3623. We are available from 6:00 am to 5:00 pm Monday - Friday, Pacific Time.
    Kind regards,
    Alan C.
    Adobe Web Support

  • Delta Sync taking too much time on refreshing of tables

    Hi,
    I am working on Smart Service Manager 3.0. I have come across a scenario where the delta sync is taking too much time.
    The requirement is that if we update the stock quantity, the stock should be updated instantaneously.
    To achieve this, we have to refresh four stock tables at every sync so that the updated quantity is reflected on the device.
    This takes a lot of time (3 to 4 minutes), which is highly unacceptable from a user perspective.
    Could anyone please suggest something so that only the tables on which an action was carried out get refreshed?
    For example, the CTStock table should be refreshed only if I transfer stock, and get updated accordingly,
    not in any other scenario, such as changing status from accept to driving or anything else unrelated to stocks.
    Thanks,
    Star
    Tags edited by: Michael Appleby

    Hi fiontan,
    Thanks a lot for the response!
    Yeah, I know it's a lot of code, but I thought it would be more informative if the whole function was quoted.
    I'm in fact using the PrintWriter to wrap the BufferedWriter, but am not using the print() method.
    Does it save any time to use the print() method?
    The place where the delay occurs is the while loop shown below:
    while (allitems.hasMoreElements()) {
        String aRow = "";
        XDItem item = (XDItem) allitems.nextElement();
        for (int i = 0; i < props.length; i++) {
            String value = item.getStringValue(props[i]);
            if (value == null || value.equalsIgnoreCase("null"))
                value = "";
            if (i == 0)
                aRow = value;
            else
                aRow += ("\t" + value);
        }
        startTime1 = System.currentTimeMillis();
        System.out.println("time before writing to buffer: " + startTime1);
        bufferWrt.write(aRow.toCharArray());
        out.flush();       // added by rosmon to check extra time taken for extraction
        bufferWrt.flush(); // added by rosmon; note: flushing on every row defeats the buffering
        bufferWrt.newLine();
        startTime2 = System.currentTimeMillis();
        System.out.println("time after writing to buffer: " + startTime2);
    }
    What exactly happens is that after a few loops it just seems to sleep for around 20 seconds, then starts off again, and it goes on like that until the records are done.
    Please let me know if you have any idea why this is happening! This bug is giving me a scare.
    Thanks in advance

  • Some Questions on Time Management PCR's and Functions

    Hi Experts,
    I am confused while reading some PCRs related to Time Management. Please clarify:
    1. "FILLPV T": why do we use this operation, and what is the need for it?
    2. "OUTTPORIGS": what are the origin statuses, how many origin statuses are there in SAP, and is there a related table? Please tell me.
    3. What is the difference between RPTQTA00 and RPTIME00?
    Please help me.
    Thanks.

    Hi All,
    Please give me some examples to explain FILLPV, so that I can understand it more easily.
    And how many origin statuses are there in SAP?
    What is the INTKY operation, and why do we use it?
    How can we delete the time evaluation results?
    Thanks.

  • How to read data from clusters ( Time Management )

    Hi All,
    In what different ways can we read data from the cluster tables related to Time Management?
    Can somebody help me, please?
    Thanks,
    Sankar.

    Hi,
    Kindly check the following thread:
    Cluster database
    It has sample code to read data from and store data in a cluster table.
    I assume you are looking for cluster tables in HR.
    Some useful HR tables:
    HRP1000 - Infotype 1000 DB Table
    HRP1001 - Infotype 1001 DB Table
    HRP1028 - Infotype 1028 DB Table
    Paxxxx - transparent tables for infotypes
    PCL1 - HR Cluster 1
    PCL2 - HR Cluster 2
    PCL3 - HR Cluster 3
    PCL4 - HR Cluster 4
    PCL5 - HR/RP Cluster 5; HR Planning Usage
    PERNR - Standard Selections for HR Master Data Reporting (Structure)
    Kindly check the following link for some useful HR tables:
    http://www.atomhr.com/know_preview/SAP_HR_tables.htm
    Hope this helps!
    best regards,
    Thangesh
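    Besides the function modules (such as HR_TIME_RESULTS_GET shown earlier on this page), cluster B2 can also be read directly from PCL2 with the standard import macros. A minimal sketch from memory; the include names and key fields should be verified in your release:

    REPORT ZREAD_CLUSTER_B2.

    "Data definitions for cluster B2 and the PCL2 buffer
    INCLUDE RPC2B200.   "cluster B2 data definitions (international)
    INCLUDE RPPPXD00.   "PCL1/PCL2 buffer: data definitions
    INCLUDE RPPPXD10.   "PCL1/PCL2 buffer: common part

    START-OF-SELECTION.
      "Fill the cluster key, then import the time evaluation results
      B2-KEY-PERNR = '00001234'.   "personnel number (example value)
      B2-KEY-PABRJ = '2008'.       "payroll year
      B2-KEY-PABRP = '01'.         "period
      RP-INIT-BUFFER.
      RP-IMP-C2-B2.                "import macro for cluster B2
      IF RP-IMP-B2-SUBRC = 0.
        LOOP AT ZES.               "daily time balances
          WRITE: / ZES-REDAY, ZES-ZTART, ZES-ANZHL.
        ENDLOOP.
      ENDIF.

    INCLUDE RPPPXM00.   "PCL1/PCL2 buffer: subroutines (kept at the end)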

  • DBM - Time Management - CATS Transfer to CO

    Hello DBM Gurus,
    We are trying to set up Time Management in our DBM implementation (6.04).
    The DBM transactions work and create timestamps in CATS as expected, but we seem to have a problem with the integration between CATS and CO. The entries in CATS do not create CO transactions on the internal order of the DBM order (no entries are created in table CATSCO).
    We have followed all the instructions we could find, but that is still not enough.
    Are there any specific steps relevant to this problem (transfer from CATS to CO) that we might have missed?
    Do you know of any cookbook or helpful documentation on this issue?
    thanks in advance,
    Daniel Gamss

    You can refer to this link for the detailed process of interfacing data from HR to CO, PS, MM, etc.:
    [http://www.****************/showthread.php?t=11938]
    Normally, after booking a timesheet in CAT2, the data is saved in table CATSDB.
    After approval the data exists in CATSCO, CATSPS and CATSMM, and then we use CAT5 to interface the data from HR to PS, CAT7 to interface from HR to CO, and so on.
    In your case no data exists in CATSCO, so check whether the data has already been approved.

  • FMs not returning ZES table values

    Hi Experts,
    I am an ABAPer, so the solution to this may be very simple, but not to me.
    While fetching cluster B2 data via various FMs, the entries of the ZES table are not coming back.
    I have used HR_FORMS_TIM_GET_B2_RESULTS and HR_TIME_RESULTS_IN_INTERVAL.
    While using the second FM I observed that it calls HR_TIME_RESULTS_GET and does receive the ZES table data,
    but the entries get deleted based on a few conditions that depend on the T549Q payroll periods (obtained from HR_PAYROLL_PERIODS_GET).
    So I need to know what configuration mistakes or time evaluation flaws lead to these kinds of cases.
    Thanks in advance,
    Sourav

    Hi,
    I would suggest asking a functional consultant to check the ZES table using PT66, to see whether data is stored in ZES at all.
    As the cluster structure has ZES, data should be stored there depending on the time type configuration.
    Thanks,
    Ketul
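    As a programmatic cross-check to complement the PT66 suggestion above, HR_TIME_RESULTS_GET can return ZES directly via its GET_ZES table parameter. A minimal sketch; the line structure name is from memory, so check the FM's TABLES parameter typing in SE37:

    DATA: T_ZES TYPE PC2B5 OCCURS 0 WITH HEADER LINE. "ZES entries (structure name from memory)

    CALL FUNCTION 'HR_TIME_RESULTS_GET'
      EXPORTING
        GET_PERNR = P_PERNR   "personnel number to check
        GET_PABRJ = P_PABRJ   "payroll year
        GET_PABRP = P_PABRP   "period
      TABLES
        GET_ZES   = T_ZES
      EXCEPTIONS
        OTHERS    = 1.

    IF SY-SUBRC = 0 AND T_ZES[] IS INITIAL.
      "Cluster read succeeded but ZES is empty: this points to time
      "evaluation / time type configuration rather than to the reading FM
    ENDIF.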
