Large volume tables in SAP

Hello All,
Does anyone have a list of all the large-volume tables in SAP (tables which might cause problems in SELECT queries)?

Hi Nirav,
There is no such specific list. But irrespective of how much data a table holds, if you supply the full primary key in the SELECT query there will be no issue with the SELECT (see the sketch below).
Still, if you want to find the largest tables, check transaction DB02.
Regards,
Atish
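
To illustrate Atish's point, here is a minimal ABAP sketch (BKPF and the key values are only examples): a SELECT that supplies the complete primary key is served by a unique index lookup and stays fast regardless of table size.

* Full primary key supplied (MANDT is implicit), so this is an
* indexed single-record access even on a huge table like BKPF.
DATA ls_bkpf TYPE bkpf.
SELECT SINGLE * FROM bkpf INTO ls_bkpf
  WHERE bukrs = '1000'           " company code    (key)
    AND belnr = '0100000001'     " document number (key)
    AND gjahr = '2010'.          " fiscal year     (key)
IF sy-subrc = 0.
  WRITE: / 'Document found:', ls_bkpf-belnr.
ENDIF.
* By contrast, selecting on a non-key field such as BKTXT forces a
* full table scan on a large table unless a secondary index exists.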

Similar Messages

  • How to recover from Large Volume Table Insert

    We are inserting 1 million rows into a table using the ODI IKM Append. If the job fails halfway through because of a tablespace or other issue, how do we recover? Are 50% of the rows committed to the DB? What is the best way to handle this type of situation?

    You have different options for this. You can commit every 1000 rows in the KM, or you can run the DML as one transaction and commit at the end of the step.
    Try to use a load plan for better restart options. If you can't provide more undo tablespace, then try loading fewer records at a time, say 1 to 100,000 and then 100,001 to 200,000 (sketched below). This is a little more complex, but you have to sacrifice something to overcome the issue.
    I believe you must have the data in the C$ and I$ steps, so you can process that data instead of loading again from the original source. It depends on how you design the overall process; you can judge it best. Talk with your DBA about these issues as well.
    Thanks
    http://bhabaniranjan.com/
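
    A rough sketch of that range-by-range idea in plain PL/SQL (not ODI KM code; SRC_TAB, TGT_TAB and the numeric key SRC_ID are hypothetical names):

    DECLARE
      c_chunk CONSTANT PLS_INTEGER := 100000;
      v_lo    PLS_INTEGER := 1;
      v_max   PLS_INTEGER;
    BEGIN
      SELECT MAX(src_id) INTO v_max FROM src_tab;
      WHILE v_lo <= v_max LOOP
        INSERT INTO tgt_tab (src_id, payload)
          SELECT src_id, payload
            FROM src_tab
           WHERE src_id BETWEEN v_lo AND v_lo + c_chunk - 1;
        COMMIT;  -- keeps undo small; each committed range is a restart point
        v_lo := v_lo + c_chunk;
      END LOOP;
    END;
    /

    On failure you only re-run the ranges after the last committed one, at the cost of losing all-or-nothing semantics.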

  • Create a GPT partition table and format with a large volume (solved)

    Hello,
    I'm having trouble creating a GPT partition table on a large volume (~6T). It is a hardware RAID 5 of 3 hard disk drives of 3T each (hence the resulting 6T volume).
    I tried creating a GPT partition table with gdisk, but it just fails, stopping here (I let it run for about 3 hours...):
    Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
    PARTITIONS!!
    Do you want to proceed? (Y/N): y
    OK; writing new GUID partition table (GPT) to /dev/md126.
    I also tried with parted but got the same result. Having no luck, I created a GPT partition table from Windows 7 with 2 NTFS partitions (15G, and the rest of the space for the other) and it worked just fine. I then tried to format the 15G partition as ext4 but, as with gdisk, mkfs.ext4 just never finishes.
    Some information:
    fdisk -l
    Disk /dev/sda: 256.1 GB, 256060514304 bytes, 500118192 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: dos
    Disk identifier: 0xd9a6c0f5
    Device Boot Start End Blocks Id System
    /dev/sda1 * 2048 104861695 52429824 83 Linux
    /dev/sda2 104861696 466567167 180852736 83 Linux
    /dev/sda3 466567168 500117503 16775168 82 Linux swap / Solaris
    Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk label type: dos
    Disk identifier: 0x00000000
    Device Boot Start End Blocks Id System
    /dev/sdb1 1 4294967295 2147483647+ ee GPT
    Partition 1 does not start on physical sector boundary.
    Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk label type: dos
    Disk identifier: 0x00000000
    Device Boot Start End Blocks Id System
    /dev/sdd1 1 4294967295 2147483647+ ee GPT
    Partition 1 does not start on physical sector boundary.
    Disk /dev/sde: 320.1 GB, 320072933376 bytes, 625142448 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: dos
    Disk identifier: 0x5ffb31fc
    Device Boot Start End Blocks Id System
    /dev/sde1 * 2048 625139711 312568832 7 HPFS/NTFS/exFAT
    Disk /dev/md126: 6001.1 GB, 6001143054336 bytes, 11720982528 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 65536 bytes / 131072 bytes
    Disk label type: dos
    Disk identifier: 0x00000000
    Device Boot Start End Blocks Id System
    /dev/md126p1 1 4294967295 2147483647+ ee GPT
    Partition 1 does not start on physical sector boundary.
    WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
    gdisk -l on my RAID volume (/dev/md126):
    GPT fdisk (gdisk) version 0.8.7
    Partition table scan:
    MBR: protective
    BSD: not present
    APM: not present
    GPT: present
    Found valid GPT with protective MBR; using GPT.
    Disk /dev/md126: 11720982528 sectors, 5.5 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 8E7D03F1-8C3A-4FE6-B7BA-502D168E87D1
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 11720982494
    Partitions will be aligned on 8-sector boundaries
    Total free space is 6077 sectors (3.0 MiB)
    Number Start (sector) End (sector) Size Code Name
    1 34 262177 128.0 MiB 0C01 Microsoft reserved part
    2 264192 33032191 15.6 GiB 0700 Basic data partition
    3 33032192 11720978431 5.4 TiB 0700 Basic data partition
    To make things clear: sda is an SSD on which Archlinux has been freshly installed (sda1 for root, sda2 for home, sda3 for swap), sde is a hard disk drive having Windows 7 installed on it. My goal with the 15G partition is to format it so I can mount /var on the HDD rather than on the SSD. The large volume will be for storage.
    So if anyone has any suggestion that would help me out with this, I'd be glad to read.
    Cheers
    Last edited by Rolinh (2013-08-16 11:16:21)

    Well, I finally decided to use software RAID, as I will not share this partition with Windows anyway and it seems a better choice than the fake RAID.
    Therefore, I used the mdadm utility to create my RAID 5:
    # mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
    # mkfs.ext4 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0
    It works like a charm.
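
    For reference, the mkfs options follow from the array geometry: stride = chunk size / block size, so stride=32 with 4 KiB blocks implies a 128 KiB chunk, and stripe-width = stride * number of data disks = 32 * 2 = 64 for a 3-disk RAID 5 (one disk's worth of parity). The chunk size is an assumption here; verify it with mdadm --detail /dev/md0.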

  • Initializing large volume 0FI_GL_4

    Just looking for some confirmation of the procedure for initializing the 0FI_GL_4 extractor when dealing with very large volumes.
    In the past, attempts at wide-open initialization loads for 0FI_GL_4 have failed at my client.
    I was thinking of running an initialization with data for each fiscal period. Then, for the last init containing the current month, I would make the initialization selections like this:
    07/2007 - 12/9999
    Then, since the initializations cover all of the previous periods as well as the future periods, would the delta loads function correctly? Can anyone confirm this method or give some more guidance on how you have initialized GL?
    Thanks!
    Justin

    Hi,
    I think you would be fine - but as such I have never done an init for GL...
    For FI you should go with an ODS as the 1st layer.
    Check this out ->
    http://help.sap.com/saphelp_bw33/helpdata/en/af/16533bbb15b762e10000000a114084/content.htm
    Table BWOM2_TIMEST serves to document the loading history of Financial Accounting line items. It also provides defined restart points following incorrect data requests.
    Hope it helps
    Gaurav

  • Processing large volumes of data in PL/SQL

    I'm working on a project which requires us to process large volumes of data on a weekly/monthly/quarterly basis, and I'm not sure we are doing it right, so any tips would be greatly appreciated.
    Requirement
    Source data is in a flat file in "short-fat" format i.e. each data record (a "case") has a key and up to 2000 variable values.
    A typical weekly file would have maybe 10,000 such cases i.e. around 20 million variable values.
    But we don't know which variables are used each week until we get the file, or where they are in the file records (this is determined via a set of meta-data definitions that the user selects at runtime). This makes identifying and validating each variable value a little more interesting.
    Target is a "long-thin" table, i.e. one record for each variable value (with numeric IDs as FKs to identify the parent variable and case).
    We only want to load variable values for cases which are entirely valid. This may be a merge i.e. variable values may already exist in the target table.
    There are various rules for validating the data against pre-existing data etc. These rules are specific to each variable, and have to be applied before we put the data in the target table. The users want to see the validation results - and may choose to bail out - before the data is written to the target table.
    Restrictions
    We have very limited permission to perform DDL e.g. to create new tables/indexes etc.
    We have no permission to use e.g. Oracle external tables, Oracle directories etc.
    We are working with standard Oracle tools i.e. PL/SQL and no DWH tools.
    DBAs are extremely resistant to giving us more disk space.
    We are on Oracle 9iR2, with no immediate prospect of moving to 10g.
    Current approach
    Source data is uploaded via SQL*Loader into static "short fat" tables.
    Some initial key validation is performed on these records.
    Dynamic SQL (plus BULK COLLECT etc) is used to pivot the short-fat data into an intermediate long-thin table, performing the validation on the fly via a combination of including reference values in the dynamic SQL and calling PL/SQL functions inside the dynamic SQL. This means we can pivot+validate the data in one step, and don't have to update the data with its validation status after we've pivoted it.
    This upload+pivot+validate step takes about 1 hour 15 minutes for around 15 million variable values.
    The subsequent "load to target table" step also has to apply substitution rules for certain "special values" or NULLs.
    We do this by BULK collecting the variable values from the intermediate long-thin table, for each valid case in turn, applying the substitution rules within the SQL, and inserting into/updating the target table as appropriate.
    Initially we did this via a SQL MERGE, but this was actually slower than doing an explicit check for existence and switching between INSERT and UPDATE accordingly (yes, that sounds fishy to me too).
    This "load" process takes around 90 minutes for the same 15 million variable values.
    Questions
    Why is it so slow? Our DBAs assure us we have lots of tablespace etc., and that the server is plenty powerful enough.
    Any suggestions as to a better approach, given the restrictions we are working under?
    We've looked at Tom Kyte's stuff about creating temporary tables via CTAS, but we have had serious problems with dynamic SQL on this project, so we are very reluctant to introduce more of it unless it's absolutely necessary. In any case, we have serious problems getting permissions to create DB objects - tables, indexes etc - dynamically.
    So any advice would be gratefully received!
    Thanks,
    Chris

    We have 8 "short-fat" tables to hold the source data uploaded from the source file via SQL*Loader (the SQL*Loader step is fast). The data consists simply of strings of characters, which we treat simply as VARCHAR2 for the most part.
    These tables consist essentially of a case key (a composite key initially) plus up to 250 data columns. 8*250 = 2000, so we can handle up to 2000 of these variable values. The source data may have any number of variable values in each record, but each record in a given file has the same structure. Each file-load event may have a different set of variables in different locations, so we have to map the short-fat columns COL001 etc. to the corresponding variable definition (for validation etc.) at runtime.
    CASE_ID VARCHAR2(13)
    COL001 VARCHAR2(10)
    ...
    COL250 VARCHAR2(10)
    We do a bit of initial validation in the short-fat tables, setting a surrogate key for each case etc (this is fast), then we pivot+validate this short-fat data column-by-column into a "long-thin" intermediate table, as this is the target format and we need to store the validation results anyway.
    The intermediate table looks similar to this:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10) -- from COL001 etc
    STATUS VARCHAR2(10) -- set during the pivot+validate process above
    The target table looks very similar, but holds cumulative data for many weeks etc:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10)
    We only ever load valid data into the target table.
    Chris

  • Processing large volume of idocs using BPM Processing

    Hi,
    I have a scenario in which SAP R/3 sends large volume say 30,000 DEBMAS Idocs to XI. XI then sends data to 3 legacy systems using jdbc adapter.
    I created a BPM process which waits for 4 hrs to collect all the idocs. This is what my BPM does:
    1. Wait for 4 hrs and collect the idocs.
    2. For every idoc, do an IDOC->JDBC message transformation.
    3. Append to a big list.
    4. Loop over the big list from step 3, and in the loop:
    5. Start a counter from 0 and increment it; append to a small list.
    6. If the counter reaches 100, send a batch JDBC message in a send step.
    7. Reset the counter after every send.
    8. Process the remaining list, i.e. if there is an odd count of say 5300 idocs, then the remaining 53 idocs are sent in another block.
    After sending 5000 idocs to above BPM following problems are there:
    1. I cannot read the workflow log, as the system does not respond.
    2. In the for-each loop over the big list of, say, 5000 idocs, only the first pass of 100 was processed; after that the workflow item does not move ahead. It remains in status "STARTED" but I see no further processing.
    Please tell me why certain work items are stuck. Is it because I have reached an upper limit, and is this the right approach? The main BPM process has also been hanging for the last 2 days.
    I have concerns about using BPM for processing such a high volume of idocs in production. Please advise, and thanks in advance.
    Regards
    Ashish

    Hi Ashish,
    Please read SAP's checklist for proper usage of BPMs: http://help.sap.com/saphelp_nw04/helpdata/en/43/d92e428819da2ce10000000a1550b0/content.htm
    One point I'm wondering about: why do you send the IDocs out of R/3 one by one instead of using packaging there? From a performance standpoint this is much better than a BPM.
    The SAP Checklist states the following:
    <i>"No Replacement for Mass Interfaces
    Check whether it would not be better to execute particular processing steps, for example, collecting messages, on the sender or receiver system.
    If you only want to collect the messages from one business system to forward them together to a second business system, you should do so by using a mass interface and not an integration process.
    If you want to split a message up into lots of individual messages, also use a mass interface instead of an integration process. A mass interface requires only a fraction of the back-end system and Integration-Server resources that an integration process would require to carry out the same task. "</i>
    Also you might want to have a look at the IDoc packaging capabilities within XI (available since SP14, I believe): http://help.sap.com/saphelp_nw04/helpdata/en/7a/00143f011f4b2ee10000000a114084/content.htm
    And here is Sravya's good blog about this topic: /people/sravya.talanki2/blog/2005/12/09/xiidoc-message-packages
    If for whatever reason you can't or don't want to use the IDoc packets from R/3 or XI there are other points on which you can focus for optimizing your process:
    In the section "Using the Integration Server Efficiently" there is an overview of which steps are costly in their resource consumption and which are not. Mappings are one of the steps that tend to consume a lot of resources, and unless it is a multi-mapping that cannot be executed outside a BPM, there is always the option to do the mapping in the interface determination either before or after the BPM. So I would suggest that if your step 2 is not a multi-mapping, you try to execute it before entering the BPM and just handle the JDBC messages in the BPM.
    Wait steps are also costly, so reducing the time in your wait step could lead to better performance. Or, if possible, you could omit the wait step and just create a process that waits for 100 messages and then processes them.
    Regards
    Christine

  • Dealing with large volumes of data

    Background:
    I recently "inherited" support for our company's "data mining" group, which amounts to a number of semi-technical people who have received introductory level training in writing SQL queries and been turned loose with SQL Server Management
    Studio to develop and run queries to "mine" several databases that have been created for their use.  The database design (if you can call it that) is absolutely horrible.  All of the data, which we receive at defined intervals from our
    clients, is typically dumped into a single table consisting of 200+ varchar(x) fields.  There are no indexes or primary keys on the tables in these databases, and the tables in each database contain several hundred million rows (for example one table
    contains 650 million rows of data and takes up a little over 1 TB of disk space, and we receive weekly feeds from our client which adds another 300,000 rows of data).
    Needless to say, query performance is terrible, since every query ends up being a table scan of 650 million rows of data.  I have been asked to "fix" the problems.
    My experience is primarily in applications development.  I know enough about SQL Server to perform some basic performance tuning and write reasonably efficient queries; however, I'm not accustomed to having to completely overhaul such a poor design
    with such a large volume of data.  We have already tried to add an identity column and set it up as a primary key, but the server ran out of disk space while trying to implement the change.
    I'm looking for any recommendations on how best to implement changes to the table(s) housing such a large volume of data.  In the short term, I'm going to need to be able to perform a certain amount of data analysis so I can determine the proper data
    types for fields (and whether any existing data would cause a problem when trying to convert the data to the new data type), so I'll need to know what can be done to make it possible to perform such analysis without the process consuming entire days to analyze
    the data in one or two fields.
    I'm looking for reference materials / information on how to deal with these issues, particularly when a large volume of data is involved. I'm also looking for information on how to load large volumes of data into the database (current processing of a typical data file takes 10-12 hours to load 300,000 records). Any guidance that can be provided is appreciated. If more specific information is needed, I'll be happy to try to answer any questions you might have about my situation.

    I don't think you will find a single magic bullet to solve all the issues. The main point is that there is no shortcut for major schema and index changes. You will need at least 120% free space to create a clustered index and facilitate major schema changes.
    I suggest an incremental approach to address your biggest pain points. You mention it takes 10-12 hours to load 300,000 rows, which suggests there may be queries involved in the process that require full scans of the 650 million row table. Perhaps some indexes targeted at improving that process would be a good first step.
    What SQL Server version and edition are you using?  You'll have more options with Enterprise (partitioning, row/page compression). 
    Regarding the data types, I would take a best guess at the proper types and run a query with TRY_CONVERT (assuming SQL 2012) to determine counts of rows that conform or not for each column. Then create a new table (using SELECT INTO) that has strongly typed columns for those columns that are not problematic, plus the others that cannot easily be converted, and then drop the old table and rename the new one. You can follow up later to address column data corrections and/or transformations.
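
    A sketch of that profiling idea (TRY_CONVERT requires SQL Server 2012+; dbo.BigTable and Col001 are made-up names):

    -- count how many existing values would survive a conversion to INT
    SELECT COUNT(*) AS total_rows,
           SUM(CASE WHEN Col001 IS NULL
                      OR TRY_CONVERT(int, Col001) IS NOT NULL
                    THEN 1 ELSE 0 END) AS convertible_rows
    FROM dbo.BigTable;

    -- then materialize a strongly typed copy and swap it in
    SELECT TRY_CONVERT(int, Col001) AS Col001
    INTO dbo.BigTable_new
    FROM dbo.BigTable;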
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Split of a Large Volume Outbound Idoc

    Hi,
    Can anyone tell me how the split of a large-volume outbound IDoc works?
    /Elvez

    Hi Elvez,
    One way to split your IDoc is to group it into different segments.
    You can create segments and group your data logically.
    Go through the following link. It will give you good tips on IDOCs
         http://www.netweaverguru.com/EDI/HTML/IDocBook.htm
      other helpful links are...
         ALE/ IDOC/ XML
         http://www.sapgenie.com/sapgenie/docs/ale_scenario_development_procedure.doc
         http://www.thespot4sap.com/Articles/SAP_XML_Business_Integration.asp
         http://help.sap.com/saphelp_srm30/helpdata/en/72/0fe1385bed2815e10000000a114084/content.htm
    Good luck. Reward me for the same.
    Thanks
    Ashok

  • Retrive SQL from Webi report and process it for large volume of data

    We have a scenario where we need to extract large volumes of data into flat files and distribute them from the 'Teradata' warehouse; we usually call these 'Extracts'. But the requirement is that business users want to build their own 'Adhoc Extracts'. The only way I could think of to achieve this is: build a universe, create the query, save the report and do not run it, then write a RAS SDK program to retrieve the SQL from the reports, save it to a .txt file, and process it directly in Teradata.
    Is there any predefined Solution available with SAP BO or any other tool for this kind of Scenarios?

    Hi Shawn,
    Do we have a VB macro to retrieve the SQL queries of the data providers of all the WebI reports in the CMS?
    Any information, or even a direction where I can find information, would be helpful.
    Thanks in advance.
    Ashesh

  • Problem to update very large volume of data for 2LIS_04* extr.

    Hi
    I have problem with jobs for 2LIS_04* extractors using Queued Delta.
    There is an interface between the R3 system and another production system, and 3 or 4 times a month a very large volume of data is sent to R3.
    The job then runs very long and does not pull data to RSA7.
    How can I resolve this problem?
    Our R3 system is PI_BASIS 2005_1_620.
    Thanks
    Adam

    You can check these SAP Notes; they will help you:
    How can downtime be reduced for setup table update
    SAP Note Number: 753654
    Performance improvement for filling the setup tables
    SAP Note Number: 436393
    LBWE: Performance for setup of extract structures
    SAP Note Number: 437672

  • Convert large volumes in Oracle

    I am trying to convert large tables from one instance to another (a simple conversion).
    The tables have the same layout in both systems, but the conversion takes ages.
    So now we are trying to convert via Oracle (exp/imp).
    The schema IDs are different, but that issue can be tackled.
    Now we have found that the Oracle tables in the target system are always created with a 'not null' constraint,
    which makes the import impossible.
    Any ideas how to tackle this, or any other ideas how to transfer large volumes?

    Hi,
    If I understand correctly, you want to copy a table's contents from one system to another, and at the moment the constraints do not allow you to insert the records at the target site.
    So you exported the table from the source system, where the fields have no "NOT NULL" constraint, while at the target site the same fields have a "NOT NULL" constraint.
    Under this circumstance, you may create the table first without the "NOT NULL" fields at the destination and import the records into it.
    You can use "brspace -f tbexport ..." for the export/import operations. Check SAP Note 646681 - Reorganizing tables with BRSPACE.
    Best regards,
    Orkun Gedik

  • Internal table in sap script

    Hello All ,
    I have an internal table with tracking numbers, and I want to print all the numbers in that internal table in a SAPscript form.
    Please advise.
    Thanks
    Moderator message:  please search for available information before asking.
    locked by: Thomas Zloch on Sep 13, 2010 1:09 PM

    Hi,
    You can create a subroutine, pass the table entries into variables, and then print them, as sketched below.
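
    A minimal sketch of that technique (the program name ZTRACK_SUBR and the symbols are made up). SAPscript exchanges values with the subroutine through ITCSY name/value tables, one scalar symbol per call, so you either concatenate the tracking numbers into one symbol or call the routine once per line:

    In the SAPscript window:
    /: PERFORM GET_TRACK IN PROGRAM ZTRACK_SUBR
    /: USING &VBDKL-VBELN&
    /: CHANGING &TRACKNO&
    /: ENDPERFORM
    *  Tracking: &TRACKNO&

    In program ZTRACK_SUBR:
    FORM get_track TABLES in_par  STRUCTURE itcsy
                          out_par STRUCTURE itcsy.
      DATA lv_track(255) TYPE c.
      READ TABLE in_par WITH KEY name = 'VBDKL-VBELN'.
      " build LV_TRACK from your internal table of tracking
      " numbers here, e.g. concatenated with spaces (logic omitted)
      READ TABLE out_par WITH KEY name = 'TRACKNO'.
      out_par-value = lv_track.
      MODIFY out_par INDEX sy-tabix.
    ENDFORM.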

  • Extracting data from Z-table from SAP R/3 to BW

    Hi all
    I want to extract data from a Z-table in an SAP R/3 system to a BW system. Currently I am on BW 3.5. Since it is a Z-table I don't have a standard extractor for it, and I don't know how to create one. Can anyone provide step-by-step documentation on how to extract data from a non-standard SAP table?

    Hi
    You need to create a Generic DataSource on the Z-table you want to get the data from.
    Go to transaction RSO2 to create the generic DataSource.
    Give the technical name of the DataSource under the DataSource type you want and click Create. Then give the description and the application component under which you want to see the DataSource,
    enter the Z-table name under View/Table, and save.
    Here you can tick the checkboxes to make fields hidden or selection fields.
    Regards
    Ravi
    Edited by: Ravi Naalla on Aug 25, 2009 8:24 AM

  • How to Send Internal table to SAP Spool using Function Modules or Methods?

    Hi Experts,
    How to Send Internal table to SAP Spool using Function Modules or Methods?
    Thanks ,
    Kiran

    This is my code.
    I still get "no ABAP list data" for the spool, even though I can see it in SP01.
    REPORT  Z_MAIL_PAYSLIP.
    * Declaration Part *
    tables: PERNR, PV000, T549Q, V_T514D, HRPY_RGDIR.
    infotypes: 0000, 0001, 0105, 0655.
    data: begin of ITAB occurs 0,
      MTEXT(25) type C,
      PERNR like PA0001-PERNR,
      ABKRS like PA0001-ABKRS,
      ENAME like PA0001-ENAME,
      USRID_LONG like PA0105-USRID_LONG,
    end of ITAB.
    data: W_BEGDA like HRPY_RGDIR-FPBEG,
          W_ENDDA like HRPY_RGDIR-FPEND.
    data: RETURN like BAPIRETURN1 occurs 0 with header line.
    data: P_INFO like PC407,
          P_FORM like PC408 occurs 0 with header line.
    data: P_IDX type I,
          MY_MONTH type T549Q-PABRP,
          STR_MY_MONTH(2) type C,
          MY_YEAR type T549Q-PABRJ,
          STR_MY_YEAR(4) type C,
          CRLF(2) type x value '0D0A'.
    data: W_CMONTH(10) type C.
    data: TAB_LINES type I,
          ATT_TYPE like SOODK-OBJTP.
    data: begin of P_INDEX occurs 0,
            INDEX type I,
    end of P_INDEX.
    constants: begin of F__LTYPE, "type of line
       CMD like PC408-LTYPE value '/:',  "command
       TXT like PC408-LTYPE value 's',   "textline
    end of F__LTYPE.
    constants: begin of F__CMD, "commands
      NEWPAGE like PC408-LINDA value '',
    end of F__CMD.
    data: P_LIST like ABAPLIST occurs 1 with header line.
    *data: OBJBIN like SOLISTI1 occurs 10 with header line,
    data: OBJBIN like  LVC_S_1022 occurs 10 with header line,
          DOCDATA like SODOCCHGI1,
          OBJTXT like SOLISTI1 occurs 10 with header line,
          OBJPACK like SOPCKLSTI1 occurs 1 with header line,
          RECLIST like SOMLRECI1 occurs 1 with header line,
          OBJHEAD like SOLISTI1 occurs 1 with header line,
          it_mess_att LIKE solisti1 OCCURS 0 WITH HEADER LINE,
          gd_buffer type string,
          l_no_of_bytes TYPE i,
          l_pdf_spoolid LIKE tsp01-rqident,
          l_jobname     LIKE tbtcjob-jobname.
    data: file_length  type int4,
          spool_id     type rspoid,
          line_cnt     type i.
    *-------------------------------------------------------------------*
    * INITIALIZATION *
    OBJBIN = ' | '.
    append OBJBIN.
    OBJPACK-HEAD_START = 1.
    data: S_ABKRS like PV000-ABKRS.
    data: S_PABRP like T549Q-PABRP.
    data: S_PABRJ like T549Q-PABRJ.
    * SELECTION SCREEN                                                  *
    selection-screen begin of block BL1.
    parameters: PAY_VAR like BAPI7004-PAYSLIP_VARIANT default 'ESS_PAYSLIPS' obligatory.
    selection-screen end of block BL1.
    START-OF-SELECTION.
      s_ABKRS = PNPXABKR.
      S_PABRP = PNPPABRP.
      s_pabrj = PNPPABRJ.
      w_begda = PN-BEGDA.
      w_endda = PN-ENDDA.
    get pernr.
    *                                 "Check active employees
      rp-provide-from-last p0000 space pn-begda  pn-endda.
      CHECK P0000-STAT2 IN PNPSTAT2.
    *                                 "Check Payslip Mail flag
      rp-provide-from-last p0655 space pn-begda  pn-endda.
      CHECK P0655-ESSONLY = 'X'.
      rp-provide-from-last p0001 space pn-begda  pn-endda.
    *                                 "Find email address
      RP-PROVIDE-FROM-LAST P0105 '0030' PN-BEGDA PN-ENDDA.
      if p0105-usrid_LONG ne ''.
        ITAB-PERNR      = P0001-PERNR.
        ITAB-ABKRS      = P0001-ABKRS.
        ITAB-ENAME      = P0001-ENAME.
        ITAB-USRID_LONG = P0105-USRID_LONG.
        append itab.
        clear itab.
      endif.
      "SY-UCOMM ='ONLI'
    END-OF-SELECTION.
    *------------------------------------------------------------------*
    * start-of-selection
      write : / 'Payroll Area        : ', S_ABKRS.
      write : / 'Payroll Period/Year : ', STR_MY_MONTH, '-', STR_MY_YEAR.
      write : / 'System Date         : ', SY-DATUM.
      write : / 'System Time         : ', SY-UZEIT.
      write : / 'User Name           : ', SY-UNAME.
      write : / SY-ULINE.
      sort ITAB by PERNR.
      loop at ITAB.
        clear : P_INFO, P_FORM, P_INDEX, P_LIST, OBJBIN, DOCDATA, OBJTXT, OBJPACK, RECLIST, TAB_LINES.
        refresh : P_FORM, P_INDEX, P_LIST, OBJBIN, OBJTXT, OBJPACK, RECLIST.
    *                                                  Retrieve Payroll results sequence number for this run
        select single * from HRPY_RGDIR where PERNR eq ITAB-PERNR
                                        and FPBEG ge W_BEGDA
                                        and FPEND le W_ENDDA
                                        and SRTZA eq 'A'.
    *                                                  Produce payslip for those payroll results
        if SY-SUBRC = 0.
          call function 'GET_PAYSLIP'
            EXPORTING
              EMPLOYEE_NUMBER = ITAB-PERNR
              SEQUENCE_NUMBER = HRPY_RGDIR-SEQNR
              PAYSLIP_VARIANT = PAY_VAR
            IMPORTING
              RETURN          = RETURN
              P_INFO          = P_INFO
            TABLES
              P_FORM          = P_FORM.
          check RETURN is initial.
    *                                                 remove linetype from generated payslip
          loop at p_form.
            objbin = p_form-linda.
            append objbin.
            line_cnt = line_cnt + 1.
          endloop.
          file_length = line_cnt * 1022.
    *                                                 create spool file of paylsip
          CALL FUNCTION 'SLVC_TABLE_PS_TO_SPOOL'
            EXPORTING
              i_file_length = file_length
            IMPORTING
              e_spoolid     = spool_id
            TABLES
              it_textdata   = objbin.
          IF sy-subrc EQ 0.
            WRITE spool_id.
          ENDIF.
          DESCRIBE table objbin.
          DATA PDF LIKE TLINE OCCURS 100 WITH HEADER LINE.
          CALL FUNCTION 'CONVERT_ABAPSPOOLJOB_2_PDF'
            EXPORTING
              SRC_SPOOLID                    = spool_id
              NO_DIALOG                      = ' '
              DST_DEVICE                     = 'MAIL'
    *      PDF_DESTINATION                =
    *    IMPORTING
    *      PDF_BYTECOUNT                  = l_no_of_bytes
    *      PDF_SPOOLID                    = l_pdf_spoolid
    *      LIST_PAGECOUNT                 =
    *      BTC_JOBNAME                    =
    *      BTC_JOBCOUNT                   =
            TABLES
              PDF                            = pdf
            EXCEPTIONS
              ERR_NO_ABAP_SPOOLJOB           = 1
              ERR_NO_SPOOLJOB                = 2
              ERR_NO_PERMISSION              = 3
              ERR_CONV_NOT_POSSIBLE          = 4
              ERR_BAD_DESTDEVICE             = 5
              USER_CANCELLED                 = 6
              ERR_SPOOLERROR                 = 7
              ERR_TEMSEERROR                 = 8
              ERR_BTCJOB_OPEN_FAILED         = 9
              ERR_BTCJOB_SUBMIT_FAILED       = 10
              ERR_BTCJOB_CLOSE_FAILED        = 11
              OTHERS                         = 12.
          IF SY-SUBRC <> 0.
    * MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
    *         WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
          ENDIF.
    *Download PDF file C Drive
      CALL FUNCTION 'GUI_DOWNLOAD'
        EXPORTING
          filename = 'C:\itab_to_pdf.pdf'
          filetype = 'BIN'
        TABLES
          data_tab = pdf.
    * Transfer the 132-long strings to 255-long strings
    *  LOOP AT pdf.
    *    TRANSLATE pdf USING ' ~'.
    *    CONCATENATE gd_buffer pdf INTO gd_buffer.
    *  ENDLOOP.
    *  TRANSLATE gd_buffer USING '~ '.
    *  DO.
    *    it_mess_att = gd_buffer.
    *    APPEND it_mess_att.
    *    SHIFT gd_buffer LEFT BY 255 PLACES.
    *    IF gd_buffer IS INITIAL.
    *      EXIT.
    *    ENDIF.
    *  ENDDO.
          OBJHEAD = 'Objhead'.
          append OBJHEAD.
    * preparing email subject
          concatenate W_ENDDA(6)
                    ' Payslip-'
                    ITAB-ENAME+0(28)
                    ITAB-PERNR+4(4) ')'
                 into DOCDATA-OBJ_DESCR.
          DOCDATA-OBJ_NAME = 'Pay Slip'.
          DOCDATA-OBJ_LANGU = SY-LANGU.
          OBJTXT = 'Pay Slip.'.
          append OBJTXT.
    *prepare email lines
          OBJTXT = DOCDATA-OBJ_DESCR.
          append OBJTXT.
          OBJTXT = 'Please find enclosed your current payslip.'.
          append OBJTXT.
    * Write Attachment(Main)
    * 3 has been fixed because OBJTXT has fix three lines
          read table OBJTXT index 3.
    *    DOCDATA-DOC_SIZE = ( 3 - 1 ) * 255 + strlen( OBJTXT ).
          clear OBJPACK-TRANSF_BIN.
          OBJPACK-HEAD_START = 1.
          OBJPACK-HEAD_NUM = 0.
          OBJPACK-BODY_START = 1.
          OBJPACK-BODY_NUM = 3.
          OBJPACK-DOC_TYPE = 'RAW'.
          append OBJPACK.
    * Create Message Attachment
          ATT_TYPE = 'PDF'.
          describe table OBJBIN lines TAB_LINES.
          read table OBJBIN index TAB_LINES.
    *    OBJPACK-DOC_SIZE = ( TAB_LINES - 1 ) * 255 + strlen( OBJBIN ).
          OBJPACK-TRANSF_BIN = 'X'.
          OBJPACK-HEAD_START = 1.
          OBJPACK-HEAD_NUM = 0.
          OBJPACK-BODY_START = 1.
          OBJPACK-BODY_NUM = TAB_LINES.
          OBJPACK-DOC_TYPE = ATT_TYPE.
          OBJPACK-OBJ_NAME = 'ATTACHMENT'.
          OBJPACK-OBJ_DESCR = 'Payslip'.
          append OBJPACK.
    * Create receiver list refresh RECLIST.
          clear RECLIST.
          RECLIST-RECEIVER = itab-USRID_long.
          translate RECLIST-RECEIVER to lower case.
          RECLIST-REC_TYPE = 'U'.
          append RECLIST.
    * Send the document
    *SO_NEW_DOCUMENT_ATT_SEND_API1
          call function 'SO_DOCUMENT_SEND_API1'
            exporting
              DOCUMENT_DATA = DOCDATA
              PUT_IN_OUTBOX = 'X'
              COMMIT_WORK = 'X'
    * IMPORTING
    *   SENT_TO_ALL =
    *   NEW_OBJECT_ID =
            tables
              PACKING_LIST  = OBJPACK
              OBJECT_HEADER = OBJHEAD
              CONTENTS_BIN  = pdf
              CONTENTS_TXT  = OBJTXT
    *   CONTENTS_HEX =
    *   OBJECT_PARA =
    *   OBJECT_PARB =
              RECEIVERS = RECLIST
            exceptions
              TOO_MANY_RECEIVERS = 1
              DOCUMENT_NOT_SENT = 2
              DOCUMENT_TYPE_NOT_EXIST = 3
              OPERATION_NO_AUTHORIZATION = 4
              PARAMETER_ERROR = 5
              X_ERROR = 6
              ENQUEUE_ERROR = 7
              others = 8.
          if SY-SUBRC NE 0.
            ITAB-MTEXT = 'Message Not Sent to : '.
          else.
            ITAB-MTEXT = 'Message Sent to : '.
          endif.
    *    else.
    *      ITAB-MTEXT = 'Message Not Sent to : '.
    *    endif.
        else.
          "SY-SUBRC Not = 0
          ITAB-MTEXT = 'Payroll data not found : '.
        endif.
        "end of SY-SUBRC = 0.
        modify ITAB.
      endloop. "end loop at ITAB
      sort ITAB by MTEXT PERNR.
      loop at ITAB.
        at new MTEXT.
          uline.
          write : / ITAB-MTEXT color 4 intensified on.
          write : / 'Emp. Code' color 2 intensified on,
                 12 'Emp. Name' color 2 intensified on,
                 54 'Email ID' color 2 intensified on.
        endat.
        write : / ITAB-PERNR, 12 ITAB-ENAME, 54 ITAB-USRID_LONG.
      endloop.

  • How to Access table in SAP

    Hi Gurus,
    Actually, I am new to SAP and was wondering if somebody could tell me how to access a table in SAP where the relevant data is stored.
    e.g. If I want to check table VBAK, what would be the menu path / t-code?
    Thank you.
    Juhi Singhania

    Hi Juhi,
    Since you are new to SAP you might want to have these links handy; they contain a lot of useful information, including the tables list. For simply browsing a table's contents, such as VBAK, you can use transaction SE16 (Data Browser), or SE11 to view the table definition.
    http://www.erpgenie.com/abap/tables.htm
    http://www.erpgenie.com/saptech/transactions.htm
    http://www.sap-img.com/general/find-the-list-of-sap-transaction-codes.htm
    http://www.sap-img.com/basis/useful-sap-system-administration-transactions.htm
    Thanks
    Janani
    award points if helpful
