Data Load: Number of Records Count

Hi Experts,
I want to document the number of records transferred to BW during InfoPackage executions.
I want to automate this with a report, run in the background, that fetches from SAP tables the number of records transferred by each of my InfoPackages.
I would like to know how I should proceed.
I want to know which system tables contain the same data that transaction RSMO displays.
Kindly help with valuable replies.

Hi,
in order to get the record-count report, you need to create a report based on the tables below:
RSSELDONE, RSREQDONE, RSLDPIOT, RSMONFACT
Check the link below, which explains this in detail and includes the report code:
[Data load Quick Stats|http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/90215bba-9a46-2a10-07a7-c14e97bdb764]
The document also explains how to trigger a mail with the details to everyone.
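For illustration, below is a rough sketch of the kind of query such a report runs (in an ABAP report you would write the equivalent Open SQL SELECT). The join conditions and the field names used here (LOGDPID, RNR, DATUM, TEXT, RECORDS) are assumptions; verify the actual fields in SE11 before building the report.

    -- Sketch only: table roles and column names are assumptions, not verified.
    -- RSSELDONE: one row per executed InfoPackage request (LOGDPID = InfoPackage, RNR = request)
    -- RSLDPIOT : InfoPackage texts
    -- RSMONFACT: per-data-packet record counts for a request
    SELECT t.text         AS infopackage_text,
           s.logdpid      AS infopackage,
           s.rnr          AS request,
           s.datum        AS load_date,
           SUM(f.records) AS records_transferred  -- assumed count column
    FROM   rsseldone s
    JOIN   rsldpiot  t ON t.logdpid = s.logdpid
    JOIN   rsmonfact f ON f.rnr     = s.rnr
    WHERE  s.datum = '20100101'                   -- hypothetical date selection
    GROUP BY t.text, s.logdpid, s.rnr, s.datum;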
Regards
KP

Similar Messages

  • Data statistics | Number of Records passed: how can I remove it?

    Hi,
    on several standard outputs there is, on the first page (for example, for BOL printing):
    Data statistics Number of
    Records passed         70
    How can I remove it? Each time it is a printed page that goes straight in the trash.
    Regards.

    Sadly, the only way I know of is to do this PER USER with the instructions below. We had all of our users do this when we went live.
    1. Navigate to a screen in SAP that produces a statistics page when you print.
    2. Start printing the page as normal.
    3. Enter the printer that you would like to print to.
    4. Click on the Properties button.
    5. If a screen appears about line format, click the green checkmark (Continue).
    6. On the next screen, click the Specifications button.
    7. For Field name, select ALV Statistics.
    8. Click on the pencil icon (Change) to the right of the Field default value.
    9. Uncheck the ALV Statistics box.
    10. Click the green checkmark.
    11. Click on the Copy settings button.
    12. An entry will appear in the box below the Copy settings button.
    13. Click the green checkmark to continue.
    14. Click the green checkmarks as they appear to save the settings and print your document. You will need to do this on several screens.
    15. The one-page statistics report will no longer print with your documents.
    Brian

  • Master Data Load - Number of Records Does Not Match

    I am loading the 0PLANT table. In R/3, it shows 966 records. After running the InfoPackage, there are 966 records in the PSA, none in error. But I get the message "Number of records requested does not match number transferred (packet 1)".
    This is the first use of a new source system with this BW. All master data loads give me the same error.
    What is set up wrong?

    Hello Robert,
    how are you?
    We load data for the same 0PLANT on a daily basis, and not only for this object but for more master data objects as well. For some of them we used to get this message, but not as an error. Pressing Update Tree (Refresh) changes the status.
    Anyway, you got the records into the PSA, so there is no problem with the data load. What signal do you see in the monitor, yellow or red? If yellow, wait for some time and press Update Tree.
    Best Regards,
    Sankar Kumar
    +91 98403 47141

  • Data Load for 20M records from PSA

    Hi Team,
    We need to reload a huge volume of billing data (2LIS_13_VDITM), around 20 million records, from the PSA to the first-level DSO and then to the higher-level targets.
    If we run the entire load as one full request from PSA to DSO for the 20M records, will there be any performance issues?
    Would it be a good approach to split the load by 'Billing Document Number'?
    And if we split the load by 'Billing Document Number', will it create any performance issues from the reporting perspective, given that the data then arrives in multiple requests? Most of the reports are run by date and not by 'Billing Document Number'.
    Thanks
    San

    Hi,
    A better solution is to put a filter based on the year or fiscal year.
    Check how many years of data you have; based on that, you can set the filter.
    Thanks,
    Phani.

  • NUMBER OF RECORDS COUNT

    I have 10 files in a folder, and I loaded all of them into the destination table. My requirement is to record the file name and the number of records loaded from source to destination within that loop.
    The final output should be the 10 file names and the record count for each file,
    without using a Script Task.

    Inside the DFT, just before your OLE DB Destination, put a Row Count transformation, which will load the count into CountVariable.
    Your ForEach Loop container must be returning the file name or file path into FileNameVariable.
    Inside the ForEach Loop container, after the DFT, add one Execute SQL Task which inserts the file name into a table, like:
    INSERT INTO FileNameDetails VALUES(?,?)   --> FileNameVariable, CountVariable
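    For illustration, a minimal sketch of the logging table and the parameterized statement the Execute SQL Task would run; FileNameDetails and its columns are hypothetical names based on this thread:

        -- Hypothetical logging table for file name and row count.
        CREATE TABLE FileNameDetails (
            FileName    NVARCHAR(260) NOT NULL,  -- from the ForEach Loop's FileNameVariable
            RecordCount INT           NOT NULL   -- from the Row Count transformation's CountVariable
        );

        -- OLE DB parameter syntax: the Execute SQL Task maps the two SSIS
        -- variables to the ? placeholders in order (0 = FileName, 1 = RecordCount).
        INSERT INTO FileNameDetails (FileName, RecordCount) VALUES (?, ?);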
    Please refer to the step-by-step guide here:
    http://www.wiseowl.co.uk/blog/s359/files-foreach.htm
    Cheers,
    Vaibhav Chaudhari
    [MCTS],
    [MCP]

  • Data Loader inserting duplicate records

    Hi,
    There is an import that we need to run every day in order to load data from another system into CRM On Demand. I have set up a Data Loader script which is scheduled to run every morning. The script performs an insert operation.
    Every morning a file with new insert data is available in the same location (generated by someone else) and under the same name. The Data Loader script must insert all records in it.
    One morning there was a problem in the other job and a new file was not produced. When the Data Loader script ran, it found the old file and re-inserted its records (there were 3 in the file). I had specified the -duplicatecheckoption parameter as the external ID, since the records come from another system, but I came to know that the option works only for update operations.
    How can a situation like this be handled in the future? The external ID should be checked for duplicates before the insert operation is performed. If we can't check on the Data Loader side, is it possible to mark the field as 'unique' in the UI so that inserting a duplicate record raises an error? Please suggest.
    Regards,

    Hi
    You can use something like this:
    CURSOR crs IS SELECT DISTINCT deptno, dname, loc FROM dept;
    Now you can insert all the records present in this cursor.
    Assumption: you do not have duplicate entries in the dept table initially.
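    If the target might already contain some of the rows, a duplicate-safe variant is sketched below; dept_target is a hypothetical target table, and deptno is assumed to be its unique key:

        -- Insert only distinct source rows whose key is not already in the target.
        INSERT INTO dept_target (deptno, dname, loc)
        SELECT DISTINCT s.deptno, s.dname, s.loc
        FROM   dept s
        WHERE  NOT EXISTS (SELECT 1 FROM dept_target t WHERE t.deptno = s.deptno);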
    Cheers
    Sudhir

  • Data Loading Error: Too Many Error Records

    Hi All,
    I got a data loading error when loading from a flat file.
    The following is the error message:
    Too many error records - update terminated
    Error 18 in the update
    No SID found for value '00111805' of characteristic ZUNIQUEID (Message No 70)
    Can anybody help in resolving the issue?
    Regards,
    Chakravarthy

    Hi,
    check the format of your characteristics and key figures.
    Check that you put the data separators in your flat file appropriately.
    For the particular characteristic ZUNIQUEID, ensure the data is consistent; check the related tables.
    Assign points if useful
    Regards
    N Ganesh

  • Master Data Load Failure - Duplicate Records

    Hi Gurus,
    I am a new member on SDN.
    I now work on BW 3.5. I got a data load failure today. The error message says there are 5 duplicate records. The processing goes into the PSA and then to the InfoObject. I checked the PSA, and the data is available there. How can I avoid these duplicate records?
    Please help me; I want to fix this issue immediately.
    regards
    Milu

    Hi Milu,
    If it is a direct update, you won't have any request for it.
    The data goes directly into the master data tables, so the InfoObject has no Manage tab where you could see the request.
    Whereas in the case of a flexible update, you have update rules from your InfoSource to the InfoObject, so you can delete the request.
    Check this link for flexible update of master data:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/37dda990-0201-0010-198f-9fdfefc02412

  • Data Load with 0 records

    Hi,
    How should a system react in the following cases?
    1. A full load bringing in 0 records
    2. An init load bringing in 0 records
    3. A delta load bringing in 0 records
    Note: by 0 records I mean the load really has no records.
    For each of the above cases, will the load turn green, or remain yellow and then time out?
    I always get different reactions from the system in these cases. I would appreciate views from experts.
    Thank you,
    sam

    Jr Roberto, the setting you mentioned does exist,
    but I have that setting marked as green.
    I did an init load which pulled in 0 records, which is correct. Now, even though green is checked for 0 records in the RSMO settings, the load errored out after the timeout set in the InfoPackage,
    and the main traffic light is still running, with "No errors could be found. The current process has probably not finished yet."
    Any tips?

  • Data loading: a few records could not be loaded

    Hi Sapiens
    I am loading data from one source into BW. All records have been loaded except for 10 records, which are missing. How can we load only the 10 missing records?
    I don't want to load all the records again.
    Please guide.
    Regs
    Sanju

    Hi Sanju,
    You can load those records by doing a repair full update (or a full update) and giving unique selections for those 10 records in your InfoPackage in BW.
    In detail: go to your InfoPackage, choose Full update (tick the Repair Full Request indicator), go to the selection tab, and enter a document number or some other selection criterion by which you can extract only those 10 records.
    Let us know if you face any problems.
    Thanks
    CK

  • Data Load - Added & Transferred Record Mismatch

    Hi to all,
    I am loading data from R/3 into an InfoCube in 3.x, and I can see a large difference between transferred and added records; for example, transferred records: 5,032,704, added records: 3,505,696.
    I have checked that no routine has been written.
    In the selection I have given a range for Object No.
    For object numbers 10000 to 99999, the transferred and added record counts are the same,
    but for KSM100 to KSM999 I get a difference between added and transferred records.
    Please, can anyone help me?
    Thanks in advance
    shalini

    Records do not always all get added to your fact table. You may write an update rule "not to update certain records"; in such cases the transferred and added counts differ. Another instance is when you split an incoming record into 2 records in the update rules; then the number of added records is greater than the number of transferred records.
    Re: difference in transferred records and added records
    Re: manage infocube
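    As a toy illustration of the filter case (all table and column names below are hypothetical), fewer rows reach the fact table than were transferred when an update rule drops some of them:

        -- Hypothetical sketch: "transferred" rows are read from the data packet,
        -- but an update rule that skips certain records means fewer rows are "added".
        INSERT INTO fact_table (object_no, fiscper, amount)
        SELECT object_no, fiscper, amount
        FROM   transferred_packet
        WHERE  object_no NOT BETWEEN 'KSM100' AND 'KSM999';  -- records skipped by the rule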

  • Data loader : Import -- creating duplicate records ?

    Hi all,
    has anyone else encountered the behaviour with Oracle Data Loader that duplicate records are created (even with the option duplicatecheckoption=externalid set)? When I check the import request queue view, the request parameters of the job look fine:
    Duplicate Checking Method == External Unique ID
    Action Taken if Duplicate Found == Overwrite Existing Records
    But Data Loader has created new records where the "External Unique ID" already exists.
    What is very strange: when I create the import manually (using the Import Wizard), exactly the same import works correctly! There the duplicate-checking method works and the record is updated.
    I know the Data Loader has 2 methods, one for update and one for import; however, I do not expect the import to create duplicates when the record already exists, rather than doing nothing!
    Is anyone else experiencing the same? I hope this is not expected behaviour! By the way, the "Update" method works fine.
    Thanks in advance, Juergen

    Sorry to hear about your duplicate records, Juergen. Hopefully you performed a small test load first, before a full load, which is a best practice for data import that we recommend in our documentation and courses.
    Sorry also to inform you that this is expected behavior: Data Loader does not check for duplicates when inserting (importing). It only checks for duplicates when updating (overwriting). This is extensively documented in the Data Loader User Guide, the Data Loader FAQ, and the Data Import Options Overview document.
    You should review all documentation on Oracle Data Loader On Demand before using it.
    These resources (and a recommended learning path for Data Loader) can all be found on the Data Import Resources page of the Training and Support Center. At the top right of the CRM On Demand application, click Training and Support, and search for "*data import resources*". This should bring you to the page.
    Pete

  • Problem getting update record counts from executeBatch()

    hi,
    I have used INSERT ... SELECT queries for migrating data from one table to another,
    like
    INSERT INTO TABLE1 (COL1, COL2, ...)
    SELECT COL1, COL2, ... FROM TABLE2
    WHERE COL1 = ... AND COL2 = ...;
    Case 1:
    I added these statements with addBatch() and at the end called executeBatch(). It returned an array of integers that should hold the record count for each query respectively.
    But that count was always -2, i.e. SUCCESS_NO_INFO.
    Case 2:
    If I run the same code with executeUpdate(), it returns the correct number of records inserted into the table.
    I couldn't understand why it fails in case 1.
    Can anybody tell the reason for this behaviour?

    Hi,
    thanks again for the correct reply, but can you also tell me which jar I need to include?
    There are so many jar files. Should I remove the old jar files, or does the JVM automatically pick up the updated jar?
    The following jar files are shown on the link:
    ojdbc5.jar (1,996,228 bytes) - Classes for use with JDK 1.5. It contains the JDBC driver classes, except classes for NLS support in Oracle Object and Collection types.
    ojdbc5_g.jar (3,081,328 bytes) - Same as ojdbc5.jar, except that classes were compiled with "javac -g" and contain tracing code.
    ojdbc6.jar (2,111,220 bytes) - Classes for use with JDK 1.6. It contains the JDBC driver classes except classes for NLS support in Oracle Object and Collection types.
    ojdbc6_g.jar (3,401,519 bytes) - Same as ojdbc6.jar except compiled with "javac -g" and contains tracing code.
    ojdbc5dms.jar (2,429,777 bytes) - Same as ojdbc5.jar, except that it contains instrumentation to support DMS and limited java.util.logging calls.
    ojdbc5dms_g.jar (3,101,875 bytes) - Same as ojdbc5_g.jar, except that it contains instrumentation to support DMS.
    ojdbc6dms.jar (2,655,741 bytes) - Same as ojdbc6.jar, except that it contains instrumentation to support DMS and limited java.util.logging calls.
    ojdbc6dms_g.jar (3,423,263 bytes) - Same as ojdbc6_g.jar except that it contains instrumentation to support DMS.
    orai18n.jar (1,656,280 bytes) - NLS classes for use with JDK 1.5, and 1.6. It contains classes for NLS support in Oracle Object and Collection types. This jar file replaces the old nls_charset jar/zip files.
    demo.zip (603,363 bytes) - contains sample JDBC programs.

  • Problem getting a correct update record count from getUpdateCount()

    hi,
    I have used INSERT ... SELECT queries for migrating data from one table to another,
    like
    INSERT INTO TABLE1 (COL1, COL2, ...)
    SELECT COL1, COL2, ... FROM TABLE2
    WHERE COL1 = ... AND COL2 = ...;
    Case 1:
    I added these statements with addBatch() and at the end called executeBatch().
    Then I called
    getUpdateCount()
    on that PreparedStatement, but the count was not correct all the time.
    Case 2:
    If I run the same code with executeUpdate(), it returns the correct number of records inserted into the table.
    I couldn't understand why it fails in case 1.
    Can anybody tell the reason for this behaviour?


  • Generate target/out file with a header record containing the record count?

    Hi Kareem,
    Please try the approach below.
    Pipeline 1: Load the actual data (without the header record count) from source to target. Say the target file name is intermediate1.dat.
    Pipeline 2: Take the target of pipeline 1 as the source and create the header with the count of the source file, using an Aggregator. The target file of pipeline 2 will be your final file (header plus detail data).
    Pipeline 3: Take the target of pipeline 1 again and do a 1-to-1 load into the target file of the second pipeline. In the session properties, don't forget to tick the "Append if Exists" check box for the third pipeline's target.
    There may be other, simpler approaches; if you have no time in hand, try the above. Let me know if you find any issues.
    Thanks, Deeshan.
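    For intuition, the header row that pipeline 2's Aggregator produces is equivalent to this SQL over the detail rows (detail_rows is a hypothetical staging of the detail file):

        -- Build the single header line from the detail row count.
        SELECT 'Record Count : ' || COUNT(*) FROM detail_rows;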

    Generate a target/out file with the header record as the record count? The out file should look like:
    Record Count : 2000
    Column1, Column2, ...
    data, data, ...
