Amount of records loaded to DSO is not the same as in PSA

I performed a load from PSA to DSO. Two DataSources feed this DSO, and the number of records loaded from PSA into the DSO is not consistent with what is in the PSA: the PSA for the first DataSource has 3k records and the second has 5k records, but when I load both DataSources into the DSO, there are fewer records. Does anyone here know why this is so?

hi,
The DSO has the overwrite option, and hence you have fewer records.
Check whether you have enough key fields in the DSO, so that you can reduce the number of records getting overwritten.
Ramesh

Similar Messages

  • Amount of records loaded to PSA is not the same as in R/3?

    hi experts,
    I am extracting data from the 0EMPLOYEE_0022_ATTR DataSource in R/3. In the extractor checker there are 120 records, while in the PSA there are only 21 records. There was no selection at InfoPackage level. What could be the reason? Please tell me how to rectify this.
    regards
    vadlamudi

    Hi Everyone:
    To solve the above problem, please follow the steps below:
    Step 1: Check the data in RSA3 for 0EMPLOYEE_0022_ATTR and view the data in the ALV grid.
    Step 2: Sort the start date in ascending order and identify the earliest start date.
                     Assumption: I have identified 01.01.1980 as the earliest start date.
    BI side:
    Step 3: Go to the InfoPackage for the 0EMPLOYEE_0022_ATTR DataSource.
    Step 4: Select the Update tab.
    Step 5: Enter the earliest start date identified in RSA3 (e.g. 01.01.1980).
    Step 6: Execute the InfoPackage.
    Done... all records will come across just as you see them in RSA3 on the R/3 or ECC side.
    Other notes: 0EMPLOYEE_0022_ATTR has F (Full Update) as the default in R/3; there is no delta option and no delta mechanism for this DataSource.
    Regards,
    Satish Reddy

  • Load random SWF but not the same one

    Someone here helped with the code to load random SWF files
    into a movie clip; however, I need to make sure it doesn't load the
    one that's already playing, and that it goes on to a new one
    every time.

    I don't think saying thank you really does it, but really,
    thank you. You've helped me countless times, and it's
    only made me better at what I do, so I really am grateful.
    Thanks again

  • Data load from DSO to cube fails

    Hi Gurus,
    The data loads failed last night, and when I dig into the process chains I find that the cube which should be loaded from the DSO is not getting loaded.
    The DSO has been loaded without errors from ECC.
    The error message says "The unit/currency 'source currency 0CURRENCY' with the value 'space' is assigned to the key figure 'source key figure ZARAMT'".
    I looked in the PSA; it has about 50,000 records,
    and all data packages have a green light and all amounts have 0CURRENCY assigned.
    I went into the DTP and looked at the error stack; it had nothing in it. Then I changed the error handling option from 'no update, no reporting' to 'valid records update, no reporting (request red)' and executed, and the error stack showed 101 records.
    The ZARAMT field has a blank 0CURRENCY for all of these records.
    I tried to assign USD to them; the changes were saved and I executed again, but then the message says that the request ID should be repaired or deleted before the execution. I tried to repair it, but it says it cannot be repaired, so I deleted it and executed. It fails, and the error stack still shows 101 records. When I look at the records, the changes I made no longer exist.
    If I delete the request ID before making the changes and then try to save them, they don't get saved.
    What should I do to resolve this issue?
    thanks
    Prasad

    Hi Prasad,
    The error stack is request-specific. Once you delete the request from the target, the data in the error stack is also deleted.
    Actually, in this case what you are supposed to do is:
    1) Change the error handling option to 'valid records update, no reporting (request red)' (as you have already done) and execute the DTP. All the erroneous records will be accumulated in the error stack.
    2) Then correct the erroneous records in the error stack.
    3) Then in the DTP, on the "Update" tab, you will find the option "Error DTP". If it has not already been created, you will see the option "Create Error DTP"; click there and execute the Error DTP. The Error DTP will fetch the records from the error stack and create a new request in the target.
    4) Then manually change the status of the original request to green.
    But did you check why the value of this field is blank? If these records come again as part of a delta or full load, your load will fail again. Check in the source system and fix it there for a permanent solution.
    Regards,
    Debjani

  • Total number of records loaded into ODS and InfoCube

    hai
    I loaded some data records from an Oracle source system into an ODS and an InfoCube.
    My source system guy gave me some data records based on his selection on the Oracle source system side.
    How can I see how many data records were loaded into the ODS and the InfoCube?
                     I can check in the monitor, but that is not correct (because I loaded a second and a third time with 'ignore duplicate records' set), so I don't think the monitor will give me the correct number of data records loaded for the ODS and the InfoCube.
    So is there any transaction code or something to find the number of records loaded for the ODS and the InfoCube?
    Please tell me.
    I'll assign the points.
    bye
    rizwan

    HAI
    I went into ODS manage and looked at the 'transferred' and 'added' data records; both are the same.
    But when I total the added data records it comes to 147,737,
    while when I check the active table (/BIC/A<odsname>00) the total number of entries is 137,738.
    Why is there that difference?
    And in the case of an InfoCube, how can I find the total number of records loaded into the InfoCube (not by browsing the InfoCube contents),
               e.g. via some table for the fact table and dimension tables?
    pls tell me
    txs
    rizwan

  • Delta records are not loading from DSO to InfoCube

    My query is about delta loading from DSO to InfoCube (a filter is used in the selection).
    Delta records are not loading from the DSO to the InfoCube. I have tried all the options available in the DTP, but no luck.
    I selected "Change log" and "Get one request only" and ran the DTP, but 0 records were updated in the InfoCube.
    I selected "Change log" and "Get all new data request by request", but again 0 records were updated.
    I selected "Change log" and "Only get the delta once"; in that case all delta records were loaded to the InfoCube as they were in the DSO, but it gave the error message "Lock Table Overflow".
    When I run a full load using the same filter, data is loaded from the DSO to the InfoCube.
    Can anyone please help me get the delta records from the DSO to the InfoCube?
    Thanks,
    Shamma

    Data is loading in the case of a full load with the same filter, so I don't think the filter is the issue.
    When I follow the sequence below, I get the lock table overflow error:
    1. Full load from the active table, with or without archive.
    2. Then, with the same settings, if I run an init, the final status remains yellow, and when I change the status to green manually it gives the lock table overflow error.
    When I change the settings of the DTP to an init run:
    1. Select change log and get only one request, and run the init; it completes successfully with green status.
    2. But when I run the same DTP for delta records, it does not load any data.
    Please help me to resolve this issue.

  • Data load from DSO to cube

    Hi gurus
    We have a typical problem: we have to combine 2 records into one when they reach the DSO.
    One source is a flat file and the other is R/3, so I am getting a few fields from the flat file and a few from R/3. They create one record when they are loaded to the DSO. So I now get one record in the active data table, but when I load the data from that DSO to the cube (the data goes from the change log to the cube), I get 2 separate records in the cube (one loaded from the flat file and one loaded from R/3), which I don't want. I want only one record in the cube, just like I have in the active data table of the DSO.
    I can't take the data from the active data table because I need a delta load.
    Would you please advise what I can do to get that one record in the cube?
    Please help

    Ravi,
    I am sending the data through a DTP only, but is there any solution to get one record? Because in another scenario I get data from 2 different ERP sources and get one record in the DSO and in the cube as well.
    But that is not happening for this second scenario, where I get data from a flat file and ERP and try to create one record.

  • Partial data loading from DSO to cube

    Dear All,
    I am loading the data from a DSO to the cube through a DTP. The problem is that some records are getting added to the cube through data
    packet 1 (around 50 crore, i.e. 500 million, records), while through data packet 2 records are not getting added.
    It is a full load from the DSO to the cube.
    I have tried deleting the request and executing the DTP again, but the same number of records gets added to the cube through data packet 1, and after that no records are added through data packet 2; the request remains in yellow state only.
    Please suggest.

    Nidhuk,
    Data loads transfer package by package. Your story sounds like it got stuck in the second package or something. I suggest you check the package size and try increasing it to a higher number to see if anything changes. 50 records per package is kind of low; your load should not spread out into too many packages.
    Regards, Jen
    Edited by: Jen Yakimoto on Oct 8, 2010 1:45 AM

  • Adding leading zeros before data is loaded into DSO

    Hi
    In the PROD_ID below, some IDs are missing their leading zeros when data is loaded into BI from SRM. The data type is character, total length 40. If the leading zeros are missing, the DSO activation fails and I have to add them manually in the PSA table. I want to add the leading zeros, if they are missing, before the data is loaded into the DSO. For example, if the value is 1502 there should be 36 zeros in front of it, and if the value is 265721 there should be 34 zeros. Only two value lengths occur, 4 or 6 characters, so 36 or 34 leading zeros are always needed when the zeros are missing.
    Can we use the CONVERSION_EXIT_ALPHA_INPUT function module? As this is a character field I'm not sure how to use it in that case. Do I need to convert it to an integer first?
    Can someone please give me sample code? We're using the BW 3.5 data flow to load data into the DSO. Please give sample code and indicate where the code needs to go, either in the rule type or in the start routine.

    Hi,
    Check at the InfoObject level what kind of conversion routine it uses.
    Use transaction RSD1, enter your InfoObject and display it.
    At the DataSource level you can also see which external/internal format is maintained.
    If your InfoObject uses the ALPHA conversion routine, it will get its leading zeros automatically.
    Also check how the value arrives from the source, in RSA3.
    If you are seeing this issue only for some records, then you need to check those records. If the ALPHA routine is not set and you have to pad the value in a routine instead, see the sketch after this reply.
    Thanks
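    A minimal BW 3.5 transfer-routine sketch for this padding, offered only as an illustration under assumptions: the source field is taken to be PROD_ID in the transfer structure and the target InfoObject to be CHAR 40; neither name comes from the thread itself.
    " Hypothetical BW 3.5 transfer routine for the PROD_ID field.
    " CONVERSION_EXIT_ALPHA_INPUT right-aligns a purely numeric value and
    " pads it with leading zeros up to the length of the receiving field.
    DATA: lv_prod_id(40) TYPE c.

    lv_prod_id = TRAN_STRUCTURE-prod_id.        "source field name assumed

    CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
      EXPORTING
        input  = lv_prod_id
      IMPORTING
        output = lv_prod_id.

    RESULT = lv_prod_id.
    The same call could also sit in a start routine that loops over the data package; and, as the reply above notes, if the InfoObject itself carries the ALPHA conversion routine, no custom code is needed at all.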

  • My DSO does not activate. How do I see the contents of a data package?

    My DSO has data in the new data table, but its status turns RED when I try to activate it. I used the standard DataSource 0FI_GL_4 and the standard DSO 0FIGL_O02, pretty straightforward, with no added fields or objects. I deleted 2 requests from previous loads and executed the DTP with a full update. The new data table now has new data, but I could not activate it.
    Please help; what should I check?
    REQUEST STATUS is 'Error occurred during activation process'.
    REQUEST FOR REPORT AVAILABLE is 'request available for reporting'.
    In the LOG I can see RED on data package 000039. When I click on it
    the error is
    - value electronic account statum of characteristic 0DOC_HD_TXT cont...
      The long text made it look like an INVALID CHARACTER issue.
    How do I see the contents of a particular data package (000039)?
    thanks
    Edited by: Ramya27v on Dec 12, 2011 1:56 AM

    Hi,
    This can happen when the data package size from R/3 to the PSA is larger than the data package size from the PSA to the DSO. Assume that up to the PSA the data package size is 50,000 and you receive 9 data packages, so the total number of records is 9 x 50,000. Now suppose that from the PSA to the DSO the data package size is 10,000; then the number of data packages increases from 9 to 45, that is 9 x 50,000 / 10,000.
    If you are having a problem in data package 39 (in the scenario above), then you should go to PSA data package 8 (39 x 10,000 / 50,000 = 7.8; DTP package 39 covers records 380,001 to 390,000, which fall into the 8th PSA package of 50,000 records), rectify the corresponding record there in the PSA, and load it to the DSO.
    Similarly, you will have to calculate the PSA data package number for your own scenario; a small worked sketch of the calculation follows below.
    Navesh
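    As an illustration only (the package sizes below are the example numbers from this reply, not values read from any system), the mapping can be written as a small ABAP calculation:
    " Map a DTP data package number to the PSA data package that holds
    " the same records, assuming constant, completely filled packages.
    DATA: lv_dtp_pkg  TYPE i VALUE 39,          "failing DTP package
          lv_dtp_size TYPE i VALUE 10000,       "records per DTP package
          lv_psa_size TYPE i VALUE 50000,       "records per PSA package
          lv_last_rec TYPE i,
          lv_psa_pkg  TYPE i.

    lv_last_rec = lv_dtp_pkg * lv_dtp_size.                 "record 390,000
    lv_psa_pkg  = ( lv_last_rec - 1 ) DIV lv_psa_size + 1.  "PSA package 8
    WRITE: / 'Check PSA data package', lv_psa_pkg.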

  • Unique Data Record Settings in DSO

    Hello Experts,
    I have checked the Unique Data Records setting in the DSO.
    I get an error message during activation of the DSO through process chains:
    Records already available in activation table
    Duplicate records
    The activation error occurs and it specifies the request no./record no.
    But in reality there are no duplicate records at the specified record no.
    I have faced this issue several times; why does this happen?
    I guess this Unique Data Records setting doesn't work.
    Regards,
    KV

    Hi,
    I am not sure why the unique data records setting is set. It is generally used when data with the same key will never come again, or when you are flushing the DSO and loading it fresh every day. In normal cases it can happen that, for example, two records arrive with the same sales document number, because a user changed the sales document twice.
    So I think in your case you don't need the check mark. Let me know if you have any doubts.
    Regards,
    Viren

  • Impact if the FIGL10 DSO is not staged before the FIGL10 cube

    Hello SDNers,
    What would the impact be if I don't stage the 0FIGL_O10 DSO between the DataSource and 0FIGL_C10?
    The structures would be similar; there are no additional fields in the cube.
    Even the SAP-delivered DSO and cube have similar fields.
    The DataSource, 0FI_GL_10, is delta capable.
    Some background: after enhancing the FIGL10 DSO with some fields which do exist in the DataSource, I need 19 keys to determine the unique records, i.e. to pull all records from the PSA without any records being overwritten.
    However, the OSS note for this DataSource suggested using artificial keys (for the same DSO; here 1 artificial key concatenates 4 keys).
    Data volumes will be low, around 200 thousand records a month.
    Would the SAP-suggested method (artificial keys) be the better bet?
    Please share your thoughts; any input would really help.

    Jr Roberto,
    You can check this in the BW system:
    1. Go to table RSOLTPSOURCE.
    2. Enter 0FI_GL_10 as the DataSource.
    3. Check the value of the delta process field and look that value up in table RODELTAM.
    That will tell you what kind of data is coming from that extractor. Based on the delta type (after image, before image and after image, etc.), you can decide whether you need the DSO or not. A minimal lookup sketch follows below.
    -Saket
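    A rough ABAP sketch of that lookup, offered only as an illustration; the field names OLTPSOURCE, OBJVERS and DELTA are assumptions here, so verify them against the table definition in SE11 (or simply browse RSOLTPSOURCE in SE16) first:
    " Hypothetical lookup of the delta process for 0FI_GL_10.
    " Field names are assumptions; check RSOLTPSOURCE / RODELTAM in SE11.
    DATA: lv_delta TYPE rsoltpsource-delta.

    SELECT SINGLE delta FROM rsoltpsource
      INTO lv_delta
      WHERE oltpsource = '0FI_GL_10'
        AND objvers    = 'A'.

    IF sy-subrc = 0.
      WRITE: / '0FI_GL_10 delta process:', lv_delta.
      " Look up this value in table RODELTAM (e.g. via SE16) to see whether
      " the extractor delivers after images, before images, or additive deltas.
    ENDIF.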

  • Data loading from DSO to Cube

    Hi,
    I have a question,
    In the book TBW10 I read about the data load from DSO to InfoCube:
    "We feed the change log data to the InfoCube; 10, -10, and 30 add up to the correct value of 30."
    My question is: the cube already has the value 10, so if we send 10, -10 and 30 (the delta), shouldn't the total be 40 instead of 30?
    Please can someone explain this to me.
    Thanks

    No, it will not be 40.
    It will be 30.
    Since the cube already has the 10, the before image nullifies it by sending -10, and then the correct value from the after image, 30, is added.
    So it works out as 10 - 10 + 30 = 30.
    Thank you.
    Regards,
    Vinod

  • Data load from DSO to Cube in BI7?

    Hi All,
    We just migrated a data flow from 3.5 to 7 in development and moved it to production. Until now, the data loads in production happened using InfoPackages:
    a. InfoPackage1 from the DataSource to the ODS, and
    b. InfoPackage2 from the ODS to the cube.
    Now, after transporting the migrated data flow to production, to load the same InfoProviders I use:
    1. An InfoPackage to load the PSA.
    2. DTP1 to load from the PSA to the DSO.
    3. DTP2 to load from the DSO to the cube.
    Steps 1 and 2 work fine, but when I run DTP2 it gets terminated. However, when I try step b (above), it loads the cube fine using the InfoPackage. So I am unable to understand why the DTP fails while the InfoPackage load is successful. Do we need to do any cleanup before using the DTP for the first time? Please let me know if you have any suggestions.
    Please note that the DSO already has data loaded using the InfoPackage. (Is this causing the problem?)
    Thanks,
    Sirish.

    Hi Naveen,
    Thanks for the reply. The creation of a DTP is not possible without a transformation.
    The transformation has been moved to production successfully.

  • What is the best practice for deleting a large amount of records?

    hi,
    I need your suggestions on the best practice for regularly deleting a large amount of records from SQL Azure.
    Scenario:
    I have a SQL Azure database (P1) into which I insert data every day. To prevent the database size from growing too fast, I need a way to remove, every day, all records that are older than 3 days.
    For on-premise SQL Server I can use a SQL Server Agent job, but since SQL Azure does not yet support SQL jobs, I have to use a web job scheduled to run every day to delete all the old records.
    To prevent table locking when deleting too large an amount of records, my web job code limits the amount of deleted records to
    5000 per call and uses a batch delete count of 1000 each time it calls the delete stored procedure:
    1. Get the total amount of old records (older than 3 days).
    2. Get the total number of iterations: iterations = (total count / 5000).
    3. Call the SP in a loop:
    for(int i=0;i<iterations;i++)
       Exec PurgeRecords @BatchCount=1000, @MaxCount=5000
    And the stored procedure is something like this:
     CREATE PROCEDURE PurgeRecords @BatchCount INT, @MaxCount INT
     AS
     BEGIN
      -- Collect up to @MaxCount ids older than 3 days
      -- (RecordId is assumed to be an INT key here)
      DECLARE @table TABLE (RecordId INT PRIMARY KEY)
      INSERT INTO @table
      SELECT TOP (@MaxCount) [RecordId] FROM [MyTable] WHERE [CreateTime] < DATEADD(DAY, -3, GETDATE())

      DECLARE @RowsDeleted INTEGER
      SET @RowsDeleted = 1
      WHILE (@RowsDeleted > 0)
      BEGIN
       WAITFOR DELAY '00:00:01'
       -- Delete the collected ids in batches of @BatchCount;
       -- the loop ends once no matching rows remain
       DELETE TOP (@BatchCount) FROM [MyTable] WHERE [RecordId] IN (SELECT [RecordId] FROM @table)
       SET @RowsDeleted = @@ROWCOUNT
      END
     END
    It basically works, but the performance is not good. For example, it took around 11 hours to delete around 1.7 million records, which is far too long.
    Following is the web job log for deleting around 1.7 million records:
    [01/12/2015 16:06:19 > 2f578e: INFO] Start getting the total counts which is older than 3 days
    [01/12/2015 16:06:25 > 2f578e: INFO] End getting the total counts to be deleted, total count: 1721586
    [01/12/2015 16:06:25 > 2f578e: INFO] Max delete count per iteration: 5000, Batch delete count 1000, Total iterations: 345
    [01/12/2015 16:06:25 > 2f578e: INFO] Start deleting in iteration 1
    [01/12/2015 16:09:50 > 2f578e: INFO] Successfully finished deleting in iteration 1. Elapsed time: 00:03:25.2410404
    [01/12/2015 16:09:50 > 2f578e: INFO] Start deleting in iteration 2
    [01/12/2015 16:13:07 > 2f578e: INFO] Successfully finished deleting in iteration 2. Elapsed time: 00:03:16.5033831
    [01/12/2015 16:13:07 > 2f578e: INFO] Start deleting in iteration 3
    [01/12/2015 16:16:41 > 2f578e: INFO] Successfully finished deleting in iteration 3. Elapsed time: 00:03:33.6439434
    Per the log, SQL Azure takes more than 3 minutes to delete 5000 records in each iteration, and the total time comes to around 11 hours.
    Any suggestion to improve the delete performance?

    This is one approach.
    Assumptions:
    1. There is an index on 'createtime'.
    2. Peak-time insert volume (avgN) is N times the average (avg). For example, if the average per hour is 10,000 and peak time is 5 times higher, that gives 50,000. This doesn't have to be precise.
    3. The desired maximum number of records deleted per batch is 5,000; this doesn't have to be exact either.
    Steps:
    1. Find the count of records more than 3 days old (TotalN), say 1,000,000.
    2. Dividing TotalN (1,000,000) by 5,000 gives the number of delete batches (200) if inserts were perfectly even. Since they are not, and inserts can peak at 5 times the average per period, set the number of delete batches to 200 * 5 = 1,000.
    3. Dividing 3 days (4,320 minutes) by 1,000 gives 4.32 minutes.
    4. Create a delete statement and a loop that, in iteration I (I running from 1 to 1,000), deletes records with creation time < today - (3 days - 4.32 * I minutes).
    In this way the number of records deleted in each batch is uneven and not known in advance, but it should mostly stay within 5,000, and although you run a lot more batches, each batch will be very fast.
    Frank
