Re-load from ECC6

We're on BI7, but we are still using the old BW 3.x dataflow, not the new BI7 dataflow.
Because of a data quality problem, I would like to delete all the requests from the existing data model (i.e. InfoCube, aggregates, ODS, master data, PSA) and perform a full load of everything again.
1) For the InfoCube, I will right-click the InfoCube -> Manage -> Requests tab, then select the requests and click the Delete button manually.
2) For the aggregates, in the InfoCube -> Manage -> Rollup tab, I will click the Aggregates button, open the InfoCube's aggregates, and click the Deactivate button. (After the reload, I need to click the Activate button again.)
3) For the ODS, I will right-click the ODS -> Manage -> Requests tab, then select the requests and click the Delete button manually.
4) For the PSA, I will right-click each DataSource -> Manage, then select the requests and click the Delete button manually.
Questions:
A) What about the ODS change log table? How can I delete its contents? (See the sketch below.)
B) My understanding is that master data is always overwritten, so there is no need for me to delete it for each master data object, correct? When I reload all the data from ECC6, the master data will simply be overwritten, correct?
C) Is there anything else I need to take care of in order to reload all the data from ECC6 into BW?
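For my own reference, here is a rough ABAP sketch for checking the size of an ODS change log table before deciding how to clear it. The table name /BIC/B0000123000 is only a placeholder; change log tables follow the /BIC/B* naming convention and the real name is shown in the ODS object maintenance.

REPORT z_check_changelog.

DATA: lv_tabname TYPE tabname,
      lv_count   TYPE i.

* Placeholder: look up the real change log table name of the ODS first.
lv_tabname = '/BIC/B0000123000'.

* Dynamic FROM clause, so the same report works for any change log table.
SELECT COUNT( * ) FROM (lv_tabname) INTO lv_count.

WRITE: / 'Rows in change log table', lv_tabname, ':', lv_count.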

Ganesh,
How exactly can we delete the change log via SE16?
What happens if we don't delete the change log? Will it cause duplicate records later?
1) For the InfoCube, I will right-click the InfoCube -> Manage -> Requests tab, then select the requests and click the Delete button manually.
Is there a way to delete all the requests in the InfoCube in the background instead of one by one manually? (See the sketch below.)
Also, what happens if a request in the InfoCube has already been compressed and the cube also has aggregates? How can I delete it then?
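For the background deletion I have something like the sketch below in mind. It assumes that the requests per InfoProvider are listed in table RSICCONT and that the standard function module RSSM_DELETE_REQUEST takes the request number and the InfoCube name; both assumptions should be verified in SE11/SE37 before using it, and it will not cover requests that have already been compressed.

REPORT z_delete_cube_requests.

* Hypothetical InfoCube name; replace with the real one.
PARAMETERS p_cube TYPE rsinfocube DEFAULT 'ZSALESC01'.

DATA: lt_req TYPE STANDARD TABLE OF rsiccont,
      ls_req TYPE rsiccont.

* RSICCONT lists the requests currently contained in an InfoProvider.
SELECT * FROM rsiccont INTO TABLE lt_req
  WHERE icube = p_cube.

LOOP AT lt_req INTO ls_req.
  " Assumed interface of the request deletion function module;
  " verify the name and parameters in SE37 first.
  CALL FUNCTION 'RSSM_DELETE_REQUEST'
    EXPORTING
      request  = ls_req-rnr
      infocube = p_cube.
ENDLOOP.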

Similar Messages

  • Data Load from ECC6 in GC or LC

    Hi Friends,
    Have a small query. When I extract data from ECC6, should I take the data in GC (group currency) or in LC (local currency)?
    If I do the extract in local currency, how does the system handle the currency conversion from local currency to group currency? What is the normal procedure?
    What is the use of taking TC (transaction currency)? As I understand it, the system converts the transaction currency into local currency in ECC6, so why do I need TC in BCS?
    Shall be thankful if you could reply ASAP.
    Cheers
    Parmanand

    Hi
    Hey, it will have an effect, won't it? When I take the value in TC it will be different from the value in group currency. For example, my group currency is EUR and my local currency is GBP. When the system posts a document in ECC6 for, say, £100, the group currency entry will be 120 euros (assuming a conversion rate of 1.2 from GBP to EUR). In such a case how can I take LC = GC? Won't that be wrong?
    It would only be correct where LC and GC are the same currency. For example, if my transaction in LC is in EUR, then the GC amount remains the same because GC is also EUR.
    Please guide.
    Cheers
    Parmanand.

  • Data load from ECC6 Unicode to BI7 MDMP

    Hello Contributors,
    We are upgrading our R/3 system to ECC6 Unicode, but BI7 is still an MDMP system. I'm testing data loads and there are some errors related to the Unicode conversion.
    ECC has multiple languages like English, Korean, Chinese, etc.
    BI7 MDMP has English and Korean.
    Error 1: Korean characters are broken in 0CUSTOMER_TEXT.
       As you know, the customer name has no language code in R/3, and Korean text is used for Korean customers. After loading the data, English characters are OK, but Korean characters are broken and displayed as #.
    Error 2: Field values are shifted into each other if there are Korean characters in 0CUST_SALES_ATTR.
       We have Korean characters in some of the attributes of 0CUST_SALES. If a record has Korean characters in one of the fields, the following fields are shifted into each other. If there are no Korean characters, the record is loaded correctly.
    For an example;
    Values of R3 extractor
        Cust#   Name            Field1   Field2   Field3
        1234    Korean Text     ABCD     EFGH     JKLM
        3456    English Text    OPQR     STUV     WXYZ
    Values of PSA table after loading
        Cust#   Name            Field1   Field2   Field3
        1234    Korean Text AB  CDEF     GHJK     LM
        3456    English Text    OPQR     STUV     WXYZ
    (* If a record contains Korean characters, the following fields are shifted: part of each value is pushed back into the previous field.)
    Please help me if you have any ideas to resolve this issues.
    Best Regards
    HD Sung.

    Convert your BI system to Unicode.  MDMP is not supported in this scenario, as you have found.

  • How can I calculate the load from all modules in ECC6.0 for our production

    Hello Experts,
    I have a doubt: how can I check the load from all modules like FI, SD, MM, HR, PP, BASIS, etc. at the SAP level? Can you explain from which transaction, and how to check the workload for each module separately? We are wondering about the size utilization of each module. Our production database is R/3 ECC 6.0 on Oracle 10.2 on AIX.
    Thanks for your help in this regard.
    Thanks & Regards,
    Haseem.

    Hi
    Check the application monitor in transaction ST07.
    Regards
    Bhaskar

  • Steps for Master data upload from ECC6.0 to BI7.0

    Hi experts,
    I need to load the material master and customer master data from ECC6.0 to BI7.0, and I could not find the steps to do so on SDN.
    Can anyone please give me the steps from beginning to end using the Business Content objects 0MATERIAL_ATTR/_TEXT/_HIER?
    I will assign deserving points to all the answers.
    Thanks in advance.

    Hi,
    Steps:
    1. Go to RSA5 in ECC6 to install the Business Content DataSource.
    2. If you want to make any changes, go to RSA6 in ECC6.
    3. Replicate the DataSource in BW.
    4. Create a transformation (to connect the DataSource to the master data object).
    5. Create an InfoPackage (to load from the source system to the PSA).
    6. Create a DTP (to load from the PSA to the master data objects).
    Please let me know if I am missing any steps.
    Ali.

  • Chinese characters scrambled when loading from DS to BW

    Hi, I've been pulling my hair out with this issue.
    I have a flat file containing Chinese text. When I load this in BW using 'FLATFILE' as a source system, it works fine. BW shows the correct Chinese characters.
    When I do the same load using BODI, I get funny characters.
    When I use BODI to load from one flat file into another flat file, the Chinese characters remain correct.
    What do I need to do to make sure I get the right Chinese characters in BW when loading from BODI?
    BODI is installed on Unix on Oracle 10.
    I run the jobs as batch processes.
    The dsconfig.txt has got:
    AL_Engine=<default>_<default>.<default>
    There are no locale settings in al_env.sh
    BW target is UTF-8 codepage.
    File codepage is BIG5-HKSCS
    BODI is set up as a Unicode system in SAP BW.
    When loading flat file to flat file, I get a message:
    DATAFLOW: The specified locale <eng_gb.iso-8859-1> has been coerced to <Unicode (UTF-16)
    because the datastore <TWIN_FF_CUSTOMER_LOCAL> obtains data in <BIG5-HKSCS> codepage.
    JOB: Initializing transcoder for datastore <TWIN_FF_CUSTOMER_LOCAL> to transcode between
    engine codepage<Unicode (UTF-16)>  and datastore codepage <BIG5-HKSCS>
    When loading to BW the messages are almost the same, but now the last step is UTF-16 to UTF-8.
    I read the wiki post which definitely helped me to understand the rationale behind code page, but now I ran out of ideas what else to check ( http://wiki.sdn.sap.com/wiki/display/BOBJ/Multiple+Codepages )
    Any help would be greatly appreciated.
    Jan.

    Hi all. Thanks for the Inputs. This is what I got when I clicked on the Details Tab of the Monitor....
    Error when transferring data; communication error when analyzing
    Diagnosis
    Data packages or InfoPackages are missing in BI but there were no apparent processing errors in the source system. It is therefore probable that there was an error in the data transfer.
    The analysis tried to read the ALE outbox of the source system. This led to an error.
    It is possible that there is no connection to the source system.
    Procedure
    Check the TRFC overview in the source system.
    Check the connection to the source system for errors and check the authorizations and profiles of the remote user in both the BI and source systems.
    Check the ALE outbox of the source system for IDocs that have not been updated.

  • Full load from a DSO to a cube processes less records than available in DSO

    We have a scenario where every Sunday I have to make a full load from a DSO containing on-hand stock information to a cube, in which I register a counter at material and store level if stock is available.
    The DTP has no filters at all and has a semantic group on 0MATERIAL and 0PLANT.
    The key in the DSO is:
    0MATERIAL
    0PLANT
    0STOCKTYPE
    0STOR_LOC
    0BOM
    of which only 0MATERIAL, 0PLANT and 0STOR_LOC are later used in the transformation.
    As we had a growing number of records, we decided to delete in the start routine all records where the inventory is not greater than zero, thus eliminating zero and negative inventory records (see the sketch below).
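    For clarity, the start routine change is essentially the sketch below. SOURCE_PACKAGE is the standard BI7 start routine parameter; the key figure name /BIC/ZONHAND is only a placeholder for the on-hand quantity field.
        " Start routine body of the DSO-to-cube transformation:
        " drop every record whose on-hand quantity is not greater than zero.
        DELETE source_package WHERE /bic/zonhand LE 0.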
    Now comes the funny part of the story:
    Prior to these changes I would [in a test system, freshly copied from PROD] read some 33 million records and write out the same number of records. Of course, after the change we expected to write out fewer. To my total surprise, I was now reading 45 million records with the same unchanged DTP, while writing out the expected smaller number.
    When checking the number of records in the DSO I found the 45 million, but I cannot explain why the earlier loads only retrieved some 33 million from the same unchanged set of records.
    When checking in PROD, the result is the same: we have some 45 million records in the DSO, but when we do the full load from the DSO to the cube, the DTP only processes some 33 million.
    What am I missing? Is there some compression going on? Why would the number of records in a DSO differ from the number of records processed in the data packages when I am making a FULL load without any filter restrictions and with only a semantic grouping on part of the DSO key?
    Any idea or thought is appreciated.

    Thanks Gaurav.
    I did check whether any other loads were done in between; there were none in the test system. As I mentioned, it was a new copy from PROD to TEST, and I compared the number of entries in the DSO; it matches between TEST and PROD (OK, a few more in PROD, but they can be accounted for). In TEST I loaded the day before the changes were imported to have a comparison, and between that load and the one after the changes were imported nothing in the DSO was changed.
    Both DTPs, in TEST and PW2, load from the active DSO table [without archive]. The DTPs have not been changed in quite a while, so I ruled that out. Same with activation of data in the DSO: this DSO gets loaded and activated in PROD daily via a process chain, and we load daily deltas into the cube in question. Only on Sundays, for the beginning of the new week/fiscal period, do we need to make a full load to capture all materials per site with inventory. The deltas loaded during the week are less than 1 million records, but the difference between the number of records in the DSO and the amount processed in the data packages is more than 10 million per full load, even in PROD.
    I really appreciate the knowledgeable answer; I just wish you had pointed out something that I had missed.

  • Delta records are not loading from DSO to info cube

    My query is about delta loading from a DSO to an InfoCube (a filter is used in the selection).
    Delta records are not loading from the DSO to the InfoCube. I have tried all the options available in the DTP, but no luck.
    With "Change log" and "Get one request only" selected, I ran the DTP, but 0 records were updated in the InfoCube.
    With "Change log" and "Get all new data request by request" selected, again 0 records were updated.
    With "Change log" and "Only get the delta once" selected, all delta records were loaded to the InfoCube as they were in the DSO, but the load gave the error message "Lock Table Overflow".
    When I run a full load using the same filter, data loads from the DSO to the InfoCube.
    Can anyone please help me get the delta records from the DSO to the InfoCube?
    Thanks,
    Shamma

    Data loads in the case of a full load with the same filter, so I don't think the filter is the issue.
    When I follow the sequence below, I get a lock table overflow error:
    1. Full load from the active table, with or without archive.
    2. Then, with the same settings, if I run an init, the final status remains yellow, and when I change the status to green manually, it gives a lock table overflow error.
    When I change the settings of the DTP to an init run:
    1. I select change log and "Get one request only" and run the init; it completes successfully with green status.
    2. But when I run the same DTP for delta records, it does not load any data.
    Please help me to resolve this issue.

  • Data load from DSO to cube

    Hi gurus
    We have a typical problem: we have to combine 2 records into one when they reach the DSO.
    One source is a flat file and the other is R/3, so I get a few fields from the flat file and a few from R/3. They form one record when loaded into the DSO, so I get one record in the active data table. But when I load the data from that DSO to the cube (the data goes from the change log to the cube), I get 2 separate records in the cube (one loaded from the flat file and one loaded from R/3), which I don't want. I want only one record in the cube, just like in the active data table of the DSO.
    I can't read from the active data table because I need delta loads.
    Could you please advise what I can do to get that single record in the cube?
    Pl help

    Ravi
    I am sending the data through a DTP only, but is there any solution to get one record? In another scenario I get data from 2 different ERP sources and end up with one record in the DSO and in the cube as well.
    But that is not happening in this second scenario, where I get data from a flat file and ERP and try to create one record.

  • Data load from DSO to cube fails

    Hi Gurus,
    The data loads have failed last night and when I try to dig inside the process chains I find out that the cube which should be loaded from DSO is not getting loaded.
    The DSO has been loaded without errors from ECC.
    The error message says: "The unit/currency 'source currency 0CURRENCY' with the value 'space' is assigned to the key figure 'source key figure ZARAMT'".
    I looked in the PSA, which has about 50,000 records,
    and all data packages have a green light and all amounts have 0CURRENCY assigned.
    I went into the DTP and looked at the error stack; it had nothing in it. Then I changed the error handling option from 'no update, no reporting' to 'valid records update, no reporting (request red)' and executed; the error stack then showed 101 records.
    The ZARAMT field has a blank 0CURRENCY for all these records.
    I tried to assign USD to them and the changes were saved. I tried to execute again, but then the message said that the request ID should be repaired or deleted before execution. I tried to repair it; it says it cannot be repaired, so I deleted it and executed. The load fails, the error stack still shows 101 records, and when I look at the records the changes I made no longer exist.
    If I delete the request ID before making the changes and then try to save them, they don't get saved.
    What should I do to resolve the issue?
    thanks
    Prasad

    Hi Prasad....
    The error stack is request specific. Once you delete the request from the target, the data in the error stack is also deleted.
    Actually, in this case what you are supposed to do is:
    1) Change the error handling option to 'valid records update, no reporting (request red)' (as you have already done) and execute the DTP; all the erroneous records will accumulate in the error stack.
    2) Then correct the erroneous records in the error stack.
    3) Then in the DTP, on the "Update" tab, you will find the "Error DTP" option. If it has not been created yet, you will see "Create Error DTP"; click there and execute the error DTP. The error DTP will fetch the records from the error stack and create a new request in the target.
    4) Then manually change the status of the original request to green.
    But did you check why the value of this field is blank? If these records come in again as part of a delta or full load, your load will fail again. Check the source system and fix it there for a permanent solution.
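    As a stop-gap inside the transformation itself, a field routine on the currency field could default the blank values. This is only a sketch; the source component name DOC_CURR and the default 'USD' are assumptions you would have to adapt.
        " Body of a BI7 field routine; SOURCE_FIELDS and RESULT are the
        " standard parameters generated by the transformation editor.
        " DOC_CURR is a placeholder for the source currency component.
        IF source_fields-doc_curr IS INITIAL.
          " Assumed business rule: treat a missing currency as USD.
          result = 'USD'.
        ELSE.
          result = source_fields-doc_curr.
        ENDIF.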
    Regards,
    Debjani......

  • Data loading from DSO to Cube

    Hi,
    I have a question,
    In the book TBW10 I read about the data load from a DSO to an InfoCube:
    "We feed the change log data to the InfoCube; 10, -10, and 30 add up to the correct value of 30."
    My question is: the cube already has the value 10, so if we send 10, -10 and 30 (the delta), shouldn't the total be 40 instead of 30?
    Please can someone explain this to me.
    Thanks

    No, it will not be 40.
    It will be 30 only.
    Since the cube already has 10, the before image will nullify it by sending -10, and then the correct value from the after image, 30, will be added.
    So it works out to 10 - 10 + 30 = 30.
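    A tiny sketch of the same arithmetic, just to make the netting explicit (the values are the TBW10 example; nothing else is assumed):
        " The cube stores requests additively; a query simply sums them.
        DATA: lt_values TYPE STANDARD TABLE OF i,
              lv_value  TYPE i,
              lv_total  TYPE i.

        lv_value = 10.                 " value already in the cube
        APPEND lv_value TO lt_values.
        lv_value = -10.                " before image from the change log
        APPEND lv_value TO lt_values.
        lv_value = 30.                 " after image from the change log
        APPEND lv_value TO lt_values.

        LOOP AT lt_values INTO lv_value.
          lv_total = lv_total + lv_value.
        ENDLOOP.
        " lv_total is now 30: the before image cancels the old 10.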
    Thank-You.
    Regards,
    Vinod

  • Data load from DSO to Cube in BI7?

    Hi All,
    We just migrated a dataflow from 3.5 to 7 in development and moved it to production. So until now in production the data loads happened using InfoPackages:
    a. Infopackage1 from datasource to ODS and
    b. Infopackage2 from ODS to the CUBE.
    Now after we transported the migrated dataflow to production, to load the same infoproviders I use
    1. Infopackage to load PSA.
    2. DTP1 to load from PSA to DSO.
    3. DTP2 to load from DSO to CUBE.
    Steps 1 and 2 work fine, but when I run DTP2 it gets terminated. However, when I try step b (above), it loads the cube fine using the InfoPackage. So I am unable to understand why the DTP failed and why the InfoPackage load is successful. In order to use the DTP, do we need to do any cleanup when using it for the first time? Please let me know if you have any suggestions.
    Please note that the DSO already has data loaded using an InfoPackage. (Is this causing the problem?)
    Thanks,
    Sirish.

    Hi Naveen,
    Thanks for the reply. The creation of a DTP is not possible without a transformation.
    The transformation has been moved to production successfully.

  • Invoking SQL*Loader from a stored procedure

    I am trying to invoke SQL*Loader from within a database package by using an external C procedure (the procedure calls the system() C function), but the loader generates the following error in its log file:
    SQL*Loader -523: error -2 writing to file (STDERR)
    and no data is uploaded.
    I have tried to use system() from within database procedures to execute OS commands and it works. Does anyone know what the problem is with using system() to execute "sqlldr <parameters>"? Is there some other way to call the loader from within a stored PL/SQL procedure?
    Thank you very much for your help.
    Aneta Valova

    Hi
    What is your task, and why are you trying to invoke SQL*Loader from a stored procedure or package? Maybe redirecting stderr will resolve your problem, but think about whether this is the best way to do the job.
    I am not sure that invoking other executables from the Oracle instance is a good idea.
    Regards

  • Partial data loading from dso to cube

    Dear All,
    I am loading data from a DSO to a cube through a DTP. The problem is that some records get added to the cube through data packet 1 (around 50 crore records), while no records get added through data packet 2.
    It is a full load from the DSO to the cube.
    I have tried deleting the request and executing the DTP again, but the same number of records gets added to the cube through data packet 1, and after that no records are added through data packet 2; the request remains in yellow status only.
    Please suggest.

    Nidhuk,
    Data is transferred package by package. Your description sounds like the load got stuck in the second package or something similar. I suggest you check the package size and try increasing it to see if anything changes; 50 records per package is kind of low, and your load should not be spread over too many packages.
    Regards. Jen

  • Problem during  Data Warehouse Loading (from staging table to Cube)

    Hi All,
    I have created a staging module in OWB to load my flat files into my staging tables. I have created a warehouse module to load my staging tables into the dimension and cube that I created.
    My scenario:
    I have a temp_table_transaction that was loaded from my flat files. This table was loaded with 168,271,269 records from the flat file.
    I created a mapping in OWB that joins temp_table_transaction with other tables, applies some expressions and conversion functions, and loads the result into a new table called stg_tbl_transaction in my staging module. Running this mapping takes 3 hours and 45 minutes with this mapping configuration:
    Default operating mode in the mapping's runtime parameters = set-based
    My dimesion filled correctly but I have two problem when I want to transfer my staging table to my Cube:
    #1 Problem:
    I have created a cube called transaction_cube with OWB, and it generated and deployed correctly.
    I created a map to fill my cube from the 168,271,268 records in the staging table stg_tbl_transaction and deployed it to the server (the cube map's operating mode is set-based).
    But after running this map it had not completed after 9 hours, and I was forced to cancel the run by killing its sessions. I want to know whether this load time for this volume of data is acceptable, or whether we should expect to spend more time. Please let me know if anybody has any input.
    #2 Problem
    To test my map, I created a map configured as set-based, selected stg_tbl_transaction (with 168,271,268 records) as the source, and created another table to load the data into. I wanted to test the time such a simple map should take, but after 5 hours my data had not been loaded into the new table. I want to know where my problem is. Should I set something in the map configuration, or is it something else? Please guide me on these problems.
    CONFIGURATION OF MY SERVER:
    I run OWB on a two-socket Xeon 5500-series server with 192 GB of RAM and disks in a RAID 10 array.
    Regards,
    Sahar

    For all of you
    It is possible to load from an InfoSet to a cube; we did it and it was OK.
    Data really is loaded from the InfoSet (cube + master data) to the cube.
    When you create a transformation under a cube, the InfoSet is proposed as a source, and it works fine.
    Now the process is no longer operational and I don't understand why.
    Loading from an InfoSet to a cube is possible; I can send you screenshots if you want.
    Christophe
