# Number of records in a full load doesn't match the number in an init with data transfer

Hello to all BI/FI SDNers,
FI AP and AR extractors like 0FI_AR_3 and 0FI_AP_3 pull 1,785 records to the PSA when loaded full, but an init with data transfer for the same selection pulls only 1,740 records to the PSA.
I am very skeptical that even a delta after a repair full will bring across all the records.
Which update methods are supported for the AP and AR extractors?
The OSS notes I found don't really address this concern. Please comment, SDNers!

Somehow it worked after redoing the load!

Similar Messages

  • Does each master data full load remove the previous data and load new data?

    We load company code (0COMP_CODE) master data with a full load.
    On the first day the record count is 150; on the second day it is 90. It looks as if each master data full load cleans out the previous data and loads the new data. Am I right? If so, which setting controls this?
    Thanks

    I don't think it does a cleanup.
    Master data records in the new load simply overwrite the records already present in the master data:
    if the same record comes again, it is overwritten; if it is a new record, it gets added.
    I wouldn't expect the number of records to drop drastically from 150 to 90.
    Before master data activation the count can be higher, since both M (modified) and A (active) versions exist.
    cheers,
    Vishvesh

  • 0HR_PA_0: monthly extraction vs. full load

    Hello Everyone,
    We are using the standard headcount datasource 0HR_PA_0 and we are facing an issue with the extraction.
    When we load only the month of March 2012 we get 2,913 records in RSA3 and BW, but when we remove this selection and do a full load we get only 2,908 records for March 2012.
    We are not sure where the difference of 5 employees comes from.
    If it were an authorization issue, data should be missing in both cases (with and without the month selection), but that is not so. A load restricted to March 2012 returns the correct records, while the full load misses 5 employees.
    Has anyone faced this issue before?
    Please help with your inputs
    Regards,
    Manish Sharma

    Hi Alex,
    According to SAP, these DataSources don't support delta update:
    http://help.sap.com/saphelp_nw04/helpdata/en/7d/eab4391de05604e10000000a114084/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/29/02b43947f80c74e10000000a114084/content.htm
    You have to work out a different method for capturing changes.
    Arun
    Assign points if useful.

  • Full load error

    Hi,
    When I try to do a full load from an ODS to a cube, it fails with the error SAPSQL_ARRAY_INSERT_DUPREC. I went to the InfoSource and reduced the package size to 500 KB and tried again; so far no luck.
    Inputs are much appreciated.
    Thanks and regards,
    loyee

    The error occurs when an internal table is defined with a unique key but the SELECT statement returns multiple records for one key combination.
    Include more conditions in the WHERE clause of the SELECT statement, such as a date field.
    Coming to your scenario, you are trying to do a full load from a DSO to a cube. Did you delete the data from the cube first? Otherwise the records will be aggregated, leading to incorrect data.
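    As an illustration, here is a minimal ABAP sketch of the pattern behind this dump; the table name ZTARGET and its key field are placeholders, not objects from this thread:
        DATA: ls_row  TYPE ztarget,
              lt_rows TYPE STANDARD TABLE OF ztarget.
        ls_row-keyfield = '1'.
        APPEND ls_row TO lt_rows.
        APPEND ls_row TO lt_rows.          " same primary key appended twice
        " The array insert terminates with SAPSQL_ARRAY_INSERT_DUPREC:
        INSERT ztarget FROM TABLE lt_rows.
        " Tolerant variant that skips duplicates and sets sy-subrc = 4:
        " INSERT ztarget FROM TABLE lt_rows ACCEPTING DUPLICATE KEYS.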

  • Full load from a DSO to a cube processes fewer records than available in the DSO

    We have a scenario where every Sunday I have to run a full load from a DSO with on-hand stock information to a cube, in which I register a counter at material and store level if stock is available.
    The DTP has no filters at all and has a semantic group on 0MATERIAL and 0PLANT.
    The key in the DSO is:
    0MATERIAL
    0PLANT
    0STOCKTYPE
    0STOR_LOC
    0BOM
    of which only 0MATERIAL, 0PLANT and 0STOR_LOC are later used in the transformation.
    As we had a growing number of records, we decided to delete in the START routine all records where the inventory is not greater than zero, eliminating zero and negative inventory records (a rough sketch of the routine appears at the end of this post).
    Now comes the funny part of the story:
    Prior to these changes I would read (in a test system freshly copied from PROD) some 33 million records and write out the same number of records. After the change we of course expected to write out fewer. To my total surprise, I was now reading 45 million records with the same unchanged DTP, while writing out the expected smaller number.
    When checking the number of records in the DSO I found the 45 million, but I cannot explain why the earlier loads retrieved only some 33 million from the same unchanged set of records.
    When checking in PROD, same result: we have some 45 million records in the DSO, but when we do the full load from the DSO to the cube, the DTP processes only some 33 million.
    What am I missing? Is there some compression going on? Why would the number of records in a DSO differ from the number of records processed in the data packages when I am making a FULL load without any filter restrictions and with only a semantic grouping in place on part of the DSO key?
    Any idea or thought is appreciated.
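    For reference, the START routine change is roughly the following sketch; the key-figure field name /BIC/ZSTOCK is an assumption, not the real field:
        METHOD start_routine.
          " Drop zero and negative inventory rows before the transformation runs.
          DELETE source_package WHERE /bic/zstock LE 0.
        ENDMETHOD.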

    Thanks Gaurav.
    I did check whether any loads were done in between; there were none in the test system. As mentioned, it was a fresh copy from PROD to TEST, so I compared the number of entries in the DSO, and it matches between TEST and PROD (a few more in PROD, but they can be accounted for). In TEST I ran a load the day before the changes were imported to have a comparison, and between that load and the one after the changes were imported, nothing in the DSO changed.
    Both DTPs, in TEST and PW2, load from active DSO data (without archive). The DTPs have not been changed in quite a while, so I ruled that out. Same with activation of data in the DSO: this DSO gets loaded and activated in PROD daily via process chain, and we load daily deltas into the cube in question. Only on Sundays, for the beginning of the new week/fiscal period, do we need to make a full load to capture all materials per site with inventory. The deltas loaded during the week are less than 1 million records, but the difference between the number of records in the DSO and the number processed in the data packages is more than 10 million per full load, even in PROD.
    I really appreciate the knowledgeable answer; I just wish it had pointed out something that I missed.

  • Need to post Full Load data (55,000 records) to the target system.

    Hi All,
    We get data from the SAP HR system and need to post it to a partner system, so we configured a Proxy (SAP) to File (partner) scenario. We need to append the data of each message to the target file. Since this is a very critical interface, we have used dedicated queues. The scenario works fine in D. When the interface was transported to Q, it was tested with a full load, i.e. 55,000 messages. All messages were processed successfully in the Integration Engine, but processing in the Adapter Engine took nearly 37 hours. We need to post all 55,000 records within 2 hours.
    The design of this interface is simple: we use a direct mapping and the size of each message is 1 KB, but we need to append all messages to one file on the target side. We are using the Advantco SFTP adapter as receiver and a proxy as sender.
    Could you please suggest a solution to process all 55,000 messages within 2 hours?
    Thanks,
    Soumya.

    Hi Soumya,
    I understand your scenario as: HR data has to be sent to a third-party system once a day. I guess they are synchronizing the 55,000 employee records in the third-party system with SAP HR data daily.
    I would design this scenario as follows:
    I would ask an ABAPer to write a program that runs at 12:00, picks up the 55,000 records from the SAP HR tables, and places them in one file. That file is written to the SAP HR file system (you can see it using AL11). At 12:30 a PI file channel picks up the file and transfers it to the third-party target system as-is, without any transformation: a file-to-file pass-through scenario (no ESR objects). Then ask the target system team to take the file and run their own program (they should have some SQL routines) to insert these records into the target system tables.
    If 55,000 records make a huge file on the SAP HR system, ask the ABAPer to split it into parts; PI will pick them up in sequence based on the file name.
    In this approach I would ask both the SAP HR (sender) and third-party (target) system people to be flexible. Otherwise, I would say it is not technically possible with the current PI resources. In my opinion, PI is middleware, not a system for heavy computation. If messages come from different systems, collecting them in the middleware makes sense; in your case, collecting a large number of messages from a single system at high frequency is not advisable.
    If the third-party target system people are not flexible, then go for a File-to-JDBC scenario. Ask the SAP HR ABAPer to split the input file into more files (10-15, which your PI system should be able to handle). On the receiver JDBC side, use native SQL; you need a Java mapping to construct the SQL statements in PI. Don't convert the flat file to the JDBC XML structure; in your case PI cannot handle such a huge XML payload.
    Note that a hardware upgrade is very difficult (you need a lot of approvals, depending on your client's processes) and very costly; in my experience a hardware upgrade takes 2-3 months.
    Regards,
    Raghu_Vamsee
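    For reference, a rough sketch of the extract program suggested above; the report name, file path, and PA0001 as the source table are illustrative assumptions:
        REPORT zhr_extract_daily.
        " Pull the HR records and write one line per employee to an
        " application-server file that AL11 and the PI file channel can see.
        DATA: lt_emp  TYPE STANDARD TABLE OF pa0001,
              ls_emp  TYPE pa0001,
              lv_line TYPE string,
              lv_file TYPE string VALUE '/interface/hr/hr_extract.dat'.
        SELECT * FROM pa0001 INTO TABLE lt_emp
          WHERE endda GE sy-datum.             " current records only
        OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
        LOOP AT lt_emp INTO ls_emp.
          CONCATENATE ls_emp-pernr ls_emp-ename INTO lv_line SEPARATED BY ';'.
          TRANSFER lv_line TO lv_file.
        ENDLOOP.
        CLOSE DATASET lv_file.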

  • Full load and eliminated records

    Hi gurus,
    I'm using the 0TR_LP_1 extractor to extract data into a BI 7 system.
    This extractor only supports full load.
    The InfoPackage does not have any restriction, and the same goes for the DTP.
    When I run the extraction, the number of records transferred (105) and the number of records added (30) are not the same, even though this is a full load.
    On what basis does the system actually eliminate records, and how can I investigate this further? I cannot verify the records myself, since they are dummy records created in ECC.
    In the source system I have run RSA3 and the number of records is 105.
    Please help.
    TQ

    Look at the transformation and check whether records are being filtered by routines.
    Aggregation of data in the source package may also reduce the total number of records,
    and omission of change-log entries can likewise change the record count.
    In R/3, RSA3 shows the complete change log of an order.
    For example, an order created at 8 am that has undergone several changes looks like this in RSA3:
    Creation       10304   Pen   19 qty   29.12.2009   8 am
    Modification   10304   Pen   18 qty   29.12.2009   9 am
    Modification   10304   Pen   17 qty   29.12.2009  10 am
    Modification   10304   Pen   15 qty   29.12.2009  11 am
    Modification   10304   Pen   25 qty   29.12.2009  12 pm
    The same replicates to the PSA.
    In BW, the data transferred will be 5 records and the number added will be 1:
    only the final version of the truth, "Modification 10304 Pen 25 qty 29.12.2009 12 pm", is loaded.

  • Delta update data error. Can we do a full load with bad records?

    Hello,
    We are working with SAP BW 7.0 and had a problem with a delta update. The delta update itself ran correctly; however, one record was loaded incorrectly even though it was correct in the source system.
    We reloaded just that record and now have it right in the master data, but we must now update the InfoCube. The data came in with a delta load, and to load only this record we must make a full load (with only one record) and then selectively delete the wrong record from the InfoCube.
    The problem is that we have doubts about how this would affect the delta loads, which are scheduled daily: we are afraid the source will lose track of where to start the next load, and we will have problems with the delta loads over the following days.
    Thank you.

    Hi,
    What is your delta extractor (LIS or not LIS), and what is your target (DSO or cube)?
    The procedure depends on your source and target, but it can be done in every case:
    If not LIS: just reload with a full InfoPackage to the PSA.
    If LIS: delete the setup tables and run a reconstruction for your record (with a selection condition if you can).
    If the target is a cube in BW: do a simple full upload.
    If the target is a DSO: do a full upload in repair mode (if the data flow is 3.x); otherwise just use a DTP.
    But if your target is a DSO that loads other InfoCubes afterwards, make sure those are not corrupted.
    Cyril

  • Record count is different for Full load and Init with data transfer

    Hi all,
    We have a DataSource for which delta is enabled on calendar day.
    If I run the full load, the record count is 2,670, but if I run the delta, the record count is 2,665; 5 records are missing.
    Could you please help with why those records are missing?

    Hi,
    For a full load, say you have 50 records: when you run it, it uploads all 50 records.
    If you then run a delta and no records have changed, it won't update anything, because there is no change in the existing records.
    A delta only brings records when there are changes to them.

  • Full load Concept

    Hi all,
    Can somebody explain to me why there are full loads that should not be deleted from the InfoCube?
    Theoretically, a full load brings all data from the source system, but we have full loads that only bring the data from a specific day, and therefore we need to keep them in the cube/ODS.
    Example:
    In our ODS we keep the full requests. If they are deleted, the data can no longer be reported.
    Now we only have data starting from 23.05, and even when we load a full, the data prior to this day is not extracted.
    The extractor is also not bringing in data from the past.
    Is this an extractor configuration, or is there some way to bring in all the data that is in the R/3 system?
    Thanks,
    Marta.

    Hi Marta,
    Yes, you are very much correct. A full load pulls in all the data from the source system, but only for the restrictions specified in the InfoPackage.
    Now say we are doing a full load to a cube and also to an ODS, with no selection restrictions.
    As you know, the ODS has overwrite functionality, while the cube does not.
    If we run the full load repeatedly, the same records are pulled every time. In the ODS, if we have set "overwrite" for data values in the update rule, the records are simply overwritten and we keep only the latest data. But in the cube, all the key figures keep adding up as the same data is loaded multiple times.
    Hence, in this case we delete the earlier requests for a cube. If instead we do full loads with selection restrictions (say, quarter-wise or month-wise), we do not delete the earlier requests, provided the selection restrictions are mutually exclusive.
    And for the extractor issue, could you please elaborate on that?
    Hope it helps.

  • Full Load" and "Full load with Repair full request"

    Hello Experts,
    Can anybody share with me the difference between a "full load" and a "full load with repair full request"?
    Regards.

    Hi,
    What is the function of a full repair, and what does it do?
    How do I delete an init from the scheduler? I don't see any such option in the InfoPackage.
    For both of your questions there is OSS note 739863 (Repairing data in BW). Read the following:
    Symptom
    Some data is incorrect or missing in the PSA table or in the ODS object (Enterprise Data Warehouse layer).
    Other terms
    Restore data, repair data
    Reason and Prerequisites
    There may be a number of reasons for this problem: errors in the relevant application, errors in the user exit, errors in the DeltaQueue, handling errors in the customer's posting procedure (for example, a change in the extract structure during production operation while the DeltaQueue was not yet empty, or postings before the delta init was completed), extractor errors, unplanned system terminations in BW and in R/3, and so on.
    Solution
    Read this note in full BEFORE you start actions that may repair your data in BW. Contact SAP Support for help with troubleshooting before you start to repair data.
    BW offers you the option of a full upload in the form of a repair request (as of BW 3.0B). If you want to use this function, we recommend that you use the ODS object layer.
    Note that you should only use this procedure if you have a small number of incorrect or missing records. Otherwise, we always recommend a reinitialization (possibly after a previous selective deletion, followed by a restriction of the Delta-Init selection to exclude areas that were not changed in the meantime).
    1. Repair request: Definition
    If you flag a request as a repair request with full update as the update mode, it can be updated to all data targets, even if these already contain data from delta initialization runs for this DataSource/source system combination. This means that a repair request can be updated into all ODS objects at any time without a check being performed. The system supports loading by repair request into an ODS object without a check being performed for overlapping data or for the sequence of the requests. This action may therefore result in duplicate data and must thus be prepared very carefully.
    The repair request (of the "Full Upload" type) can be loaded into the same ODS object in which the 'normal' delta requests run. You will find this request under the "Repair Request" option in the InfoPackage (Maintenance) menu.
    2. Prerequisites for using the "Repair Request" function
    2.1. Troubleshooting
    Before you start the repair action, you should carry out a thorough analysis of the possible cause of the error to make sure that the error cannot recur when you execute the repair action. For example, if a key figure has already been updated incorrectly in the OLTP system, it will not change after a reload into BW. Use transaction RSA3 (Extractor Checker) in the source system for help with troubleshooting. Another possible source of the problem may be your user exit. To ensure that the user exit is correct, first load a Probe-Full request into the PSA table and check whether the data is correct. If it is not correct, search for the error in the user exit. If you do not find it, we recommend that you deactivate the user exit for testing purposes and request a new full upload. If the data then arrives correctly, it is highly probable that the error is indeed in the user exit.
    We always recommend that you load the data into the PSA table in the first step and check the result there.
    2.2. Analyze the effects on the downstream targets
    Before you start the Repair request into the ODS object, make sure that the incorrect data records are selectively deleted from the ODS object. However, before you decide on selective deletion, you should read the Info Help for the "Selective Deletion" function, which you can access by pressing the extra button on the relevant dialog box. The activation queue and the ChangeLog remain unchanged during the selective deletion of the data from the ODS object, which means that the incorrect data is still in the change log afterwards. After the selective deletion, you therefore must not reconstruct the ODS object if it is reconstructed from the ChangeLog. (Reconstruction is usually from the PSA table but, if the data source is the ODS object itself, the ODS object is reconstructed from its ChangeLog). You MUST read the recommendations and warnings about this (press the "Info" button).
    You MUST also take into account the fact that the delta for the downstream data targets is created from the changelog. If you perform selective deletion and then reload data into the deleted area, this may result in data inconsistencies in the downstream data targets.
    If you only use MOVE and do not use ADD for updates in the ODS object, selective deletion may not be required in some cases (for example, if incorrect records only have to be changed, rather than deleted). In this case, the DataMart delta also remains intact.
    2.3. Analysis of the selections
    You must be very precise when you perform selective deletion: Some applications do not provide the option of selecting individual documents for the load process. Therefore, you must first ensure that you can load the same range of documents into BW as you would delete from the ODS object. This note provides some application-specific recommendations to help you "repair" the incorrect data records.
    If you updated the data from the ODS object into the InfoCube, you can also delete it there using the "Selective deletion" function. However, if it is compressed at document level there and deletion is no longer possible, you must delete the InfoCube content and fill it from the ODS object again after the repair.
    You can only perform this action after a thorough analysis of all effects of selective data deletion. We naturally recommend that you test this first in the test system.
    The procedure generally applies for all SAP applications/extractors. The application determines the selections. For example, if you cannot use the document number for selection but you can select documents for an entire period, then you are forced to delete and then update documents for the entire period in the data target. Therefore, it is important to look carefully at the selections in the InfoPackage before you delete data from the data target.
    Some applications have additional special features:
    Logistics cockpit: As preparation for the repair request, delete the SetUp table (if you have not already done so) and fill it selectively with concrete document numbers (or other possible groups of documents determined by the selection). Execute the Repair request.
    Caution: You can currently use the transactions that fill SetUp tables with reconstruction data to select individual documents or entire ranges of documents (at present, it is not possible to select several individual documents if they are not numbered in sequence).
    FI: The Repair request for the Full Upload is not required here. The following efficient alternatives are provided: In the FI area, you can select documents that must be reloaded into BW again, make a small change to them (for example, insert a period into the assignment text) and save them -> as a result, the document is placed in the delta queue again and the previously loaded document under the same number in the BW ODS object is overwritten. FI also has an option for sending the documents selectively from the OLTP system to the BW system using correction programs (see note 616331).
    3. Repair request execution
    How do you proceed if you want to load a repair request into the data target? Go to the maintenance screen of the InfoPackage (Scheduler), set the type of data upload to "Full", and select the "Scheduler" option in the menu -> Full Request Repair -> Flag request as repair request -> Confirm. Update the data into the PSA and then check that it is correct. If the data is correct, continue to update into the data targets.
    Also search the forum; you will find discussions on this, for example "Full repair loads" and "Regarding Repair Full Request".
    Instead of doing all these steps, can't I just reload the failed request?
    If something goes wrong with delta loads, it is always better to re-init: delete the init flag, run a full repair, all those steps. If the target is an InfoCube, you can also go for a full update instead of a full repair.
    Full upload:
    In a full upload all the data records are fetched; it is similar to a full repair. For an InfoCube, we can run a full upload to recover missed delta records. An ODS, however, does not support full upload and delta upload in parallel, so in that case you have to go for a full repair; otherwise the delta mechanism gets corrupted.
    Suppose your ODS activation is failing because there is a full upload request in the target: you can then convert the full upload to a full repair using the program RSSM_SET_REPAIR_FULL_FLAG.
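    For example, the conversion report can be run from SE38; a minimal sketch, assuming the request to convert is entered on the report's selection screen:
        " Run the standard conversion report, then return to the caller.
        SUBMIT rssm_set_repair_full_flag VIA SELECTION-SCREEN AND RETURN.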
    Hope this helps.
    Thanks = points, as per SDN.
    Regards,
    Debjani

  • What is the difference between full load and delta load in DTP?

    Hi,
    I am trying to load data into a cube from another cube using a DTP.
    There are two DTPs:
    1: DTP with full load
    2: DTP with delta load
    What is the difference between these two in a DTP?
    Can somebody please help me?

    1: A DTP with full load updates all the requests in the PSA/source to the target.
    2: A DTP with delta load updates only new requests to the data target.
    The system does not identify new data by changed records but by request; that is why you have the data mart status to indicate whether a request has already been loaded into further data targets.

  • Error in process chain for PCA full load

    Hello everyone,
    I'm trying to use a process chain to delete a previous full load of plan data in a cube prior to the new load (to avoid double records). The successor job in the process chain loads a delta of actual data into the same cube (same InfoSource).
    When the process chain (and the included full-load InfoPackage) is executed, the InfoPackage setting "Automatic loading of similar/identical requests from info cube" does not work (I have ticked "full or init, data-/infosource are the same").
    I have checked that the function itself works, as I have executed the InfoPackage manually with success. So the problem is somehow the chain.
    In the chain I just execute the InfoPackage as usual, so to my understanding it should work the same way as when I execute it manually. Or am I wrong? Is some additional setting required in the chain to make it work?
    Any ideas?
    Thanks,
    Fredrik

    Hi Fredrik,
    not all settings in InfoPackages work in chains the same way they do when running the package manually. Usually you can check this by pressing F1 on the setting. In your case, you need to add a process type for deleting the data to the chain. In the chain maintenance, look at the process types and then at the load processes; there you will find the type you need (it deletes overlapping requests from the InfoCube).
    kind regards
    Siggi

  • Errors: ORA-00054 & ORA-01452 while running DAC Full Load

    Hi Friends,
    Previously I ran a full load and it went well, and I also built some sample reports in BI Apps 7.9.6.2.
    Now I have modified a few parameters as per the business and am trying to run the full load again, but I am stuck with a few similar errors. I have already cleared a couple of DB errors.
    Please help me solve the errors below.
    1. ANOMALY INFO::: Error while executing : TRUNCATE TABLE:W_SALES_BOOKING_LINE_F
    MESSAGE:::com.siebel.etl.database.IllegalSQLQueryException: DataWarehouse:TRUNCATE TABLE W_SALES_BOOKING_LINE_F
    ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
    -- I checked W_SALES_BOOKING_LINE_F; it contains data.
    2. ANOMALY INFO::: Error while executing : CREATE INDEX:W_GL_REVN_F:W_GL_REVN_F_U1
    MESSAGE:::java.lang.Exception: Error while execution : CREATE UNIQUE INDEX
         W_GL_REVN_F_U1
    ON
         W_GL_REVN_F
    (
         INTEGRATION_ID ASC
         ,DATASOURCE_NUM_ID ASC
    )
    NOLOGGING
    with error DataWarehouse:CREATE UNIQUE INDEX
         W_GL_REVN_F_U1
    ON
         W_GL_REVN_F
    (
         INTEGRATION_ID ASC
         ,DATASOURCE_NUM_ID ASC
    )
    NOLOGGING
    ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
    -- Yes, I found duplicate values in this table, W_GL_REVN_F. But how can I rectify it? I tried a few approaches, but failed.
    Please tell me the steps to achieve this.
    Thanks in advance..
    Stone

    Hi, please see the answers below each error.
    1. ANOMALY INFO::: Error while executing : TRUNCATE TABLE:W_SALES_BOOKING_LINE_F
    MESSAGE:::com.siebel.etl.database.IllegalSQLQueryException: DataWarehouse:TRUNCATE TABLE W_SALES_BOOKING_LINE_F
    ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
    -- I checked W_SALES_BOOKING_LINE_F; it contains data.
    Just restart the load. It seems your DB processes are busy and the table still has a lock on it, which means something has not yet been committed or rolled back.
    If this issue repeats, mail your DBA and ask them to look into the lock.
    2. ANOMALY INFO::: Error while executing : CREATE INDEX:W_GL_REVN_F:W_GL_REVN_F_U1
    MESSAGE:::java.lang.Exception: Error while execution : CREATE UNIQUE INDEX
         W_GL_REVN_F_U1
    ON
         W_GL_REVN_F
    (
         INTEGRATION_ID ASC
         ,DATASOURCE_NUM_ID ASC
    )
    NOLOGGING
    with error DataWarehouse:CREATE UNIQUE INDEX
         W_GL_REVN_F_U1
    ON
         W_GL_REVN_F
    (
         INTEGRATION_ID ASC
         ,DATASOURCE_NUM_ID ASC
    )
    NOLOGGING
    ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
    -- Yes, I found duplicate values in this table, W_GL_REVN_F. But how can I rectify it? I tried a few approaches, but failed.
    Please tell me the steps to achieve this.
    Please execute this SQL to get the duplicate values. If the count is small, you can delete the records based on ROW_WID.
    How many duplicates do you have in total?
    1. SELECT INTEGRATION_ID, DATASOURCE_NUM_ID, COUNT(*)
         FROM W_GL_REVN_F
         GROUP BY INTEGRATION_ID, DATASOURCE_NUM_ID
         HAVING COUNT(*) > 1;
    2. SELECT ROW_WID, DATASOURCE_NUM_ID, INTEGRATION_ID
         FROM W_GL_REVN_F
         WHERE INTEGRATION_ID = (a value from the 1st query);
    3. DELETE FROM W_GL_REVN_F WHERE ROW_WID = (a value from the 2nd query);
    Hope this helps !!

  • How to do a full load in DAC for a particular Module?

    Hi,
    We have one execution plan loading all the BI Analytics modules as of now:
    1) Financials,
    2) Procurement and Spend,
    3) Supply Chain and Order Management, and
    4) EAM.
    The issue is that if I go to Tools --> ETL Management --> Reset Data Sources in DAC, it resets the refresh dates for all the tables in the warehouse (and hence a full load of all the modules happens when I start the execution plan).
    I don't want a full load for the other modules; I just want to do a full load for one particular module, say the EAM module (and I also want to make sure that this full load doesn't create issues with the data of the other modules).
    Let me know how to achieve this.
    Any help in this regard would be highly appreciated.
    Thanks
    Ashish

    I can get a list of all the facts and dimensions for a particular module from the BI Apps Content Guide, and then go and set the refresh dates to null for those tables.
    Now, should I run the execution plan for that one particular module, or an execution plan containing all the modules?
    I ask because I got this from someone:
    "The data models for the different modules have many common dimensions, so a reload of one module could leave the facts in another in an invalid state, as the dimension keys would be moved.
    Therefore you should not expect to be able to reload a module in isolation.
    However, you can selectively reload any table by resetting its refresh date in the DAC console.
    The DAC will then determine the dependent objects that also need a full load in order to maintain data consistency."
    Thanks
    Ashish
