Why full load?

Hi,
Can someone throw some light on why a full load is required when an init load is serving the purpose? Init supports the delta as well, whereas a full load does not.
Thanks in advance.
Regards

Dear Snigdha,
A full load brings in the data for our selections (no sequence is required) and does not trigger any delta. You init only the selections for which you need delta data to flow. In certain scenarios, for example where the data volume is very large and deltas are only needed for the current year's changes, it is wise to do full loads up to last year and run an init just for this year so that the changes flow correspondingly. Also, as you know, not all sources support delta (master data, for instance), so for those we need a full load in the InfoPackage. That is why you are given the option; how effectively you use the full load option along with init depends on the scenario.
Hope we have given you the idea you need.
Thanks,
Krish

Similar Messages

  • Adobe reader suddenly full load

    No error message, no Windows log error. I don't know why the full load happens... has anyone seen this before?
    Thanks.

    It turned out my Windows installation had the problem, not Adobe Reader.
    Thanks.

  • Full load from a DSO to a cube processes less records than available in DSO

    We have a scenario where every Sunday I have to make a full load from a DSO with on-hand stock information to a cube, where I register a counter at material and store level if there is stock available.
    The DTP has no filters at all and has a semantic group on 0MATERIAL and 0PLANT.
    The key in the DSO is:
    0MATERIAL
    0PLANT
    0STOCKTYPE
    0STOR_LOC
    0BOM
    of which only 0MATERIAL, 0PLANT and 0STOR_LOC are later used in the transformation.
    As we had a growing number of records, we decided to delete in the start routine all records where the inventory is not greater than zero, thus eliminating zero and negative inventory records.
    Now comes the funny part of the story:
    Prior to these changes I would [in a test system, just copied from PROD] read some 33 million records and write out the same amount. Of course, after the change we expected to write out less. To my total surprise, with the same unchanged DTP I was now reading 45 million records, while writing out fewer records as expected.
    When checking the number of records in the DSO I found the 45 million, but I cannot explain why the loads before only retrieved some 33 million from the same unchanged set of records.
    When checking in PROD - same result: we have some 45 million records in the DSO, but when we do the full load from the DSO to the cube, the DTP only processes some 33 million.
    What am I missing - is there some compression going on? Why would the number of records in a DSO differ from the number of records processed in the DataPackages when I am making a FULL load without any filter restrictions and only a semantic grouping in place for part of the DSO key?
    Any idea or thought is appreciated.

    Thanks Gaurav.
    I did check whether any loads were done in between - there were none in the test system. As I mentioned, it was a new copy from PROD to TEST, and I compared the number of entries in the DSO: it seems to match between TEST and PROD (OK, a few more in PROD, but they can be accounted for). In TEST I loaded the day before the changes were imported to have a comparison, and between that load and the one after the changes were imported nothing in the DSO was changed.
    Both DTPs, in TEST and PW2, load from the activated DSO [without archive]. The DTPs were not changed in quite a while, so I ruled that out. Same with activation of data in the DSO: this DSO gets loaded and activated in PROD daily via process chain, and we load daily deltas into the cube in question. Only on Sundays, for the beginning of the new week/fiscal period, do we need to make a full load to capture all materials per site with inventory. The deltas loaded during the week are less than 1 million records, but the difference between the number of records in the DSO and the amount processed in the DataPackages is more than 10 million per full load, even in PROD.
    I really appreciate the knowledgeable answer; I just wish you had pointed out something that I had missed.

  • Error in Transaction Data - Full Load

    Hello All,
        This is the current scenario that I am working on:
    There is a process chain which has two transaction data load (FULL LOAD) processes to the same cube. In the process monitor everything seems okay (the data loads seem fine), but the overall status for both loads is failed due to 'Error in source system/extractor', and it says 'error in data selection'.
    Processing is set to data targets only.
    On doing a Manage on the cube, I found 3 old requests that were red but NOT set to QM status red. So I set them to QM status red and deleted them, and the difference I saw was that the subsequent requests became available for reporting.
    Now this data load, which is a full load, takes forever - and I don't even know why I do not see an "initialize delta update" option there. Can anyone tell me why I don't see that?
    And, coming to the main question, how do I get the process chain completed - will I have to repeat the data loads, or what options do I have to get a successfully running process chain, or at least these 2 full loads of transaction data?
    Thank you - points will be assigned for helpful answers
    - DB
    Edited by: Darshana on Jun 6, 2008 12:01 AM
    Edited by: Darshana on Jun 6, 2008 12:05 AM

    One interesting discovery I just made in R/3 was this job log with respect to the above process chain:
    it says that the job was cancelled in R/3 because the material ledger currencies were changed.
    The process chain is for inventory management, and the data load processes whose jobs get cancelled in the source system are:
    1. Material Valuation: period ending inventories
    2. Material Valuation: prices
    The Performance Assistant says the following, but I am not sure how far I can go on the R/3 side to rectify this:
    Material ledger currencies were changed
    Diagnosis
    The currencies currently set for the material ledger and the currency types set for valuation area 6205 differ from those set at conversion of the data (production startup).
    System Response
    The system does not allow you to post transactions after changing the currency settings to ensure consistency.
    Procedure
    Replace the current settings with those entered at production start-up.
    If you wish to change the currency settings, you must use programs to convert data from the old to the new currencies.
    Inform your system administrator.
    Anyone knowledgable in this area please give your inputs.
    - DB

  • Problems with Full loads/Decreased query performance in Prod

    We have a table which serves as a base for a complex view. The table has roughly 10 million records and it is a daily full load (I know that delta loads are much better for handling large sets of data, but this information is very dynamic, and with the business time constraints and project deliverables the best decision was to do a full load).
    This is the process we follow:
    > Drop Indexes (All columns individual indexes which are used inside the complex view as joins)
    > Truncate table
    > Load data
    > Recreate indexes.
    All the above steps are performed from SAP Data Services through scripts and the sql() function to execute the commands, with no manual intervention whatsoever.
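    For reference, the sequence issued by those sql() calls would look roughly like the sketch below (table and index names are hypothetical, not taken from the original post):

        -- hypothetical names; roughly the statements executed via sql()
        DROP INDEX idx_base_col1;
        DROP INDEX idx_base_col2;

        TRUNCATE TABLE base_table;

        -- ... the Data Services data flow loads base_table here ...

        CREATE INDEX idx_base_col1 ON base_table (col1);
        CREATE INDEX idx_base_col2 ON base_table (col2);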
    After the job completes successfully, the view doesn't respond at all (it sits there forever). The same job, run across the same volume of production data in the Test environment, performs much faster.
    The only way I can get the view to refresh is to manually log into SQL Developer, drop all the indexes on the parent table, and re-create them in the same order as the Data Services script. It then performs very well until the next load (the next morning).
    Any suggestions would be very helpful.
    My main question is: why does it run fast when I drop and recreate the indexes manually, but not complete when the indexes are created by the sql() calls from the Data Services tool?
    Tried so far:
    Explain plan (in Dev, Test, Prod): the query cost varied across environments but returned results in the same times (in Production, after manual index creation).
    Tuning Advisor (only in Test): the DBA evaluated it to be good.
    Thanks
    Nash
    DB Version Oracle 11.0.7
    Dataservices 3.2

    BluShadow and Harman
    Thank You!
    I'm using a regular view, not a materialized view. And yes, the query plan is completely different between Test and Production: in Test the query was running entirely on hash joins, whereas in Production it uses nested loop joins in the execution plan.
    I will try to gather statistics after the load and, as per BluShadow, will look into writing a function that makes the call to the database.
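    For reference, gathering statistics on the base table after the load could look like the sketch below (schema and table names are hypothetical; this is only one possible way to do it):

        -- gather table and index statistics after the load completes
        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(
            ownname => USER,          -- owning schema
            tabname => 'BASE_TABLE',  -- hypothetical table name
            cascade => TRUE);         -- also gather statistics on its indexes
        END;
        /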
    Thank you all for taking the time. I will try to test this out starting today and will extend the tests over a couple of days.
    Regards
    Nash

  • Cube full load failed

    Hi Experts,
    When I am loading the data from an ODS to a cube (full load), the load fails due to the error below:
    Short dump in the Warehouse
    Diagnosis
    The data update was not completed. A short dump has probably been logged in BW providing information about the error.
    System response
    "Caller 70" is missing.
    Further analysis:
    Search in the BW short dump overview for the short dump belonging to the request. Pay attention to the correct time and date on the selection screen.
    You get a short dump list using the Wizard or via the menu path "Environment -> Short dump -> In the Data Warehouse".
    Error correction:
    Follow the instructions in the short dump.
    I checked in ST22, and the error analysis says:
    An exception occurred. This exception is dealt with in more detail below. The exception, which is assigned to the class 'CX_SY_OPEN_SQL_DB', was neither caught nor passed along using a RAISING clause in the procedure "WRITE_ICFA" (FORM). Since the caller of the procedure could not have expected this exception to occur, the running program was terminated.
    The reason for the exception is: the database system recognized that your last operation on the database would have led to a deadlock. Therefore, your transaction was rolled back to avoid this. ORACLE always terminates any transaction that would result in a deadlock. The other transactions involved in this potential deadlock are not affected by the termination.
    Can anyone please tell me why the error occurred and how to resolve it?
    Thanks in advance
    David

    David,
    It appears that there was a table lock when you executed your DTP. This means that more than one process was accessing one of the tables used by the DTP at the same time, which resulted in the error. Check your process chains once.
    For now, delete the request in the InfoCube and reload.
    -VA

  • Full load Concept

    Hi all,
    Can somebody explain to me why there are full loads that should not be deleted from the InfoCube?
    Theoretically, a full load brings all data from the source system, but we have fulls that only bring the data from a specific day, and therefore we need to keep them in the cube/ODS.
    Example:
    In our ODS we keep the full requests. If they are deleted, the data can no longer be reported.
    Now we only have data starting from 23.05, and even though we load a full, the data prior to this day is not being extracted.
    The extractor is also not bringing in the data from the past.
    Is this an extractor configuration issue, or is there some way to bring in all the data that is in the R/3 system?
    Thanks,
    Marta.

    Hi Marta,
    Yes, you are very much correct. A full load pulls in all the data from the source system, but only for the restrictions mentioned in the InfoPackage.
    Now say we are doing a full load to a cube and also to an ODS, and say we don't have any selection restrictions.
    As you know, the ODS has overwrite functionality, while the cube does not.
    Now say we do the full load repeatedly: the same records will be pulled every time. In the ODS, if we have set "overwrite" for data values in the update rules, the records are simply overwritten and we keep only the latest data. But in the cube, the key figures keep getting added up as the same data is loaded multiple times.
    Hence in this case we delete the earlier requests from the cube. Now say we do full loads with selection restrictions (say, quarter-wise or month-wise): in that case we don't delete the earlier requests, provided the selection restrictions are mutually exclusive.
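    To make the overwrite-versus-additive behaviour concrete, here is a loose, purely illustrative sketch in plain SQL (hypothetical table and field names; this is not how BW implements it internally):

        -- an overwrite table (stands in for the ODS active table) and an additive one (stands in for the cube)
        CREATE TABLE ods_active (material VARCHAR2(18) PRIMARY KEY, qty NUMBER);
        CREATE TABLE cube_fact  (material VARCHAR2(18), qty NUMBER);

        -- first full load: material M1 with quantity 100
        MERGE INTO ods_active t
        USING (SELECT 'M1' AS material, 100 AS qty FROM dual) s
        ON (t.material = s.material)
        WHEN MATCHED THEN UPDATE SET t.qty = s.qty
        WHEN NOT MATCHED THEN INSERT (material, qty) VALUES (s.material, s.qty);
        INSERT INTO cube_fact VALUES ('M1', 100);

        -- second, identical full load (no request deleted in between): run the same
        -- MERGE and INSERT again
        -- ods_active still holds one row with qty = 100 (overwritten)
        -- SELECT SUM(qty) FROM cube_fact WHERE material = 'M1';  --> 200 (key figure doubled)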
    And for the extractor issue, could you please elaborate on that?
    Hope it helps.

  • InfoSpoke Delta - Full loads

    Hello All,
    I have an Infospoke that's in PROD with Delta update sourcing from an ODS.
    We have planned a full load.
    With selection option 06-31-2005 to 06-31-2007
    We changed the update mode from delta to full, started the full load on 06-01-2007, and completed it successfully on 06-09-2007.
    We restored the delta on 06-22-2007, once the data had been consumed from the /BIC/OHXXXX tables (the full load was huge).
    During this full load the ODS kept being fed with deltas (the load to the ODS was not stopped).
    Users are complaining that delta data is missing for the period (06/01 - 06/09) in which the full load was performed.
    Can someone throw some light on why the delta data is missing only for that period? Is not putting the ODS load on hold the reason for this failure, which messed up the delta pointer?
    Full points assured.

    Hi,
    For enabling the delta load, check the link below for step-by-step instructions:
    http://help.sap.com/saphelp_nw04s/helpdata/en/44/97433e99ee70dbe10000000a1553f6/frameset.htm
    It describes:
    1. Delta Load Management Framework Overview
    2. Enabling Delta Load
    3. Initializing Delta Load

  • Infospoke error for full load

    Experts
    When I run the full load InfoSpoke through a process chain, it fails with SQL error 1652. Can you please tell me why this is happening? It has only been happening for the last two days. Is it because of a large volume of data?
    Any advice greatly appreciated.
    Many thanks

    This error occurs due to a temp space problem: the database is unable to extend a segment in the TEMP tablespace.
    You need to alter the TEMP tablespace default storage to (pctincrease 1) and immediately afterwards alter it back to (pctincrease 0), as sketched below.
    Check with your DBA before doing this.
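    A minimal sketch of those two statements (note that they apply to dictionary-managed tablespaces; on a locally managed TEMP tablespace the DBA would typically add or resize a tempfile instead):

        ALTER TABLESPACE temp DEFAULT STORAGE (PCTINCREASE 1);
        -- ...and immediately afterwards set it back:
        ALTER TABLESPACE temp DEFAULT STORAGE (PCTINCREASE 0);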
    Regards
    Manga(Assign points if it helps!!)

  • Executing Full load for one task

    Hi Gurus,
    The execution plan failed with some (unique constraint) errors. Now I have decided to run a full load for the failed task. Is that possible? If so, can someone give me the exact steps to follow?
    Regards,
    Nagarjuna

    If you are getting a unique constraint error, you can change the index to non-unique via the DAC, let the load complete, and then debug why you are getting duplicate data. Forcing it to a full load implies that the source is not providing duplicates but the incremental loads are, which may be a different issue than the source data being bad.
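    To find the offending rows before switching the index back to unique, a query along these lines can help (table and key column names are hypothetical; use the columns of the failing unique index):

        -- list key values that occur more than once in the target table
        SELECT integration_id, COUNT(*) AS cnt
        FROM   w_sample_f
        GROUP  BY integration_id
        HAVING COUNT(*) > 1;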

  • Full load works, but delta fails - "Error in the Extractor"

    Good morning,
    We are using datasource 3FI_SL_ZZ_SI (Special Ledger line items) to load a cube, and are having trouble with the delta loads.  If I run a full load, everything runs fine.  If I run a delta load, it will initially fail with an error that simply states "Error in the Extractor" (no long text).  If I repeat the delta load, it completes successfully with 0 records returned.  If I then rerun the delta, I get the error again.
    I've run extractions using RSA3, but they work fine - as I would expect since the full loads work.  Unfortunately, I have not been able to find why the deltas aren't working.  After searching the Forums, I've tried replicating the datasource, checked the job log in R/3 (nothing), and run the program RS_TRANSTRU_ACTIVATE_ALL, all to no avail.
    Any ideas?
    Thanks
    We're running BW 3.5, R/3 4.71

    And it's just that easy....
    Yes, it appears this is what the problem was.  I'd been running the delta init without data transfer, and it was failing during the first true delta run.  Once I changed the delta init so that it transferred data, the deltas worked fine.  This was in our development system.  I took a look in our production system where deltas have been running for quite some time, and it turns out the delta initialization there was done with data transfer. 
    Thank you very much!

  • Source for Init with data transfer and full load

    Hi Experts,
    1. Please tell me where the data comes from when we do the
        following actions:
        a) Init with data transfer
        b) Full load
    I want to know the source of the data, i.e. the setup table, the main table, or some other table.
    It would be helpful if you could provide the data flow.
    Kindly also tell me which is preferable and why.
    Regards,
    RG

    Hi,
    When you do an init with data transfer, it reads data from the setup table only, but besides that it also sets the init flag. Because of this, from then on any new records go to the delta queue, and the next time you run a delta load it picks up the records from the delta queue.
    Now suppose the data is already available on the BW side; then you can run an init without data transfer. This only sets the init flag; it does not pick up any records.
    The extraction structure gets its data from the setup table in the case of a full upload, and from the delta queue in the case of a delta load.
    Regards,
    Debjani
    Edited by: Debjani  Mukherjee on Sep 20, 2008 11:45 AM

  • Delta and Full load from a single Delta ODS

    I have a delivered ODS, 0GT_DS01, that is loaded from R/3 as a delta update. This ODS is then loaded to the delivered InfoCube 0GT_C01 as a delta.
    My question is this: can I create update rules and an InfoPackage to do a full load from the 0GT_DS01 ODS to my custom ODS ZGT_DS01?
    I seem to remember from my BW310 class that in release 3.5, if I have an ODS-to-InfoCube delta update, all other updates from that ODS must be deltas as well.
    Regards,
    Mike...

    Siva,
    I created my update rule without a problem.
    I created the InfoPackage with my ZGT_DS01 ODS as the target, set the update mode to FULL, and saved the InfoPackage. After saving I went to the drop-down menu, Scheduler ==> Initialization For This DataSource, and there is a Delta present.
    The InfoPackage works OK when I run it from RSA1, but when I put the InfoPackage in a process chain I get the error "Delta upload is only possible after successful initialization". What I see in the Manage screen for my ZGT_DS01 ODS is a request for a full load followed by a request for a delta.
    Any ideas why the InfoPackage works in RSA1 but not in a process chain?
    Regards,
    Mike...

  • Delete and full load into ODS versus Full load into ODS

    Can someone please advise whether, in the context of a DSO populated with the overwrite function, the FULL LOAD in the following two options (1 and 2) would take the same amount of time to execute, or whether one would take less time than the other. Please also explain the rationale behind your answer.
    1. Delete of the data target contents of ODS A, followed by a FULL LOAD into ODS A, followed by activation of the ODS.
    2. Plain FULL LOAD into ODS A (i.e. without the preceding delete data target contents step), followed by activation of the ODS.
    Repeating the question: would the FULL LOAD in 1 take the same time as the FULL LOAD in 2, and if one takes less time than the other, why so?
    Context: normal 3.5 ODS with separate change log, new and active record tables.

    Hi,
    I think the main difference is that with the "delete and full load" process you only get the data from the full load inside the DSO. With a plain full load, the system uses modify logic: overwrite for all records already in the DSO, and insert for all new records that are not yet in the DSO (roughly as sketched below).
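    A loose, purely illustrative sketch of the two options in plain SQL (hypothetical table and field names; in BW the actual work happens in the DSO new/active tables during activation):

        -- Option 1: delete the data target contents, then full load into an empty table
        DELETE FROM dso_active;
        INSERT INTO dso_active (material, qty)
          SELECT material, qty FROM new_full_load;

        -- Option 2: plain full load on top of the existing contents (modify logic)
        MERGE INTO dso_active t
        USING new_full_load s
        ON (t.material = s.material)
        WHEN MATCHED THEN UPDATE SET t.qty = s.qty        -- overwrite keys already present
        WHEN NOT MATCHED THEN INSERT (material, qty) VALUES (s.material, s.qty);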
    Regards
    Juergen

  • DTP Delta and Full Load...

    Dear All,
    I'm working with SAP BW 7.3 and I have a standard data flow, starting with a DataSource, a DSO and an InfoCube. My process chain faced an error and for the last week or so I could not fetch deltas. To remove the error I deleted all records from the InfoCube, since I had all the requests available in the DSO. Then I manually ran a full load from the DSO to the InfoCube. But the next day, when my process chain executed, it brought all previous and new records into the InfoCube again; the delta was properly fetched into the DSO, but it did not come into the InfoCube as it did before. When I checked the active and change log tables of the DSO, the two had the same number of records.
    1. What could be the reason that the delta DTP between the DSO and the InfoCube is no longer fetching only delta records, when it was fetching delta records before the full load?
    2. Have I made a mistake or missed a setting in the DTP while executing the full load DTP between the DSO and the InfoCube?
    3. What are the following options in the Extraction tab of the DTP for, and when do we use them and why:
        i.   Active Table (with Archive)
        ii.  Active Table (Without Archive)
        iii. Archive (Full Extraction only)
        iv.  Change Log
    I will appreciate your reply.
    Many thanks!
    Tariq Ashraf

    Hi Tariq,
    Please check the below, for "Delta Init. Extraction From" in the DTP:
    Active Table (with Archive): the data is read from the DSO active table and from the archived data.
    Active Table (Without Archive): the data is only read from the active table of the DSO. If there is data in the archive or in near-line storage at the time of extraction, this data is not extracted.
    Archive (Full Extraction Only): the data is only read from the archive data store. Data is not extracted from the active table.
    Change Log: the data is read from the change log and not from the active table of the DSO.
    Hope this answers your question. Let me know if anything further is required.
    Regards
    Ramesh V
