Product Full Load Syndication

Hi All,
What is the best process to syndicate a full load from MDM?
We have more than 40 thousand products in the repository. Whenever there is a requirement to do a full load, we syndicate all the products to a file and send it to XI, which then posts the messages to ECC as IDocs. This process takes 10+ hours before the updated products are visible in ECC.
The current requirement is to find a better way to do this so that the processing time can be reduced.
Please let me know how this could be achieved.
Thanks and Regards,
KLK

Hi K.L.K,
Please try using LSMW for uploading the data.
Hope it helps.
Thank you,
Priti

Similar Messages

  • Error in Transaction Data - Full Load

    Hello All,
        This is the current scenario that I am working on:
    There is a process chain which has two transaction data load (full load) processes to the same cube. In the process monitor everything seems okay (the data loads look fine), but the overall status for both loads failed due to 'Error in source system/extractor', and it says 'error in data selection'.
    Processing is set to data targets only.
    On doing a manage on the cube, I found 3 old requests that were red and NOT set to QM status red. So I set them to QM status red and deleted them, and the difference I saw was that the subsequent requests became available for reporting.
    Now this data load, which is a full load, takes forever - and I don't even know why I do not see an 'initialize delta update' option there. Can anyone tell me why I don't see that?
    And, coming to the main question: how do I get the process chain completed? Will I have to repeat the data loads, or what options do I have to get a successfully running process chain, or at least these 2 full loads of transaction data?
    Thank you - points will be assigned for helpful answers
    - DB

    One interesting discovery I just found in R/3 was this job log with respect to the above process chain:
    it says that the job was cancelled in R/3 because the material ledger currencies were changed.
    The process chain is for inventory management, and the data load processes that get cancelled in the source system are:
    1. Material Valuation: period ending inventories
    2. Material Valuation: prices
    The performance assistant says the following, but I am not sure how far I can work on the R/3 side to rectify this:
    Material ledger currencies were changed
    Diagnosis
    The currencies currently set for the material ledger and the currency types set for valuation area 6205 differ from those set at conversion of the data (production startup).
    System Response
    The system does not allow you to post transactions after changing the currency settings to ensure consistency.
    Procedure
    Replace the current settings with those entered at production start-up.
    If you wish to change the currency settings, you must use programs to convert data from the old to the new currencies.
    Inform your system administrator
    Anyone knowledgeable in this area, please give your inputs.
    - DB

  • Problems with Full loads/Decreased query performance in Prod

    We have a table which serves as the base for a complex view. The table has roughly 10 million records, and it is a daily full load. (I know that delta loads are much better for handling large sets of data, but this information is very dynamic, and with the business time constraints and project deliverables, the best decision was to do a full load.)
    This is the process we follow:
    > Drop indexes (individual indexes on all the columns that are used inside the complex view as joins)
    > Truncate table
    > Load data
    > Recreate indexes.
    All the above steps are performed from SAP Dataservices through scripts and the sql() function to execute the commands - no manual intervention whatsoever. A sketch of the equivalent SQL sequence is shown below.
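    As a minimal sketch (not the actual job), the sequence those scripts issue might look like the following, assuming an Oracle target reached via cx_Oracle; all object names are placeholders:

        # Minimal sketch of the nightly sequence described above.
        # All table/index names are placeholders, not the real objects.
        import cx_Oracle

        conn = cx_Oracle.connect("etl_user", "password", "prod-db")  # hypothetical connection
        cur = conn.cursor()

        # Indexes on the join columns used by the complex view (placeholder names)
        indexes = {"BASE_TBL_IDX1": "JOIN_COL1", "BASE_TBL_IDX2": "JOIN_COL2"}

        for idx in indexes:
            cur.execute("DROP INDEX " + idx)        # step 1: drop indexes

        cur.execute("TRUNCATE TABLE BASE_TBL")      # step 2: truncate table

        # step 3: the bulk load itself is a Dataservices dataflow, not shown here

        for idx, col in indexes.items():
            cur.execute("CREATE INDEX {0} ON BASE_TBL ({1})".format(idx, col))  # step 4

        conn.close()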
    After the job completes successfully, the view doesn't refresh at all (it sits there forever). The same job, run across the same volume of production data in the Test environment, performs much faster.
    The only way I can then get the view to refresh is to manually log into SQL Developer, drop all the indexes on the parent table, and re-create all the indexes in the same order as the Dataservices script. It performs very well after that, until the next load (the next morning).
    Any suggestions would be very helpful.
    My main question is: why does it run faster when I drop and recreate the indexes manually, but never complete when the indexes are created by the sql() function from the Data Services tool?
    Tried:
    Explain plan (in Dev, Test, Prod): the query cost varied across environments but returned results with the same response times (in Production, after manual index creation).
    Tuning advisor (only in Test): the DBA evaluated it to be good.
    Thanks
    Nash
    DB Version Oracle 11.0.7
    Dataservices 3.2

    BluShadow and Harman
    Thank You!
    I'm using a regular view, not a materialized view. And yes, the query plan is completely different between Test and Production: in Test the query was running entirely on hash joins, whereas in Production it is using nested loop joins in the execution plan.
    I will try gathering statistics after the load (roughly as sketched below) and, as per BluShadow, will look at writing a function that makes a call to the database.
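    For reference, a minimal sketch of such a post-load statistics refresh, again with placeholder schema/table names:

        # Sketch: refresh optimizer statistics as the last step of the load,
        # so the CBO plans against the new data instead of stale statistics.
        import cx_Oracle

        conn = cx_Oracle.connect("etl_user", "password", "prod-db")  # hypothetical
        cur = conn.cursor()
        cur.callproc("DBMS_STATS.GATHER_TABLE_STATS",
                     ["ETL_USER", "BASE_TBL"],   # owner and table are placeholders
                     {"cascade": True})          # include the freshly built indexes
        conn.close()

    A plan flip from hash joins to nested loops right after a truncate-and-reload is a classic symptom of the optimizer working from stale or empty statistics.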
    Thank you all for taking the time. I will try to test this out starting today and will extend the tests over a couple of days.
    Regards
    Nash

  • Delta load takes longer than Full load

    Currently we are doing a full load every time, and it takes 15 minutes to complete the cube load. When I do a delta, it takes 40 minutes.
    I was advised to "Reorganize /BI0/F0FIGL_B10 to apply the values to all the table/index blocks".
    If I do this in Quality and Production, will there be a huge impact on both boxes?
    Please let me know whether this is the correct solution. I am expecting BW expert feedback on this.
    Thanks

    Hi,
    first of all you need to identify at which stage the time is consumed. Is it at extraction time, or at the posting time to the PSA, ODS or cube, or maybe in some routines? You can get this info in the monitor of the request by checking the timestamps of the entries.
    In your case I think it will be at extraction time, and the best way to find out what's going on is to schedule an upload and do an SQL trace of the extraction via transaction ST05. Check the trace list for SELECT statements with a long response time and check the indexes on the tables involved. You might need to create some indexes, roughly as sketched below.
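    For illustration only: if the trace points at a long-running SELECT, an index on its WHERE-clause columns can help. The table and columns here are placeholders, and in a real SAP system the index should be defined in SE11 rather than directly on the database as shown:

        # Sketch: add a secondary index on the columns the slow SELECT filters on.
        # SOME_APP_TABLE / CHANGED_ON / OBJECT_ID are placeholders.
        import cx_Oracle

        conn = cx_Oracle.connect("sapr3", "password", "r3-db")  # hypothetical
        cur = conn.cursor()
        cur.execute("CREATE INDEX ZDELTA_HELP ON SOME_APP_TABLE (CHANGED_ON, OBJECT_ID)")
        conn.close()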
    regards
    Siggi
    PS: I guess the reason is that in the case of a delta, some change documents have to be read by the extractor.

  • EXACT DIFFERENCE BETWEEN FULL LOAD AND REPAIR FULL LOAD?

    HI Champ,
    Can anyone explain to me the exact difference between a full load and a repair full load? Give me some scenario where we would go for this, please...
    10zin

    Hi,
    Full repair can be described as a full load with selections. But the main advantage of a full repair load is that it won't affect delta loads in the system. If you load a full request to a target with deltas running, you will have to initialize them again for the deltas to continue. But if you do a full repair, it won't affect the deltas.
    This is normally done when we lose some data or there is a data mismatch between the source system and BW.
    Check OSS Note 739863 'Repairing data in BW' for all the details:
    Symptom
    Some data is incorrect or missing in the PSA table or in the ODS object (Enterprise Data Warehouse layer).
    There may be a number of reasons for this problem: errors in the relevant application, errors in the user exit, errors in the DeltaQueue, handling errors in the customer's posting procedure (for example, a change in the extract structure during production operation while the DeltaQueue was not yet empty; postings before the delta init was completed, and so on), extractor errors, unplanned system terminations in BW and in R/3, and so on.
    Solution
    Read this note in full BEFORE you start actions that may repair your data in BW. Contact SAP Support for help with troubleshooting before you start to repair data.
    BW offers you the option of a full upload in the form of a repair request (as of BW 3.0B). If you want to use this function, we recommend that you use the ODS object layer.
    Note that you should only use this procedure if you have a small number of incorrect or missing records. Otherwise, we always recommend a reinitialization (possibly after a previous selective deletion, followed by a restriction of the Delta-Init selection to exclude areas that were not changed in the meantime).
    1. Repair request: Definition
    If you flag a request as a repair request with full update as the update mode, it can be updated to all data targets, even if these already contain data from delta initialization runs for this DataSource/source system combination. This means that a repair request can be updated into all ODS objects at any time without a check being performed. The system supports loading by repair request into an ODS object without a check being performed for overlapping data or for the sequence of the requests. This action may therefore result in duplicate data and must thus be prepared very carefully.
    The repair request (of the "Full Upload" type) can be loaded into the same ODS object in which the 'normal' delta requests run. You will find this request under the "Repair Request" option in the InfoPackage (Maintenance) menu.
    2. Prerequisites for using the "Repair Request" function
    2.1. Troubleshooting
    Before you start the repair action, you should carry out a thorough analysis of the possible cause of the error to make sure that the error cannot recur when you execute the repair action. For example, if a key figure has already been updated incorrectly in the OLTP system, it will not change after a reload into BW. Use transaction RSA3 (Extractor Checker) in the source system for help with troubleshooting. Another possible source of the problem may be your user exit. To ensure that the user exit is correct, first load a Probe-Full request into the PSA table and check whether the data is correct. If it is not correct, search for the error in the user exit. If you do not find it, we recommend that you deactivate the user exit for testing purposes and request a new full upload. If the data then arrives correctly, it is highly probable that the error is indeed in the user exit.
    We always recommend that you load the data into the PSA table in the first step and check the result there.
    2.2. Analyze the effects on the downstream targets
    Before you start the Repair request into the ODS object, make sure that the incorrect data records are selectively deleted from the ODS object. However, before you decide on selective deletion, you should read the Info Help for the "Selective Deletion" function, which you can access by pressing the extra button on the relevant dialog box. The activation queue and the ChangeLog remain unchanged during the selective deletion of the data from the ODS object, which means that the incorrect data is still in the change log afterwards. After the selective deletion, you therefore must not reconstruct the ODS object if it is reconstructed from the ChangeLog. (Reconstruction is usually from the PSA table but, if the data source is the ODS object itself, the ODS object is reconstructed from its ChangeLog). You MUST read the recommendations and warnings about this (press the "Info" button).
    You MUST also take into account the fact that the delta for the downstream data targets is created from the changelog. If you perform selective deletion and then reload data into the deleted area, this may result in data inconsistencies in the downstream data targets.
    If you only use MOVE and do not use ADD for updates in the ODS object, selective deletion may not be required in some cases (for example, if incorrect records only have to be changed, rather than deleted). In this case, the DataMart delta also remains intact.
    2.3. Analysis of the selections
    You must be very precise when you perform selective deletion: Some applications do not provide the option of selecting individual documents for the load process. Therefore, you must first ensure that you can load the same range of documents into BW as you would delete from the ODS object. This note provides some application-specific recommendations to help you "repair" the incorrect data records.
    If you updated the data from the ODS object into the InfoCube, you can also delete it there using the "Selective deletion" function. However, if it is compressed at document level there and deletion is no longer possible, you must delete the InfoCube content and fill the data in the ODS object again after repair.
    You can only perform this action after a thorough analysis of all effects of selective data deletion. We naturally recommend that you test this first in the test system.
    The procedure generally applies for all SAP applications/extractors. The application determines the selections. For example, if you cannot use the document number for selection but you can select documents for an entire period, then you are forced to delete and then update documents for the entire period in the data target. Therefore, it is important to look first at the selections in the InfoPackage exactly before you delete data from the data target.
    Some applications have additional special features:
    Logistics cockpit: As preparation for the repair request, delete the SetUp table (if you have not already done so) and fill it selectively with concrete document numbers (or other possible groups of documents determined by the selection). Execute the Repair request.
    Caution: You can currently use the transactions that fill SetUp tables with reconstruction data to select individual documents or entire ranges of documents (at present, it is not possible to select several individual documents if they are not numbered in sequence).
    FI: The Repair request for the Full Upload is not required here. The following efficient alternatives are provided: In the FI area, you can select documents that must be reloaded into BW again, make a small change to them (for example, insert a period into the assignment text) and save them -> as a result, the document is placed in the delta queue again and the previously loaded document under the same number in the BW ODS object is overwritten. FI also has an option for sending the documents selectively from the OLTP system to the BW system using correction programs (see note 616331).
    3. Repair request execution
    How do you proceed if you want to load a repair request into the data target? Go to the maintenance screen of the InfoPackage (Scheduler), set the type of data upload to "Full", and select the "Scheduler" option in the menu -> Full Request Repair -> Flag request as repair request -> Confirm. Update the data into the PSA and then check that it is correct. If the data is correct, continue to update into the data targets.
    Refer also to the related forum threads 'Repair full request' and 'Steps to perform repair full request'.
    Thanks,
    JituK

  • FULL LOAD FOR A SINGLE INFOOBJECT IN ODS

    Hi,
    I am in a scenario where I need to do a repair full load for one of my ODS objects, which extracts data from the R/3 system.
    But the problem lies in the extraction time: the estimated time to complete the full load is 90 hours, which is quite impossible to afford in a production environment. Hence I am looking for a solution through which I can load only the modified InfoObject of the ODS.
    Can anybody share some thoughts on this please?
    Regards,
    Kironmoy Banerjee.

    It seems in our case that modifying the transfer structure is troublesome, as it involves a lot of routines in many InfoObjects.
    Can someone suggest a method to bypass this excessively long-running full load, while still refreshing that particular InfoObject (at least) so the changes become available on old data?
    Any suggestion will be highly appreciated.
    B.R,
    Kironmoy Banerjee

  • Full load with breaking down the loads in particular ranges - any program?

    Hello all,
    We are trying to do the first full load to our BW production system. We are doing a full load, and then we are going to do an init without data transfer to get the delta going.
    The full load is so big that we have to break the load down into particular ranges. We have done it once previously by creating 30 InfoPackages and typing the consecutive ranges into the InfoPackage selection criteria.
    I am just wondering: is there any other way to do this that is much easier - any code or program - rather than having to create so many InfoPackages manually?
    Please help with any suggestions/advice.
    Thanks in advance,

    Hello Sasi,
    I am planning to use a process chain, but since the load is so big, I am putting a range in the data selection of each InfoPackage (e.g. STNUMBER = 1000000000 to 1000005000), and there are 30 such InfoPackages, each getting its range in increasing steps of 5000 (e.g. the second InfoPackage will be STNUMBER = 1000005001 to 1000010000).
    So I was wondering if there is an easier way to do this, instead of creating 30 such InfoPackages manually and putting them in a process chain - something like the sketch below, perhaps.
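    Just to illustrate the arithmetic, a small sketch that generates the ranges instead of typing them by hand (the output could be keyed into the InfoPackage selections, or the same logic reproduced in an ABAP routine on the InfoPackage's data selection tab):

        # Sketch: generate the 30 consecutive STNUMBER ranges described above.
        start, step, packages = 1000000000, 5000, 30

        low = start
        for i in range(packages):
            high = low + step
            print("InfoPackage {0:02d}: STNUMBER = {1} to {2}".format(i + 1, low, high))
            low = high + 1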
    Any more suggestions?
    Thanks

  • Full load works, but delta fails - "Error in the Extractor"

    Good morning,
    We are using datasource 3FI_SL_ZZ_SI (Special Ledger line items) to load a cube, and are having trouble with the delta loads.  If I run a full load, everything runs fine.  If I run a delta load, it will initially fail with an error that simply states "Error in the Extractor" (no long text).  If I repeat the delta load, it completes successfully with 0 records returned.  If I then rerun the delta, I get the error again.
    I've run extractions using RSA3, but they work fine - as I would expect since the full loads work.  Unfortunately, I have not been able to find why the deltas aren't working.  After searching the Forums, I've tried replicating the datasource, checked the job log in R/3 (nothing), and run the program RS_TRANSTRU_ACTIVATE_ALL, all to no avail.
    Any ideas?
    Thanks
    We're running BW 3.5, R/3 4.71

    And it's just that easy....
    Yes, it appears this is what the problem was. I'd been running the delta init without data transfer, and it was failing during the first true delta run. Once I changed the delta init so that it transferred data, the deltas worked fine. This was in our development system. I took a look at our production system, where deltas have been running for quite some time, and it turns out the delta initialization there was done with data transfer.
    Thank you very much!

  • Full Load" and "Full load with Repair full request"

    Hello Experts,
    Can anybody share with me what the difference is between a "Full Load" and a "Full load with repair full request"?
    Regards.

    Hi......
    What is the function of full repair? What does it do?
    How do I delete the init from the scheduler? I don't see any option like that in the InfoPackage.
    For both of your questions there is OSS Note 739863 'Repairing data in BW', which is quoted in full in the 'EXACT DIFFERENCE' thread above.
    Also search the forum - you will find discussions on this, e.g. 'Full repair loads' and 'Regarding Repair Full Request'.
    Instead of doing all these steps, can't I just reload that failed request again?
    If something goes wrong with delta loads, it is always better to re-init - I mean delete the init flag, do a full repair, all those steps. If it is an InfoCube, you can also go for a full update instead of a full repair.
    Full upload:
    In a full upload all the data records are fetched. It is similar to a full repair. In the case of an InfoCube, we can run a full upload to recover missed delta records; but an ODS doesn't support full upload and delta upload in parallel, so in that case you have to go for a full repair - otherwise the delta mechanism will get corrupted.
    Suppose your ODS activation is failing because there is a full upload request in the target; then you can convert the full upload to a full repair using the program RSSM_SET_REPAIR_FULL_FLAG.
    Hope this helps.......
    Thanks==points as per SDN.
    Regards,
    Debjani.....

  • Repair full load

    Hi,
    I have billing document number 9000182 in the ODS.
    For debugging purposes I want to do a repair full load only for this document.
    Can I do a repair full load for this document in the production system? Is there any problem if I do a repair full load - could any data mismatch occur?
    I can do a selective deletion and then the repair full load, but my doubt is whether any problem will arise, as it is the PRODUCTION system.
    Please confirm. Your suggestions will be helpful.
    I tried in the Q system, but that document was not there in R/3.

    Hi Kotha,
    If you are doing this for debugging purposes, then there is no need to delete the data from the ODS.
    Follow the steps below.
    1) Run the InfoPackage with the option "load till PSA" and a selection on billing number 9000182 (if you think it will run for a long time, do a full repair or run it as a full load).
    2) If you want to check this data in the cubes as well, run it in simulation mode using the update rules with respect to each target.
    Hope this helps.
    Regards,
    Venkatesh

  • URGENT! Pls help: DAC full load always in 'Running' status at a particular task

    Hi Friends,
    I started a full load yesterday. There are 257 tasks in total. The load went fine without issues until the 248th task, but while executing the 249th task (Load into Activity Fact), it stays in 'Running' status and does not complete even after running for 2 hours. I checked the Informatica Workflow Monitor and found that the workflow is in 'running' state and is not completing. When I right-clicked the session and selected run properties, I could see that 0 rows had been inserted into the target table. So I manually tried to stop the workflow; even after that, the task stayed in 'Stopping' status and did not stop. Then I manually aborted the workflow.
    Below is the session log file. Could you please check and let me know?
    Regards,
    Vijay

    Hi Friends,
    We executed a full load again on Saturday, 23rd July 2011. This time we allowed the task 'Load into Activity Fact_CUSTOM' to execute without stopping it manually like we did in the previous data load. It executed for 3 hours and 45 minutes and then failed, giving error ORA-01652 (unable to extend temp segment by string in tablespace string). This task executed successfully in our dev environment. Below is what we found in the session log file - please help us resolve this issue and revert as soon as possible, as we have this issue in our prod environment.
    2011-07-23 14:56:07 : ERROR : (8128 | LKPDP_25:READER_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : RR_4035 : SQL Error [
    ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
    Database driver error...
    Function Name : Execute
    SQL Stmt : SELECT distinct LOOKUP_TABLE.ROW_WID AS ROW_WID, LOOKUP_TABLE.GEO_WID AS GEO_WID, LOOKUP_TABLE.INTEGRATION_ID AS INTEGRATION_ID, LOOKUP_TABLE.DATASOURCE_NUM_ID AS DATASOURCE_NUM_ID, LOOKUP_TABLE.EFFECTIVE_FROM_DT AS EFFECTIVE_FROM_DT, LOOKUP_TABLE.EFFECTIVE_TO_DT AS EFFECTIVE_TO_DT FROM W_PARTY_D LOOKUP_TABLE, W_ACTIVITY_FS LEFT OUTER JOIN W_CUSTOMER_ACCOUNT_D ON (W_ACTIVITY_FS.CUSTOMER_ACCOUNT_ID = W_CUSTOMER_ACCOUNT_D.INTEGRATION_ID AND W_ACTIVITY_FS.DATASOURCE_NUM_ID = W_CUSTOMER_ACCOUNT_D.DATASOURCE_NUM_ID) WHERE COALESCE(W_ACTIVITY_FS.CUSTOMER_ID, W_CUSTOMER_ACCOUNT_D.PARTY_ID) = LOOKUP_TABLE.INTEGRATION_ID AND W_ACTIVITY_FS.DATASOURCE_NUM_ID = LOOKUP_TABLE.DATASOURCE_NUM_ID AND COALESCE(W_ACTIVITY_FS.PLANNED_START_DT, W_ACTIVITY_FS.CREATED_DT) >= LOOKUP_TABLE.EFFECTIVE_FROM_DT AND COALESCE(W_ACTIVITY_FS.PLANNED_START_DT, W_ACTIVITY_FS.CREATED_DT) < LOOKUP_TABLE.EFFECTIVE_TO_DT ORDER BY LOOKUP_TABLE.INTEGRATION_ID, LOOKUP_TABLE.DATASOURCE_NUM_ID, LOOKUP_TABLE.EFFECTIVE_FROM_DT, LOOKUP_TABLE.EFFECTIVE_TO_DT, LOOKUP_TABLE.ROW_WID, LOOKUP_TABLE.GEO_WID -- ORDER BY INTEGRATION_ID,DATASOURCE_NUM_ID,EFFECTIVE_FROM_DT,EFFECTIVE_TO_DT,ROW_WID,GEO_WID
    Oracle Fatal Error
    Database driver error...
    Function Name : Execute
    SQL Stmt : (same SELECT statement as above)
    Oracle Fatal Error].
    2011-07-23 14:56:07 : ERROR : (8128 | LKPDP_25:READER_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : BLKR_16004 : ERROR: Prepare failed.
    2011-07-23 14:56:07 : INFO : (8128 | WRITER_1_*_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : WRT_8333 : Rolling back all the targets due to fatal session error.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.LKP_W_PARTY_D_With_Geo_Wid], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.EXP_Decode_CustomerId], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.EXP_Decode_CustomerId], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.LKP_W_CUSTOMER_ACCOUNT_D_With_Party_ID], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.LKP_W_CUSTOMER_ACCOUNT_D_With_Party_ID], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.EXPTRANS], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.EXPTRANS], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [FIL_ETL_PROC_WID], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [FIL_ETL_PROC_WID], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [MPLT_Get_ETL_Proc_WID.Exp_Decide_Etl_Proc_Wid], and the session is terminating.
    2011-07-23 14:56:07 : INFO : (8128 | WRITER_1_*_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : WRT_8325 : Final rollback executed for the target [W_ACTIVITY_F] at end of load
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [MPLT_Get_ETL_Proc_WID.Exp_Decide_Etl_Proc_Wid], and the session is terminating.
    2011-07-23 14:56:07 : INFO : (8128 | MANAGER) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : PETL_24007 : Received request to stop session run. Attempting to stop worker threads.
    2011-07-23 14:56:07 : INFO : (8128 | WRITER_1_*_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : WRT_8035 : Load complete time: Sat Jul 23 14:56:07 2011
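    For what it's worth, ORA-01652 means the big DISTINCT/ORDER BY in that lookup spilled more sort data into the TEMP tablespace than it could hold. A minimal sketch of the usual checks and remedy, assuming DBA access on the Oracle target (the tempfile path and sizes are placeholders):

        # Sketch: inspect temp usage and give TEMP room to grow (ORA-01652).
        import cx_Oracle

        conn = cx_Oracle.connect("admin_user", "password", "prod-db")  # hypothetical DBA login
        cur = conn.cursor()

        # How much temp space is currently in use, per tablespace:
        cur.execute("SELECT tablespace, SUM(blocks) FROM v$tempseg_usage GROUP BY tablespace")
        print(cur.fetchall())

        # Add space to TEMP (placeholder path/sizes) - or rework the mapping
        # so the lookup no longer sorts this much data:
        cur.execute("ALTER TABLESPACE TEMP ADD TEMPFILE '/u01/oradata/temp02.dbf' "
                    "SIZE 4G AUTOEXTEND ON NEXT 512M MAXSIZE 16G")
        conn.close()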
    Thanks in advance.
    Vinay

  • Project Analytics 7.9.6.1 - Error while running a full load

    Hi All,
    I am performing a full load for Projects Analytics and get the following error,
    =====================================
    ERROR OUTPUT
    =====================================
    1103 SEVERE Wed Nov 18 02:49:36 WST 2009 Could not attach to workflow because of errorCode 36331 For workflow SDE_ORA_CodeDimension_Gl_Account
    1104 SEVERE Wed Nov 18 02:49:36 WST 2009
    ANOMALY INFO::: Error while executing : INFORMATICA TASK:SDE_ORA11510_Adaptor:SDE_ORA_CodeDimension_Gl_Account:(Source : FULL Target : FULL)
    MESSAGE:::
    Irrecoverable Error
    Error while contacting Informatica server for getting workflow status for SDE_ORA_CodeDimension_Gl_Account
    Error Code = 36331:Unknown reason for error code 36331
    Pmcmd output :
    The session log initialises a NULL value for the mapping parameter MPLT_ADI_CODES.$$CATEGORY. This is then used in subsequent SQL and results in an 'ORA-00936: missing expression' error. Following are the initialization section and the load section containing the error in the log.
    Initialisation
    DIRECTOR> VAR_27028 Use override value [DataWarehouse] for session parameter:[$DBConnection_OLAP].
    DIRECTOR> VAR_27028 Use override value [ORA_11_5_10] for session parameter:[$DBConnection_OLTP].
    DIRECTOR> VAR_27028 Use override value [ORA_11_5_10.DATAWAREHOUSE.SDE_ORA11510_Adaptor.SDE_ORA_CodeDimension_Gl_Account_Segments.log] for session parameter:[$PMSessionLogFile].
    DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[MPLT_ADI_CODES.$$CATEGORY].
    DIRECTOR> VAR_27028 Use override value [4] for mapping parameter:[MPLT_SA_ORA_CODES.$$DATASOURCE_NUM_ID].
    DIRECTOR> VAR_27028 Use override value [DEFAULT] for mapping parameter:[MPLT_SA_ORA_CODES.$$TENANT_ID].
    DIRECTOR> TM_6014 Initializing session [SDE_ORA_CodeDimension_Gl_Account_Segments] at [Wed Nov 18 02:49:11 2009].
    DIRECTOR> TM_6683 Repository Name: [repo_service]
    DIRECTOR> TM_6684 Server Name: [int_service]
    DIRECTOR> TM_6686 Folder: [SDE_ORA11510_Adaptor]
    DIRECTOR> TM_6685 Workflow: [SDE_ORA_CodeDimension_Gl_Account_Segments] Run Instance Name: [] Run Id: [17]
    DIRECTOR> TM_6101 Mapping name: SDE_ORA_CodeDimension_GL_Account_Segments [version 1].
    DIRECTOR> TM_6963 Pre 85 Timestamp Compatibility is Enabled
    DIRECTOR> TM_6964 Date format for the Session is [MM/DD/YYYY HH24:MI:SS]
    DIRECTOR> TM_6827 [C:\Informatica\PowerCenter8.6.1\server\infa_shared\Storage] will be used as storage directory for session [SDE_ORA_CodeDimension_Gl_Account_Segments].
    DIRECTOR> CMN_1802 Session recovery cache initialization is complete.
    DIRECTOR> TM_6703 Session [SDE_ORA_CodeDimension_Gl_Account_Segments] is run by 32-bit Integration Service [node01_ASG596138], version [8.6.1], build [1218].
    MANAGER> PETL_24058 Running Partition Group [1].
    MANAGER> PETL_24000 Parallel Pipeline Engine initializing.
    MANAGER> PETL_24001 Parallel Pipeline Engine running.
    MANAGER> PETL_24003 Initializing session run.
    MAPPING> CMN_1569 Server Mode: [UNICODE]
    MAPPING> CMN_1570 Server Code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
    MAPPING> TM_6151 The session sort order is [Binary].
    MAPPING> TM_6185 Warning. Code page validation is disabled in this session.
    MAPPING> TM_6156 Using low precision processing.
    MAPPING> TM_6180 Deadlock retry logic will not be implemented.
    MAPPING> TM_6307 DTM error log disabled.
    MAPPING> TE_7022 TShmWriter: Initialized
    MAPPING> DBG_21075 Connecting to database [orcl], user [DAC_REP]
    MAPPING> CMN_1716 Lookup [mplt_ADI_Codes.Lkp_Master_Map] uses database connection [Relational:DataWarehouse] in code page [MS Windows Latin 1 (ANSI), superset of Latin1]
    MAPPING> CMN_1716 Lookup [mplt_ADI_Codes.Lkp_Master_Code] uses database connection [Relational:DataWarehouse] in code page [MS Windows Latin 1 (ANSI), superset of Latin1]
    MAPPING> CMN_1716 Lookup [mplt_ADI_Codes.Lkp_W_CODE_D] uses database connection [Relational:DataWarehouse] in code page [MS Windows Latin 1 (ANSI), superset of Latin1]
    MAPPING> TM_6007 DTM initialized successfully for session [SDE_ORA_CodeDimension_Gl_Account_Segments]
    DIRECTOR> PETL_24033 All DTM Connection Info: [<NONE>].
    MANAGER> PETL_24004 Starting pre-session tasks. : (Wed Nov 18 02:49:14 2009)
    MANAGER> PETL_24027 Pre-session task completed successfully. : (Wed Nov 18 02:49:14 2009)
    DIRECTOR> PETL_24006 Starting data movement.
    MAPPING> TM_6660 Total Buffer Pool size is 32000000 bytes and Block size is 128000 bytes.
    READER_1_1_1> DBG_21438 Reader: Source is [asgdev], user [APPS]
    READER_1_1_1> BLKR_16051 Source database connection [ORA_11_5_10] code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
    READER_1_1_1> BLKR_16003 Initialization completed successfully.
    WRITER_1_*_1> WRT_8147 Writer: Target is database [orcl], user [DAC_REP], bulk mode [OFF]
    WRITER_1_*_1> WRT_8221 Target database connection [DataWarehouse] code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
    WRITER_1_*_1> WRT_8124 Target Table W_CODE_D :SQL INSERT statement:
    INSERT INTO W_CODE_D(DATASOURCE_NUM_ID,SOURCE_CODE,SOURCE_CODE_1,SOURCE_CODE_2,SOURCE_CODE_3,SOURCE_NAME_1,SOURCE_NAME_2,CATEGORY,LANGUAGE_CODE,MASTER_DATASOURCE_NUM_ID,MASTER_CODE,MASTER_VALUE,W_INSERT_DT,W_UPDATE_DT,TENANT_ID) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
    WRITER_1_*_1> WRT_8124 Target Table W_CODE_D :SQL UPDATE statement:
    UPDATE W_CODE_D SET SOURCE_CODE_1 = ?, SOURCE_CODE_2 = ?, SOURCE_CODE_3 = ?, SOURCE_NAME_1 = ?, SOURCE_NAME_2 = ?, MASTER_DATASOURCE_NUM_ID = ?, MASTER_CODE = ?, MASTER_VALUE = ?, W_INSERT_DT = ?, W_UPDATE_DT = ?, TENANT_ID = ? WHERE DATASOURCE_NUM_ID = ? AND SOURCE_CODE = ? AND CATEGORY = ? AND LANGUAGE_CODE = ?
    WRITER_1_*_1> WRT_8124 Target Table W_CODE_D :SQL DELETE statement:
    DELETE FROM W_CODE_D WHERE DATASOURCE_NUM_ID = ? AND SOURCE_CODE = ? AND CATEGORY = ? AND LANGUAGE_CODE = ?
    WRITER_1_*_1> WRT_8270 Target connection group #1 consists of target(s) [W_CODE_D]
    WRITER_1_*_1> WRT_8003 Writer initialization complete.
    READER_1_1_1> BLKR_16007 Reader run started.
    WRITER_1_*_1> WRT_8005 Writer run started.
    WRITER_1_*_1> WRT_8158
    Load section
    *****START LOAD SESSION*****
    Load Start Time: Wed Nov 18 02:49:16 2009
    Target tables:
    W_CODE_D
    READER_1_1_1> RR_4029 SQ Instance [mplt_BC_ORA_Codes_GL_Account_Segments.Sq_Fnd_Flex_Values] User specified SQL Query [SELECT
    FND_FLEX_VALUES.FLEX_VALUE_SET_ID,
    FND_FLEX_VALUES.FLEX_VALUE,
    MAX(FND_FLEX_VALUES_TL.DESCRIPTION),
    FND_ID_FLEX_SEGMENTS.ID_FLEX_NUM,
    FND_ID_FLEX_SEGMENTS.APPLICATION_COLUMN_NAME
    FROM
    FND_FLEX_VALUES,
    FND_FLEX_VALUES_TL,
    FND_ID_FLEX_SEGMENTS,
    FND_SEGMENT_ATTRIBUTE_VALUES
    WHERE
    FND_FLEX_VALUES.FLEX_VALUE_ID = FND_FLEX_VALUES_TL.FLEX_VALUE_ID AND FND_FLEX_VALUES_TL.LANGUAGE ='US' AND
    FND_ID_FLEX_SEGMENTS.FLEX_VALUE_SET_ID =FND_FLEX_VALUES.FLEX_VALUE_SET_ID AND
    FND_ID_FLEX_SEGMENTS.APPLICATION_ID = 101 AND
    FND_ID_FLEX_SEGMENTS.ID_FLEX_CODE ='GL#' AND
    FND_ID_FLEX_SEGMENTS.ID_FLEX_NUM =FND_SEGMENT_ATTRIBUTE_VALUES.ID_FLEX_NUM AND
    FND_SEGMENT_ATTRIBUTE_VALUES.APPLICATION_ID =101 AND
    FND_SEGMENT_ATTRIBUTE_VALUES.ID_FLEX_CODE = 'GL#' AND
    FND_ID_FLEX_SEGMENTS.APPLICATION_COLUMN_NAME=FND_SEGMENT_ATTRIBUTE_VALUES.APPLICATION_COLUMN_NAME AND
    FND_SEGMENT_ATTRIBUTE_VALUES.ATTRIBUTE_VALUE ='Y'
    GROUP BY
    FND_FLEX_VALUES.FLEX_VALUE_SET_ID,
    FND_FLEX_VALUES.FLEX_VALUE,
    FND_ID_FLEX_SEGMENTS.ID_FLEX_NUM,
    FND_ID_FLEX_SEGMENTS.APPLICATION_COLUMN_NAME]
    READER_1_1_1> RR_4049 SQL Query issued to database : (Wed Nov 18 02:49:17 2009)
    READER_1_1_1> RR_4050 First row returned from database to reader : (Wed Nov 18 02:49:17 2009)
    LKPDP_3> DBG_21312 Lookup Transformation [mplt_ADI_Codes.Lkp_W_CODE_D]: Lookup override sql to create cache: SELECT W_CODE_D.SOURCE_NAME_1 AS SOURCE_NAME_1, W_CODE_D.SOURCE_NAME_2 AS SOURCE_NAME_2, W_CODE_D.MASTER_DATASOURCE_NUM_ID AS MASTER_DATASOURCE_NUM_ID, W_CODE_D.MASTER_CODE AS MASTER_CODE, W_CODE_D.MASTER_VALUE AS MASTER_VALUE, W_CODE_D.W_INSERT_DT AS W_INSERT_DT, W_CODE_D.TENANT_ID AS TENANT_ID, W_CODE_D.DATASOURCE_NUM_ID AS DATASOURCE_NUM_ID, W_CODE_D.SOURCE_CODE AS SOURCE_CODE, W_CODE_D.CATEGORY AS CATEGORY, W_CODE_D.LANGUAGE_CODE AS LANGUAGE_CODE FROM W_CODE_D
    WHERE
    W_CODE_D.CATEGORY IN () ORDER BY DATASOURCE_NUM_ID,SOURCE_CODE,CATEGORY,LANGUAGE_CODE,SOURCE_NAME_1,SOURCE_NAME_2,MASTER_DATASOURCE_NUM_ID,MASTER_CODE,MASTER_VALUE,W_INSERT_DT,TENANT_ID
    LKPDP_3> TE_7212 Increasing [Index Cache] size for transformation [mplt_ADI_Codes.Lkp_W_CODE_D] from [1000000] to [4734976].
    LKPDP_3> TE_7212 Increasing [Data Cache] size for transformation [mplt_ADI_Codes.Lkp_W_CODE_D] from [2000000] to [2007040].
    READER_1_1_1> BLKR_16019 Read [625] rows, read [0] error rows for source table [FND_ID_FLEX_SEGMENTS] instance name [mplt_BC_ORA_Codes_GL_Account_Segments.FND_ID_FLEX_SEGMENTS]
    READER_1_1_1> BLKR_16008 Reader run completed.
    LKPDP_3> TM_6660 Total Buffer Pool size is 609824 bytes and Block size is 65536 bytes.
    LKPDP_3:READER_1_1> DBG_21438 Reader: Source is [orcl], user [DAC_REP]
    LKPDP_3:READER_1_1> BLKR_16051 Source database connection [DataWarehouse] code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
    LKPDP_3:READER_1_1> BLKR_16003 Initialization completed successfully.
    LKPDP_3:READER_1_1> BLKR_16007 Reader run started.
    LKPDP_3:READER_1_1> RR_4049 SQL Query issued to database : (Wed Nov 18 02:49:18 2009)
    LKPDP_3:READER_1_1> CMN_1761 Timestamp Event: [Wed Nov 18 02:49:18 2009]
    LKPDP_3:READER_1_1> RR_4035 SQL Error [
    ORA-00936: missing expression
    Could you please suggest what the issue might be and how it can be fixed?
    Many thanks,
    Kiran
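    The failure is visible in the lookup override above: with $$CATEGORY resolving to the default empty value, the generated SQL contains "W_CODE_D.CATEGORY IN ()", which Oracle rejects with ORA-00936. The real fix is restoring the $$CATEGORY value in the DAC parameter file, but any SQL generator needs a guard along these lines (the category value shown is purely illustrative):

        # Sketch: refuse to emit an empty IN () list instead of producing ORA-00936.
        def in_clause(column, values):
            if not values:
                raise ValueError("no values supplied for {0}; refusing to emit "
                                 "'{0} IN ()'".format(column))
            quoted = ", ".join("'{0}'".format(v) for v in values)
            return "{0} IN ({1})".format(column, quoted)

        print(in_clause("W_CODE_D.CATEGORY", ["GL_ACCOUNT"]))  # illustrative value
        # -> W_CODE_D.CATEGORY IN ('GL_ACCOUNT')
        in_clause("W_CODE_D.CATEGORY", [])  # raises, instead of generating bad SQL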

    I have continued the related details in the following thread:
    Mapping Parameter  $$CATEGORY not included in the parameter file (7.9.6.1)
    Apologies for the inconvenience.
    Thanks,
    Kiran

  • What is the difference between full load and delta load in DTP

    Hi,
    I am trying to load data into a cube from another cube using a DTP.
    There are 2 DTPs:
    1: DTP with full load
    2: DTP with delta load
    What is the difference between these two in a DTP?
    Can somebody please help me?

    1: DTP with full load - will update all the requests in the PSA/source to the target.
    2: DTP with delta load - will update only new requests to the data target.
    The system doesn't distinguish new data on the basis of changed records, but rather by request. That's the reason you have the datamart status: it indicates whether a request has already been loaded into further data targets. A conceptual sketch follows.
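    A toy model of that request-based mechanism (plain Python as illustration, not BW code):

        # Conceptual sketch: a DTP ships whole requests, not diffed records.
        source_requests = ["REQU_001", "REQU_002", "REQU_003"]  # requests in the source cube
        datamart_status = {"REQU_001"}                          # already updated to the target

        def full_dtp(source):
            return list(source)  # full: every request, every time

        def delta_dtp(source, loaded):
            return [r for r in source if r not in loaded]  # delta: only unseen requests

        print(full_dtp(source_requests))                    # all three requests
        print(delta_dtp(source_requests, datamart_status))  # ['REQU_002', 'REQU_003']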

  • Full load from a DSO to a cube processes fewer records than available in the DSO

    We have a scenario where every Sunday I have to make a full load from a DSO with on-hand stock information to a cube, where I register a counter at material and store level if there is stock available.
    The DTP has no filters at all and has a semantic group on 0MATERIAL and 0PLANT.
    The key in the DSO is:
    0MATERIAL
    0PLANT
    0STOCKTYPE
    0STOR_LOC
    0BOM
    of which only 0MATERIAL, 0PLANT and 0STOR_LOC are later used in the transformation.
    As we had a growing number of records, we decided to delete, in the START routine, all records where the inventory is not GT zero, thus eliminating zero and negative inventory records.
    Now comes the funny part of the story:
    Prior to these changes I would [in a test system, just copied from PROD] read some 33 million records and write out the same number of records. Of course, after the change we expected to write out fewer. To my total surprise, I was now reading 45 million records with the same unchanged DTP, while writing out the expected smaller number.
    When checking the number of records in the DSO I found the 45 million, but I cannot explain why the earlier loads only retrieved some 33 million from the same unchanged set of records.
    When checking in PROD - same result: we have some 45 million records in the DSO, but when we do the full load from the DSO to the cube, the DTP only processes some 33 million.
    What am I missing - is there a compression going on? Why would the number of records in a DSO differ from the number of records processed in the DataPackages when I am making a FULL load without any filter restrictions and with only a semantic grouping in place on part of the DSO key?
    ANY idea, thought is appreciated.
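    One way to pin the numbers down is to count the DSO's active table directly on the database and compare that with the record totals in the DTP monitor's DataPackages. A sketch, assuming an Oracle-based BW and a standard DSO with the placeholder name ZONHAND (whose active table would then be /BIC/AZONHAND00):

        # Sketch: count the active table to compare with the DTP's processed total.
        import cx_Oracle

        conn = cx_Oracle.connect("sapbw", "password", "bw-db")  # hypothetical
        cur = conn.cursor()
        cur.execute('SELECT COUNT(*) FROM "/BIC/AZONHAND00"')  # placeholder DSO name
        print("rows in active table:", cur.fetchone()[0])
        conn.close()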

    Thanks Gaurav.
    I did check if there were more/any loads doen inbetween - there were none in the test system.  As I mentioned that it was a new copy from PROD to TEST, I compared the number of entries in the DSO and that seems to be a match between TEST and PROD, ok a some more in PROD but they can be accounted for. In test I loaded the day before the changes were imported to have a comparison, and between that load and the one ofter the changes were imported nothing in the DSO was changed.
    Both DTPs in TEST and PW2 load from actived DSO [without archive]. The DTPs were not changed in quite a while - so I ruled that one out. Same with activation of data in the DSO - this DSO get's loaded and activated in PROD daily via process chain and we load daily deltas into the cube in question. Only on Sundays, for the begin of the new week/fiscal period, we need to make a full load to capture all materials per site with inventory. The deltas loaded during the week are less than 1 million, but the difference between the number of records in the DSO and the amount processed in the DataPackages is more than 10 millions per full load even in PROD.
    I really appreciated the knowledgable answer, I just wished you would pointed out something that I missed out on.

  • CPU only clocks up to 1.2GHz on full load

    I use Boot Camp on my Late 2007 17" 2.4GHz Macbook Pro, with fresh installs of Snow Leopard and Windows 7. I've used Windows 7 to diagnose my problem. I'm using Core Temp to retrieve my CPU performance info and Prime95 to stress-test the hardware.
    Definite Problem:
    My CPU will not operate any higher than 1.2GHz under full load. It idles at 800MHz at 42 degrees Celsius and warms up to 62 degrees Celsius under full load. I expect it to operate at 2.4GHz as it once did.
    Best-Guess Diagnosis:
    My computer has no battery (it physically expanded beyond functional size), and I read somewhere that the computer is designed to throttle down when no battery is present in order to prevent the computer from drawing more power than is available to it via the 85 watt charger.
    Best-Guess Solutions:
    1) If my diagnosis is correct, replacing the battery should undo the throttling cap. This is not ideal as it is not a long-term solution (batteries can continue to explode in the future).
    2) I'd like to modify my computer's configuration to release the throttling cap/force full frequency. If possible, this would be the ideal long-term solution. This may not be possible if the charger is indeed bottlenecking my CPU's performance.
    I'm looking for insight into my best-guess solutions or new solutions. Thanks for taking time to check out my inquiry.

    Please carefully read and do both the SMC and PRAM resets; if needed, do each twice. It may also help to restart in Safe Mode to clear some caches.
