Data inconsistencies

Hi,
I have a MultiProvider on which a query has been developed. It consists of a cube and an ODS object. Data comes from the R/3 system and reaches the cube through the intervening ODS.
In the cube we have two non-cumulative (stock) key figures defined:
0VALSTCKQTY: Quantity of valuated stock
0VALSTCKVAL: Value of valuated stock
The settings of 0VALSTCKQTY are:
Aggregation: SUM
Exception Aggregation: Last value
Inflow: 0RECVALSTCK (Quantity received into valuated stock)
Outflow: 0ISSVALSTCK (Quantity issued from valuated stock)
The settings of 0VALSTCKVAL are:
Aggregation: SUM
Exception Aggregation: Last value
Inflow: 0RECVS_VAL (Value received into valuated stock)
Outflow: 0ISSVS_VAL (Value issued from valuated stock)
At query level we have developed some formulas and calculated key figures to achieve the desired result; the stock key figures are part of the calculation.
Now when we run the query, the displayed results are multiplied by 3x.
We are trying to troubleshoot this and eliminate the data inconsistencies.
Please assist with your valuable feedback.

Hi
First the basics. From the key figures I take it that one of the cubes on which the MultiProvider is based is 0IC_C03. Usually a multiplier (e.g. the result is 2x or 3x the desired result) means that the setup table was not deleted during initialisation of the delta.
If that is not the case, then check the identification in the MultiProvider. It is not enough to simply select the 'proposal' option; you need to be careful how the characteristics and key figures are matched up. If the same key figures exist in more than one of the cubes belonging to the MultiProvider (e.g. 0RECVALSTCK) and are selected for all cubes, the values may come out double (or triple). Adding 0INFOPROV to the query drilldown will show you which provider contributes the extra factor.
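If you want to double-check the setup table, here is a quick sketch; it assumes the stock movements extractor 2LIS_03_BF and its standard setup table MC03BF0SETUP, so adjust the table name for your extractor:

DATA LV_COUNT TYPE I.

* Count leftover rows in the 2LIS_03_BF setup table (assumed name).
* A non-empty setup table after the delta init can explain multiplied data.
SELECT COUNT( * ) FROM MC03BF0SETUP INTO LV_COUNT.
WRITE: / 'Rows still in setup table:', LV_COUNT.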
Hope this helps.

Similar Messages

  • EIC- Data Inconsistencies in Productive System

    Dear EIC Experts,
    I have a query regarding data inconsistencies in the productive system; I will illustrate with an example.
    An activity was created, and a follow-up activity was created on it and assigned to a resolver group.
    There is an entry in SCMG_T_CASE_ATTR with status 20 (Allocated),
    and an entry in THREIC_ACTIVITY with a whole load of data missing, like the creator org unit and owner org unit; all it has are the activity categories and the user ID.
    An entry exists in THREIC_FOLLOWUP with all relevant fields filled, but the Workflow ID is blank. The status in this table is 40 (delivered).
    So it looks as if no workflow was created and it did not go to anybody to action. And the activity cannot be closed and is just lying there, as the owner cannot take any action till the follow up status becomes completed.
    1) What could be the possible reasons for this to happen? I have already checked that the workflows are all activated, and other activities are being raised and closed just fine.
    2) Is there any way I can use utility tools to correct the data in the back end so that this activity can be closed?
    I have a few hundred activities like these (from 30000 overall activities) which cannot be closed because of various data inconsistencies. Any ideas are appreciated.

    Hi Harish
    I have worked on multiple EIC implementations and have never had this issue where the follow-up functionality was set up correctly; from the details in your email, my guess is that it is not set up correctly at your location.
    It is possible to fix these 100 or so and set them to closed, but it is too much to type out in this forum.
    I would check out www.eicexperts.com as they are experts in EIC and they may walk you through it as they are very helpful.

  • PR overall release data inconsistencies

    Hi Guru,
    Can anyone advise whether you have experienced data inconsistency issues in the EBAN table when PR overall release is implemented? I would appreciate it if you could share the relevant SAP OSS note if you have one. Thanks in advance.

    Hi,
    I recall seeing on the web that there might be data inconsistencies in EBAN if we activate header release, but I could not find the link subsequently. However, when I test the overall release, it seems OK so far. Thanks for replying anyway.

  • Master data inconsistencies

    Hello,
    How can I check and repair master data inconsistencies? I know that such a program exists; can you tell me what it is?
    Regards,
    Jorge Diogo

    Hi Jorge,
    Here it is:
    RSDMD_CHECKPRG_ALL
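    You can also run it directly from SE38 or SA38; a minimal sketch (check the program's selection screen and documentation in your release for the check and repair options):

    * Run the BW master data consistency check/repair program:
    SUBMIT RSDMD_CHECKPRG_ALL VIA SELECTION-SCREEN AND RETURN.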
    Regards,
    Diogo.

  • iPhoto file creation date inconsistencies during drag and drop

    I have noticed that if I drag and drop a photo from iPhoto to Finder, the file creation dates in Finder are inconsistent.
    (This question is related to drag and drop only and not File->Export, which always uses the export date and timestamp for the file creation date and thus does not suit my needs).
    TEST A -- If the EXIF DateTimeOriginated is 01/01/2013, and today's date is 03/03/2013, then:
    1. In some cases when I drag a file to Finder, the EXIF date is used as the file modification/creation date in Finder.
    2. In some cases, today's date is used as the file modification/creation date in Finder.
    3. In some cases, a date in between the EXIF date and today's date is used.
    It appears that for case A1, these are files that do not have a modified copy.  That is, if you select the photo in iPhoto and then click "File" -> "Reveal In Finder", the "Modified File" choice will be greyed out.
    For cases A2 & A3, iPhoto has inexplicably decided to create modified versions of these files either today or sometime in the past.
    TEST B -- I have read that unexplained modifications are tied to the auto-rotate function in cameras, and it does seem to be the case when I performed the test below:
    1. Select a large group of landscape format photos (these would not have been auto-rotated), then drag and drop to Finder. The file creation dates are set to the EXIF date.
    2. Add some portrait photos to the group in (1). Now the file creation dates of ALL photos, including the non auto-rotated photos, are set to the current date.
    The behaviour in B2 is clearly wrong, since the landscape photos should be the same as in B1.  This is bug #1.
    Furthermore, iPhoto appears to be inconsistent about when these modifications are made. For example, if I dragged & dropped an auto-rotated photo on 02/02/2013, then dragged & dropped it again today, the file creation date in Finder (and also the date of the modified file in iPhoto, as shown in iPhoto File->Reveal In Finder->Modified File) can be either the EXIF date (01/01/2013), the date of the last drag & drop (02/02/2013), or today's date (03/03/2013); there does not appear to be any rhyme or reason to this. This is bug #2.
    In any case, saying "you should never use drag & drop in iPhoto" (as I have read in some other forum posts) isn't a solution, because Apple should either (a) support this function correctly or (b) remove it altogether.  Furthermore, I regularly burn photos to disk for others so having the file date and timestamps correctly set to the EXIF date helps keeping the photos sorted in the directory listings for multiple OS, so File->Export isn't a solution.

    File data is file data. Exif is photo data. A file is not a photo.  It's a container for a photo.
    When you export you're not exporting a file. You're exporting a Photo. The medium of export is a new file. That file is created at the time of export, so that's its creation date. The Photo within that file is dated by the Exif.
    There are apps that will modify the file date to match the Exif.
    The variation you're seeing is likely due to the changes in how the iPhoto library works over the past few versions. Drag and drop is handy, but is not a substitute for exporting, nor intended to be.

  • Flexible update master data inconsistencies

    Dear experts,
    I have the following issue with flexible update of master data for 0EMPLOYEE.
    0EMPLOYEE is getting loaded from 3 different sources; here is an example:
    employee   address   phone   compcode
    1          germany   123
    1          denmark           001
    1          UK
    Here employee is the key for 0EMPLOYEE. What happens to the above data after the attribute change run?
    Can anyone let me know what the final result will be? My requirement is that I must get the company code under all circumstances. Is there a proper sequence of loads to be followed in this case? Your inputs are highly appreciated with points.
    Cheers,
    VSN

    If 0EMPLOYEE gets data from 3 different sources, it will overwrite the existing records when the same employee '1' comes from Germany, Denmark and the UK; only the most recent record will be available in the master data tables.
    You need to add the source system (0SOURSYSTEM) as a compounding attribute to 0EMPLOYEE, then set a constant for it in each transformation:
    constant GE for 0SOURSYSTEM in the transformation from the German source
    constant DEN for 0SOURSYSTEM in the transformation from the Danish source
    constant UK for 0SOURSYSTEM in the transformation from the UK source
    When you then execute the master data load, you get one record per source system:
    Source system   employee   address   phone   compcode
    GE              1          germany   123
    DEN             1          denmark           001
    UK              1          UK
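    To verify the result after the attribute change run, a quick sketch; it assumes the standard P table /BI0/PEMPLOYEE, so adjust if you work with a copy of the InfoObject:

    * With 0SOURSYSTEM compounded, one record per source system should appear.
    * Note: EMPLOYEE is stored in internal format, e.g. '00000001'.
    DATA LT_EMP TYPE TABLE OF /BI0/PEMPLOYEE.

    SELECT * FROM /BI0/PEMPLOYEE INTO TABLE LT_EMP
      WHERE EMPLOYEE = '00000001'.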

  • Communications Express date inconsistencies

    hello
    A user sent some emails via the connector on 3/7, but these emails never arrived. One month later, on 4/8, this user retyped these emails and sent them again. Then she connected to the UWC, and in the Sent items she could see the emails with a date of 3/7, but when she opened them they were in fact the mails from 4/8. I am just wondering what causes this behaviour, and how to correct the dates or show the correct messages with the correct dates?
    thank you in advance.
    Mariog

    mario_garcia wrote:
    > I have asked more questions about what they mean by never arrived. They just said they 'had lost emails'.
    Hmm... I'm a little surprised it took over a month before they tried to resend them and bring this to your attention. This alone makes me a little 'suspicious' of their story.
    > As far as the messages in the message store are concerned, the date in the header is from 4/8, and if I do an ls -l in the folder where these messages are stored, it also shows 4/8; however in Communications Express, Sent items, you see them with a date of 3/7.
    You need to be careful about which folders the customer is looking at and which folders you are looking at. Remember that Outlook by default stores emails in the "Sent Items" folder and Comms Express stores them in the "Sent" folder. So make sure the user is actually looking at the correct folder -- this would explain how the dates could be different. Otherwise it isn't possible for an email to have a date header and a timestamp of 4/8 and then show as 3/7 in Comms Express.
    Regards,
    Shane.

  • Inconsistencies in Master data

    Hi All.
    I would like to know whether there is any transaction to check inconsistencies in master data, just like running the reconciliation/comparison of transaction data (/SAPAPO/CCR).
    Regards
    Raja kiran

    Raja,
    SAP does not provide a report that will detect all inconsistencies between Product master in APO and the Material Master in ECC.
    Most fields that exist in the ECC Material Masters do not even exist in APO.  Many fields are not intended to be in synch with APO.
    It is generally not necessary to report the APO/ECC product master differences for 'synchronized' fields on a regular basis. In the rare case that they fall out of synch, it is usually sufficient to run the RIMODINI report in ECC against the material master integration model. Some companies even add RIMODINI steps to their daily IM jobs in ECC.
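    If you do schedule it, the job step is just a report submission; a sketch ('Z_MM_DAILY' is only a placeholder for a variant you would save yourself):

    * Background job step: re-run the initial transfer for the material
    * master integration model ('Z_MM_DAILY' is a placeholder variant name):
    SUBMIT RIMODINI USING SELECTION-SET 'Z_MM_DAILY' AND RETURN.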
    In my experience, the major problem with ECC<>APO master data inconsistencies is poorly written enhancements.
    Best Regards,
    DB49

  • Can data in a Project 2010 file be synced with PWA timesheet data submitted against an un-published task?

    I recently learned that best practice when you're mid-project and have a task that should no longer be charged to is to use the "Close Task to Update" option in Project Server 2010 instead of setting the "Publish" field to "No"
    to remove the task from team members' timesheets because removal of task entries can lead to data inconsistencies.
    Prior to learning this, one of these data inconsistencies was introduced into one of my project files. Specifically, I set the "Publish" status to "No" on a completed task without realizing that a team member had submitted a correction to a previous time period's timesheet that had not yet been approved. These corrected data are now shown in the Reporting Database, but they did not get carried through to the project file.
    Is there a way to reconcile the data shown in the project file with the data in the Reporting Database?

    Prasanna,
    Here's what I think happened:
    1. The user submitted incorrect actuals against Task A on his timesheet. (In addition to the expected hours against the current period [TS1], he had accidentally submitted actuals against a future time period [TS2] that was not part of my current review window.)
    2. Not realizing he had incorrectly reported his time, I accepted the actuals and published them to the project.
    3. I updated the project, changing the Publish flag for Task A to No, since no further work was expected against the task.
    4. The following week, the user resubmitted a correct version of TS2.
    5. The reporting database got updated with the new TS2 data.
    Everything in my approval queue that was submitted by this user has been approved and published; however, the data in my project file does not match what I am seeing in the reporting database.
    Do I just need to change the Publish flag for Task A back to Yes to get my project file to capture the changes that were made during the resubmission of TS2?

  • CO-PA DATA SOURCE ISSUE

    Hi,
    I am trying to display a CO-PA DataSource. I have given the proper operating concern name and tried both costing-based and account-based; it throws the error message below:
    Table entry missing for data source.
    When I look into the details, it shows the following information:
    An attempt was made to extract data using this DataSource.
    This is not possible due to missing control entries in the system tables.
    System Response:
    The process was terminated.
    Procedure:
    If the DataSource was transported into this system, check the import logs for errors.
    I tried to debug it from KEB2; there I get the same error message.
    Please provide me with an answer to overcome this issue.

    Hello,
    The administration of the delta method for CO-PA DataSources occurs in part in the OLTP system. In particular, the time up until which the data has already been extracted is stored in the control tables of the DataSource. Since the control tables for the delta method for the extractor are managed in the OLTP system, certain restrictions apply.
    There can only ever be one valid initial package for a DataSource. If, for the same DataSource, a separate initialization is scheduled for different selections, for example, and data is posted to the operating concern between the individual initializations, data inconsistencies could occur between SAP BW and OLTP. The reason for this is that, with each initialization, the time stamp of the DataSource in the OLTP system is set to the current value. Consequently, records from a previous selection are no longer selected with the next delta upload if they were posted with a different selection prior to the last initial run.
    See this doc for more info [How to Connect Between CO-PA and SAP BW for a Replication Model|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/fb07ab90-0201-0010-c489-d527d39cc0c6]
    Also see
    [How to Connect Between CO-PA and SAP BW for Data Retraction|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/1910ab90-0201-0010-eea3-c4ac84080806]
    [SAP Network Blog: Best Practices for Profitability Management with SAP Business Profitability Management and CO-PA|/people/community.user/blog/2007/09/10/best-practices-for-profitability-management-with-sap-business-profitability-management-and-co-pa]
    [SAP Network Blog: Configuration Characteristics in Profitability Analysis|/people/udo.werner/blog/2007/05/10/configuration-characteristics-in-profitability-analysis]
    Thanks
    Chandran

  • Re-init because of discrepancy in data from R/3 to BW

    Due to a discrepancy in the data I need to re-run the init for 2LIS_11_VAITM. Daily deltas are loading into the cube.
    I have data in the cube for the last 3 months; it is about 5 million records. I have notified the key users and I have downtime between 6.00pm and 3.00 in the morning.
    The data flows from 2LIS_11_VAITM to the ODS and then to the cube.
    Please give me the steps. Do I need to do this directly in Production, or through Development?
    Thanks,
    Sudha..

    Hi,
    Before doing a re-init, check whether a repair full request will help.
    Check OSS Note 739863 'Repairing data in BW' for all the details!
    "Symptom
    Some data is incorrect or missing in the PSA table or in the ODS object (Enterprise Data Warehouse layer).
    There may be a number of reasons for this problem: errors in the relevant application, errors in the user exit, errors in the DeltaQueue, handling errors in the customer's posting procedure (for example, a change in the extract structure during production operation while the DeltaQueue was not yet empty; postings before the delta init was completed, and so on), extractor errors, unplanned system terminations in BW and in R/3, and so on.
    Solution
    Read this note in full BEFORE you start actions that may repair your data in BW. Contact SAP Support for help with troubleshooting before you start to repair data.
    BW offers you the option of a full upload in the form of a repair request (as of BW 3.0B). If you want to use this function, we recommend that you use the ODS object layer.
    Note that you should only use this procedure if you have a small number of incorrect or missing records. Otherwise, we always recommend a reinitialization (possibly after a previous selective deletion, followed by a restriction of the Delta-Init selection to exclude areas that were not changed in the meantime).
    1. Repair request: Definition
    If you flag a request as a repair request with full update as the update mode, it can be updated to all data targets, even if these already contain data from delta initialization runs for this DataSource/source system combination. This means that a repair request can be updated into all ODS objects at any time without a check being performed. The system supports loading by repair request into an ODS object without a check being performed for overlapping data or for the sequence of the requests. This action may therefore result in duplicate data and must thus be prepared very carefully.
    The repair request (of the "Full Upload" type) can be loaded into the same ODS object in which the 'normal' delta requests run. You will find this request under the "Repair Request" option in the InfoPackage (Maintenance) menu.
    2. Prerequisites for using the "Repair Request" function
    2.1. Troubleshooting
    Before you start the repair action, you should carry out a thorough analysis of the possible cause of the error to make sure that the error cannot recur when you execute the repair action. For example, if a key figure has already been updated incorrectly in the OLTP system, it will not change after a reload into BW. Use transaction RSA3 (Extractor Checker) in the source system for help with troubleshooting. Another possible source of the problem may be your user exit. To ensure that the user exit is correct, first load the data with a probe full request into the PSA table and check whether the data is correct. If it is not correct, search for the error in the user exit. If you do not find it, we recommend that you deactivate the user exit for testing purposes and request a new full upload. If the data then arrives correctly, it is highly probable that the error is indeed in the user exit.
    We always recommend that you load the data into the PSA table in the first step and check the result there.
    2.2. Analyze the effects on the downstream targets
    Before you start the Repair request into the ODS object, make sure that the incorrect data records are selectively deleted from the ODS object. However, before you decide on selective deletion, you should read the Info Help for the "Selective Deletion" function, which you can access by pressing the extra button on the relevant dialog box. The activation queue and the ChangeLog remain unchanged during the selective deletion of the data from the ODS object, which means that the incorrect data is still in the change log afterwards. After the selective deletion, you therefore must not reconstruct the ODS object if it is reconstructed from the ChangeLog. (Reconstruction is usually from the PSA table but, if the data source is the ODS object itself, the ODS object is reconstructed from its ChangeLog). You MUST read the recommendations and warnings about this (press the "Info" button).
    You MUST also take into account the fact that the delta for the downstream data targets is created from the changelog. If you perform selective deletion and then reload data into the deleted area, this may result in data inconsistencies in the downstream data targets.
    If you only use MOVE and do not use ADD for updates in the ODS object, selective deletion may not be required in some cases (for example, if incorrect records only have to be changed, rather than deleted). In this case, the DataMart delta also remains intact.
    2.3. Analysis of the selections
    You must be very precise when you perform selective deletion: Some applications do not provide the option of selecting individual documents for the load process. Therefore, you must first ensure that you can load the same range of documents into BW as you would delete from the ODS object. This note provides some application-specific recommendations to help you "repair" the incorrect data records.
    If you updated the data from the ODS object into the InfoCube, you can also delete it there using the "Selective deletion" function. However, if it is compressed at document level there and deletion is no longer possible, you must delete the InfoCube content and fill the data in the ODS object again after repair.
    You can only perform this action after a thorough analysis of all effects of selective data deletion. We naturally recommend that you test this first in the test system.
    The procedure generally applies for all SAP applications/extractors. The application determines the selections. For example, if you cannot use the document number for selection but you can select documents for an entire period, then you are forced to delete and then update documents for the entire period in the data target. Therefore, it is important to look first at the selections in the InfoPackage exactly before you delete data from the data target.
    Some applications have additional special features:
    Logistics cockpit: As preparation for the repair request, delete the SetUp table (if you have not already done so) and fill it selectively with concrete document numbers (or other possible groups of documents determined by the selection). Execute the Repair request.
    Caution: You can currently use the transactions that fill SetUp tables with reconstruction data to select individual documents or entire ranges of documents (at present, it is not possible to select several individual documents if they are not numbered in sequence).
    FI: The Repair request for the Full Upload is not required here. The following efficient alternatives are provided: In the FI area, you can select documents that must be reloaded into BW again, make a small change to them (for example, insert a period into the assignment text) and save them -> as a result, the document is placed in the delta queue again and the previously loaded document under the same number in the BW ODS object is overwritten. FI also has an option for sending the documents selectively from the OLTP system to the BW system using correction programs (see note 616331).
    3. Repair request execution
    How do you proceed if you want to load a repair request into the data target? Go to the maintenance screen of the InfoPackage (Scheduler), set the type of data upload to "Full", and select the "Scheduler" option in the menu -> Full Request Repair -> Flag request as repair request -> Confirm. Update the data into the PSA and then check that it is correct. If the data is correct, continue to update into the data targets."
    Hope this helps.
    Regards,
    Amruth

  • IW31 - Modify Basic Finish Date

    Hi Gurus,
    I'm looking for a badi or user exit that allows me to modify the Basic Finish Date in the IW31/IW32 transactions just before saving.
    I've tried the following ways without any success:
    - IWO10009 - Function Module --> EXIT_SAPLCOIH_009
    This user exit is triggered before saving (as I would like), but I'm not able to modify any field. It is used to activate the automatic order release when saving the order.
    - IWO10012 - Function Module --> EXIT_SAPLCOIH_012
    It allows me to modify the CAUFVD-GLUZP and CAUFVD-GLTRP fields, but this user exit is triggered when the priority field is modified.
    - BAdI WORKORDER_UPDATE --> I've tried to implement this BAdI, but all parameters in the AT_SAVE and BEFORE_UPDATE methods are type IMPORTING, so SAP does not allow them to be modified. I've tried it using field symbols, but I think I've done something wrong.
    Is there another way to do it?
    Thanks and regards,
    Sergi.
    P.S.: I've looked for it in the forum threads but I could not find any valid answer.

    You can use function module CO_IH_SET_HEADER in user-exit IWO10009 (at save).
    Search for more details using "CO_IH_SET_HEADER"
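    The pattern inside EXIT_SAPLCOIH_009 would look roughly like this; a sketch only, as the CAUFVD_EXP/CAUFVD_IMP parameter names are from memory, so verify both function module interfaces in SE37 first:

    * In EXIT_SAPLCOIH_009 (called at order save):
    DATA LS_CAUFVD TYPE CAUFVD.

    * Read the current order header from the SAPLCOIH buffer
    * (parameter names from memory - verify in SE37)
    CALL FUNCTION 'CO_IH_GET_HEADER'
      IMPORTING
        CAUFVD_EXP = LS_CAUFVD.

    * Adjust the basic finish date before the order is saved
    LS_CAUFVD-GLTRP = SY-DATUM + 7.   "example value only

    * Write the modified header back
    CALL FUNCTION 'CO_IH_SET_HEADER'
      EXPORTING
        CAUFVD_IMP = LS_CAUFVD.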
    Be very careful when using this technique as it could cause data inconsistencies.
    PeteA

  • Uploading Time data to HR Clusters B1 and B2

    Hi All,
    My agenda is to download time data (HR clusters B1 and B2) from one SAP system (4.7) using the macros RP-IMP-C1-B1 and RP-IMP-C2-B2, and to upload it to an ECC 6.0 SAP system using the macros RP-EXP-C1-B1 and RP-EXP-C2-B2. Though the upload could be achieved using report RPTIME00 (provided all the infotypes are filled), I don't want to do this because the time schema may change, which would cause data inconsistencies between the two systems.
    I successfully imported the B1 cluster from one system, but when I try to upload the same data using macro RP-EXP-C1-B1 I do not get any error (SY-SUBRC = 0), yet the data is not saved in the database even though I issue a COMMIT WORK.
    I filled all the required values, i.e. I filled B1-KEY with the personnel number, filled the PCL1-RELID, PCL1-SRTFD and PCL1-SRTF2 fields with values, and included the includes:
    RPC1B100
    RPC2B200
    RPPPXD00
    RPPPXD10
    RPPPXM00
    The code of macro RP-EXP-C1-B1 is:
    DEFINE RP-EXP-C1-B1.
    * Fill the version information for the cluster record
    PCL1-VERSN = B1-VERSION-NUMBER.
    B1-VERSION-SAPRL  =                                         "L6BK003229
                   CL_PT_CLUSTER_UTIL=>GET_RELEASE( 'SAP_HR' ). "L6BK003229
    B1-VERSION-UNAME  = SY-UNAME.
    B1-VERSION-DATUM  = SY-DATUM.
    B1-VERSION-UZEIT  = SY-UZEIT.
    B1-VERSION-PGMID  = SY-REPID.
    * Write all B1 time-evaluation tables to cluster B1 of table PCL1.
    * USING PCL1_EXP_IMP routes the EXPORT through the PCLx buffer
    * (form routine PCL1_EXP_IMP) instead of writing directly to the database.
    EXPORT B1-VERSION
           NT1
           NT2
           IFT1
           IFT2
           ERT
           NCT
           QT
           ST
           ITP1
           ITP7
           ITP50
           PDPPM
    TO   DATABASE PCL1(B1)
    ID B1-KEY USING PCL1_EXP_IMP.
    RP-IMP-B1-SUBRC = SY-SUBRC.
    END-OF-DEFINITION.
    In this macro there is a USING parameter, PCL1_EXP_IMP, which is the form routine used to handle the PCLx buffers. I believe my data is not being stored in the database table because of this: if I put this code into a routine and call that routine in my program instead of calling the macro, and in that routine comment out USING PCL1_EXP_IMP, then everything works fine; but I don't know what else this routine is handling.
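    I am wondering whether the buffered data simply has to be flushed to the database explicitly after the EXPORT; the sketch below is my current guess (assuming function module HR_FLUSH_BUFFER_UPDATE_PCLX is the right buffer-flush call; I have not verified this):

    * After RP-EXP-C1-B1 the data sits in the PCLx buffer only.
    * Guess (unverified): flush the buffer to the database, then commit.
    CALL FUNCTION 'HR_FLUSH_BUFFER_UPDATE_PCLX'.
    COMMIT WORK.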
    Has anybody faced the same issue, and how did you handle the buffers? Has anybody used macro RP-EXP-C1-B1 to upload cluster B1 data? If so, please let me know the steps.
    Thanks in advance .
    Points will be rewarded for useful answers.

    Hi Rajanidhi,
    We are facing the same issue; can you please post the solution? Any clue would help us a lot in this regard.
    Thanks,
    Somu

  • Stock inconsistencies due to wrong Unit of measure

    Hi,
    I am currently facing an issue caused by master data inconsistencies. The problem is as follows:
    I have a material with unit of measure "pack". A PO was supposed to be created for 100 packs of this material, but somehow the unit of measure in the PO was "each", and thus it was created for 100 each. The supplier sent the quantity as 100 packs. GR was done on the basis of the PO, so 100 each was entered. The invoice has been created and settled. The stock overview shows this material as 100 each, whereas the quantity physically available is actually 300 (100x3, as each pack contains 3 pieces).
    I need to correct this situation. so that stock and accounts show the correct data.
    Is there any simple way of correcting this issue, like scrapping etc., and how?
    Thank you.

    Dear Priya,
    You have completed the entire purchase cycle.
    One way is to reverse the complete cycle, change the UOM, and redo the cycle.
    The other way, if you need to increase the stock, is to do so by a free-of-charge delivery (movement type 511) or by creating a PO with the 'free item' tick followed by GR.
    In this case you need to check the MAP of the material. If you need to change the MAP as well, use transaction MR21.
    Before doing this, please check the accounting impact of all of it (the price difference account or the revaluation account will be hit).
    In the second option you cannot change the UOM (maintain the conversion factor between EA and PACK as 1:1).
    Cheers,
    Satish Purandare

  • Table inconsistencies after EHP4 Upgrade

    Hi Experts,
    We're setting up ALE from Production to Development, as well as some manual copying of table content.
    We have completed the ALE setup, but we have an issue with the table content copy.
    Some SAP tables have changed after Development was updated with EHP4.
    To be more specific, we have identified some tables which will need to be exported from Production (content only) via a transport and then imported into Development.
    If we just try to export/import these tables, the transport will most likely fail, and if it doesn't, there may be data inconsistencies.
    If we just copy the data as is, some fields would be left unpopulated, potentially causing inconsistencies.
    I need your suggestions on the best course of action.

    Hi,
    Are your development and production systems on different support package levels? If yes, then this is normal and the table structures will differ.
    Why do you need to export tables from production to development?
    Thanks
    Sunny
