Data conversion strategy for new SOB

Dear Viewers,
We are on 11.5.10.
We are creating a new SOB with a change in currency, effective Feb-11, as this is the requirement.
For this, we need to do data conversion.
I am unclear about how to handle Purchase Orders and Sales Orders.
Purchase Orders:
Open purchase orders will be converted, i.e. the unfulfilled POs that have not been received and are fully open.
For POs that have been received but not delivered, users have been requested to clear the in-transit receipts.
What should be done for POs that are partially received?
A PO that is fully received and delivered will not be converted to the new SOB, as it is not an open PO. But if its invoice comes after Feb-11, how will the matching be done?
What if a return has to be made after Feb-11 under the new SOB?
Sales Orders:
Open sales orders will be converted, that is, the ones that have been entered but not yet booked.
Users have been requested to clear the sales order lines that are already pick-confirmed but not yet shipped, so that they are shipped and interfaced to AR.
For the sales orders that have been booked, the lines that have not yet been processed further will also be converted.
Now, what if a receipt comes after Feb-11? How do we handle this, given that the sales order would not have been converted?
Please give your advice on the data migration strategy for POs and SOs.
Please also add any points I may have missed.
Appreciate your help
Thanks
Emm

Hi David,
For master data conversion you can use LSMW and the RE-FX BAPIs (please refer to SAP note [782947|https://service.sap.com/sap/support/notes/782947]).
Regards, Franz

Similar Messages

  • Data conversion for new SOB

    (The question is identical to the thread above; the reply below quotes the relevant parts.)

    emm wrote:
    "If a PO is fully received and delivered, it will not be converted to the new SOB, as it is not an open PO. But if its invoice comes after Feb-11, how will the matching be done?"
    The business needs to decide whether they are fine with matching the POs manually (i.e. referring to the documents and verifying); in that case you may capture the PO information in a DFF on the invoice distribution. Otherwise, if the POs under this scenario have to be converted, you may consider converting them with the receipt close tolerance set to 100% and the match level set to 2-way (again, business approval is needed to handle the audit issues) in order to avoid converting the receipts/deliveries.
    "What if a return has to be made after Feb-11 under the new SOB?"
    Ideally, returns can be done using Miscellaneous/Account Alias Issues, specifying appropriate transaction reasons to clarify the scenario.

  • Data Conversion Errors for the last week

    We've been running a simple Stream Analytics job with a very light workload for a little over a month. Input is an Event Hub and output is SQL Server. We noticed today that we haven't received anything into SQL Server since 2014-12-08 (we don't receive events every day, so we only know that everything still worked on the 8th of December), so we checked the job's logs. It seems the job is failing to process the messages: the value of "Data Conversion Errors" is high.
    I wonder what could have happened? We haven't touched the client since we started the job, so it's still sending the messages in the same format. And we haven't touched the job's query either.
    Has there been an update to either Stream Analytics or Event Hubs which could cause the issue we're seeing?

    I've followed the TollApp instructions word for word (except for the NamespaceType "Messaging" parameter that has been added to New-AzureSBNamespace).
    I have 0 rows in the output, and this is the service log:
    Correlation ID: e94f5b9e-d755-4160-b49e-c8225ceced0c
    Error message: After deserialization, 0 rows have been found. Possible reasons could be a missing header or malformed CSV input.
    Message time: 2015-01-21 10:35:15Z
    Microsoft.Resources/EventNameV2: sharedNode92F920DE-290E-4B4C-861A-F85A4EC01D82.entrystream_0_c76f7247_25b7_4ca6_a3b6_c7bf192ba44a#0.output
    Microsoft.Resources/Operation: Information
    Microsoft.Resources/ResourceUri: /subscriptions/eb880f80-0028-49db-b956-464f8439270f/resourceGroups/StreamAnalytics-Default-West-Europe/providers/Microsoft.StreamAnalytics/streamingjobs/TollData
    Type: CsvParserError
    Then I stopped the job, connected to the event hub with a console app, and received this:
    Message received. Partition: '11', Data: 'TollId,EntryTime,LicensePlate,State,Make,Model,VehicleType,VehicleWeight,Toll,Tag
    85,21/01/2015 10:24:56,QBQ 1188,OR,Toyota,4x4,1,0,4,361203677
    Message received. Partition: '11', Data: 'TollId,EntryTime,LicensePlate,State,Make,Model,VehicleType,VehicleWeight,Toll,Tag
    33,21/01/2015 10:25:42,BSE 3166,PA,Toyota,Rav4,1,0,6,603558073
    Message received. Partition: '11', Data: 'TollId,EntryTime,LiMessage received. Partition: '10', Data: 'TollId,EntryTime,LicensePlate,State,Make,Model,VehicleType,VehicleWeight,Toll,Tag
    59,21/01/2015 10:23:59,AXD 1469,CA,Toyota,Camry,1,0,6,150568526
    Message received. Partition: '10', Data: 'TollId,EntryTime,LicensePlate,State,Make,Model,VehicleType,VehicleWeight,Toll,Tag
    25,21/01/2015 10:24:17,OLW 6671,NJ,Honda,Civic,1,0,5,729503344
    Message received. Partition: '10', Data: 'TollId,EntryTime,LicensePlate,State,Make,Model,VehicleType,VehicleWeight,Toll,Tag
    51,21/01/2015 10:24:23,LTV 6699,CA,Honda,CRV,1,0,5,169341662
    Note the garbled third message. In my opinion it's unrelated; it could be the WriteLine call failing to keep up with the stream in the console application. At worst the corruption is in the stream itself, but then I should still see at least some rows in the output for the correctly formatted messages.
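    One way to narrow down a CsvParserError like this is to replay a few raw event payloads through a strict CSV parse before suspecting the service. Below is a minimal Python sketch; the expected column list is an assumption taken from the sample messages above, and whether each event should carry its own header line depends on how the job's input was configured.
    import csv
    import io

    # Expected header, copied from the sample messages above (assumption).
    EXPECTED = ["TollId", "EntryTime", "LicensePlate", "State", "Make",
                "Model", "VehicleType", "VehicleWeight", "Toll", "Tag"]

    def diagnose(payload: str) -> str:
        """Return a short diagnosis for one event payload."""
        rows = list(csv.reader(io.StringIO(payload)))
        if not rows:
            return "empty payload"
        if rows[0] != EXPECTED:
            return "unexpected or missing header: %r" % (rows[0],)
        bad = [r for r in rows[1:] if len(r) != len(EXPECTED)]
        if bad:
            return "%d malformed row(s), e.g. %r" % (len(bad), bad[0])
        return "ok (%d data row(s))" % (len(rows) - 1)

    # One of the messages captured from the event hub above:
    sample = ("TollId,EntryTime,LicensePlate,State,Make,Model,"
              "VehicleType,VehicleWeight,Toll,Tag\n"
              "85,21/01/2015 10:24:56,QBQ 1188,OR,Toyota,4x4,1,0,4,361203677")
    print(diagnose(sample))  # -> ok (1 data row(s))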

  • Data not loading for new fields/InfoObjects from DSO to InfoCube

    Hi Gurus,
    I have a DataSource that provides data to an existing DSO and then to an InfoCube. My client asked me to add a couple of fields to the DataSource and bring the data into the DSO, and then from the DSO into the InfoCube.
    Here is the old scenario: DataSource -> DSO -> InfoCube.
    Here is the new scenario: DataSource (added new fields) -> DSO (added new InfoObjects for the corresponding DataSource fields) -> InfoCube (added new InfoObjects mapped from the DSO).
    I added the new fields to the DataSource and the corresponding InfoObjects to the DSO and the InfoCube.
    I successfully loaded data from the DataSource to the DSO; data is populating the new fields/InfoObjects in the DSO.
    But when I load data from the DSO to the InfoCube, I don't see any data for the new InfoObjects in the InfoCube.
    The load from the DSO to the InfoCube works fine for the old InfoObjects/fields, but not for the new InfoObjects I added to the InfoCube.
    -Sonali

    Hi,
    Why don't you debug the load through DTP debugging and check what happens to the source and target fields as they pass through the transformation? You can easily trace where the fields become blank. (Also verify that the DSO-to-InfoCube transformation was regenerated and activated after the new InfoObjects were mapped; an inactive or unmapped transformation is a common cause of blank targets.)
    For the loads you mentioned earlier, did the Added Records/Transferred Records columns show values for the cube?
    Regards,
    Mani

  • Data Conversion rules for EDI processing (same client IDOC processing)

    Hi,
    I am trying to post IDocs in the same client. It is a PO -> SO process, i.e. there will be one outbound and one inbound IDoc in the same client using EDI processing.
    I am using data conversion rules for converting the sender fields.
    The LIFNR and PAORG of segment E1EDKA1 have to be converted.
    For ALE processing, the data conversion is done correctly, but no conversion is done for EDI.
    Can anybody help me with this problem?
    Thanks in advance.
    Regards
    Megha

    Issue solved

  • Any Video strategy for new iPad?

    I know the video for iPad should be an MP4 file, encoded with H.264.
    As the new iPad features a Retina display, it requires higher-resolution video.
    Does anyone have an idea of the best settings when compressing video for the new iPad?
    Quality vs. file size (resolution, bitrate, and anything else?)

    Thanks to Bob.
    I also read something like this: iPad 3 supports a maximum video dimension of 1920×1080, which is likely more resolution than you really need on iPad 3.
    It seems that the resolution/dimension is not the most important thing.
    How about the bitrate for iPad 3? Does anyone have any experience with it?

  • Release Strategy for New Plant

    Dear All,
    I have a release strategy in my 3 plants and it is working fine.
    Now I want to add one more plant under the same release strategy.
    I tried to add the new plant in CL24N. The system allows me to add it there, but the release strategy is not working in this new plant.
    Waiting for a reply.

    Hi,
    Follow these steps to insert a new characteristic value into your release procedure:
    1. Define the new characteristic value (plant4) in your characteristic.
    2. Assign the same characteristic value (plant4) in the classification of the release strategy.
    3. Run the release simulation and check the result.
    4. Create a PO for your requirement and save it.
    5. Use ME22N/ME23N to view the release status of the PO.
    Hope it helps.
    Manoj Singh

  • Release Strategy for New & Changed POs - Using characteristic CEKKO-REVNO

    Hi All,
    I have the following requirement/scenario:
    1. If a PO is created manually with no reference to a PR, the PO needs to be approved through a release strategy.
    2. If the PO has been created automatically, either from a requisition or through SRM (for example), then the PO needs to be approved only if it has been changed. So this would need to go through a second release strategy.
    To distinguish between the two, we are using the version number field CEKKO-REVNO in the communication structure CEKKO as a characteristic in the release strategy.
    This is what I hoped would work after doing the necessary config:
    For created POs, in my classification I have CEKKO-REVNO >= 0. So any PO whose version number is >= 0 will trigger release strategy S1 (for example).
    For changed POs, in my classification I have CEKKO-REVNO > 0. Whenever you change a PO, it generates versions 1, 2, 3 and so on, and hence the release strategy will trigger.
    The problem: whenever I use CEKKO-REVNO as one of my characteristics, the release strategy is not triggered at all, no matter what the conditions on the PO are. When I removed REVNO from my classification, the release strategy was triggered as planned. I have also activated version management and done all the necessary config. I would like to know why the release strategy is not triggered when the REVNO field from communication structure CEKKO is used. Has any of you experienced this issue before?
    Is there any other identifier I can use to address the requirement above? Let me know.
    Thanks
    Ashvin

    Thanks Charlie for your response.
    The conditions on the two strategies are not equal.
    For creation: the classification is REVNO >= 0 with document type ZB (strategy S1). When I create the PO manually, the version starts at 0, so the condition should be satisfied here.
    For change: the classification is REVNO > 0 with document type ZC (strategy S2). When the PO is created initially it will not satisfy strategy S1, as the doc type is different for S1. When I change a PO created with reference to a requisition, S2 should take effect, because on change the version becomes 1, REVNO is > 0, and the doc type matches.
    So I don't see where S1 and S2 overlap in a way that would make the system not propose a release strategy.
    Let me know your thoughts.
    Thanks for your feedback.

  • How to insert a date column entry for a new row from the ADF BC Tester

    Hi,
    JDeveloper version 11.1.1.5.0.
    I have a table with a DATE column. I am trying to insert a new row into the table from the ADF BC Tester. While providing a value for the DATE column I hit the error below:
    (oracle.jbo.domain.DataCreationException) JBO-25009: Cannot create an object of type:oracle.jbo.domain.Date from type:java.lang.String with value:2011/12/06
    Please let me know what format I should use to specify the date value when inserting an entry using the BC Tester.
    Thanks
    Rathnam

    Hi,
    Check this thread:
    operation not allowed on java.lang.object
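    If it helps: oracle.jbo.domain.Date is built on the JDBC date escape format, so the tester generally accepts values like 2011-12-06 rather than 2011/12/06. That is an assumption worth verifying on your 11.1.1.5.0 install; here is a trivial Python sketch of the reformatting:
    from datetime import datetime

    # Reformat a slash-separated date into the JDBC escape format
    # (yyyy-mm-dd) that oracle.jbo.domain.Date generally accepts.
    raw = "2011/12/06"
    print(datetime.strptime(raw, "%Y/%m/%d").strftime("%Y-%m-%d"))  # 2011-12-06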

  • PA data migration Strategy for global roll-out

    Hi Guys,
    What is the best practice for loading PA data into SAP HCM for a global roll-out? We plan to go live with some 50 countries. For country-specific infotypes like Address, it does not make sense to create country-specific LSMWs for each country; we could end up with hundreds of LSMWs, which does not seem right to me. Nearly 20 of these countries have fewer than 100 employees each. Any ideas what the best practices around this are?
    regards
    Sam

    Create batch input sessions with LSMW as usual.
    Then write a report that reads the batch input session (tables APQI, APQD), determines the country-specific dynpro for every PERNR (tables T582A, T588M, feature Pxxxx, ...) and changes the dynpro in APQD. Use the debugger to find the offset and length of the field values in the record.
    Then you can process the batch input.
    I have done it that way many times.
    Alternatively you could update the PA tables directly, but then it is difficult to detect errors.

  • GL Legacy Data Conversion

    I have a question about the data conversion strategy.
    We are planning to store 2 years of detailed transactions and 4 years of balances in the Oracle system, and we were going to take the same approach for the data conversion. That would, however, cause schedule and workload constraints. Actually, we have not yet discussed this from a realistic viewpoint; in other words, we do not insist on converting all of that data (2 years of transactions and 4 years of balances).
    Although I believe we will be able to reduce the volume of data to be converted from the old chart of accounts to the new one, I am a little concerned about a couple of things:
    1. The workload for reporting processes.
    Assuming we will not use Oracle standard reports much, this would not be a big issue even if we do not convert all the data mentioned above. We will store historical report data somewhere and be able to generate certain reports using both Oracle data and the historical data in the repository. If not, it could cost us extra effort.
    2. Audit trails / examination trails.
    If we give up converting the data to fit the new system, and a fiscal year has not been examined yet, how should we handle the non-converted data for the examination? I am wondering whether it is enough to prepare a conversion table that ties Oracle balances back to the ABC detailed transactions for reference.
    If anyone is aware of anything you can advise me on, could you please provide some information or guidance? Thanks to all.

    Hi,
    When you talk about GL data, it is the trial balance to be loaded from the legacy system into Oracle Applications.
    You can use Web ADI to upload it. The checklist:
    1. The balance for each account combination in the legacy system is to be mapped to a GL Oracle code-combination balance.
    2. With respect to open AR and AP invoices: if the invoices are converted using a control account for migration in GL, then the balances transferred from AP/AR need not be reversed in GL from the source receivables and payables.
    3. If the same account combinations are used for migrating the balances from AP/AR to GL, then the balances transferred from AP/AR should be reversed, so that they do not affect the GL trial balance.
    4. Finally, ensure that the trial balance tallies with your legacy system, then upload it using Web ADI, import, review and post it.
    Hope these points help your GL data conversion.
    Regards,
    Ramaa
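    To illustrate point 1 of the checklist: before the Web ADI upload, the legacy trial balance has to be remapped to Oracle code combinations and checked that it still tallies. A minimal Python sketch (all file and column names here are hypothetical):
    import csv

    # Hypothetical mapping file: legacy account combination -> Oracle code combination.
    with open("account_mapping.csv") as f:
        mapping = {row["legacy_account"]: row["oracle_ccid"] for row in csv.DictReader(f)}

    converted, unmapped = [], []
    total_dr = total_cr = 0.0
    with open("legacy_trial_balance.csv") as f:
        for row in csv.DictReader(f):  # columns: legacy_account, debit, credit
            ccid = mapping.get(row["legacy_account"])
            if ccid is None:
                unmapped.append(row["legacy_account"])
                continue
            dr, cr = float(row["debit"]), float(row["credit"])
            total_dr += dr
            total_cr += cr
            converted.append({"code_combination": ccid, "debit": dr, "credit": cr})

    print("unmapped accounts:", sorted(set(unmapped)))
    print("debits %.2f vs credits %.2f (%s)" % (
        total_dr, total_cr,
        "tally" if abs(total_dr - total_cr) < 0.01 else "DO NOT tally"))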

  • Data conversion program

    Can you give me an idea for a data conversion program for Inventory Management stocks? It should read a legacy extract file and the valuation master file from the PC, get the issue storage location and valuation details, and output a file with these details.
    regards,
    phani

    Hi Phani,
    Make sure that the data from the legacy system is stored in a TXT file in a specific format.
    In SAP, create a program to upload the file using FM GUI_UPLOAD. The structure of the internal table will be the same as that of the file.
    Upload the data and update the necessary tables.
    Best regards,
    Prashant
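    On the legacy/PC side, the extract usually needs normalizing into a delimited TXT whose column order matches the internal table the ABAP program declares. A minimal Python sketch, assuming the legacy extract is CSV and the field names are hypothetical:
    import csv

    # Hypothetical column order matching the ABAP internal table
    # that the upload program will declare (one field per column).
    FIELDS = ["material", "plant", "storage_location", "quantity", "valuation"]

    with open("legacy_extract.csv", newline="") as src, \
         open("stock_upload.txt", "w", newline="") as dst:
        writer = csv.writer(dst, delimiter="\t")
        for row in csv.DictReader(src):
            writer.writerow([row[f] for f in FIELDS])
    # The resulting tab-delimited file can then be read with GUI_UPLOAD
    # (FILETYPE = 'ASC', HAS_FIELD_SEPARATOR = 'X').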

  • EC-CS Data Loading Strategy

    Hello. We are currently implementing the Enterprise Consolidation Transaction Data InfoCube (0ECCS_C01).
    We are attempting to develop a data loading strategy for the InfoCube. Since this InfoCube does not have a delta process, reloading the entire cube on a daily basis is not feasible due to the length of time. We would like to set it up so that it loads the current month for actuals plus future periods for forecast dollars.
    Has anyone established a data loading process for their consolidated accounting InfoCube that works well and keeps loading time to a minimum?
    Best regards,
    Lynn

    Hi,
    You could prepare two InfoPackages:
    one with which you upload all data from previous years/months;
    a second with OLAP variables (for example 0DAT) with which you upload only data for the present day/month/year, depending on which variable you select (add this package to the chain).
    If the second package crashes, you have to repeat the procedure.
    Regards,
    Dominik

  • SLO and data conversion

    I read through the guides for SLO and got the impression that SLO can be used to do data conversion.
    For example, that it can convert an Oracle-DB-based non-SAP system into an SAP client.
    Is this true?
    Our company hires tons of high-level consultants to convert non-SAP HR systems into SAP HR systems.
    With the magic box SLO, are all those consultants no longer needed?
    Thanks!

    SLO helps us convert object values which are no longer required or need to be modified to harmonize the system.
    For example, in 2012 we had sales orgs BBS, BBK, JJG etc., and now we want to merge them into meaningful orgs like BB01 and BB02; this can be achieved with SLO.
    SAP will help you achieve this. The SLO team will run the programs on your production systems, and it will change the values at table level.
    We, on the other hand, need to adapt the custom conversions, if any, and validate the data after conversion.
    In your case, I don't think SLO is helpful.
    Thanks,
    Jaydip Patel

  • XI for new data conversion?

    Hello experts,
    Your recommendations, please, on data conversion with XI versus using the LSMW tool.
    We are working on a new implementation and are planning to use XI for the interfaces anyway. Now the idea is to use XI for all data conversion from the legacy systems.
    Good idea, not suitable, or better to use LSMW?
    Your thoughts please!
    Many thanks,
    Albert

    Hi again.
    There is no general answer to this question, because it depends on several aspects. I'll try to order them by importance.
    1 - What kind of files do you have for material and BOM creation? XML files?
    XI is a powerful tool for transferring that kind of file. With an ABAP proxy, you can easily decompose an XML file into a structure in ABAP, so importing files is easy. Sending is also easy: you fill a structure and XI converts it to an XML file and sends it to the respective receiver, using its communication channel. So, if it's XML, I would say YES, use XI. If not, XI can only copy those flat files, and that way it's not easy to get their data into a proxy. You can copy the files to a directory and import them with a program, but if so, why use XI for this purpose at all? Or you can build mappings to convert the flat file into an XML file; the mapping can be easy or very difficult.
    2 - Experience in XI or LSMW? And urgency?
    This question is important. If you have experience in one of these tools, you should use it. As far as I can tell from your post, XI is a new tool for you, right? Mastering XI is a process that will take some time (not too much), so consider this in your decision too. However, since you will start using XI for the interfaces anyway, this could be a good start!
    3 - Many other reasons can be found in the link that Murthy provided, but these two should be the most important (in my opinion).
    Hope this helps.
    Regards,
    Valter Oliveira.
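    As a language-neutral illustration of point 1 (the material message below is hypothetical; in XI an ABAP proxy hands you this decomposition for free), structured XML falls apart into records almost mechanically, which is exactly what a flat file lacks:
    import xml.etree.ElementTree as ET

    # Hypothetical material master message.
    doc = """<materials>
      <material><number>M-100</number><description>Pump</description><uom>EA</uom></material>
      <material><number>M-200</number><description>Valve</description><uom>EA</uom></material>
    </materials>"""

    # Each element becomes a record without any hand-written mapping.
    for m in ET.fromstring(doc).findall("material"):
        record = {child.tag: child.text for child in m}
        print(record)  # {'number': 'M-100', 'description': 'Pump', 'uom': 'EA'} ...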
