Design issue - Historical data - referential integrity

Hi,
I don't know if this is the right place to post this question, but I would like to know about the design issues in storing historical data.
I have a historical table of events (notifications, alarms, etc.).
In the historical table, should I store fields with a foreign key to the master table (the device table)?
In this design, how do I handle updates (or record removals) to the master table?
Should I have a trigger update the historical table too, or should I simply not permit updates to the master table's primary keys?
And what about not storing the foreign keys in the historical data at all?
What are the disadvantages of that model?
Thank you,
Faria
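
To make the two candidate designs concrete, here is a minimal sketch; the device and event table names and the Oracle-style syntax are illustrative only, not from the original post:

    -- Design 1: history rows keep an enforced foreign key to the master table.
    -- Deleting a device then forces a choice: restrict, cascade, or trigger.
    CREATE TABLE device (
      device_id   NUMBER PRIMARY KEY,
      device_name VARCHAR2(100)
    );
    CREATE TABLE event_history_fk (
      event_id   NUMBER PRIMARY KEY,
      device_id  NUMBER NOT NULL REFERENCES device (device_id),
      event_type VARCHAR2(30),   -- notification, alarm, ...
      event_time DATE
    );

    -- Design 2: no foreign key; the history row carries a denormalized copy
    -- of the master attributes as they were at event time, so it survives
    -- deletes of the device and changes to its primary key.
    CREATE TABLE event_history_denorm (
      event_id    NUMBER PRIMARY KEY,
      device_id   NUMBER,           -- kept for joining, but not enforced
      device_name VARCHAR2(100),    -- snapshot taken when the event occurred
      event_type  VARCHAR2(30),
      event_time  DATE
    );

The trade-off: the first design guarantees consistency but ties the history to the master table's lifecycle; the second keeps history immutable at the cost of storage and of possibly drifting from the current master data.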

Faria,
The answer depends on why you want historical data.
Is the historical data to be used for auditing or legal purposes?
Are you trying to move historical data for performance reasons?
Is there some other purpose?
If you're doing this for auditing purposes, I don't recommend using foreign keys since you won't easily be able to capture delete activity.
I mostly use Designer for our designs and will typically turn on server-side journaling for this purpose. Designer then generates the journaling tables and writes the triggers to manage the whole process. You don't even need to be a programmer to figure it out.
Let me know if this isn't your purpose or if you need clarification.
Thanks, George
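
To make the journaling suggestion concrete, here is a minimal hand-written sketch of the kind of thing a tool like Designer generates; the table and trigger names are hypothetical and the PL/SQL is illustrative only:

    -- Journal table: deliberately no foreign key to device, so the 'D' rows
    -- survive after the master record is gone.
    CREATE TABLE device_journal (
      device_id   NUMBER,
      device_name VARCHAR2(100),
      operation   VARCHAR2(1),    -- 'I'nsert, 'U'pdate, 'D'elete
      changed_at  DATE,
      changed_by  VARCHAR2(30)
    );

    CREATE OR REPLACE TRIGGER trg_device_journal
    AFTER INSERT OR UPDATE OR DELETE ON device
    FOR EACH ROW
    BEGIN
      IF DELETING THEN
        INSERT INTO device_journal
        VALUES (:OLD.device_id, :OLD.device_name, 'D', SYSDATE, USER);
      ELSIF UPDATING THEN
        INSERT INTO device_journal
        VALUES (:NEW.device_id, :NEW.device_name, 'U', SYSDATE, USER);
      ELSE
        INSERT INTO device_journal
        VALUES (:NEW.device_id, :NEW.device_name, 'I', SYSDATE, USER);
      END IF;
    END;
    /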

Similar Messages

  • Best design for historical data

    I'm looking for a way to design some historical data.
    Here is my case:
    I have millions of rows defining a population, spread across 20-30 different tables.
    Those rows all relate to one master subject area; let's say it is a person.
    The detail tables define a status over a period of time (marital status, address, gender (yes, that may change in some cases!), program, phase, status, support, healthcare, etc.). They all have two attributes that define the time period (dateFrom, dateTo) and one or more attributes that define the status of the person (the measure).
    We need a weekly picture of this population from 1998 to the present.
    The problems are these:
    1) the population we will analyze involves some 20 different criteria (measures); we may divide those into several data marts to avoid too much complexity
    2) we will drill down from year to week, and will perform comparisons like year to year, year to previous year, and year to date at each level of the time hierarchy (year, quarter, month, week).
    The questions are:
    1) do we need to transform our data to a week level for each person to determine the status at that level (the data may be updated, which would force this transformation to be refreshed)?
    2) do we need to aggregate at each level, because mixed situations exist due to the fact that more than one status may exist for the same person in a given time period (more than one status may exist in March, for example; how should we interpret it?), which would require some logic to be applied, wouldn't it?
    We would be glad to hear any recommendations/ideas about this!
    Thank You

    I would try to get some exact user requirements; with the dataset you have described, there are millions of combinations of answers that can be defined, and it will be difficult for you to fulfill them all in one place, especially if slowly changing dimensions are involved.
    The user requirements will determine the levels at which you transform the data.
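
    As a sketch of that week-level transformation, assuming a hypothetical person_status table with dateFrom/dateTo validity and a calendar_week table of week start dates (both invented for illustration):

        -- One row per person per week: pick the status rows whose validity
        -- interval covers the start of the week.
        SELECT w.week_start,
               s.person_id,
               s.status
        FROM   calendar_week w
        JOIN   person_status s
               ON  s.date_from <= w.week_start
               AND s.date_to   >= w.week_start;

    If two status rows overlap the same week (the "more than one status in March" case), this query returns both rows, which is exactly why a business rule (e.g. latest date_from wins) has to be applied during the transformation.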

  • Inventory 0IC_C03, issue with historical data (data before Stock initialization)

    Hi Experts,
    We implemented Inventory Management as follows.
    The initialization data and the delta records match the ECC MB5B data correctly, but the historical data (2007 and 2008, up to the date before initialization) is not showing correctly (compared to the stock on the ECC side); it shows only the difference in quantity between the stock initialization and the date of the query.
    We have done all the initial settings in BF11, set up the process keys, and filled the setup tables for the BX and BF datasources; we are not using the UM datasource.
    1. We loaded BX data and compressed the request (without a tick mark at "No Marker Update").
    2. We initialized BF data and compressed the request (with a tick mark at "No Marker Update").
    3. For deltas we are compressing requests daily (without a tick mark at "No Marker Update").
    Is this the correct process?
    Also, as you mentioned, for BX there is no need to compress (or should the BX request not be compressed?),
    and do we need to compress the delta requests?
    We have an issue with historical data validation.
    Here is the example:
    We initialized on May 5th, 2009.
    We loaded BX data from 2007 onwards (historical data).
    When we look at the data for January 1st, 2007, on the BI side it shows a value with a negative sign,
    while on ECC it shows a different value.
    For example, ECC stock on January 1st, 2007: 1500 KG
    Stock at initialization on May 5th, 2009: 2200 KG
    On the BI side it shows: -700 KG
    2200 + (-700) = 1500,
    but on the BI side it is not showing 1500 KG
    (it shows values as negative with reference to the initialization stock).
    Can you please tell me whether this process is correct, or whether we did something wrong in the data loading?
    In the validity table (L table) there are 2 records with SID values 0 and -1; is this correct?
    Thanks in advance.
    Regards,
    Daya Sagar
    Edited by: Daya Sagar on May 18, 2009 2:49 PM
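
    The arithmetic behind the marker logic, written as a hypothetical query; the stock_marker and material_movements tables are invented for illustration, and this shows only the arithmetic, not how BW stores non-cumulative key figures:

        -- stock on a historical date = marker stock at initialization
        --                              minus the net movements between that
        --                              date and the initialization date
        SELECT m.marker_qty
               - (SELECT NVL(SUM(mv.quantity), 0)
                    FROM material_movements mv
                   WHERE mv.movement_date >  DATE '2007-01-01'
                     AND mv.movement_date <= m.init_date) AS stock_on_2007_01_01
        FROM   stock_marker m;

    With marker_qty = 2200 KG and net movements of 700 KG between the two dates, this yields the expected 1500 KG; a report that shows only -700 suggests the marker was not applied during compression.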

    Hi Anil,
    Thanks for your reply.
    1. You have performed the initialization on 15th May 2009.
    Yes.
    2. For the data after the stock initialization, I believe that you have either performed a full load from the BF datasource for the data from 16th May 2009 onwards, or you have not loaded any data after 15th May 2009.
    For the BF delta data after the stock initialization: this was compressed with the marker update option unchecked.
    If this is the case, then I think you need to:
    1. Load the data of 15th May (from the BF datasource) separately.
    Do you mean the BF (material movements) data of 15th May should be compressed with the "No Marker Update" option unchecked, as we do for the BX datasource?
    2. Compress it with the "No Marker Update" option unchecked.
    3. Check the report for the data on 1st Jan 2007 after this. If this is correct, then all the history data will also be correct.
    After this you can perform a full load till date
    Here, does "till date" mean that 15th May is not included?
    for the data after the stock initialization, and then start the delta process. The data after the stock initialization (after 15th May 2009) should also be correct.
    Can you please clarify these doubts?
    Thanks
    Edited by: Daya Sagar on May 20, 2009 10:20 AM

  • Data mart from two DSOs to one - Losing values - Design issue

    Dear BW experts,
    I'm dealing with a design issue for which I would really appreciate any help and suggestions.
    I will be as brief as possible, and can explain further based on any doubts or questions I receive, to make it easier to go through this problem.
    I have two standard DSOs (DSO #1 and #2) feeding a third DSO (DSO #3), also standard.
    Each transformation does NOT include all fields, but only some of them.
    One of the source DSOs (let's call it DSO #1) is loaded by a datasource that allows reversal records (Record Mode = 'R'). Therefore some updates to DSO #1 arrive as one entry with record mode 'R' and a second entry with record mode 'N' (new).
    Both feeds are delta mode, and the entries updated through each of them can differ (a specific entry, identified by unique key values, could be updated by one of the feeds with no updates from the second feed for that entry).
    The issue we have: when 'R' and 'N' entries occur in DSO #1 for an entry, that entry is also reversed and re-created in the target DSO #3 (even though not ALL fields are mapped in the transformation), and therefore we lose ALL the values that are updated exclusively through DSO #2; they become blank.
    I don't know if we are missing something in our design, or how we should fix this issue.
    I hope I was more or less clear with the description.
    I'd really appreciate your feedback.
    Thanks!!
    Gustavo

    Hi Gustavo,
    There are two things I need to know.
    1. Do you have any End Routine in your DSO? If yes, what is the setting under "Update behavior of End Routine Display" (the option available to the right of the Delete button after the End Routine)?
    2. Did you try a full load from DSO #1 and DSO #2 to DSO #3? Do you face the same problem?
    Regards
    Anindya
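
    As a relational analogy for the blanking effect (the table and key are hypothetical, and BW does not literally run these statements, but an overwrite DSO behaves similarly):

        -- The 'R' image reverses, i.e. removes, the existing record in DSO #3...
        DELETE FROM dso3 WHERE doc_key = '4711';
        -- ...and the 'N' image recreates it carrying only the fields mapped
        -- from DSO #1:
        INSERT INTO dso3 (doc_key, field_from_dso1)
        VALUES ('4711', 'new value');
        -- field_from_dso2 is now empty, although DSO #2 never sent a change.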

  • Report Painter - PCA reports & historical data issue

    I need to create a Report Painter PCA report for P&L that will do the following:
    --> Consolidated P&L for 2 company codes
    --> Company code 1 - YTD figures to include the full year
    --> Company code 2 - YTD numbers to include only the 2nd half of the year, as company 2 was acquired in the middle of the year
    The problem is that we are migrating the historical data for the last two years. I do not want the report to pick this up as part of the report.
    Suggestions?

    anyone?
    Thanks

  • Please Help: PI Data Dependent Integration Builder Authorizations NOT Working

    Dear Friends / Experts,
    I have spent many days, explored all the weblogs and links on this website, and implemented all the steps required to achieve data-dependent Integration Builder security, and I am not successful so far. I am just giving up now - please help me.
    As I said, I have already read all the important forum links and SAP web links and followed each and every step: service.sap.com/instguidesNW04 → Installation → SAP XI
    Security requirement - data-dependent / object-level authorizations in XI / PI:
    In distributed teams or in a shared PI environment it might be necessary to limit authorization for a developer or a group of developers to only one Software Component, or to objects within a Software Component, or to specific Configuration Objects.
    Our environment: PI 7.0 SP 16
    - Created a new role in the Integration Builder Design tool
    - Added Object Types of any Software Component and Namespace
    - Enabled usage of Integration Builder roles in the Exchange Profile (Integration Builder → Integration Builder Repository, parameter com.sap.aii.util.server.auth.activation set to true)
    - Assigned users to the newly created Integration Builder roles
    - Created dummy roles in Web AS ABAP; these roles are then available as groups in Web AS Java
    - Assigned users to these roles
    - Assigned the Integration Builder roles to the above groups in Web AS Java
    - Assigned unrestricted roles to super users
    Please help - how can I validate whether data-dependent authorizations are activated?
    I am working with the XI developers and the Basis team, and we did update all the required Exchange Profile parameters.
    Per the document "User Authorizations in Integration Builder Tools": do we need to update server.lockauth.activation in the Exchange Profile? When we updated it, it removed edit access for all XI developers in PI.
    In both the Integration Repository and the Integration Directory, you can define more detailed authorizations that restrict access to design and configuration objects.
    In both tools, you define such authorizations by choosing Tools → User Roles from the menu bar. The authorization for this menu option is provided by role SAP_XI_ADMINISTRATOR_J2EE. Of course, this role should only be granted to a very restricted number of administrators. To activate these more detailed authorizations, you must set the exchange profile parameter com.sap.aii.ib.server.lockauth.activation to true.
    The access authorizations themselves can be defined at the object-type level only (possibly restricted by a selection path), where you can specify each access action either individually as Create, Modify, or Delete for each object type, or as an overall access granting all three access actions.
    http://help.sap.com/saphelp_nw04/helpdata/en/f7/c2953fc405330ee10000000a114084/frameset.htm
    I was able to control display and maintain access via ABAP roles, but completely failed to implement Integration Builder security.
    Are there any ways to check whether data-dependent or J2EE authorizations are activated?
    Thanks a lot
    Satish

    Hello,
    here is the status of our issue:
    We were able to export the missing business component.
    But we also exported some interfaces after that and got some return code 8 errors, due to objects still present in the change list on the quality system (it seems that after the previous failed transports, the change list was not cleared completely...).
    So now we have checked that no objects are present in the change list of the quality system, and we plan to export our developments to the quality system again.
    Hopefully after that there will be no more return code 8 during imports, and all developments will be transported correctly to the quality system.
    I also recommend reading this, which is pretty good:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/7078566c-72e0-2e10-2b8a-e10fcf8e1a3d?overridelayout=t…
    Thanks all,
    S.N

  • Design issue with the multiprovider

    Design issue with the MultiProvider:
    I have the following problem when using my MultiProvider.
    The data flow is like this: I have the InfoObjects IObjectA, IObjectB, IObjectC in my cube (the source for this data is source system A).
    And from another source system I am also loading the master data for IObjectA.
    Now I have created the MultiProvider based on the cube and IObjectA.
    However, surprisingly, the join is not working correctly in the MultiProvider.
    Scenario:
    Record from the cube:
    IObjectA = 1AAA
    IObjectB = 2BBB
    IObjectC = 3CCC
    Record from IObjectA = 1AAA.
    I expect the result to look like this:
    IObjectA : IObjectB : IObjectC
    1AAA     : 2BBB     : 3CCC
    However, I am getting the result like this:
    IObjectA : IObjectB : IObjectC
    1AAA     : 2BBB     : 3CCC
    1AAA     : #        : #
    In the Identification section I have selected both entries for IObjectA, and I still get this result.
    My BW version is 3.0B and the SP is 31.
    Thanks in advance for your suggestions.

    Maybe I was not clear enough in my first explanation; let me try again to explain my scenario.
    My expectation from the MultiProvider is:
    IObjectA
    1AAA
    (from the InfoObject)
    union
    IObjectA     IObjectB     IObjectC
    1AAA         2BBB         3CCC
    (from the cube)
    The record in the MultiProvider should be:
    IObjectA     IObjectB     IObjectC
    1AAA         2BBB         3CCC
    Because this is what a union says, and the definition of the MultiProvider also says the same thing:
    http://help.sap.com/saphelp_bw30b/helpdata/EN/ad/6b023b6069d22ee10000000a11402f/frameset.htm
    Do you still think this is how the MultiProvider behaves? If that is the case, what would be the purpose of having an InfoObject in the MultiProvider?
    Thank you very much in advance for your responses.
    Best Regards,
    Praveen.
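
    Note that the observed result is exactly what a union produces. As a hypothetical SQL analogy (invented table names): a MultiProvider unions its providers rather than joining them, and each provider fills '#' for the characteristics it does not carry:

        SELECT iobjecta, iobjectb, iobjectc
        FROM   cube_part                       -- contributes 1AAA, 2BBB, 3CCC
        UNION ALL
        SELECT iobjecta, '#', '#'
        FROM   iobjecta_master_data;           -- contributes 1AAA, #, #

    The result is two rows, matching what was observed; merging them into one row would require a join, which in BW 3.x is what an InfoSet provides, not a MultiProvider.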

  • Design issue with sharing LV2 style global between run-time executables

    Hi,
    Just when I thought that I had everything figured out, I ran into this design issue.
    The application that I wrote is pretty much a client-server application where the server publishes data and the client subscribes to data using DataSockets. Once the client gets all the data in the mainClient.vi program, I use an LV2-style global (using shift registers) to make the data global to all the other sub-VIs. So the LV2 is in initialize mode in the mainClient.vi program, and then in the sub-VIs the LV2 is in read mode. Also, I had built the run-time menu for each sub-VI so that when an item is selected from the menu, I would use Get Menu Selection to get the item tag, which will be the file name of the sub-VI, and open the selected sub-VI using VI Server. This all worked great on my workstation where I have LabVIEW 7.0 Express installed. But the final goal is to make exes for each of these sub-VIs and install the run-time engine on PCs that do not have LabVIEW installed. Of course, when I did that, only the mainClient.exe program was getting the updated data from the server, but the sub-VIs were not getting the data from mainClient.exe. I did realize that the reason for this is the fact that I had compiled all the sub-VIs separately, so the LV2 VI is now local to each executable (i.e. all executables have their own memory location). Also, the run-time menu did not work because now I am trying to open an executable using VI Server properties.
    To summarize, is there a way to share LV2-style globals between executables without compiling all of the sub-VIs at one time? I tried using DataSockets (localhost) instead of LV2-style globals to communicate between the sub-VIs, but I ran into performance issues due to the large volume of data.
    I would really appreciate it if anyone can suggest a solution/alternative to this problem.
    Thanks
    Nish

    > 1)   How would I create a wrap-around for the LV2.vi which is
    > initialized in my mainClient.vi and then how would I use vi server in
    > my sub-vi to refer to that LV2.vi?
    > You mentioned that each sub-vi when opened will first connect to the
    > LV2.vi via VI Server and will keep the connection in the shift
    > register of that sub-vi. Does this mean that the sub-vi is accessing
    > (pass-by-reference) the shared memory of the mainClient.vi? If this
    > is what you meant I think that this might work for my application.
    >
    If the LV2 global is loaded statically into your mainClient.vi, then any
    other application can connect to the exe and get a reference to the VI
    using the VI name. This gives you a VI reference you can use to call
    the VI. Yes, the values will be copied between applications. That is
    why you need to add access operations to the global that return just
    the info needed. If you need the average, do that in the global. If
    you need the array size, do that in the global. Returning the entire
    array shouldn't be a common operation on the LV2 style global anyway.
    > 2) Just to elaborate on my application, the data is
    > transferred via DataSockets from the mainServer.vi on another PC to
    > the client’s PC where the mainClient.vi program subscribes the
    > data (i.e. 5 arrays of double type and each arrays has about 50,000
    > elements). The sub-vi’s will have to access these arrays
    > located on the mainClient.vi every scan. Is there any limitation on
    > referencing the mainClient.vi data via vi-server from each sub-vi?
    Your app does need to watch both the amount of data being passed across
    the network, and the amount being shared between the apps. You might
    want to consider putting the VIs back into the main app. What is the
    reason you are breaking them apart?
    Greg McKaskle

  • Remote historical data is not retrieved completely when viewing it in MAX 4

    Hi,
    since I installed LabVIEW 8 I have had some problems retrieving historical data from another computer. Sometimes not all data is retrieved (if I zoom in or out or move back in time), and this missing data won't ever be retrieved.
    I already deleted the Citadel cache once, but after this even less data was retrieved... What's really weird is that for channels which weren't retrieved correctly, the data doesn't get updated anymore!
    On the remote computer I have a LabVIEW DSC Runtime 7.1 running (MAX 3.1.1.3003); on my local computer, MAX 4.0.0.3010 and LabVIEW 8 DSC (development system) are installed in parallel with LV DSC 7.1.1 (dev system). LV 8 is installed for testing purposes (I doubt we'll switch soon) and overall I like MAX 4. The HyperTrend.dll on my local computer is version 3.2.1017.
    This is really a quite annoying bug!
    So long,
        Carsten
    Message Edited by cs42 on 02-02-2006 09:18 AM

    Hi,
    > We've been unable to reproduce this issue. If you could provide some additional information, it might help us out.
    I feared this, as even on my computer it happens only sometimes...
    > 1) How many traces are you viewing?
    The views I observed this in had 2 to 13 traces.
    > 2) How often are the traces being updated?
    For some it's pretty often (about once a second); for some it's very infrequent (no change in data, which means they are updated because of the max time between logs). I see this more often for traces that are updated very infrequently, but I think I've seen it for frequent traces as well (for these it currently works).
    > 3) Are the traces being updated by a tag value change, or by the "maximum time between logs" setting in the engine?
    It happened for both types.
    > 4) What is the frequency of the "maximum time between logs" setting?
    Max time between logs is 10 minutes.
    > 5) Is the Hypertrend running in live mode when you zoom out/pan?
    I think it happened in both modes, but it definitely did in live mode.
    > 6) If you disable/re-enable live mode in the Hypertrend, does the data re-appear?
    I couldn't trigger the loading of the data. All I did was wait and work with MAX (zooming, panning, looking at data), and after quite a while (some hours), the data appeared.
    I just tested this on a view where data is missing (for some days now!), and it didn't trigger data reloading. Zooming and panning don't either. There's a gap of up to 3 days now for some traces. 7 of the 13 traces of this view are shown incompletely, all stopping at the same time but reappearing at different times.
    AFAIR from the laboratory computer (these are temperatures and it's very plausible that they didn't change), there wasn't any change in these traces, so they all got logged because of the max time...
    I just created a new view and added these traces: the gap is there as well.
    (Sorry to put all this in this entry even though it is related to your other questions, but I started this live test with disable/re-enable live mode.)
    > 7) Are the clocks on the client and server computers synchronized? If not
    > synchronized, how far apart are the times on the two computers?
    They should be (Windows 2000 domain, synchronized to ADS), but they are 5 seconds apart.
    One thing I remember now: I have installed DIAdem 10 beta 2 (10.0.0b2530, USI + DataFinder 1.3.0.2526). There I had (and reported) some problems with loading data from a Citadel database on a remote machine as well. That was attributed to a cache problem. Maybe a component is interfering?
    Thanks for investigating.
    Cheers,
        Carsten

  • How to fill a new single field in an InfoCube with historical data

    Hello Everybody,
    We have an SD InfoCube with historical data since 1997.
    Some of the InfoObjects (fields) of the InfoCube were empty during all this time.
    Now we need to fill a single field of the InfoCube with historical data from R/3.
    We were thinking that one option could be to upload data from the PSA in order to fill only the required field (InfoObject).
    Is this possible? Is there any problem with uploading the PSA requests directly to the InfoCube?
    Some people on our team think that the data may be duplicated... are they right?
    Which other solutions could we adopt to solve this issue?
    We will appreciate all your valuable help.
    Thanks in advance.
    Regards.
    Julio Cordero.

    Remodeling in BI 7:
    /people/mallikarjuna.reddy7/blog/2007/02/06/remodeling-in-nw-bi-2004s
    http://www.bridgeport.edu/sed/projects/cs597/Fall_2003/vijaykse/step_by_step.htm
    Hope it helps..
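
    As a rough relational analogy for the duplication concern (hypothetical sd_cube and psa_history tables; in BW the supported routes are the remodeling toolbox linked above or a selective reload): re-inserting the historical rows adds the additive key figures a second time, whereas filling only the new column leaves the existing facts untouched.

        -- Duplicates: the quantities are now counted twice.
        INSERT INTO sd_cube SELECT * FROM psa_history;

        -- No duplicates: only the new, previously empty column is filled.
        UPDATE sd_cube c
           SET new_field = (SELECT p.new_field
                              FROM psa_history p
                             WHERE p.doc_number = c.doc_number);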

  • Reloading Historical Data into Cube: 0IC_C03...

    Dear All,
    I'm working on SAP BW 7.3 and I have five years of historical data loaded into InfoCube 0IC_C03 (Material Stocks/Movements (as of 3.0B)), with the deltas also scheduled (DataSources 2LIS_03_BF & 2LIS_03_UM) for the InfoCube. I have a new business requirement to reload the entire historical data into the InfoCube with the addition of "Fiscal Quarter", and I do not have a DSO in the data flow. I know it's a tricky InfoCube (Inventory Management) to work on, and reloading the entire historical data can be challenging.
    1. How should I approach this task; what steps should I take?
    2. Drop/delete the entire data from the InfoCube, then delete the setup tables, refill them, and then run a "Repair Full Load" - is this the way?
    3. What key points should I keep in mind, given that different BO reports are already running and using this InfoCube?
    4. Should I run "Repair Full Load" requests for all DataSources (2LIS_03_BX, 2LIS_03_BF & 2LIS_03_UM)?
    I will appreciate any input.
    Many thanks!
    Tariq Ashraf

    Hi Tariq,
    Unfortunately, you will need some downtime to execute a stock initialization in a live production system; otherwise you cannot guarantee data integrity. There are, however, certainly ways to minimize it. Please see SAP Note 753654 - How can downtime be reduced for setup table update. You can dramatically reduce the downtime by distinguishing between closed and open periods. The closed periods can be updated retrospectively.
    To make it more concrete, you could consider e.g. the periods prior to 2014 as "closed". You can then do the initialization starting from January 1, 2014. This will save you a lot of downtime. Closed periods can be uploaded retrospectively and do not require downtime.
    Re. the marker update, please have a look at the document Re-initialization of the Material Stocks/Movements cube (0IC_C03) with 2LIS_03_BF, 2LIS_03_BX and 2LIS_03_UM in BW 7.x; steps 10, 11, 12 and 13 contain important information.
    Re. the steps, they look OK to me, but please double-check them against the same document.
    Please have a look at the following document for more background information around inventory management scenarios:
    How to Handle Inventory Management Scenarios in BW (NW2004)
    Last but not least, you might want to have a look at the following SAP Notes:
    SAP Note 436393 - Performance improvement for filling the setup tables;
    SAP Note 602260 - Procedure for reconstructing data for BW.
    Best regards,
    Sander

  • Issue with Data Provider name in variable screen for BEx Analyzer

    Hello all,
    We have an issue with the DataProvider name in the variable screen in BEx Analyzer.
    We want to change the DataProvider name there to the description of the report instead of its technical name.
    Any inputs are appreciated.
    Thanks
    Kumar

    You have to create a workbook to do this.
    Refresh your query/report. In BEx Analyzer there is a toolbar named BEx Design Toolbox; if you cannot see it in Analyzer, right-click on the toolbar area of BEx Analyzer and click on BEx Design Toolbox. There, go to design mode by clicking on the symbol that looks like an 'A'. After that, place the cursor where you want to see the query description and click on Insert Text (T) in the BEx toolbox. Click on it and check "Query description" in the Constants tab. In the General tab you need to assign a DataProvider; for that, assign your query name in the workbook settings (in the BEx Design Toolbox). Also check "Display caption" in the General tab.
    Pravender

  • Historic data migration (Forms 6i to Forms 11g)

    Hello,
    We have done a migration from Forms 6i to Forms 11g. We are facing a problem with the historic data for a file download/upload utility. In Forms 6i the upload/download was done using an OLE Container, which has become obsolete, the new technology being WebUtil.
    We converted the historic data from LONG RAW to BLOB (by export/import and by TO_LOB), and on opening the files it throws an error message or cannot open them. This issue exists for all types of documents: .doc, .docx, .html, .pdf. We are unable to open the documents after downloading them to local client machines.
    One option which works is to manually download the documents (pdf, doc, etc.) from the older version of Forms 6i (OLE) and upload them to Forms 11g (WebUtil). Is there any way this can be automated?
    Thanks
    Ram

    Are you colleagues?
    OLE Containers in Oracle Forms 6i

  • How to recreate EBS user and keep all his historical data.

    Hi all
    We have a user that is having an issue seeing any of his scheduled Discoverer reports within the Schedule Manager window of Discoverer Plus; Discoverer Desktop works fine.
    The solution for it is to recreate the EBS user. The problem with this is that, if we recreate the EBS user, he will lose all historical data connected to that user, including the results of the scheduled Discoverer reports as well as all of the EBS created/last-updated information.
    Is there a way to recreate an EBS user and preserve the historical references?
    Thanks

    > We have a user that is having an issue seeing any of his scheduled Discoverer reports within the Schedule Manager window of Discoverer Plus; Discoverer Desktop works fine.
    > The solution for it is to recreate the EBS user. The problem with this is that, if we recreate the EBS user, he will lose all historical data connected to that user, including the results of the scheduled Discoverer reports as well as all of the EBS created/last-updated information.
    Why do you need to recreate the user?
    Are you saying you are going to create a new username for the same user and end-date the old one?
    > Is there a way to recreate an EBS user and preserve the historical references?
    I believe there is no such way to find all records/tables with the old user_id. Even if you found the list and updated them manually, I believe this approach is not supported.
    Please log an SR to confirm the same with Oracle support.
    Thanks,
    Hussein

  • AP historical data

    Dear experts,
    My client wants to bring at least one year of historical data from the legacy system into SAP to prevent creating duplicate invoices. E.g., an invoice may have already been created and paid in legacy, and they would like to have visibility of that.
    Also, for 1099 reporting purposes: the client is going live in the middle of the year, and they want to bring the data from legacy into SAP so that they can do their 1099s in one single place.
    Can someone share how other clients are doing this and what the best practice is for bringing in the historical data?

    Hi,
    If your client wants one year of data to be entered in SAP, then you have to upload all the invoices and payments separately in SAP, with the offsetting GL account being an initial upload account, and after uploading you have to manually clear the documents which are already cleared in the legacy system.
    Example:
    Invoice
        Purchase A/c Dr
            To Initial Upload A/c Cr
        Initial Upload A/c Dr
            To Vendor A/c Cr
    Payment
        Vendor A/c Dr
            To Initial Upload A/c Cr
        Initial Upload A/c Dr
            To Bank A/c Cr
    Now you have to clear all the documents manually.
    Hope this solves your issue.
    Regards,
    Shayam
    Edited by: Shayam_210 on Aug 25, 2011 6:35 AM
